Labels and Captions
From GeoGebra Manual

In GeoGebra, each object has its unique label. For labeling you can choose one or more letters, possibly with subscripts. For details see Naming Objects.

Show and Hide Labels
You can show or hide object labels in the Graphics View in different ways:
- Select the Show / Hide Label Tool and click on the object whose label you would like to show or hide.
- Open the Context Menu for the desired object and select Show Label.
- Open the Properties Dialog for the desired object and check or un-check the checkbox Show Label on tab Basic.

Name and Value
In GeoGebra, every object has a unique name that can be used to label the object in the Graphics View. In addition, an object can also be labeled using its value, or its name and value. You can change this label setting in the Properties Dialog on tab Basic by selecting the corresponding option Name, Value, or Name & Value from the drop-down menu next to the checkbox Show Label.
Note: The value of a point is its coordinates, while the value of a function is its equation.

Caption
However, sometimes you might want to give several objects the same label, for example, to label the four edges of a square a. In this case, GeoGebra offers captions for all objects, in addition to the three labeling options mentioned above. You can set the caption of an object on tab Basic of the Properties Dialog by entering the desired caption into the text field called "Caption". Afterwards, you can select the labeling option "Caption" from the drop-down menu next to the checkbox "Show Label".

You can use the following placeholders in captions:
%v  Value
%n  Name
%x  x coordinate (or the x coefficient a for the line a x + b y + c = 0)
%y  y coordinate (or the y coefficient b for the line a x + b y + c = 0)
%z  the c term for the line a x + b y + c = 0 (also: z coordinate, ready for a 3D View)
Example: Let A be a point with coordinates (1,2). Setting the caption to "Point %n has coordinates %v" results in the caption "Point A has coordinates (1,2)".

LaTeX in Captions
You can also use LaTeX in your captions, enclosing the desired LaTeX command in dollar characters (e.g. $ x^{2} $), and choose from a list of the most commonly used Greek letters and operators by clicking on the little box displayed at the end of the Caption field.
Example: To display nicely formatted math text, use LaTeX in captions; e.g. to display a fraction, type "$\frac{a}{b}$" in the caption field.
Note: LaTeX captions don't work for text fields, buttons, and checkboxes.
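To make the placeholder behavior concrete, here is a minimal Python sketch of the substitution described above. The object record and the render_caption function are purely illustrative assumptions, not GeoGebra's internal API; only the expansion rules mirror the table of placeholders.

```python
# Hypothetical object record; GeoGebra does not expose its objects this way.
point_a = {"name": "A", "value": "(1,2)", "x": 1, "y": 2}

def render_caption(template, obj):
    """Expand the documented caption placeholders for a given object."""
    substitutions = {
        "%n": str(obj["name"]),
        "%v": str(obj["value"]),
        "%x": str(obj["x"]),
        "%y": str(obj["y"]),
        "%z": str(obj.get("z", "")),
    }
    for placeholder, text in substitutions.items():
        template = template.replace(placeholder, text)
    return template

print(render_caption("Point %n has coordinates %v", point_a))
# -> Point A has coordinates (1,2), matching the manual's example
```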
On the local $C^{1,\alpha}$ solution of ideal magneto-hydrodynamical equations
Shu-Guang Shao 1,2, Shu Wang 1, Wen-Qing Xu 1 and Yu-Li Ge 2
1. College of Applied Sciences, Beijing University of Technology, Beijing 100124, China
2. School of Mathematics and Statistics, Nanyang Normal University, Nanyang 473061, China
* Corresponding author: Yu-Li Ge, Email: [email protected]
Discrete & Continuous Dynamical Systems - A, April 2017, 37(4): 2103-2113. doi: 10.3934/dcds.2017090
Received June 2016; Revised November 2016; Published December 2016

This paper is devoted to the study of the two-dimensional and three-dimensional ideal incompressible magneto-hydrodynamic (MHD) equations in which the Faraday law is inviscid. We consider the local existence and uniqueness of classical solutions for the MHD system in Hölder space when the general initial data belong to $C^{1,\alpha}(\mathbb{R}^n)$ for $n=2$ and $n=3$.

Keywords: Ideal MHD equations, local $C^{1,\alpha}$ solution, Hölder space.
Mathematics Subject Classification: Primary: 35Q35, 35Q60; Secondary: 35Q30.
Citation: Shu-Guang Shao, Shu Wang, Wen-Qing Xu, Yu-Li Ge. On the local $C^{1,\alpha}$ solution of ideal magneto-hydrodynamical equations. Discrete & Continuous Dynamical Systems - A, 2017, 37(4): 2103-2113. doi: 10.3934/dcds.2017090
Ing. Václav Blažej, Ph.D.
[email protected]
TH:A-1222
All publications

Polynomial kernels for tracking shortest paths
Blažej, V.; Choudhary, P.; Knop, D.; Křišťan, J.; Suchý, O.; Valla, T.
Information Processing Letters. 2023, 179(1), ISSN 1872-6119. 10.1016/j.ipl.2022.106315
Department of Theoretical Computer Science
Given an undirected graph G = (V, E), vertices s, t ∈ V, and an integer k, Tracking Shortest Paths requires deciding whether there exists a set of k vertices T ⊆ V such that for any two distinct shortest paths between s and t, say P1 and P2, we have T ∩ V(P1) ≠ T ∩ V(P2). In this paper, we give the first polynomial size kernel for the problem. Specifically, we show the existence of a kernel with O(k²) vertices and edges in general graphs and a kernel with O(k) vertices and edges in planar graphs for the Tracking Paths in DAG problem. This problem admits a polynomial parameter transformation to Tracking Shortest Paths, and this implies a kernel with O(k⁴) vertices and edges for Tracking Shortest Paths in general graphs and a kernel with O(k²) vertices and edges in planar graphs. Based on the above we also give a single exponential algorithm for Tracking Shortest Paths in planar graphs.

Controlling the Spread of Two Secrets in Diverse Social Networks (Student Abstract)
Blažej, V.; Knop, D.; Schierreich, Š.
Proceedings of the 36th AAAI Conference on Artificial Intelligence. Menlo Park: AAAI Press, 2022. p. 12919-12920. ISSN 2159-5399. ISBN 978-1-57735-876-3.
Conference paper
Information diffusion in social networks is a well-studied concept in social choice theory. We propose the study of the diffusion of two secrets in a heterogeneous environment from the complexity perspective, that is, there are two different networks with the same set of agents (e.g., the structure of the set of followers might be different in two distinct social networks). Formally, our model combines two group identification processes for which we do have independent desiderata: either constructive, where we would like a given group of agents to be exposed to a secret, or destructive, where a given group of agents should not be exposed to a secret. To be able to reach these targets, we can either delete an agent or introduce a previously latent agent. Our results are mostly negative: all of the problems are NP-hard. Therefore, we propose a parameterized study with respect to the natural parameters, the number of influenced agents, the size of the required/protected agent sets, and the duration of the diffusion process. Most of the studied problems remain W[1]-hard even for a combination of these parameters. We complement these results with nearly optimal XP algorithms.

On Edge-Length Ratios of Partial 2-Trees
Blažej, V.; Fiala, J.; Liotta, G.
International Journal of Computational Geometry & Applications. 2022, 31(02n03), 141-162. ISSN 0218-1959.
The edge-length ratio of a planar straight-line drawing of a graph is the maximum ratio between the lengths of any two of its edges. When the edges to be considered in the ratio are required to be adjacent, the ratio is called the local edge-length ratio. The (local) edge-length ratio of a graph G is the infimum over all (local) edge-length ratios in the planar straight-line drawings of G. We prove that the edge-length ratio of the n-vertex 2-trees is Ω(log n), which proves a conjecture by Lazard et al. [TCS 770, 2019, pp. 88–94] and complements an upper bound by Borrazzo and Frati [JoCG 11(1), 2020, pp. 137–155].
We also prove that every partial 2-tree admits a planar straight-line drawing whose local edge-length ratio is at most 4 + ε for any arbitrarily small ε > 0.

On Polynomial Kernels for Traveling Salesperson Problem and Its Generalizations
Blažej, V.; Choudhary, P.; Knop, D.; Schierreich, Š.; Suchý, O.; Valla, T.
30th Annual European Symposium on Algorithms (ESA 2022). Dagstuhl: Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik, 2022. p. 22:1-22:16. Leibniz International Proceedings in Informatics (LIPIcs). vol. 244. ISSN 1868-8969. ISBN 978-3-95977-247-1. 10.4230/LIPIcs.ESA.2022.22
For many problems, the important instances from practice possess certain structure that one should reflect in the design of specific algorithms. As data reduction is an important and inextricable part of today's computation, we employ one of the most successful models of such precomputation: the kernelization. Within this framework, we focus on Traveling Salesperson Problem (TSP) and some of its generalizations. We provide a kernel for TSP with size polynomial in either the feedback edge set number or the size of a modulator to constant-sized components. For its generalizations, we also consider other structural parameters such as the vertex cover number and the size of a modulator to constant-sized paths. We complement our results from the negative side by showing that the existence of a polynomial-sized kernel with respect to the fractioning number, the combined parameter maximum degree and treewidth, and, in the case of Subset TSP, modulator to disjoint cycles (i.e., the treewidth two graphs) is unlikely.

Bears with Hats and Independence Polynomials
Blažej, V.; Dvořák, P.; Opler, M.
Graph-Theoretic Concepts in Computer Science. Cham: Springer, 2021. p. 283-295. Lecture Notes in Computer Science. vol. 12911. ISSN 0302-9743. ISBN 978-3-030-86837-6.
Consider the following hat guessing game. A bear sits on each vertex of a graph G, and a demon puts on each bear a hat colored by one of h colors. Each bear sees only the hat colors of his neighbors. Based on this information only, each bear has to guess g colors and he guesses correctly if his hat color is included in his guesses. The bears win if at least one bear guesses correctly for any hat arrangement. We introduce a new parameter, the fractional hat chromatic number μ̂, arising from the hat guessing game. The parameter μ̂ is related to the hat chromatic number which has been studied before. We present a surprising connection between the hat guessing game and the independence polynomial of graphs. This connection allows us to compute the fractional hat chromatic number of chordal graphs in polynomial time, to bound the fractional hat chromatic number by a function of the maximum degree of G, and to compute the exact value of μ̂ of cliques, paths, and cycles.

Constant Factor Approximation for Tracking Paths and Fault Tolerant Feedback Vertex Set
Approximation and Online Algorithms - 19th International Workshop, WAOA 2021, Lisbon, Portugal, September 6-10, 2021, Revised Selected Papers. Springer, Cham, 2021. p. 23-38. Lecture Notes in Computer Science (LNCS). vol. 12982. ISSN 0302-9743. ISBN 978-3-030-92701-1.
Consider a vertex-weighted graph G with a source s and a target t. Tracking Paths requires finding a minimum weight set of vertices (trackers) such that the sequence of trackers in each path from s to t is unique.
In this work, we derive a factor 6-approximation algorithm for Tracking Paths in weighted graphs and a factor 4-approximation algorithm if the input is unweighted. This is the first constant factor approximation for this problem. While doing so, we also study approximation of the closely related r-Fault Tolerant Feedback Vertex Set problem. There, for a fixed integer r and a given vertex-weighted graph G, the task is to find a minimum weight set of vertices intersecting every cycle of G in at least r+1 vertices. We give a factor O(r²) approximation algorithm for r-Fault Tolerant Feedback Vertex Set if r is a constant.

Non-homotopic Loops with a Bounded Number of Pairwise Intersections
Blažej, V.; Opler, M.; Šileikis, M.; Valtr, P.
Graph Drawing and Network Visualization. Cham: Springer, 2021. p. 210-222. Lecture Notes in Computer Science. vol. 12868. ISSN 1611-3349. ISBN 978-3-030-92931-2.
Let $V_n$ be a set of $n$ points in the plane and let $x \notin V_n$. An $x$-loop is a continuous closed curve not containing any point of $V_n$. We say that two $x$-loops are non-homotopic if they cannot be transformed continuously into each other without passing through a point of $V_n$. For $n=2$, we give an upper bound $e^{O(\sqrt{k})}$ on the maximum size of a family of pairwise non-homotopic $x$-loops such that every loop has fewer than $k$ self-intersections and any two loops have fewer than $k$ intersections. The exponent $O(\sqrt{k})$ is asymptotically tight. The previous upper bound $2^{(2k)^4}$ was proved by Pach et al. [6]. We prove the above result by proving the asymptotic upper bound $e^{O(\sqrt{k})}$ for a similar problem when $x \in V_n$, and by proving a close relation between the two problems.

On the Edge-Length Ratio of 2-Trees
Graph Drawing and Network Visualization. Cham: Springer, 2021. p. 85-98. Lecture Notes in Computer Science. vol. 12590. ISSN 0302-9743. ISBN 978-3-030-68765-6.
We study planar straight-line drawings of graphs that minimize the ratio between the length of the longest and the shortest edge. We answer a question of Lazard et al. [Theor. Comput. Sci. 770 (2019), 88–94] and, for any given constant $r$, we provide a $2$-tree which does not admit a planar straight-line drawing with a ratio bounded by $r$. When the ratio is restricted to adjacent edges only, we prove that any $2$-tree admits a planar straight-line drawing whose edge-length ratio is at most $4 + \varepsilon$ for any arbitrarily small $\varepsilon > 0$, hence the upper bound on the local edge-length ratio of partial $2$-trees is $4$.

On the Intersections of Non-homotopic Loops
Algorithms and Discrete Applied Mathematics. Cham: Springer, 2021. p. 196-205. Lecture Notes in Computer Science. vol. 12601. ISSN 0302-9743. ISBN 978-3-030-67898-2.
Let $V = \{v_1, \dots, v_n\}$ be a set of $n$ points in the plane and let $x \in V$. An $x$-loop is a continuous closed curve not containing any point of $V$, except for passing exactly once through the point $x$. We say that two $x$-loops are non-homotopic if they cannot be transformed continuously into each other without passing through a point of $V$. For $n=2$, we give an upper bound $2^{O(k)}$ on the maximum size of a family of pairwise non-homotopic $x$-loops such that every loop has fewer than $k$ self-intersections and any two loops have fewer than $k$ intersections. This result is inspired by a very recent result of Pach, Tardos, and Tóth who proved the upper bound $2^{16k^4}$ for the slightly different scenario when $x \notin V$.
On Induced Online Ramsey Number of Paths, Cycles, and Trees
Blažej, V.; Dvořák, P.; Valla, T.
The 14th International Computer Science Symposium in Russia. Springer, Cham, 2019. p. 60-69. Lecture Notes in Computer Science. vol. 11532. ISSN 0302-9743. ISBN 978-3-030-19954-8.
An online Ramsey game is a game between Builder and Painter, alternating in turns. They are given a fixed graph $H$ and an infinite set of independent vertices $G$. In each round Builder draws a new edge in $G$ and Painter colors it either red or blue. Builder wins if after some finite round there is a monochromatic copy of the graph $H$, otherwise Painter wins. The online Ramsey number $\widetilde{r}(H)$ is the minimum number of rounds such that Builder can force a monochromatic copy of $H$ in $G$. This is an analogy to the size-Ramsey number $\overline{r}(H)$, defined as the minimum number such that there exists a graph $G$ with $\overline{r}(H)$ edges where for any edge two-coloring $G$ contains a monochromatic copy of $H$. In this extended abstract, we introduce the concept of induced online Ramsey numbers: the induced online Ramsey number $\overline{r}_{ind}(H)$ is the minimum number of rounds Builder can force an induced monochromatic copy of $H$ in $G$. We prove asymptotically tight bounds on the induced online Ramsey numbers of paths, cycles and two families of trees. Moreover, we provide a result analogous to Conlon [On-line Ramsey Numbers, SIAM J. Discr. Math. 2009], showing that there is an infinite family of trees $T_1,T_2,\dots$, $|T_i|<|T_{i+1}|$ for $i\ge1$, such that
\[ \lim_{i\to\infty} \frac{\widetilde{r}(T_i)}{\overline{r}(T_i)} = 0. \]

On the m-eternal Domination Number of Cactus Graphs
Blažej, V.; Křišťan, J.; Valla, T.
Reachability Problems. Springer, Cham, 2019. p. 33-47. ISSN 0302-9743. ISBN 978-3-030-30805-6.
Given a graph $G$, guards are placed on vertices of $G$. Then vertices are subject to an infinite sequence of attacks so that each attack must be defended by a guard moving from a neighboring vertex. The m-eternal domination number is the minimum number of guards such that the graph can be defended indefinitely. In this paper we study the m-eternal domination number of cactus graphs, that is, connected graphs where each edge lies in at most two cycles, and we consider three variants of the m-eternal domination number: the first variant allows multiple guards to occupy a single vertex, the second variant does not allow it, and in the third variant additional "eviction" attacks must be defended. We provide a new upper bound for the m-eternal domination number of cactus graphs, and for a subclass of cactus graphs called Christmas cactus graphs, where each vertex lies in at most two cycles, we prove that these three numbers are equal. Moreover, we present a linear-time algorithm for computing them.

A Simple Streaming Bit-parallel Algorithm for Swap Pattern Matching
Blažej, V.; Suchý, O.; Valla, T.
Mathematical Aspects of Computer and Information Sciences. Cham: Springer International Publishing, 2017. p. 333-348. Lecture Notes in Computer Science. vol. 10693. ISSN 0302-9743. ISBN 978-3-319-72452-2.
The pattern matching problem with swaps is to find all occurrences of a pattern in a text while allowing the pattern to swap adjacent symbols. The goal is to design a fast matching algorithm that takes advantage of the bit parallelism of bitwise machine instructions and has only streaming access to the input.
We introduce a new approach to solve this problem based on the graph theoretic model and compare its performance to previously known algorithms. We also show that an approach using deterministic finite automata cannot achieve similarly efficient algorithms. Furthermore, we describe a fatal flaw in some of the previously published algorithms based on the same model. Finally, we provide an experimental evaluation of our algorithm on real-world data.
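As background for the bit-parallel technique mentioned in the last abstract, here is a short Python sketch of the classic Shift-And scheme of Baeza-Yates and Gonnet for exact matching, on which bit-parallel matchers of this kind build. This is not the paper's swap-matching algorithm; it only illustrates how a match state can be carried in a single machine word and updated with a few bitwise instructions per streamed text symbol.

```python
def shift_and(text, pattern):
    """Classic bit-parallel Shift-And exact matching.
    Conceptually assumes len(pattern) fits in one machine word
    (Python integers are unbounded, so no hard limit here).
    Returns the start positions of occurrences of pattern in text."""
    m = len(pattern)
    masks = {}
    for i, c in enumerate(pattern):
        masks[c] = masks.get(c, 0) | (1 << i)  # bit i set where pattern[i] == c
    state, hits = 0, []
    accept = 1 << (m - 1)                      # bit indicating a full match
    for j, c in enumerate(text):               # one word update per symbol
        state = ((state << 1) | 1) & masks.get(c, 0)
        if state & accept:
            hits.append(j - m + 1)
    return hits

print(shift_and("abracadabra", "abra"))  # -> [0, 7]
```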
The $[46, 9, 20]_2$ code is unique
Sascha Kurz
Mathematisches Institut, Universität Bayreuth, D-95440 Bayreuth, Germany
Advances in Mathematics of Communications. doi: 10.3934/amc.2020074
Received September 2019; Revised January 2020; Published April 2020

The minimum distance of all binary linear codes with dimension at most eight is known. The smallest open case for dimension nine is length $n = 46$ with known bounds $19 \le d \le 20$. Here we present a $[46,9,20]_2$ code and show its uniqueness. Interestingly enough, this unique optimal code is asymmetric, i.e., it has a trivial automorphism group. Additionally, we show the non-existence of $[47,10,20]_2$ and $[85,9,40]_2$ codes.

Keywords: Binary linear codes, optimal codes, divisible codes, non-existence of codes, computer search.
Mathematics Subject Classification: Primary: 94B65; Secondary: 94B05.
Citation: Sascha Kurz. The $[46, 9, 20]_2$ code is unique. Advances in Mathematics of Communications. doi: 10.3934/amc.2020074
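For context, the Griesmer bound states that any $[n,k,d]_q$ code satisfies $n \ge \sum_{i=0}^{k-1} \lceil d/q^i \rceil$. The short Python check below is illustrative (not part of the paper); it shows that the bound alone does not decide any of the parameter sets treated here, which is why finer tools such as divisibility arguments and computer search, named in the keywords, are needed.

```python
from math import ceil

def griesmer_min_length(k, d, q=2):
    """Smallest length n allowed by the Griesmer bound for an [n, k, d]_q code."""
    return sum(ceil(d / q**i) for i in range(k))

# Parameters from the paper; the bound leaves all three cases open.
print(griesmer_min_length(9, 20))   # 44 <= 46, so [46, 9, 20]_2 is not ruled out
print(griesmer_min_length(10, 20))  # 45 <= 47, yet no [47, 10, 20]_2 code exists
print(griesmer_min_length(9, 40))   # 83 <= 85, yet no [85, 9, 40]_2 code exists
```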
Table 1. Number of $[n,k,\{20,24,28,32\}]_2$ codes (columns n = 20, 24, 28, 30, 32, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44)

Table 2. Number of $[n,k,\{40,48,56\}]_2$ codes (columns n = 40, 48, 56, 60, 64, 68, 70, 72, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83)

Table 3. Number of $[84,8,\{40,48,56\}]_2$ codes per $A_{56}$
$A_{56}$   3      4      5      6     7    8   9
codes      25773  48792  26091  5198  450  17  1
Rocky Mountain Journal of Mathematics
Rocky Mountain J. Math. Volume 48, Number 6 (2018), 1963-1982.

The expected number of elements to generate a finite group with $d$-generated Sylow subgroups
Andrea Lucchini and Mariapia Moscatiello

Given a finite group $G$, let $e(G)$ be the expected number of elements of $G$ which have to be drawn at random, with replacement, before a set of generators is found. If all of the Sylow subgroups of $G$ can be generated by $d$ elements, then $e(G) \leq d + \kappa$, where $\kappa$ is an absolute constant that is explicitly described in terms of the Riemann zeta function and is the best possible in this context. Approximately, $\kappa$ equals 2.752394. If $G$ is a permutation group of degree $n$, then either $G = \mathrm{Sym}(3)$ and $e(G) = 2.9$, or $e(G) \leq \lfloor n/2 \rfloor + \kappa^*$ with $\kappa^* \sim 1.606695$. These results improve weaker bounds recently obtained by Lucchini.

First available in Project Euclid: 24 November 2018
Permanent link: https://projecteuclid.org/euclid.rmjm/1543028448
doi:10.1216/RMJ-2018-48-6-1963
Mathematical Reviews number (MathSciNet): MR3879312
Mathematics Subject Classification: Primary: 20P05: Probabilistic methods in group theory [See also 60Bxx]
Keywords: groups, generation, waiting time, Sylow subgroups, permutation groups
Citation: Lucchini, Andrea; Moscatiello, Mariapia. The expected number of elements to generate a finite group with $d$-generated Sylow subgroups. Rocky Mountain J. Math. 48 (2018), no. 6, 1963-1982. doi:10.1216/RMJ-2018-48-6-1963.
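The abstract's value $e(\mathrm{Sym}(3)) = 2.9$ is easy to check empirically. Below is a small, self-contained Monte Carlo sketch; it is not from the paper, and all function names are illustrative. It repeatedly draws uniform random elements of $\mathrm{Sym}(3)$, with replacement, until the drawn elements generate the whole group, and averages the number of draws.

```python
import itertools
import random

def compose(p, q):
    # Composition of permutations given as tuples: (p o q)(i) = p[q[i]].
    return tuple(p[q[i]] for i in range(len(q)))

def generates_whole_group(gens, group):
    # Close the set generated by gens under composition; the group is
    # finite, so the closure is exactly the subgroup generated by gens.
    identity = tuple(range(len(group[0])))
    elems, frontier = {identity}, [identity]
    while frontier:
        g = frontier.pop()
        for h in gens:
            gh = compose(g, h)
            if gh not in elems:
                elems.add(gh)
                frontier.append(gh)
    return len(elems) == len(group)

def expected_draws(group, trials=100_000, seed=1):
    # Monte Carlo estimate of e(G): average number of uniform draws,
    # with replacement, until the drawn elements generate G.
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        gens = []
        while not generates_whole_group(gens, group):
            gens.append(rng.choice(group))
            total += 1
    return total / trials

sym3 = [tuple(p) for p in itertools.permutations(range(3))]
print(expected_draws(sym3))  # approximately 2.9, matching e(Sym(3)) above
```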
Research article | Open | Open Peer Review | Published: 31 July 2015
Using information theory to identify redundancy in common laboratory tests in the intensive care unit
Joon Lee 1 & David M. Maslove 2

Clinical workflow is infused with large quantities of data, particularly in areas with enhanced monitoring such as the Intensive Care Unit (ICU). Information theory can quantify the expected amounts of total and redundant information contained in a given clinical data type, and as such has the potential to inform clinicians on how to manage the vast volumes of data they are required to analyze in their daily practice. The objective of this proof-of-concept study was to quantify the amounts of redundant information associated with common ICU lab tests. We analyzed the information content of 11 laboratory test results from 29,149 adult ICU admissions in the MIMIC II database. Information theory was applied to quantify the expected amount of redundant information both between lab values from the same ICU day, and between consecutive ICU days. Most lab values showed a decreasing trend over time in the expected amount of novel information they contained. Platelet, blood urea nitrogen (BUN), and creatinine measurements exhibited the greatest amount of redundant information on days 2 and 3 compared to the previous day. The creatinine-BUN and sodium-chloride pairs had the most redundancy. Information theory can help identify and discourage unnecessary testing and bloodwork, and can in general be a useful data analytic technique for many medical specialties that deal with information overload.

Modern medical practice is increasingly a data-driven enterprise. Conventional data based on clinical observation and blood tests are collected in large quantities, with newer modalities such as high-resolution imaging studies and genome-wide assays proliferating rapidly. Strategies are needed both to limit the use of unhelpful tests, and to identify from among the resulting data those that are relevant, and those that are merely distractions. In the Intensive Care Unit (ICU), likely the most data-rich environment in the hospital, enhanced monitoring and frequent testing are common.

Repeated bloodwork can lead to patient harm in a number of ways. First, frequent phlebotomy can cause anemia, and increase the need for blood transfusions. In one observational study in the ICU, patients were phlebotomized between 40 and 70 mL per day, for an average total volume of more than 1 L over the course of their stay [1]. While conservative blood sampling strategies have been shown to reduce daily phlebotomy volumes to as little as 8 mL per day [2], such strategies are not widely used. Second, frequent blood draws can cause patient discomfort, especially for those patients who do not have indwelling catheters and who therefore require venipuncture for sample collection. Blood samples are often collected very early in the morning, which can disrupt sleep and lead to delirium. Lastly, increased frequency of testing enhances the risk of false positive results, and may increase the risk that clinically irrelevant findings lead to changes in management. We know of no objective approach to determining which blood tests are most informative on a system-wide level. As such, there is a need to develop strategies for the rational use of blood tests both in the ICU, and beyond.
One approach to refining testing strategies, popularized by the Choosing Wisely campaign [3], has been to discourage clinicians and patients from undertaking tests that are unnecessary or unhelpful. A laboratory test utilization management toolbox has also been developed to guide clinicians on appropriate utilization of needed lab tests [4]. In addition, Cismondi et al. have applied artificial intelligence to study the predictability of future lab test results in a patient cohort with gastrointestinal bleeding, extracted from the same public ICU database that the present study analyzed [5].

In this study, we explored an alternative approach in which the science of information theory is used to identify areas of overlap or redundancy between clinical tests. Information theory has long been used in medical research in numerous domains including neuroscience [6], molecular biology [7], cell signaling [8], genomics [9], and medical imaging [10]. In the area of medical decision making, information theory has been applied to the evaluation of diagnostic tests, including laboratory tests [11–13]. In these formulations, the quantity of information gained by performing a test is related to the difference in the uncertainty in outcome (i.e., entropy) associated with the probability distribution of the diagnosis both prior to and following the test.

Information theory provides a fundamental framework for quantifying information [14]. The central idea is that the amount of information contained in a variable is related to its randomness. For example, the tossing of a biased coin with 90% chance of turning up heads contains less information than a fair coin toss (approximately 0.47 bits versus 1 bit) because the result is more predictable. In general, less randomness means more predictability, and vice versa. We used constructs from information theory, namely entropy, conditional entropy, and mutual information, to analyze lab test results from a large ICU database. Entropy, conditional entropy, and mutual information are fundamental metrics that comprise information theory, and can, by definition, describe the information contents of a single variable as well as of the association between two variables. We hypothesized that in addition to quantifying the relationship between diagnostic tests and disease states, information theory could be useful in addressing clinical information overload by identifying redundancies between laboratory tests done concurrently, as well as those done sequentially for the purpose of daily monitoring. We further hypothesized that specific pairs of laboratory values with shared physiologic mechanisms would exhibit greater redundancy than pairs that do not.

The objective of this proof-of-concept study was to employ information theory to quantify the amount of information contained in common laboratory tests, the extent of redundancy between consecutive days of sampling, and the redundancy associated with pre-specified pairs of ICU lab tests. These pairs included creatinine and blood urea nitrogen (BUN) (both related to renal function), bicarbonate and lactate (physiologically related because the former acts as a buffer for the latter), sodium and chloride (both constituents of a common resuscitation fluid), and white blood cell count (WBC) and platelet count (hypothesized to contain less redundancy).

We extracted laboratory test results from the MIMIC II database, which contains data from over 25,000 adult ICU patients [15]. The ICU admissions in MIMIC II typically have 1–4 lab measurements per day.
The following lab tests were investigated: hematocrit, platelet count, white blood cell count (WBC), glucose, HCO3, potassium, sodium, chloride, BUN, creatinine, and lactate. These variables were chosen for their pervasiveness in and high relevance to intensive care. A total of 29,149 adult ICU admissions were analyzed and missing data were excluded. Because MIMIC II is a fully de-identified public database, the need for informed consent and research ethics review was waived. MIMIC II data were extracted in Oracle SQL Developer (version 3.2.09).

For each ICU admission, the median value of each lab test from each ICU day was used for analysis. For each variable, values less than the 1st percentile and greater than the 99th percentile were regarded as outliers and discarded. The use of the median instead of the mean, as well as the 1st and 99th percentile cutoffs, reduced the effects of outliers, which commonly exist in raw clinical data such as MIMIC II due to recording and measurement errors. Also, the median was particularly appropriate since most lab tests exhibited skewed distributions, as shown in Fig. 1. Following the exclusion of outliers, the remaining values were discretized into 20 bins of equal width. The choice of 20 bins was informed by visual inspection of the distributions of the lab test values to ensure adequate bin size; Fig. 1 shows the histogram of each lab test using 20 bins.

Figure 1. Histograms of lab test values from ICU day 1. Hct hematocrit, WBC white blood cell count, K potassium, Na sodium, Cl chloride, BUN blood urea nitrogen, Cr creatinine, Lac lactate

The expected amount of information that each lab test (X) contains, known as the entropy of X and represented as H(X), was calculated as:

$$ H(X) = -\sum_i P(x_i) \log_2 P(x_i) $$

where $P(x_i)$ is the probability of X resulting in the possible value $x_i$ and the summation enumerates all possible values of X. When the logarithm with base 2 is used as above, H(X) is expressed in bits as a unit of measurement. The maximum attainable entropy (i.e., when all 20 bins are equally likely) in each lab test was therefore $20 \times (1/20) \times [-\log_2(1/20)] = 4.32$ bits. We also calculated the expected amount of redundant information between two lab tests X and Y, termed mutual information and denoted as I(X;Y), as follows:

$$ I(X;Y) = \sum_j \sum_i P(x_i, y_j) \log_2 \left( \frac{P(x_i, y_j)}{P(x_i)\,P(y_j)} \right) $$

where $P(x_i, y_j)$ is the joint probability of X and Y resulting in values $x_i$ and $y_j$, respectively. Lastly, we quantified the expected amount of novel information in X after knowing the value of Y, known as conditional entropy and denoted as H(X|Y), as follows:

$$ H(X|Y) = H(X) - I(X;Y) $$

Figure 2 visually illustrates H(X), H(Y), I(X;Y), H(X|Y), and H(Y|X). Interested readers are directed to additional references for further details on information theory [16, 17].

Figure 2. An illustration of information theory applied to two variables. This Venn diagram illustrates: the entropies in the two variables X and Y, represented as H(X) and H(Y), respectively; the mutual information (i.e., redundant information) between X and Y, represented as I(X;Y); and the expected amounts of novel information in X and Y, represented as H(X|Y) and H(Y|X), respectively. H(X) is greater than H(Y), which signifies that X is associated with more randomness and less predictability than Y. H(X|Y) represents the expected amount of novel information left in X when Y is known, and vice versa for H(Y|X).
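As a concrete illustration of these computations, here is a minimal Python sketch, not the study's R code, that cleans and bins a vector of lab values as described above and then computes H(X), I(X;Y), and H(X|Y) from the bin labels. All function names are illustrative, and the demo data at the end are synthetic stand-ins for paired daily lab measurements.

```python
import numpy as np

def remove_outliers(values):
    # Discard values outside the 1st-99th percentile range, as in the study.
    lo, hi = np.percentile(values, [1, 99])
    return values[(values >= lo) & (values <= hi)]

def discretize(values, n_bins=20):
    # Equal-width binning into n_bins bins; returns integer labels 0..n_bins-1.
    edges = np.linspace(values.min(), values.max(), n_bins + 1)
    return np.digitize(values, edges[1:-1])

def entropy(x):
    # H(X) in bits, estimated from a vector of discrete bin labels.
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def mutual_information(x, y):
    # I(X;Y) computed via the equivalent identity H(X) + H(Y) - H(X,Y),
    # with x and y paired observations (here: patient-days).
    joint = np.stack([x, y], axis=1)
    _, counts = np.unique(joint, axis=0, return_counts=True)
    p = counts / counts.sum()
    h_xy = -np.sum(p * np.log2(p))
    return entropy(x) + entropy(y) - h_xy

def conditional_entropy(x, y):
    # H(X|Y) = H(X) - I(X;Y): expected novel information in X once Y is known.
    return entropy(x) - mutual_information(x, y)

# Synthetic stand-ins for paired day-1 and day-2 values of one lab test.
rng = np.random.default_rng(0)
day1 = discretize(remove_outliers(rng.lognormal(0.0, 0.3, 5000)))
day2 = np.clip(day1 + rng.integers(-2, 3, size=day1.size), 0, 19)
print(entropy(day1))                    # total information on day 1, in bits
print(mutual_information(day1, day2))   # redundant information between days
print(conditional_entropy(day2, day1))  # novel information on day 2 given day 1
```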
We first examined the mutual information between measurements from the first three consecutive days in the ICU. We next analyzed the following pairs of variables for their redundant information on each ICU day: creatinine-BUN, HCO3-lactate, sodium-chloride, and platelet-WBC. While all final results were from an ICU-wide analysis, the coupling information required in the mutual information calculation, i.e., the joint probability $P(x_i, y_j)$, was computed at the individual patient level. Analyses were conducted in R (version 3.1.1).

Table 1 summarizes the descriptive statistics of the analyzed lab tests, stratified by ICU day. Although almost all ICU admissions had at least one measurement of each lab test on day 1 (except lactate), progressively fewer ICU admissions with lab test results were available in MIMIC II on days 2 and 3. This is expected due to discharged patients and changes in patient acuity. Across all three ICU days, lactate was the least frequently tested lab variable.

Table 1. Descriptive statistics of lab tests

Figure 3 depicts the entropy of each lab test on each ICU day, as well as how much of the day 2 and 3 information was redundant compared to the previous day. Creatinine was the most predictable variable. Across ICU days, most variables were stable in terms of entropy but resulted in decreasing amounts of novel information. Platelet count, BUN, and creatinine were the most redundant variables on days 2 and 3 given their values on days 1 and 2, respectively.

Figure 3. Information contained in daily ICU lab test results. The total and novel amounts of information are plotted as a function of ICU day. The horizontal dashed lines at 4.32 bits represent the maximum achievable entropy. The redundant portions (i.e., total minus novel) of days 2 and 3 are the mutual information between days 1 and 2 and between days 2 and 3, respectively. Although all lab tests contain redundant information on days 2 and 3 compared to the previous day, platelet, BUN, and Cr have the least novel information on days 2 and 3. Hct hematocrit, WBC white blood cell count, K potassium, Na sodium, Cl chloride, BUN blood urea nitrogen, Cr creatinine, Lac lactate

The decreasing entropy in lactate was likely due to the increasing number of normal lactate values (i.e., less random) over time. As shown in Table 1, the daily mean lactate values and corresponding standard deviations (in square brackets) were (in mmol/L): day 1: 2.5 [2.0]; day 2: 2.2 [2.3]; day 3: 2.1 [2.2].

In the pairwise analysis, the creatinine-BUN and sodium-chloride pairs contained more redundant information than the others (Fig. 4). While the absolute amount of redundant information is identical for both variables in a pair, it represents different relative portions with respect to the entropy in each variable. For example, BUN reduces the unpredictability in creatinine more than creatinine does for BUN, relatively speaking.

Figure 4. Redundant information in selected ICU lab test pairs. Information contents are plotted as a function of ICU day. The horizontal dashed lines at 4.32 bits represent the maximum achievable entropy. Each lab test pair is shown asymmetrically side by side; e.g., BUN|Cr and Cr|BUN refer to the information in BUN given creatinine and that in creatinine given BUN, respectively.
Because Cr has a smaller entropy than BUN, the redundant information between the two variables represents a greater percentage of the entropy in Cr than that in BUN. WBC white blood cell count, Na sodium, Cl chloride, BUN blood urea nitrogen, Cr creatinine, Lac lactate Intensive care clinicians face a deluge of data on morning rounds, which are typically processed heuristically based on previous experience. Our findings provide objective confirmation of what heuristic methods devise implicitly, namely that the values of some lab tests are more easily predicted than others, and that the information gain between subsequent ICU days is greater for some tests than others. Specifically, we found that the creatinine-BUN pair, the bicarbonate-lactate pair, and the sodium-chloride pair contained a greater degree of redundancy than the WBC-platelet pair. These relationships likely reflect shared physiologic mechanisms, and suggest ways in which policies on lab testing might be developed to capitalize on redundancy, and avoid unnecessary testing. Our findings have direct implications for medical decision making in the ICU, specifically related to the ordering of daily bloodwork. The ordering of routine bloodwork is pervasive in the ICU [18]. While estimates of inappropriate testing in general may vary, one view is that a significant proportion of bloodwork done in the ICU setting is unnecessary [19]. A number of strategies have been implemented to limit unnecessary bloodwork, including education programs, changes in ordering protocols, clinical decision supports, quota systems, and cost transparency [18]. To our knowledge, ours is the first study to quantify the redundant information content between various blood tests. We believe our results provide an empirical basis upon which rational blood test ordering strategies can be developed. The redundancy in information between BUN and creatinine mentioned above suggests that if one is known, the other can be inferred with reasonable confidence. If a hospital were to restrict one of these tests, our results suggest it may be better to reduce testing of creatinine, and to order BUN alone instead, reflecting the asymmetrical relationship between these two tests. While the addition of creatinine to BUN values may not on average add much novel information, clinical judgment is still needed to assess the value added from this test on a case-by-case basis (such as when upper gastrointestinal bleeding is suspected, in which case the BUN data may provide information beyond its reflection of renal function) [20]. Similarly, while the platelet count on day 2 of the ICU admission may on average contain little novel information in light of the day 1 value, ongoing measurement of platelet count may be warranted if the day 1 value was abnormal, if the value was changing significantly from day to day, or if the value had to be known precisely for the purpose of planning invasive procedures. Our study has limitations that merit further discussion. First, our main claim on unnecessary testing is based on the theoretical predictability of lab tests with respect to joint probability distributions. The actual achievable predictive performance should be investigated further in a follow-up study. Second, with respect to the clinical applicability of our findings, we note that our results apply to the information content of laboratory measurements, rather than the biological processes they represent. 
As such, limited conclusions can be drawn about the natural history of critical illness from a physiologic standpoint. Our results do, however, have bearing on common practices in the ICU related to the frequency of laboratory testing. Frequent blood draws remain commonplace in most ICUs, especially within the first few days of admission to the ICU. Our results suggest that even in these periods, during which physiologic changes may be most pronounced, the amount of novel information gleaned from repeated blood testing may be limited. Third, the present study did not incorporate other clinical factors that might influence the information content of lab tests (e.g., clinical outcomes might be associated with the predictability of certain lab tests). Lastly, as a retrospective data analysis, our findings should not be used to support immediate widespread changes in laboratory practices, but rather lend support to the notion that over-testing may be occurring and warrants further study.

Prior applications of information theory to medical decision making have largely focused on the evaluation of diagnostic tests, by determining the conditional probability of a disease state given the result of the test. Using similar methods, we defined common information theory parameters, including entropy and mutual information, for common lab tests done concurrently or sequentially, without reference to a final diagnosis. Our findings suggest that this approach may have utility in analyzing the increasing abundance of biomedical Big Data. Future work will focus on adapting these methods to develop strategies for limiting excessive testing in the ICU, and on measuring redundancy among three or more lab tests.

Information theory can be a useful tool for objectively quantifying the amount of redundancy associated with lab tests. As this proof-of-concept study showed, information theory can help clinicians decide which lab tests are truly needed.

Availability of supporting data

Although MIMIC II is a public database, gaining access to MIMIC II requires completing human subjects training and signing a data use agreement (visit http://physionet.org/mimic2 for instructions). The clinical part of MIMIC II is a relational database, and interested researchers are free to write their own SQL queries to extract data from it after becoming an approved MIMIC II user. However, for those who are interested in replicating or extending our particular research results, we are able to share the SQL query that was used to extract the raw data for this study from MIMIC II.

Corwin HL, Parsonnet KC, Gettinger A. RBC transfusion in the ICU: is there a reason? Chest. 1995;108:767–71.
Harber CR, Sosnowski KJ, Hegde RM. Highly conservative phlebotomy in adult intensive care: a prospective randomized controlled trial. Anaesth Intensive Care. 2006;34:434–7.
Choosing Wisely. http://www.choosingwisely.org/
Baird G. The laboratory test utilization management toolbox. Biochem Med. 2014;24:223–34.
Cismondi F, Celi LA, Fialho AS, Vieira SM, Reti SR, Sousa JMC, et al. Reducing unnecessary lab testing in the ICU with artificial intelligence. Int J Med Inform. 2013;82:345–58.
Borst A, Theunissen FE. Information theory and neural coding. Nat Neurosci. 1999;2:947–57.
Ciulla MM, De Marco F, Montelatici E, Lazzari L, Perrucci GL, Magrini F. J Theor Biol. 2014;343(C):25–31.
Brennan MD, Cheong R, Levchenko A. How information theory handles cell signaling and uncertainty. Science. 2012;338:334.
Reverter A, Chan EKF.
Combining partial correlation and an information theory approach to the reversed engineering of gene co-expression networks. Bioinformatics. 2008;24:2491–7.
Wu Y, Alagoz O, Ayvaci MUS, Munoz del Rio A, Vanness DJ, Woods R, et al. A comprehensive methodology for determining the most informative mammographic features. J Digit Imaging. 2013;26:941–7.
Benish WA. Relative entropy as a measure of diagnostic information. Med Decis Making. 1999;19:202–6.
Benish WA. Intuitive and axiomatic arguments for quantifying diagnostic test performance in units of information. Methods Inf Med. 2009;48:552–7.
Vollmer R. Entropy and information content of laboratory test results. Am J Clin Pathol. 2007;127:60–5.
Shannon CE. The mathematical theory of communication. 1949.
Saeed M, Villarroel M, Reisner AT, Clifford G, Lehman L-W, Moody G, et al. Multiparameter intelligent monitoring in intensive care II: a public-access intensive care unit database. Crit Care Med. 2011;39:952–60.
Cover TM, Thomas JA. Elements of information theory. John Wiley & Sons; 2012.
MacKay DJC. Information theory, inference and learning algorithms. Cambridge University Press; 2003.
Flabouris A, Bishop G, Williams L, Cunningham M. Routine blood test ordering for patients in intensive care. Anaesth Intensive Care. 2000;28:562–5.
van Walraven C, Naylor CD. Do we know what inappropriate laboratory utilization is? A systematic review of laboratory clinical audits. JAMA. 1998;280:550–8.
Srygley FD, Gerardo CJ, Tran T, Fisher DA. Does this patient have a severe upper gastrointestinal bleed? JAMA. 2012;307:1072–9.

The authors would like to thank the University of Waterloo and Queen's University for general research support. JL was supported by a Discovery Grant (RGPIN-2014-04743) from the Natural Sciences and Engineering Research Council of Canada (NSERC), as well as by the University of Waterloo as a faculty member. DM was supported by Queen's University as a faculty member, as well as by the Southeastern Ontario Academic Medical Association (SEAMO) New Clinician Scientist Program.

School of Public Health and Health Systems, University of Waterloo, Waterloo, Canada: Joon Lee
Department of Medicine & Critical Care Program, Queen's University, Kingston, Canada: David M. Maslove
Correspondence to Joon Lee.

JL designed the study, compiled the dataset, carried out the analyses, produced the results, and wrote parts of the manuscript. DM designed the study, interpreted the results, and wrote parts of the manuscript. Both authors critically revised the manuscript. Both authors read and approved the final manuscript.

https://doi.org/10.1186/s12911-015-0187-x
Genome-wide CRISPR screen identifies host dependency factors for influenza A virus infection

Bo Li1,2, Sara M. Clohisey3, Bing Shao Chia1,2, Bo Wang3, Ang Cui2,4, Thomas Eisenhaure2, Lawrence D. Schweitzer2, Paul Hoover2, Nicholas J. Parkinson3, Aharon Nachshon5, Nikki Smith3, Tim Regan3, David Farr3, Michael U. Gutmann6, Syed Irfan Bukhari7, Andrew Law3, Maya Sangesland8, Irit Gat-Viks2,5, Paul Digard3, Shobha Vasudevan7, Daniel Lingwood8, David H. Dockrell9, John G. Doench2, J. Kenneth Baillie3,10 & Nir Hacohen2,11

Nature Communications volume 11, Article number: 164 (2020)

Host dependency factors that are required for influenza A virus infection may serve as therapeutic targets because the virus is less likely to bypass them under drug-mediated selection pressure. Previous attempts to identify host factors have produced largely divergent results, with few overlapping hits across different studies. Here, we perform a genome-wide CRISPR/Cas9 screen and devise a new approach, meta-analysis by information content (MAIC), to systematically combine our results with prior evidence for influenza host factors. MAIC out-performs other meta-analysis methods when using our CRISPR screen as validation data. We validate the host factors WDR7, CCDC115 and TMEM199, demonstrating that these genes are essential for viral entry and the regulation of V-type ATPase assembly. We also find that CMTR1, a human mRNA cap methyltransferase, is required for efficient viral cap snatching and regulation of a cell-autonomous immune response, and provides synergistic protection with the influenza endonuclease inhibitor Xofluza.

Influenza A virus (IAV) causes acute respiratory infections in humans and poses a major threat to public health and the global economy. The 2009 H1N1 pandemic resulted in over 60 million infected cases in the United States1 and more than 120,000 deaths worldwide, the majority of which were in young people (<65 years old)2. Avian influenza strains such as H5N1 and H7N9 have also crossed the species barrier and caused lethal infections in humans in recent years3,4,5, raising concerns about future pandemics. Although vaccination against seasonal influenza is an essential part of the public health strategy, its efficacy is variable, and there are few therapeutic options for people who become infected. Conventional antiviral therapies, including neuraminidase inhibitors (e.g., oseltamivir, zanamivir) and M2 channel blockers (e.g., amantadine), have limited efficacy and are vulnerable to the rapid selection of resistant virus in treated patients6,7,8. A new class of endonuclease inhibitor (Xofluza) has been approved recently9, but faces similar issues with the emergence of resistant viral strains10. Like most viruses, IAV has a relatively small genome and a limited repertoire of encoded proteins11, and relies on the host machinery to replicate and complete its life cycle. Identification of host dependency factors (HDFs) that are necessary for IAV replication thus provides an attractive strategy for discovering new therapeutic targets, since the evolution of resistance to host-targeted therapeutics is expected to be slower12,13,14.
To achieve this end, numerous large-scale RNA interference (RNAi) screens have been performed in the past, reporting a total of 1362 HDFs that are important for IAV replication15,16,17,18,19,20,21. While these screens provided valuable insights into virus-host interactions22,23,24, overlap among the identified hits has been limited25, a result that likely stemmed from differences in experimental conditions as well as intrinsic limitations of the RNAi technology. A similar inconsistency is evident among screens for HDFs required for HIV infection26,27,28. In recent years, many groups have successfully utilized CRISPR/Cas9 as an alternative screening strategy for HDFs in viral infections29,30,31,32,33. A recently published genome-wide CRISPR/Cas9 screen based on cell survival after IAV infection uncovered a number of new HDFs involved in early IAV infection, but shared few hits with previous RNAi screens. This raises the question of whether CRISPR- and RNAi-based screens are confined to identifying mutually exclusive targets due to technological biases.

To more comprehensively identify IAV-host interactions, we perform pooled genome-wide CRISPR/Cas9 screens and use IAV hemagglutinin (HA) protein expression on the cell surface as a phenotypic readout. We identify an extensive list of IAV HDFs, including new and previously known factors, involved in various stages of the IAV life cycle. We focus on the less understood host factors and discover that loss of WDR7, CCDC115, and TMEM199 results in lysosomal biogenesis and over-acidification of the endo-lysosomal compartments, which blocks IAV entry and increases degradation of incoming virions. We also identify the human 2′-O-ribose cap methyltransferase, CMTR1, as an important host factor for IAV cap snatching and a regulator of cell-autonomous immune surveillance. To link our findings to previously identified IAV HDFs, we devise a new approach, meta-analysis by information content (MAIC), to combine data from diverse sources of unknown quality, in the form of ranked and unranked gene lists. MAIC performs better than other algorithms for both synthetic data and in an experimental test, and provides a comprehensive ranked list of host genes necessary for IAV infection.

Influenza host dependency factors identified in a CRISPR screen

To identify HDFs that are necessary for IAV infection, we performed two independent rounds of pooled genome-wide CRISPR screens in A549-Cas9 cells using the well-established AVANA4 lentivirus library34, which encodes 74,700 sgRNAs targeting 18,675 annotated protein-coding genes (4 sgRNAs per gene), as well as 1000 non-targeting sgRNAs as controls. On day 9 post-transduction with the library, we infected ~300 million puromycin-resistant cells with influenza A/Puerto Rico/8/1934 (PR8) virus at a multiplicity of infection (MOI) of 5 for 16 h. Cells were sorted by FACS into different bins based on their levels of surface viral HA (Fig. 1a), which should reflect the efficiency of the viral life cycle from entry to HA export. Roughly 5% of the cells were sorted into the uninfected bin (low HA expression); these were compared to a control population of cells (comprising the mode of HA expression ±20% of the population). Cells that harbor genetic alterations restricting influenza virus replication (i.e., sgRNAs that target host genes important for infection) are expected to be enriched in the uninfected bin.
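To make the bin-comparison logic concrete, the following minimal sketch (in Python; this is not the authors' analysis pipeline, and all read counts are invented for illustration) scores per-sgRNA enrichment in the uninfected bin relative to the control bin, uses non-targeting guides as an empirical null, and combines guide-level p-values per gene, here with Fisher's method as one possible combination rule:

```python
import numpy as np
import pandas as pd
from scipy import stats

# One row per sgRNA: read counts from the two sorted bins (hypothetical).
counts = pd.DataFrame({
    "gene":       ["WDR7"] * 4 + ["NT"] * 4,   # "NT" = non-targeting controls
    "uninfected": [900, 750, 40, 820, 55, 60, 48, 52],
    "control":    [100, 120, 90, 110, 50, 55, 60, 45],
})

# Normalize to reads per million, then take a pseudocounted log2 ratio.
cpm = counts[["uninfected", "control"]].div(
    counts[["uninfected", "control"]].sum(), axis=1) * 1e6
counts["lfc"] = np.log2((cpm["uninfected"] + 1) / (cpm["control"] + 1))

# Empirical p-value per sgRNA: fraction of non-targeting guides with an
# equal or greater log2 fold change (a real screen has ~1000 of them).
null = counts.loc[counts.gene == "NT", "lfc"].to_numpy()
counts["p_emp"] = [(null >= l).mean() for l in counts["lfc"]]

# Combine sgRNA-level evidence per gene; Fisher's method is one option.
gene_p = counts.groupby("gene")["p_emp"].apply(
    lambda p: stats.combine_pvalues(np.clip(p, 1e-6, 1.0), method="fisher")[1])
print(gene_p)
```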
For analysis of the screen data, we combined the empirical p-values of individual sgRNAs, summing the evidence in support of over-representation of sgRNAs targeting a given gene. This method optimizes the discovery power of the screen and is more reproducible than two other common analysis approaches, STARS and MAGeCK (Supplementary Fig. 1, Supplementary Note 1). In this initial screen, sgRNAs targeting 41 genes were significantly enriched in the uninfected bin relative to the control bin (FDR < 0.05) (Supplementary Data 1, Fig. 1b).

Fig. 1: Genome-wide CRISPR screens identify IAV host dependency factors. a Schematic of the genome-wide CRISPR/Cas9 screening strategy. b, c Spatial distribution of CRISPR knockout signals on the genome for the primary and secondary screens. b Gene-level p-values in the primary screen. A selection of top hits is highlighted. c Gene-level FDRs for top hits in the secondary screen. The core set of 121 robust hits (FDR < 0.05) is highlighted in red. d Representation of shared information content between each data source after MAIC analysis. The size of each data source block is proportional to the summed information content (MAIC scores) of the input list. Lines are colored according to the dominant data source. Data source categories share the same color; the largest categories and data sources are labeled (see supplementary information for full source data). The CRISPR screen performed in this study is labeled as CRISPR (viral production). e Example of validation of the MAIC screen using synthetic data (see methods). Y-axes show the overlap ratio compared to a known truth (see methods; narrow line = average of 100 replicates; shading = 95% confidence interval). MAIC consistently out-performs existing methods (robust rank aggregation, RRA; vote counting, VC) when presented with realistic inputs. The example shown is for a mixture of ranked and unranked data sources, and moderate variation in data quality (heterogeneity) between data sources. See Supplementary Fig. 2 for the full evaluation. f Experimental validation of MAIC (computed without CRISPR data) against an unseen gold standard (CRISPR screen). The plot shows the number of overlaps with the top 1000 hits in the CRISPR screen for a given position in each ranked dataset. g Contribution of different data categories to the predictive value of MAIC. As in (f), the graph shows the number of overlaps with the top 1000 CRISPR screen hits. Results are shown with all RNAi data from previous studies removed from the input set, and with all proteomic or protein interaction data removed. The results of the systematic meta-analysis by Tripathi et al.36 are shown for comparison.

To validate identified hits (and thus reduce false positives) and recover additional hits (minimizing false negatives), we performed a secondary pooled screen targeting the top 1000 ranked hits from the primary screen, but with 10 sgRNAs per gene. We re-identified 37 out of the 41 hits that scored with FDR < 0.05 in the primary screen, and recovered additional hits that had failed to meet the original FDR cutoff (Fig. 1c). Combining data from the primary and secondary screens yielded a final list of 121 genes (FDR < 0.05) whose roles have been shown or predicted in different stages of the IAV life cycle (Fig. 2, Supplementary Data 1). Amongst these, 78 genes showed significant enrichment of two or more sgRNAs in the uninfected bin, while 43 genes had enrichment of only a single sgRNA.
We included the latter in our analysis in order to maximize the discovery power for subsequent validation, and because many of these genes have also been identified in previous RNAi screens and proteomics studies.

Fig. 2: CRISPR screen hits and their predicted roles in the IAV life cycle. Diagram showing all 121 hits from our CRISPR screen (FDR < 0.05) and their predicted roles in the IAV life cycle. Hits are marked by a colored dot if they had also been identified in previous RNAi screens (magenta), proteomic studies (black) or a CRISPR screen (green). Hits unique to our CRISPR screen are highlighted in yellow.

The significantly enriched genes from the primary and secondary screens included both known HDFs from previous RNAi screens, such as ATPase subunits, components of the vesicular transport pathway, signal recognition particles, and genes involved in sialic acid synthesis, and previously unknown ones, such as components of the TRAPP and TREX2 complexes, genes involved in protein prenylation, and co-factors of V-type ATPases. Unlike the previous RNAi screens, we found relatively few ribosomal subunits and genes involved in translation and splicing among our top-ranked hits, suggesting that CRISPR-mediated editing of essential host factors potently reduces cell survival, such that cells bearing these edits did not survive the 8 days between editing and influenza virus challenge.

Meta-analysis by information content (MAIC)

To incorporate these findings into the existing evidence base, which includes annotated pathways23,35, genetic perturbation screens16,33,36, and protein–protein interactions, we devised the MAIC approach to evaluate the information content in each data source by comparing it to other data sources. MAIC takes a simple and intuitive approach to quantify the information content in a given list of genes, for example the results of a single experiment, by comparing it to the results of other experiments that might reasonably be expected to find some of the same genes. In this way MAIC produces a weighting factor for each experiment, and then calculates a score for each gene. Our analysis then produced a final ranked list of HDFs based on this score, which summarizes the composite evidence from all input sources that a particular gene is involved in IAV infection (Supplementary Data 2). We found that our CRISPR/Cas9 screen provides the most information (11.4% of the total information content) when compared with individual genetic perturbation screens and proteomics studies performed in the past (Fig. 1d).

We performed extensive in silico validation of the MAIC method using synthetic data designed to test MAIC when presented with combinations of ranked and unranked data, varying levels of noise, and varying levels of heterogeneity of data quality in the input data sets. We compared MAIC to two existing approaches: (1) a simple count of the number of occurrences of each gene in each data set, and (2) robust rank aggregation (RRA), a powerful method for aggregating ranked data (such as screen results) that does not allow for the inclusion of unranked data (such as a pathway or coexpression cluster)37. MAIC performs better than both methods under most conditions; in the absence of noise (when every single item in the input dataset is correct), MAIC performs similarly to the other methods (Fig. 1e, Supplementary Fig. 2).
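To illustrate the mutual-weighting idea, the sketch below implements a deliberately simplified MAIC-like loop over unranked gene lists only; the published MAIC implementation additionally handles ranked lists and data-source categories, so this should be read as a toy model of the principle rather than the algorithm itself. All input lists are hypothetical:

```python
# Simplified sketch of a MAIC-style mutual-weighting loop for unranked
# gene lists. Weights are renormalized each pass so that scores converge.
def maic_scores(sources, n_iter=50):
    """sources: dict mapping source name -> set of gene symbols."""
    weights = {s: 1.0 for s in sources}
    gene_score = {}
    for _ in range(n_iter):
        # A gene's score is the summed weight of every source listing it.
        gene_score = {}
        for s, genes in sources.items():
            for g in genes:
                gene_score[g] = gene_score.get(g, 0.0) + weights[s]
        # A source's weight is the mean score of its genes, so sources
        # whose genes are corroborated elsewhere are up-weighted.
        weights = {s: sum(gene_score[g] for g in genes) / len(genes)
                   for s, genes in sources.items()}
        top = max(weights.values())
        weights = {s: w / top for s, w in weights.items()}
    # Returns scores from the final pass, plus the learned source weights.
    return gene_score, weights

scores, weights = maic_scores({
    "crispr_screen": {"WDR7", "CCDC115", "TMEM199", "CMTR1", "SLC35A1"},
    "rnai_screen":   {"ATP6AP1", "COPA", "NXF1", "SLC35A1"},
    "proteomics":    {"CMTR1", "COPA", "NXF1", "WDR7"},
})
print(sorted(scores, key=scores.get, reverse=True)[:3])
```

The renormalization by the top-weighted source is one arbitrary design choice for keeping the iteration bounded; the key behavior is that genes corroborated by multiple, mutually supported sources rise in rank.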
In order to provide an experimental test of the MAIC algorithm, we used MAIC to combine relevant data sources from the literature (Supplementary Note 2), with the exception of the new data from our CRISPR screen. We then used the CRISPR screen as an unseen "gold standard" against which to test other siRNA screens and meta-analyses. Both MAIC and RRA successfully prioritize highly-ranked genes in the top 50 CRISPR hits, but RRA fails to identify hits below this level (Fig. 1f). In contrast, a simple count of the occurrences of each gene in each category (vote counting) fails to prioritize the top candidates, but is more effective at identifying many genes in the top 1000 ranks. This is in part due to the dominance of protein interaction data in the MAIC results (Fig. 1g). The MAIC algorithm outperforms both RRA and vote counting, and prioritizes more genes that overlap with the CRISPR results than any other data source (RNAi screens and protein interaction studies), including a previous gene-level meta-analysis36 (Fig. 1g). MAIC thus identifies a unique set of host factors based on multiple lines of evidence, distinct from the ranked list of any individual screen (Supplementary Data 2). Ribosomal genes feature heavily in the MAIC results because of strong support from several datasets. As expected, genes with a variety of other functions, including host antiviral response, RNA processing, and proteasome function, are also highly supported. Gene set enrichment analysis (GSEA) highlights afferent signaling pathways, including Toll-like receptor signaling (KEGG; Supplementary Data 2), and EGF and MAPK signaling and related pathways (BioCarta; Supplementary Data 2).

Validation of influenza host factor dependencies

We selected 28 genes for further validation based on their high ranking in our screen and the absence of previous implication in IAV infection. A549 cells were transduced with the top 2 sgRNAs from the secondary screen (based on the fold change of each sgRNA in the uninfected bin relative to the control bin), and genome editing was confirmed by sequencing of the predicted target sites. Polyclonal KO cells were then infected with influenza A PR8 virus at MOI 5 on day 9 post-sgRNA transduction and stained for surface HA. We found 21 out of the 28 polyclonal KO cell lines to be partially protected against IAV infection for both sgRNAs (Supplementary Fig. 3), while three polyclonal KO cell lines were protected for only one of the two tested sgRNAs. The degree of protection varied between the cell lines despite their sgRNAs having comparable genome editing efficiency (Supplementary Fig. 4), suggesting that the roles of these genes differ depending on the cell context. Deletion of four of the hits, WDR7, CCDC115, TMEM199, and CMTR1, conferred strong protection against PR8 virus infection in both A549 cells and normal human lung fibroblasts (NHLFs) (>40% reduction in the percentage of HA-positive cells) (Fig. 3a, b). To test if the four genes are required for efficient virus production, we infected WDR7, CCDC115, TMEM199, and CMTR1 polyclonal KO cells with H1N1 PR8 virus and H3N2 Udorn virus at MOI 0.1 and monitored virus production at 24, 48, and 72 h post-infection by plaque assay. Virus production peaked at 48 h post-infection for PR8 virus and 24 h post-infection for Udorn virus. At these time points, we observed a >2 log reduction in virus titer in all four polyclonal KO cell lines for PR8 virus and a >1 log reduction for Udorn virus compared to wild type cells (Fig. 3c).
The greater magnitude of reduction in viral infection observed at low MOI is likely due to the cumulative effects of multiple replication cycles. We also compared the phenotype of knocking out WDR7, CCDC115, TMEM199, and CMTR1 to that of knocking out SLC35A1, a known IAV host factor that was highly ranked both in our screen and in a previously published CRISPR/Cas9 screen33. SLC35A1 is a CMP-sialic acid transporter that is required for surface sialic acid expression and IAV entry. We observed similar reductions in the percentage of HA-positive cells and in the virus titer produced by infected WDR7, CCDC115, TMEM199, and CMTR1 KO cells compared to SLC35A1 KO cells (Supplementary Fig. 5A-E). The degree of protection conferred by CRISPR deletion of these genes is also consistent with previously published results for polyclonal SLC35A1 KO cells33, suggesting that these genes may indeed serve as important IAV HDFs.

Fig. 3: Validation of screen hits in A549 cells and normal human lung fibroblasts (NHLF). a A549 cells and b primary normal human lung fibroblasts were transduced with either gene-specific or non-targeting sgRNA, followed by infection with PR8 virus at MOI 5 for 16 h. The y-axis shows the percentage of HA-positive cells normalized to non-targeting sgRNA. Error bars represent standard deviations from three independent experimental replicates. c Virus titer in plaque-forming units (PFU)/ml at 24, 48, and 72 h post-infection with either PR8 (left) or Udorn (right) virus at MOI 0.1. Supernatant used for the plaque assay was collected from infected A549 cells transduced with either gene-specific or non-targeting sgRNA. Error bars represent standard deviations from three independent experimental replicates. d Fold change in viral HA RNA level relative to GAPDH measured by qRT-PCR. A549 cells transduced with either gene-specific or non-targeting sgRNA were infected with H1N1 California/2009, H1N1 New Caledonia/1999 or H5N1 Vietnam/2004-PR8-recombinant IAV strains at high MOI for 16 h. Fold change was normalized to the HA RNA level in A549 cells transduced with non-targeting sgRNA. A549 cells transduced with SLC35A1 sgRNA were used for comparison. Error bars represent standard deviations from three independent experimental replicates. e A549 cells transduced with either gene-specific or non-targeting sgRNA were infected with VSV-GFP virus at MOI 1 for 16 h. The y-axis shows the percentage of GFP-positive cells normalized to non-targeting sgRNA. Error bars represent standard deviations from three independent experimental replicates. f Proliferation curve of transduced A549 cells up to 9 days post-sgRNA transduction. Error bars represent standard deviations from three independent experimental replicates. ****P = 0.0001, ***P < 0.001 and *P < 0.05, by one-way ANOVA test.

Since PR8 and Udorn are lab-adapted IAV strains, we also tested the infectivity of WDR7, CCDC115, TMEM199, and CMTR1 polyclonal KO cells with more recent clinical isolates of IAV, including the H1N1 New Caledonia/1999 and pandemic California/2009 strains, as well as an H5N1 IAV strain (Vietnam/2004) that has been re-engineered with PR8 internal genes. Similar to our previous observations with PR8 and Udorn virus, we showed by qRT-PCR that A549 cells lacking WDR7, CCDC115, TMEM199 or CMTR1 again displayed lower levels of IAV HA RNA at 16 h post-infection compared to control cells (Fig. 3d).
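The qRT-PCR fold changes reported throughout (e.g., Fig. 3d) are relative quantifications; assuming the standard 2^-ΔΔCt method (the paper does not spell out its exact calculation) with hypothetical Ct values, the arithmetic is:

```python
# Sketch of relative quantification by 2^-DDCt; all Ct values below are
# invented for illustration, not taken from the study.
def fold_change(ct_target_ko, ct_ref_ko, ct_target_nt, ct_ref_nt):
    """Target (viral HA) vs reference (GAPDH), KO vs non-targeting sgRNA."""
    d_ct_ko = ct_target_ko - ct_ref_ko   # delta-Ct in knockout cells
    d_ct_nt = ct_target_nt - ct_ref_nt   # delta-Ct in non-targeting control
    return 2 ** -(d_ct_ko - d_ct_nt)     # delta-delta-Ct

# If HA amplifies ~5 cycles later in KO cells at equal GAPDH levels:
print(fold_change(27.0, 18.0, 22.0, 18.0))   # ~0.03, i.e., a ~97% reduction
```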
To confirm that the observed phenotype is not due to off-target effects, we expressed codon-mutated versions of these genes in the polyclonal KO cells and observed restoration of normal IAV infection levels (Supplementary Fig. 6A, Supplementary Fig. 6B) in all the KO cells with the exception of WDR7 KO cells, which were only partially rescued. We speculated that this could be due to the large size of the WDR7 protein (173 kDa), which makes it difficult to express. To confirm that the phenotype observed for WDR7 is not due to off-target effects, we tested two additional sgRNAs against WDR7 and observed similar reductions in IAV infection rate (Supplementary Fig. 6C). To test if the functions of these genes extend to other viruses, we infected the four polyclonal KO cell lines with vesicular stomatitis virus (VSV) and showed that WDR7, CCDC115, and TMEM199, but not CMTR1, were also required for efficient VSV infection (Fig. 3e). To test if these genes are essential for cell survival, we monitored the proliferation rate of A549 cells up to 9 days post-transduction with WDR7, CCDC115, TMEM199, and CMTR1 sgRNAs. We observed no significant difference in proliferation rate between these cells and those transduced with non-targeting sgRNA (Fig. 3f). In contrast, the majority of the cells transduced with sgRNA against ATP6V1A, a V-type ATPase subunit and a known IAV host factor, died by day 7 post-transduction. Annexin V staining also confirmed a similar number of live cells between A549 cells transduced with WDR7, CCDC115, TMEM199, and CMTR1 sgRNAs and non-targeting sgRNA on day 9 post-transduction (Supplementary Fig. 6D). Thus, the four identified genes were critical for IAV infection but not observed to impact cell viability.

WDR7, CCDC115, TMEM199, and CMTR1 are involved in early infection

To better understand how loss of these genes conferred resistance against IAV infection, we first determined which steps of the IAV life cycle they play a role in. We found significant reductions in viral nucleoprotein (NP) RNA and protein levels at 4 h post-infection in WDR7, CCDC115, TMEM199, and CMTR1 polyclonal KO cells compared to wild type cells, suggesting that all four genes are important during early infection (Fig. 4a)38. To test if the genes are required for IAV entry, we infected polyclonal KO cells with MLV-GFP retrovirus pseudotyped with H1N1 PR8 HA and NA proteins. This allows the retrovirus to enter the cell in an HA-dependent manner that is akin to IAV entry39. We then monitored GFP expression in the cells 48 h post-infection. We found that WDR7, CCDC115, and TMEM199 KO cells, but not CMTR1 KO cells, had a lower percentage of GFP-expressing cells compared to wild type (Fig. 4b). In contrast, all four polyclonal KO cell lines had GFP expression comparable to wild type cells when infected with an MLV-GFP retrovirus pseudotyped with amphotropic MLV envelope protein, suggesting that WDR7, CCDC115, and TMEM199 are specifically required for IAV entry in an HA/NA-dependent manner. We next asked whether these three genes act at the step of virus attachment to the cell surface membrane. To test this, we incubated polyclonal KO cells with PR8 virus at 4 °C for 30 min (to prevent viral fusion), followed by washing and staining for surface-bound HA. We found no difference in HA staining between the KO and wild type cells, suggesting that the genes are not essential for virus attachment (Fig. 4c).
This is also supported by the observation that loss of WDR7, CCDC115, and TMEM199 did not affect expression of cell surface sialic acids (Fig. 4c), which serve as entry receptors for IAV40. In contrast, A549 cells that had undergone CRISPR deletion of SLC35A1 had reduced levels of both surface sialic acid and bound virions.

Fig. 4: WDR7, CCDC115, TMEM199 and CMTR1 are involved in early infection. a Flow cytometry and fluorescence in situ hybridization (FISH) for influenza PR8 NP protein and RNA. A549 cells were transduced with either gene-specific or non-targeting sgRNA and infected with PR8 virus for 4 h. Infected cells were stained with anti-H1N1 NP antibody or NP FISH RNA probes. Bar graphs show quantification of the percentage of NP-positive cells for both protein and RNA (cells with positive FISH staining) normalized to non-targeting sgRNA. Scale bar = 20 μm. Error bars represent standard deviations from three independent experimental replicates for NP protein staining and four randomly chosen frames for NP RNA FISH. b A549 cells were transduced with either gene-specific or non-targeting sgRNA and infected with MLV-GFP reporter virus pseudotyped with either influenza PR8 HA and NA or MLV envelope proteins. The y-axis shows the percentage of GFP-positive cells normalized to non-targeting sgRNA. Error bars represent standard deviations from three independent experimental replicates. c Histograms showing levels of surface-bound HA (left) and sialic acid (right). A549 cells transduced with either gene-specific or non-targeting sgRNA were incubated with PR8 virus for 30 min at 4 °C to monitor the level of surface-bound HA, or stained with fluorescein-labeled Sambucus nigra lectin for 1 h at 4 °C for surface sialic acid. Cells transduced with sgRNA targeting SLC35A1 were used as a positive control. Histograms shown are representative of three independent experimental replicates. ****P = 0.0001, ***P < 0.001 and **P < 0.01, by one-way ANOVA test.

WDR7, CCDC115, and TMEM199 regulate endo-lysosomal pH

Recent studies have reported WDR7, CCDC115, and TMEM199 as factors associated with mammalian V-type ATPases41,42, but their functions remain unclear. To test if these genes are required for IAV entry by regulating endo-lysosomal acidification, we stained WDR7, CCDC115, TMEM199, and CMTR1 polyclonal KO cells with lysotracker red and fluorescently labeled anti-Rab7 and anti-LAMP1 antibodies. Unexpectedly, we observed an increase in lysotracker red staining in WDR7, CCDC115, and TMEM199 KO cells, which co-stained partly with Rab7 (late endosome) and LAMP1 (lysosome) (Fig. 5a). To determine whether the increase in lysotracker red staining was solely due to expansion of the endo-lysosomal compartments or to an actual reduction in pH, we also stained the cells with the more pH-sensitive lysosensor blue dye and Oregon Green dextran. As with lysotracker red, we observed an increase in lysosensor blue staining and a reduction in Oregon Green signal (Oregon Green fluorescence becomes quenched at lower pH) in WDR7, CCDC115, and TMEM199 KO cells, indicating that both endo-lysosomal expansion and a reduction in pH were taking place (Supplementary Fig. 7A). A similar increase in lysotracker staining was observed in NHLFs transduced with WDR7, CCDC115, and TMEM199 sgRNAs (Supplementary Fig. 7B). We next asked if this reduction in endo-lysosomal pH could be reversed in WDR7, CCDC115, and TMEM199 polyclonal KO cells by treating the cells with bafilomycin A (BafA), a known inhibitor of V-type ATPase activity43 and IAV infection44.
We observed a reduction in lysotracker staining in KO cells treated with BafA (Fig. 5b), suggesting that these genes function upstream of V-type ATPases. However, we found that BafA treatment, even at low concentrations, further protected the KO cells against IAV infection (Fig. 5c, Supplementary Fig. 8A). We speculated that this may be due to disruption of the fine pH gradient in the endocytic pathway that is required for efficient IAV uncoating and replication45. WDR7, CCDC115, and TMEM199 appeared to play non-redundant roles, as over-expression of WDR7 in CCDC115 or TMEM199 polyclonal KO cells, and vice versa, did not rescue the IAV infection rate (Supplementary Fig. 8B). Over-expression of WDR7, CCDC115, and TMEM199 in wild type A549 cells also had no effect on lysotracker staining or IAV infectivity, suggesting that their effects on function may already be saturated at the steady state (Supplementary Fig. 8C, Supplementary Fig. 8D).

Fig. 5: WDR7, CCDC115 and TMEM199 regulate endo-lysosomal pH. a Immunofluorescence of A549 cells transduced with either gene-specific or non-targeting sgRNA and stained with lysotracker red, anti-LAMP1 and anti-Rab7 antibodies. Scale bar = 20 μm. b Lysotracker red staining of A549 cells transduced with either gene-specific or non-targeting sgRNA. Cells were mock-treated or treated with 100 nM bafilomycin A (BafA). Scale bar = 20 μm. c A549 cells transduced with either gene-specific or non-targeting sgRNA were treated with different concentrations of BafA followed by PR8 virus infection. Error bars represent standard deviations from two independent experimental replicates. d A549 cells transduced with either gene-specific or non-targeting sgRNA were fractionated into membrane and cytosolic components. The two fractions were then subjected to SDS-PAGE and western blotting using antibodies against subunit A of the V1 domain and subunit D of the V0 domain as a measure of V-ATPase assembly. Western blotting was also performed for HSP90 (cytosolic) and COX-IV (membrane) as positive controls. e Cytosolic and nuclear proteins were extracted from A549 cells transduced with either gene-specific or non-targeting sgRNA using the NE-PER Nuclear and Cytoplasmic extraction kit. Samples in each fraction were subjected to SDS-PAGE and western blotting using antibodies against TFEB. Western blotting for TATA-binding protein (TBP) was used as a positive control. f Whole cell lysates were extracted from A549 cells transduced with either gene-specific or non-targeting sgRNA. Samples were subjected to SDS-PAGE and western blotting using antibodies against TFEB and phospho-TFEB (Ser211).

To understand how loss of WDR7, CCDC115, and TMEM199 resulted in expansion and over-acidification of the endo-lysosomal compartments, we extracted cytosolic and membranous proteins from the polyclonal KO cells and measured the relative abundance of the cytosolic V1A and transmembrane V0D subunits of the V-type ATPases by western blot46. We observed an enrichment of the V1A subunit in the cytosolic fraction of WDR7, CCDC115, and TMEM199 KO cells and a corresponding reduction in the membrane fraction, indicating that at least a subset of V-type ATPases are in a disassembled and less active state in these cells (Fig. 5d). This was unexpected, as inactivation of the V-type ATPase should in theory lead to less endo-lysosomal acidification.
It has been reported that prolonged treatment of cells with lysosomotropic compounds such as chloroquine and tamoxifen can lead to increased lysotracker red staining due to lysosome adaptation and biogenesis caused by nuclear translocation of transcription factor EB (TFEB)47,48 (Supplementary Fig. 9A). To test if the absence of WDR7, CCDC115 or TMEM199 leads to TFEB translocation and lysosomal biogenesis, we extracted cytosolic and nuclear proteins from WDR7, CCDC115, and TMEM199 polyclonal KO cells and measured the relative abundance of TFEB in each fraction. We observed an enrichment of TFEB in the nuclear fraction of WDR7, CCDC115, and TMEM199 KO cells but not in CMTR1 KO or wild-type cells (Fig. 5e). There was also de-phosphorylation of TFEB at Ser211 in WDR7, CCDC115, and TMEM199 KO cells, which is required for TFEB dissociation from the lysosomal surface and subsequent nuclear translocation49 (Fig. 5f). Sequencing of bulk RNA from the KO cells also showed an increase in the expression of lysosomal genes including ASAH1, NPC2, cathepsin B, and cathepsin L (Supplementary Fig. 9B)50. These observations led us to conclude that the loss of WDR7, CCDC115, and TMEM199 results in V-type ATPase inactivation, which in turn triggers compensatory lysosomal adaptation and biogenesis. Since bafilomycin A treatment reduced lysotracker red staining in WDR7, CCDC115, and TMEM199 KO cells, we speculated that different isoforms of V-type ATPase or other ATPases (P- or F-type) may play a compensatory role in these cells when one or more V-type ATPases become inactivated.

Loss of WDR7, CCDC115 or TMEM199 prevents IAV nuclear entry

While it is known that an acidic endo-lysosomal environment is required for IAV entry51, we showed that expansion of the endo-lysosomal compartment and reduction in pH also block IAV infection in WDR7, CCDC115, and TMEM199 KO cells. To assess the functional effect of lysosomal adaptation, we incubated WDR7, CCDC115, and TMEM199 polyclonal KO cells with DQ-Green BSA, a derivative of bovine serum albumin (BSA) that is heavily labeled with the green fluorescent BODIPY FL dye. The dye is normally self-quenched but produces a bright fluorescence when DQ-Green BSA is cleaved by hydrolases in the acidic endo-lysosomal compartments52. We found that WDR7, CCDC115, and TMEM199 polyclonal KO cells exhibited brighter DQ-BSA staining than CMTR1 KO cells and wild type cells, and this staining co-localized with the increased lysotracker red signal. This suggested that there is increased endo-lysosomal trafficking and degradation of incoming endocytic cargo in cells lacking WDR7, CCDC115 or TMEM199 (Fig. 6a).

Fig. 6: Loss of WDR7, CCDC115 and TMEM199 prevents IAV nuclear entry. a Immunofluorescence staining of A549 cells transduced with either gene-specific or non-targeting sgRNA with DQ-Green bovine serum albumin (BSA) and lysotracker red. Cells were treated with 20 μg/ml DQ-BSA and lysotracker red for 1 h at 37 °C followed by fixation to monitor uptake and hydrolysis of the DQ-BSA. DQ-BSA staining was quantified by dividing the total fluorescence intensity by the number of cells in each frame. The bar graph shows the relative fluorescence intensity between different sgRNAs. Scale bar = 20 μm. Error bars represent standard deviations from three randomly chosen frames. b Immunofluorescence staining of A549 cells transduced with either gene-specific or non-targeting sgRNA with FITC-conjugated anti-H1N1 NP antibody. Cells were infected with PR8 virus at MOI 500 for 2 h at 37 °C. Scale bar = 15 μm.
c Fold change in viral NP RNA level relative to GAPDH measured by qRT-PCR. A549 cells transduced with gene-specific or non-targeting sgRNA were infected with either influenza H3N2 Udorn or X:31 virus at MOI 5 for 16 h. Fold change was normalized to the viral NP RNA level in A549 cells transduced with non-targeting sgRNA. Error bars represent standard deviations from three independent experimental replicates. ***P < 0.001 and *P < 0.05, by one-way ANOVA test.

To test if incoming IAV virions are trafficked to and degraded in the endo-lysosomal compartments, we infected polyclonal KO cells with PR8 virus at MOI 500 and stained for intracellular NP protein at 2 h post-infection. In wild type cells, NP staining was bright and primarily observed in the nuclei, where viral replication takes place. In contrast, NP staining was largely absent from the nuclei of WDR7, CCDC115, and TMEM199 KO cells and was instead concentrated in punctate structures near the peri-nuclear regions (Fig. 6b). The NP punctate structures co-stained partly with LAMP1, suggesting that at least a sub-fraction of the incoming virions are retained in the lysosomes. NP staining was also reduced in CMTR1 KO cells but, as in wild type cells, was found in the nuclei. Taken together, these results suggested that incoming virions are blocked prior to nuclear entry and are likely retained in the endo-lysosomal compartments due to a lack of viral fusion.

Although an acidic endo-lysosomal environment is required for viral fusion, studies have shown that exposure to pH lower than the optimal fusion pH may cause HA inactivation and coagulation of viral ribonucleoproteins (RNPs)53,54,55. In addition, perturbation of V-type ATPase activity and localization can disrupt the pH gradient from early to late endosomes56, which IAV requires for efficient uncoating57. To test if the block in IAV infection is due to a sub-optimal fusion pH, we compared the infectivity of two different H3N2 viral strains in the polyclonal KO cells. The X:31 strain has been shown to initiate membrane fusion at a lower pH and is more acid-stable than the Udorn strain53. We thus hypothesized that X:31 virus would be less affected by the lower endo-lysosomal pH in WDR7, CCDC115, and TMEM199 polyclonal KO cells than Udorn virus. Consistent with this, we observed comparable viral NP RNA levels in KO and wild type cells at 16 h post-infection with X:31 virus. In contrast, KO cells had significantly lower viral RNA levels than wild type cells when infected with Udorn virus (Fig. 6c).

CMTR1 is required for IAV cap snatching

CMTR1 was recently identified as the human 2′-O-ribose cap methyltransferase58,59, which adds a methyl group to the 5′ 7-methylguanosine cap of eukaryotic mRNA to form the cap1 structure (methylation of the 2′-O ribose of the first transcribed nucleotide). Since 2′-O-methylation of the mRNA cap has been known to be important for IAV cap snatching60,61, we hypothesized that loss of CMTR1 would inhibit viral transcription by preventing efficient cap snatching. To test this, we transfected WDR7, CCDC115, TMEM199, and CMTR1 polyclonal KO cells with a vRNA luciferase reporter construct carrying the PR8 promoter and UTR regions, as well as plasmids expressing the PR8 polymerase subunits PA, PB1 and PB262. Twenty-four hours post-transfection, the cells were lysed and luciferase activity was measured. Consistent with our hypothesis, we observed lower luciferase activity in CMTR1 KO cells but not in WDR7, CCDC115, TMEM199 KO cells or wild-type cells (Fig. 7a).
Fig. 7: Loss of CMTR1 inhibits viral replication and up-regulates anti-viral genes. a A549 cells transduced with gene-specific or non-targeting sgRNA were transfected with PR8 PA, PB1, PB2, and NP plasmids together with green-Renilla and luciferase reporter plasmids for 24 h. Luciferase activity was measured and normalized to A549 cells transduced with non-targeting sgRNA. Error bars represent standard deviations from three independent experimental replicates. b A549 cells transduced with CMTR1 or non-targeting sgRNA were infected with PR8 virus, crosslinked in vivo, lysed and immunoprecipitated with either anti-eIF4E or anti-IgG antibody to extract capped cellular and viral RNA. qRT-PCR was performed to monitor the relative abundance of PR8 NP RNA. (Left): fold change in NP RNA levels normalized to the anti-IgG non-targeting sgRNA pulldown. (Right): fold change in NP relative to GAPDH RNA levels, normalized to non-targeting sgRNA. Error bars represent standard deviations from three independent experimental replicates. c Fold change in IFN-β relative to GAPDH mRNA levels measured by qRT-PCR. A549 cells transduced with CMTR1 or non-targeting sgRNA were either mock-treated, infected with PR8 virus or treated with 200 U/ml IFN-β for 16 h. Fold change was normalized to A549 cells transduced with non-targeting sgRNA. Error bars represent standard deviations from three independent experimental replicates. d Gene Ontology (GO) enrichment analysis showing the top 10 up-regulated gene categories in A549 cells transduced with CMTR1 sgRNA versus non-targeting sgRNA after PR8 virus infection. IFN-related and antiviral gene categories are colored in red. e Fold change in IFN-β relative to GAPDH mRNA levels measured by qRT-PCR. A549 cells were transduced with CMTR1 sgRNA alone or together with sgRNA targeting RIG-I, MAVS or IRF3 and infected with PR8 virus. Fold change was normalized to cells transduced with non-targeting sgRNA. Error bars represent standard deviations from three independent experimental replicates. f A549 cells transduced with gene-specific or non-targeting sgRNA were pre-treated with 0, 1, 5 or 10 nM of baloxavir followed by PR8 virus infection. The y-axis shows the percentage of HA-positive cells normalized to untreated cells. Error bars represent standard deviations from three independent experimental replicates. ****P = 0.0001, **P < 0.01, by one-way ANOVA test.

To test the hypothesis that CMTR1 is involved in IAV cap snatching, we infected wild type and CMTR1 polyclonal KO cells with PR8 virus and immunoprecipitated the cell lysate with anti-eIF4E antibody to pull down capped viral and host RNA63. The relative abundance of pulled-down NP RNA in CMTR1 KO and wild type cells was then measured by qRT-PCR. Our results showed that while there was no difference in the amount of NP RNA pulled down by the anti-IgG control, significantly more NP RNA was pulled down by anti-eIF4E antibody in wild type cells than in CMTR1 KO cells, suggesting that the latter have less capped viral RNA (Fig. 7b). Since eIF4E binds to both cap0 and cap1 RNA, while IAV only efficiently cap snatches cap1 RNA, we normalized the amount of pulled-down NP RNA (cap1) against GAPDH (cap0 + cap1). We then compared this ratio between wild type and CMTR1 KO cells in anti-eIF4E and anti-IgG pulldown samples. We found that CMTR1 KO cells had a lower NP:GAPDH ratio than wild type cells (fold change = 0.29) in anti-eIF4E pulldown samples but not in the anti-IgG control (Fig. 7b).
Importantly, we found this difference to be more pronounced than that observed for un-precipitated input samples (fold change = 0.53), suggesting that the difference between CMTR1 KO cells and wild type cells is not simply due to inhibition of viral replication. Together, these observations led us to conclude that CMTR1 is required for efficient IAV cap snatching.

Loss of CMTR1 increases expression of anti-viral genes

Although cap1 is present on most eukaryotic mRNAs, its precise functions are poorly understood, as the lack of CMTR1 does not seem to have a significant impact on global protein translation58. Recent studies have proposed that 2′-O-ribose methylation of the mRNA cap acts as a mechanism by which the cell differentiates between self and non-self RNA64,65, as siRNA knockdown of CMTR1 was shown to elevate the type I IFN response in A549 cells in the absence of additional stimulus. We hypothesized that the loss of CMTR1 may block IAV infection both by preventing efficient cap snatching and by increasing cell-autonomous antiviral responses. To test the latter, we measured the transcript levels of the antiviral cytokine IFN-β in CMTR1 polyclonal KO cells and wild type cells in the presence and absence of PR8 infection by qRT-PCR. Interestingly, we observed an increase in IFN-β expression in CMTR1 KO cells, but only when they were infected with PR8 virus (Fig. 7c), despite the lower level of viral NP RNA detected in CMTR1 KO cells compared to wild type cells (Supplementary Fig. 10A). We also observed a lower level of NS1 RNA in infected CMTR1 KO cells, which may help to explain the increased IFN signature in these cells. To confirm our results, we extracted RNA from CMTR1 KO cells and wild type cells with and without PR8 infection and performed bulk RNA sequencing. Principal component analysis (PCA) revealed significant differences in the RNA expression profile between CMTR1 KO cells and wild type cells in the presence of PR8 infection but not at the resting state (Supplementary Fig. 10B). A closer inspection of the differentially expressed genes showed an enrichment of type I and II IFN-related genes as well as other antiviral genes in CMTR1 KO cells (Fig. 7d). To test if the increase in IFN signature is mediated by the RIG-I sensing pathway66, we transduced CMTR1 KO cells with sgRNA targeting RIG-I, MAVS or IRF3 followed by infection with influenza PR8 virus. We found that the increase in IFN-β expression was completely abrogated in the absence of RIG-I, MAVS or IRF3, indicating that an intact RNA sensing pathway is required for the elevated IFN response in CMTR1 KO cells (Fig. 7e).

Synergistic action between CMTR1 knockout and Xofluza

The recently FDA-approved drug Xofluza (baloxavir marboxil) blocks IAV infection by inhibiting the endonuclease activity of the IAV PA subunit and preventing cap snatching9,10. To test for potential interactions between CMTR1 and Xofluza, we pre-treated WDR7, CCDC115, TMEM199, and CMTR1 polyclonal KO cells with increasing doses of baloxavir (the active form of the drug) prior to PR8 virus infection and measured changes in infectivity. While all four KO cell lines and wild type cells displayed a dose-dependent reduction in viral infection rate, CMTR1 KO cells demonstrated the most drastic decrease in infectivity with increasing doses of baloxavir (Fig. 7f).
At a low dose of drug (5 nM), CMTR1 KO cells showed an 85% reduction in infectivity, compared to the 40%, 45%, 23%, and 6% reductions achieved in WDR7, CCDC115, and TMEM199 KO cells and wild-type cells, respectively. This indicated that loss of CMTR1 may confer synergistic protection against IAV infection with Xofluza treatment.

In this study, we identified 121 host genes that are required for IAV replication based on our CRISPR screen. In addition, we devised and applied the MAIC algorithm to put these discoveries in the context of the extensive previous literature on this topic, generating a ranked list of all known HDFs for influenza. Unlike many earlier host factor screens that relied on cell survival as the selection criterion, we adopted a different CRISPR/Cas9 screening strategy, using viral protein expression at an early time point post-infection as our phenotypic readout. Using such a continuous metric allowed us to identify a deeper set of HDFs (121 hits with FDR < 0.05) that play roles from early to late stages of the IAV life cycle. A significant fraction of our hits (77/121) overlapped with those from previous RNAi screens and proteomics studies (Supplementary Data 3), including all six genes that were identified in at least four RNAi screens (ATP6AP1, ATP6V0C, ATP6V0D1, COPA, COPG, and NXF1)25. This differed from a previously published CRISPR/Cas9 cell survival screen, which identified primarily early entry factors and shared few hits with RNAi screens, suggesting that endpoint selection can strongly affect screening outcomes. Importantly, both our CRISPR/Cas9 screen and the recently published one33 identified shared HDFs absent from previous RNAi screens, indicating that our knowledge of IAV-host interactions has not yet reached saturation.

By deriving an information content weighting directly from the overlap between input data sets, the MAIC algorithm provided a systematic meta-analysis of multiple experiments and other data sources of unknown quality, aggregated both ranked and unranked data sources, and outperformed other methods in realistic comparisons using synthetic data and in an experimental comparison using our CRISPR screen as a validation dataset. The ability to combine ranked and unranked data, and to systematically weight input data by a data-driven quality metric, overcomes some limitations of previous work36,37. Interestingly, our meta-analysis highlighted many relevant hits found in the Drosophila RNAi screen16 compared with other RNAi screens. In contrast, we found that there was relatively little relevant information content among a set of human genes under recent positive selection67. The MAIC approach revealed many HDFs supported by CRISPR or siRNA evidence, with strong evidence supporting a direct interaction with viral proteins, but with no existing annotation in the KEGG35 or FluMap68 databases. Strongly supported examples include the PRPF8 gene, which has recently been shown by another group to have a dose-dependent relationship with influenza virus expression69, as well as numerous genes, such as the splicing factor SRSF6 and the elongation factor EEF1A1, which have not, to our knowledge, been studied in influenza virus infection models. MAIC thus highlights genes that are strongly supported by evidence to play important roles in IAV infection, but have not been extensively studied previously.
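One way to formalize such overlap claims is a hypergeometric test against a shared gene universe. The numbers below reuse the figures quoted in this discussion (77 of 121 hits overlapping, and the 1362 previously reported RNAi HDFs as a stand-in for the prior hit list); treating the prior literature as a single set of that size, and using the library's gene count as the universe, are simplifying assumptions for illustration only:

```python
# Sketch: is a 77/121 overlap with prior hits more than chance expects?
from scipy.stats import hypergeom

N = 18675        # gene universe: protein-coding genes in the AVANA4 library
n_prior = 1362   # HDFs reported across previous RNAi screens (assumed set)
n_crispr = 121   # hits in the present CRISPR screen
k = 77           # observed overlap

# Probability of an overlap of at least k under random, independent draws.
p = hypergeom.sf(k - 1, N, n_prior, n_crispr)
print(f"P(overlap >= {k}) = {p:.2e}")
```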
We focused our functional follow-up experiments on genes that were highly ranked in our screen but had not previously been investigated in the context of IAV infection. Three of our top-ranked hits from the CRISPR screens, WDR7, CCDC115, and TMEM199, have been reported as putative V-type ATPase-associated co-factors41,42,70,71, but their functions in mammalian cells, and especially in the context of viral infections, are poorly understood. Here, we provide evidence that all three genes are required for efficient V-type ATPase assembly and IAV entry. Unlike knockout of V-type ATPase subunits, knocking out WDR7, CCDC115 or TMEM199 did not result in loss of cell viability, suggesting that these co-factors could serve as better therapeutic targets.

The unexpected observation that WDR7, CCDC115, and TMEM199 polyclonal KO cells underwent expansion and over-acidification of the endo-lysosomal compartments led us to hypothesize that long-term inhibition of V-type ATPases may cause a compensatory increase in lysosomal function, a phenomenon observed in cells subjected to starvation or prolonged treatment with lysosomotropic compounds47,72. In support of this, we observed increased nuclear translocation of TFEB and increased expression of lysosomal genes in WDR7, CCDC115, and TMEM199 KO cells. A previous study reported that knockdown of RNASEK, another V-type ATPase-associated factor, also led to increased endo-lysosomal acidification that was mediated by the P-type ATPase ATP13A273, suggesting that other ATPase proteins may over-compensate for the inactivation of specific V-type ATPases. TFEB over-expression has been postulated as a potential treatment for a variety of human diseases74,75,76. Our observation that TFEB-mediated expansion and over-acidification of endo-lysosomal compartments block IAV and VSV infection opens the possibility of upregulating TFEB activity as a treatment option for reducing acid-dependent viral infections.

While it has been established that an acidic endosomal environment is required for IAV entry51,77, we showed that depletion of WDR7, CCDC115, and TMEM199 increases endo-lysosomal acidification yet reduces viral infection. We hypothesize that this could be due to two reasons. First, it has been reported that exposure to pH that is too acidic can lead to HA inactivation and inhibition of viral fusion53,55. In support of this, we showed that incoming virions were trapped in punctate structures around the perinuclear regions of WDR7, CCDC115, and TMEM199 polyclonal KO cells, which partially co-stained with LAMP1. We also observed increased degradation of endocytic cargo in these cells, suggesting that incoming virions that fail to fuse may be targeted for degradation in the endo-lysosomes. In addition, we found that X:31 virus, a more acid-stable H3N2 strain than the Udorn virus53, retained normal infection rates in WDR7, CCDC115, and TMEM199 polyclonal KO cells. Second, sequential exposure to lower pH from the early to the late endosome has been shown to be required for productive IAV infection45,57. Depletion of WDR7, CCDC115, and TMEM199 may thus block IAV infection by disrupting the pH gradient in the endo-lysosomal pathway56. Here, we showed that BafA treatment, while reducing lysotracker red staining, did not restore IAV infection in WDR7, CCDC115, and TMEM199 polyclonal KO cells. The lysotracker red signal in the KO cells also co-stained with both late endosome and lysosome markers, indicating possible homogenization of pH across different endosomal compartments.
IAV relies on a unique strategy of cap-snatching to carry out viral transcription and replication78. The PA subunit of the IAV polymerase complex functions as a cap-dependent endonuclease, which recognizes and cleaves short fragments of capped host mRNA for use as primers for its own mRNA synthesis79. Although it has long been appreciated that 2′-O-ribose methylation of the host mRNA cap is required for efficient recognition and cleavage by PA60,61, no cap methyltransferase had been identified in IAV genetic screens to date. In this study, we discovered CMTR1 as an important IAV HDF, whose absence confers resistance against IAV infection by blocking viral cap snatching. We also observed that depletion of CMTR1 resulted in an increased IFN response in IAV-infected cells. Unlike a previous study which showed that siRNA knockdown of CMTR1 causes up-regulation of IFN-β in the absence of additional stimulation65, we found differential expression of type I IFN and IFN-stimulated genes (ISGs) only when the cells were infected with IAV. We speculate that this discrepancy is due to their use of siRNA, which could lead to siRNA-induced innate immune sensing by RIG-I/MDA5. The lack of immune activation in CMTR1 KO cells at resting state makes CMTR1 a good drug candidate, owing to its therapeutic window and low risk of autoimmunity. Previous studies have shown that the IFIT family of antiviral proteins sequester 2′-O-unmethylated capped RNA and block viral protein translation80,81. Coincidentally, we observed an increase in IFIT gene expression in CMTR1 polyclonal KO cells, suggesting that inhibition of viral replication might be attributed to both cap-snatching blockade and direct sequestration of viral RNA.

The advantage of targeting IAV cap snatching as a therapeutic strategy is best highlighted by the recent FDA approval of Xofluza (Baloxavir Marboxil), a small-molecule drug that inhibits the endonuclease function of PA9,10,55,82. A single dose of Xofluza treatment has been shown to accelerate symptom alleviation and reduce viral load to a greater extent than the neuraminidase inhibitor Oseltamivir. Despite its effectiveness, resistant viral strains with reduced susceptibility to Xofluza have already been isolated in cell culture and clinical trials, raising concerns for long-term administration of the drug9,83. Given that CMTR1 is required for efficient viral cap snatching, we tested for a potential interaction between CMTR1 and Baloxavir. Our results provided preliminary evidence that depletion of CMTR1 confers synergistic protection against IAV infection with Baloxavir treatment. A combination therapy targeting both host CMTR1 and the IAV endonuclease may thus serve as an attractive therapeutic option, given the greater barrier it poses against drug resistance.

In conclusion, our study has identified and validated a number of HDFs that play important roles during IAV infection. We show that WDR7, CCDC115, and TMEM199 regulate V-type ATPase assembly and that their absence causes compensatory expansion and over-acidification of the endo-lysosomal compartments, which hamper IAV entry. We also report CMTR1 as a novel HDF that is required for efficient viral cap snatching and regulation of the cell-autonomous immune response. Lastly, our MAIC algorithm consolidates data from all previous genetic screens and proteomics studies and generates an annotated list of IAV HDFs which can serve as a useful resource for future studies.
Cell culture, reagents, and virus strains

A549, A549-Cas9, and 293T cells were cultured in Dulbecco's Modified Eagle Medium (DMEM, Thermofisher) supplemented with 10% heat-inactivated fetal bovine serum (Sigma), 2 mM L-Glutamine (Gibco) and 1% penicillin. A549 and 293T cells were obtained from ATCC. The A549-Cas9 cell line was generated by transducing A549 cells with a lentiviral construct (pXPR101) expressing Cas9 and Blasticidin deaminase. Cas9 activity was confirmed by transducing A549-Cas9 cells with a lentiviral construct (pXPR_011-sgEGFP) expressing eGFP and an sgRNA specific for eGFP. A polyclonal population of the A549-Cas9 cell line was used for the CRISPR screen to maintain heterogeneity of the cells. Primary NHLF cells were cultured in Mesenchymal Stem Cell Growth Medium (MSCGM, Lonza). PR8/A/34, A/Udorn/72 and A/Aichi/68 (X:31) Influenza A viruses were grown in MDCK cells in serum-free DMEM supplemented with 1% BSA and 1 μg/ml TPCK trypsin. GFP-Vesicular stomatitis virus (VSV) was kindly gifted by Dr. Sean Whelan's lab. Influenza A/New Caledonia/20/1999, A/California/04/2009 and A/Vietnam/1203/2004-PR8-IBCDC-RG/GLP viruses were kindly gifted by Dr. Daniel Lingwood's lab. Bafilomycin A1 was obtained from InvivoGen (88899-55-2). Chloroquine diphosphate was obtained from Sigma (C6628). Baloxavir was obtained from MedChemExpress (HY-109025A). pXPR101 and pXPR_011-sgEGFP used to generate A549-Cas9 cells and pLentiGuide-puro (Addgene #52963) for the secondary screen were provided by the Broad Institute Genetic Perturbation Platform. Individual sgRNAs were cloned into pLentiCRISPR-V2 (Addgene #52961) and pXPR_004 (in which the Puromycin resistance gene of pLentiCRISPR-V2 was replaced by eGFP) for validation in A549 cells and NHLFs, respectively. For rescue experiments, the Cas9 gene in pXPR101 was replaced by codon-mutated versions of the WDR7, CCDC115, TMEM199, and CMTR1 genes (pXPR101_rescue). For pseudovirus production, we used MLV Gag-pol, GFP, PR8 HA, PR8 NA, and MLV Env plasmids (kindly provided by Michael Farzan and Wayne Marasco).

The following antibodies were used throughout this study: from EMD Millipore, anti-Influenza A HA (AB1074) (1:200) and FITC anti-Influenza A Nucleoprotein clone A1 (MAB8257F) (1:200); from Abcam, anti-LAMP1 clone H4A3 (ab25630) (1:100), anti-Rab7 Alexa-Fluor647 clone EPR7589 (ab198337) (1:100), anti-ATP6V0D1 (ab56441) (1:2000) and β-actin antibody (ab6276) (1:10000); from BD Bioscience, FITC mouse anti-human CD71 (555536) (1:100); from Thermofisher, Alexa-Fluor488 goat anti-mouse IgG (1:500) and Alexa-Fluor488 donkey anti-goat IgG (1:500); from Sigma Aldrich, anti-Flag M2 antibody (F3165) (1:2000); from Cell Signaling Technology, TFEB antibody (#4240S) (1:2000), Phospho-TFEB antibody (Ser211) (#37681S) (1:2000), Cox-IV antibody (4850s) (1:2000), HSP90 antibody (#4874) (1:2000) and TBP antibody (#8515) (1:2000); from Abnova, anti-ATP6V1A (H00000523-A01) (1:2000).

Pooled genome-wide CRISPR screen

One hundred million A549-Cas9 cells were transduced with the AVANA-4 lentiviral library34 to achieve a 40% infection rate and an average 500-fold coverage of the library after selection. After 24 h, the cells were selected with puromycin and an initial pool of 40 million cells was harvested for genomic DNA extraction using the Qiagen Blood and Tissue extraction kit according to the manufacturer's protocol. On day 9 post-transduction, 200–400 million puromycin-resistant A549-Cas9 cells were infected with Influenza A PR8 virus at MOI 5 for 16 h.
They were then washed and stained with a fluorescent anti-Influenza A HA (AB1074) antibody. HA-positive and HA-negative cells were sorted by FACS and harvested for genomic DNA. PCR of gDNA was performed in 100 μl reactions to attach sequencing adaptors and barcode samples. Each reaction consisted of 50 μL gDNA plus water, 40 μL PCR master mix and 10 μL of a uniquely barcoded P7 primer (stock at 5 μM concentration). The master mix comprised 75 μL Ex Taq DNA Polymerase (Clontech), 1000 μL of 10x Ex Taq buffer, 800 μL of dNTPs provided with the enzyme, 50 μL of P5 stagger primer mix (stock at 100 μM concentration), and 2075 μL water. PCR cycling conditions were: an initial 1 min at 95 °C; followed by 28 cycles of 30 s at 94 °C, 30 s at 52.5 °C and 30 s at 72 °C; and a final 10 min extension at 72 °C. Samples were purified with Agencourt AMPure XP SPRI beads according to the manufacturer's instructions (Beckman Coulter, A63880) and sequenced on a HiSeq2000 (Illumina).

sgRNA library cloning and lentiviral production

The AVANA-4 library (74,700 sgRNAs targeting 18,675 genes and 1000 non-targeting sgRNAs) was provided by the Broad Institute Genetic Perturbation Platform. For the secondary screen, a plasmid library containing 18,870 sgRNAs targeting the top 1000 ranked genes from the primary screen and 787 genes from cross-validation analysis, as well as 1000 non-targeting sgRNAs, was synthesized as oligonucleotides (Broad Institute Biotechnology Lab). The sgRNAs were cloned by Gibson Assembly into the pLentiGuide-Puro vector. To produce lentivirus, 293T cells were plated in a 6-well dish at 0.5 × 10^6 cells per well. Transfection was performed using TransIT-LT1 (Mirus) according to the manufacturer's protocol and virus was harvested 48 h post-transfection.

Influenza A virus and VSV infection

A549 or NHLF cells were inoculated with 300 μl (6-well plate) or 2 ml (T75 flask) of influenza A virus at MOI 5 for 1 h at 37 °C in serum-free DMEM. The cells were then washed and the medium replaced with fresh serum-free DMEM supplemented with 1% BSA for 16 h. Infection was subsequently monitored by FACS or plaque assay. For VSV infection, A549 cells were inoculated with 300 μl (6-well plate) of VSV virus at MOI 1 for 1 h at 37 °C in complete DMEM. The cells were then washed and the medium replaced with fresh DMEM for 16 h. Infection was subsequently monitored by FACS.

Screen analysis

Read counts corresponding to each guide RNA were normalised to reads per million and log transformed. Quantile normalisation was performed in R. In order to control for the marked heteroscedasticity (Fig. 1b), local z-scores, for pools of values with different read counts, were calculated for sliding bins of varying size. For any comparison of two samples from which n read counts [x] and [y] are derived (for example, the flu-permissive and control FACS pools), the null hypothesis is \(x_i = y_i\), where i is the ranked position in the list of read counts. The read count bin was determined from the shortest distance between any point \((x_i, y_i)\) and the line \(y = x\). Lower (l) and upper (u) limits of n sliding bins of size b were defined such that each bin contains b values:

Where \(i < 0.5b\): \(l = 0\), \(u = b\).

In the middle of the list: \(l = i - 0.5b\), \(u = i + 0.5b\).

Where \((n - i) < 0.5b\): \(l = n - b\), \(u = n\).

Z-scores were then calculated within each of these bins.
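To make the sliding-bin step concrete, the following is a minimal Python sketch of the local z-score calculation described above. It is an illustrative re-implementation, not the authors' released pipeline; the function name and the choice to project each guide onto the y = x diagonal to define its rank are assumptions.

```python
import numpy as np

def local_z_scores(x, y, bin_size):
    """Local z-scores for paired, normalised read counts.

    x, y: arrays of log-transformed, quantile-normalised read counts
    for the two FACS pools. Each guide's z-score is computed within a
    sliding bin of `bin_size` guides with similar read counts, which
    controls for read-count-dependent variance (heteroscedasticity).
    """
    n = len(x)
    d = (y - x) / np.sqrt(2)          # signed distance to the line y = x
    order = np.argsort((x + y) / 2)   # rank by position along the diagonal
    z = np.empty(n)
    for rank, idx in enumerate(order):
        # sliding bin limits as defined in the text above
        if rank < 0.5 * bin_size:
            lo, hi = 0, bin_size
        elif (n - rank) < 0.5 * bin_size:
            lo, hi = n - bin_size, n
        else:
            lo, hi = int(rank - 0.5 * bin_size), int(rank + 0.5 * bin_size)
        neighbours = d[order[lo:hi]]
        z[idx] = (d[idx] - neighbours.mean()) / neighbours.std()
    return z
```

Computing each z-score only against its read-count neighbours is what controls the heteroscedasticity: low-count guides are compared with other low-count guides rather than with the whole library.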
p values were calculated from the sum of z-scores for sgRNAs targeting a particular gene, compared to a density function modeled on an empirical distribution of possible combinations of sgRNA z-scores permuted at least 1e8 times by randomly rearranging z-scores for all sgRNAs in the screen. In order to minimize false negatives and maximize the discovery power of our screen, we did not require more than one sgRNA per gene to be significantly over-represented in the influenza virus-permissive FACS pool (permissive set). We report an additional "robust" set of hits in which the empirical p-value for a given gene, derived from the remaining sgRNAs after the sgRNA with the greatest effect is removed (remainder p), is less than 0.05. FDRs were calculated using the Benjamini-Hochberg method in scipy.stats v1.1.0.

The MAIC algorithm seeks to combine the information in a heterogeneous group of data sources, in the form of lists of genes implicated in similar processes. It creates a data-driven information weighting for each source to prioritise relevant information, and allows the systematic integration of both ranked and unranked gene lists. In a superset A of m input sets \(\{L_1, L_2, L_3, \dots, L_m\}\), such as experimental data sources, each input set contains n named entities \(\{e_1, e_2, e_3, \dots, e_n\}\), such as genes. Each input set belongs to a particular type of data source, which may have its own hidden biases. For example, siRNA affects some genes more than others, and some proteins have a tendency to be highly connected in protein-protein interaction networks. Hence each input set is assigned to one of K categories, \(\{C_1, C_2, C_3, \dots, C_K\}\). The algorithm begins with the assumption that a set of true positives, T, exists, and that, for any entity e, membership of several data sets L belonging to independent categories C increases the probability that e is a member of T. Each one of the data sets \(L_j, j = 1, \dots, m\), has three attributes:

a set of \(n_j\) entities, \(L_j = \{e_{j1}, e_{j2}, e_{j3}, \dots, e_{jn_j}\}\);

a category, \(c_j \in \{C_1, C_2, C_3, \dots, C_K\}\);

a structure, \(r_j \in \{R, F\}\), where R is ranked and F is flat, or not ranked.

Each gene in A receives a score based on its "popularity" across the input datasets: a gene gets a higher score for being represented in many different categories than for being represented in many different datasets within the same category. Each input set L is assigned a weighting score w, which quantifies the evidence that membership of L contributes to each entity e; w is itself derived from the scores assigned to the entities within L, as defined below. The starting value of s for any e is arbitrary - any numerical value can be chosen without altering the final scores. For simplicity, the initial s for each e is set to 1. In order to prevent any single category (C) of data from biasing the results, each entity draws only one score (the highest score) from each category. If there is no score for this e in a particular C, the score assigned will be zero. In each iteration, the score of an entity i in a given category k is updated:

$$s_{ik} = \max\{\, w_j^L \mid e_i \in L_j \wedge c_j = C_k \,\}, \quad i = 1,\dots,n;\ k = 1,\dots,K$$

The score of an entity for this iteration is the sum of its scores across the categories:

$$s_i = \sum_{k=1}^{K} s_{ik}, \quad i = 1,\dots,n$$

The weighting score given to a dataset is the square root of the average score of the genes belonging to that dataset:

$$w_j^L = \sqrt{\frac{\sum_{e_i \in L_j} s_i}{n_j}}$$

These equations are iterated until the values of w for all input sets L stop changing (i.e., each value of \(w_j^L\) changes by less than 0.01 from the previous iteration).
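This core iteration can be sketched compactly for the unranked case. The following is an illustrative re-implementation under stated assumptions (unranked lists only; the exponential-decay weighting of ranked lists described below is omitted), not the published code available from the repository linked in the Code availability section.

```python
import numpy as np

def maic(input_sets, categories, max_iter=100, tol=0.01):
    """Minimal sketch of the MAIC iteration for unranked gene lists.

    input_sets: list of sets of gene names, one per data source.
    categories: category label for each data source (parallel list).
    Returns (gene_scores, list_weights) once the weights converge.
    """
    genes = set().union(*input_sets)
    cats = set(categories)
    scores = {g: 1.0 for g in genes}   # arbitrary starting value s = 1
    weights = np.zeros(len(input_sets))
    for _ in range(max_iter):
        # list weight: sqrt of the mean score of its member genes
        new_weights = np.array(
            [np.sqrt(np.mean([scores[g] for g in s])) for s in input_sets]
        )
        # gene score: sum over categories of the single best list weight
        # the gene draws in that category (zero if absent from a category)
        scores = {
            g: sum(
                max((w for w, s, c in zip(new_weights, input_sets, categories)
                     if c == cat and g in s), default=0.0)
                for cat in cats
            )
            for g in genes
        }
        if np.all(np.abs(new_weights - weights) < tol):
            weights = new_weights
            break
        weights = new_weights
    return scores, weights
```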
Some input data sources provide gene lists that are ranked according to the strength or statistical significance of experimental results. With descending rank, the probability that a given gene is a true positive result is expected to decrease. This decline in information content is modeled by fitting an exponential decay curve to the measured information content at each position in a ranked list. The information content is inferred from the MAIC algorithm by truncating the list at every position and calculating a weighting (\(w_j^L\)) for the list up to that point, as if it were unranked. A specific weighting for each position in a ranked list is then calculated from the exponential decay function specific to this list. Code to run the MAIC algorithm, and an online service with a user interface, are available at https://baillielab.net/maic.

Evaluation of MAIC

In order to evaluate MAIC against existing methods, we built a simulated data generator to produce ranked, unranked, and mixed data based on the Thurstonian ranking model84, which ranks entities in descending order of a score Z, drawn for each entity from a Gaussian distribution with mean μ and variance σ². The generated lists were then truncated, leaving only 0.5% (2) of entities, and each list was labeled as ranked or unranked. The total proportion of true entities among all entities was also 10% in this evaluation. In the case of the present work, an entity is a protein-coding gene in the human genome. We use the term "entity" here because the approach is generalizable to a broad range of applications. For list \(i\) (\(i = 1 \dots n\)), entity \(k\) (\(k = 1 \dots m\)) and mean noise \(M\), the score \(Z\) for entity \(k\) is:

$$Z_k \sim N(\mu_k, \sigma_i^2)$$
$$\log(\sigma_i) = \log(M) + t$$
$$t \sim N(0, h^2)$$
$$D = h^2$$

We used MAIC, robust rank aggregation (RRA)37, and a simple vote counting (VC) method ranking entities by frequency on this model. We used the top-25 overlap ratio (classification accuracy) as the metric of success, comparing the top 25 entities of each result with the top 25 true entities ranked by \(\mu_k\). In evaluation experiments we tested MAIC against RRA and VC over the following variations in synthetic input data (a sketch of the generator follows the list):

Noise: setting the mean noise M (defining the mean quality of lists rather than the mean numerical value of σ) among input lists. M was varied in the range [0, 0.1, 0.5, 1, 3, 12], from perfect data without noise to data with very high noise.

Heterogeneity: setting D [0 to 3] to vary the quality of input lists, in order to model the real-life scenario in which data quality and relevance from different experiments is expected to vary widely. D is varied from 0 (indicating the same noise level, i.e., data quality, for all lists) to 3 (indicating very heterogeneous noise levels among the input lists). Importantly, heterogeneity is varied independently of the average noise among all input lists.

Ranked:unranked ratio: setting the ratio of the number of ranked lists to 50% or 100%, leaving the remaining lists unranked.
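The following is a minimal sketch of such a synthetic list generator, assuming a simple choice of elevated means for the true entities; the function and parameter names are illustrative and not taken from the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_lists(n_lists, n_entities, n_true, mean_noise, d, frac_keep):
    """Thurstonian-style synthetic input lists for benchmarking.

    True entities receive elevated means mu; each list draws a score
    Z ~ N(mu, sigma_i^2) per entity, where log(sigma_i) = log(M) + t
    and t ~ N(0, h^2), with D = h^2 controlling list-quality
    heterogeneity. Lists are ranked by Z and truncated to frac_keep.
    """
    mu = np.zeros(n_entities)
    mu[:n_true] = np.linspace(3, 1, n_true)      # true entities score highest
    keep = max(1, int(frac_keep * n_entities))
    lists = []
    for _ in range(n_lists):
        t = rng.normal(0.0, np.sqrt(d))
        sigma = np.exp(np.log(mean_noise) + t) if mean_noise > 0 else 0.0
        z = rng.normal(mu, sigma)                # noisy score per entity
        lists.append(list(np.argsort(-z)[:keep]))  # ranked, truncated list
    return lists
```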
Gene set enrichment analysis (MAIC output)

Gene set enrichment analysis was performed on gene MAIC score ranks, using the package 'fgsea' in R version 3.5.2. 10^6 permutations were used to derive p-values, and the Benjamini-Hochberg method was used to control the false discovery rate (<0.05). The following gene set libraries were queried: KEGG 2016, BioCarta 2016, Reactome 2016, WikiPathways 2016, NCI Nature 2016, GO Biological Process 2018, GO Molecular Function 2018 and GO Cellular Component 2018. Reference for fgsea: Sergushichev, A. An algorithm for fast preranked gene set enrichment analysis using cumulative statistic calculation. bioRxiv (2016). https://doi.org/10.1101/060012.

Validation of individual hits using gene-specific CRISPR sgRNA

For validation of individual hits in A549 cells, the two best sgRNAs from the AVANA-4 library were cloned into pLentiCRISPR-V2 and lentivirus was produced from 293T cells as described above. A549 cells were transduced and selected with 1 μg/μl puromycin for 8 days, and genome editing was confirmed by deep sequencing and CRISPResso analysis. For validation in NHLF cells, sgRNAs were cloned into pXPR_004, which carries eGFP instead of the puromycin resistance gene. Following transduction, GFP+ NHLF cells were sorted by FACS. GFP+ cells were infected with influenza A virus for 16 h at MOI 5 and stained for surface HA using anti-influenza A HA antibody (AB1074).

Rescue and over-expression of the KO genes

For the rescue experiment, A549 cells were transduced with pLentiCRISPR-V2 expressing a gene-specific sgRNA together with a pXPR101_rescue plasmid expressing a Flag-tagged, codon-mutated version of the gene. Cells were selected with 1 μg/μl puromycin and 10 μg/μl Blasticidin for 8 days. Expression of the add-back gene was confirmed by Western blot. To test whether the genes of interest have redundant functions, A549 cells were transduced with different combinations of the gene-specific sgRNAs and codon-mutated versions of the genes. To test the effect of over-expressing the genes alone, A549 cells were transduced with the rescue plasmids in the absence of sgRNAs.

For flow cytometry, cells were stained with antibodies in PBS + 1% BSA for 30 min on ice and fixed with 4% paraformaldehyde. For intracellular staining of Influenza A nucleoprotein (NP), cells were fixed and permeabilized using 0.1% Saponin (Sigma Aldrich) prior to antibody staining. Data were acquired on the BD Accuri (BD Bioscience) and analyzed with FlowJo software (TreeStar).

To check for WDR7, CCDC115, TMEM199, and CMTR1 expression in rescue experiments, 5 × 10^5 transduced cells were washed with ice-cold PBS and lysed in RIPA buffer (Thermofisher) supplemented with EDTA-free Protease inhibitor cocktail (Roche). Cell lysates were spun at 12,000 rpm in a microcentrifuge for 10 min at 4 °C and denatured by heating at 95 °C in SDS loading buffer + DTT. Proteins were separated on a NuPAGE Novex 12% Tris-Glycine gel and transferred to a polyvinylidene difluoride membrane (Millipore). Immunoblotting was performed according to standard protocols using rabbit anti-Flag primary antibody and HRP-conjugated anti-rabbit secondary antibody.
To check for TFEB and Phospho-TFEB expression, cytoplasmic and nuclear proteins were extracted from 5 × 10^5 cells using the NE-PER Nuclear and Cytoplasmic Extraction reagents (Thermo Scientific) according to the manufacturer's protocol. Immunoblotting was performed according to standard protocols using anti-TFEB and anti-phospho-TFEB primary antibodies and HRP-conjugated anti-rabbit secondary antibody.

RNA extraction and qPCR

Total RNA was extracted from 1 × 10^5 cells using the RNeasy Mini Kit (Qiagen) according to the manufacturer's protocol. First-strand cDNA synthesis was performed using 500 ng of total RNA with the Superscript III First-strand Synthesis system with Oligo(dT) (Thermofisher). Quantitative PCR was performed using the Q5 hot start high fidelity polymerase and SYBR Green I Nucleic Acid Gel Stain (Thermofisher) on the Roche 480 Light Cycler (Roche). Human GAPDH was used as the reference normalization control and expression levels were quantified by the delta Ct method. Primer sequences are as follows:

Human IFN-β: F: 5′-TGCTCTCCTGTTGTGCTTCT-3′; R: 5′-ATAGATGGTCAATGCGGCGT-3′
Influenza PR8 NP: F: 5′-ATCGGAACTTCTGGAGGGGT-3′; R: 5′-CAGGACTTGTGAGCAACCGA-3′
Influenza PR8 NS1: F: 5′-GTCTGGACATCGAGACAGCC-3′; R: 5′-GAGTCTCCAGCCGGTCAAAA-3′
Influenza A/New Caledonia/1999 HA: F: 5′-TCACCCGCCTAACATAGGGA-3′; R: 5′-TGCAAAAGCATACCATGGCG-3′
Influenza A/California/2009 HA: F: 5′-GGACACTAGTAGAGCCGGGA-3′; R: 5′-CAATCCTGTGGCCAGTCTCA-3′
Influenza A/Vietnam/2005 HA (H5N1): F: 5′-TGAGCGCAGCATGTTCCTAT-3′; R: 5′-GCCCGTTCACTTTGGGTCTA-3′
Human GAPDH: F: 5′-GGGAGCCAAAAGGGTCATCA-3′; R: 5′-AGTGATGGCATGGACTGTGG-3′

Transcriptomic analysis was performed using the Smart-Seq2 protocol. Total RNA was extracted using the RNeasy Mini Kit (Qiagen). cDNA was synthesized from 1 ng of total RNA using the SuperScript III reverse transcription system, followed by PCR pre-amplification and a quality check using a high-sensitivity DNA Bioanalyzer chip (Agilent). 0.15 ng of pre-amplified cDNA was then used for the tagmentation reaction, carried out with the Nextera XT DNA sample preparation kit (Illumina), and final PCR amplification. The amplified library was sequenced on a NextSeq 500 (Illumina). For data analysis, short sequencing reads were aligned using Bowtie and used as input to RSEM to quantify gene expression levels for all UCSC hg19 genes. Data were normalized and analyzed using the R software package DESeq2.

Plaque assays

A549 or NHLF cells were infected with Influenza A PR8 or Udorn virus at MOI 0.1 in serum-free DMEM supplemented with 1% BSA and 1 μg/μl TPCK trypsin. 48 h post-infection, supernatant was collected and serially diluted. Two hundred microliters of the diluted supernatant was used to infect MDCK cells on 6-well plates and the number of plaques was counted after 72 h. The virus titer was calculated in plaque-forming units (PFU)/ml.

Proliferation assays

A549 cells were transduced with pLentiCRISPR-V2 expressing sgRNA against genes of interest and selected with 1 μg/μl puromycin for 2 days. On day 3, 5000 puromycin-resistant cells were re-seeded on 6-well plates and changes in total cell number were monitored on days 5, 7, and 9. On day 9, some cells were harvested for alamarBlue assay (Thermofisher, DAL1025) and Annexin V staining (Thermofisher, V13241) according to the manufacturer's protocol.
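As an aside, the delta Ct quantification named in the qPCR section above reduces to a few lines of arithmetic. The sketch below also shows the common delta-delta-Ct fold-change variant relative to a control sample, which is an assumption about the exact normalisation the authors used; the Ct values in the example are invented for illustration, not data from this study.

```python
def delta_ct_expression(ct_gene, ct_gapdh):
    """Expression of a gene relative to the GAPDH reference (2^-dCt)."""
    return 2.0 ** (-(ct_gene - ct_gapdh))

def fold_change(ct_gene, ct_gapdh, ct_gene_ctrl, ct_gapdh_ctrl):
    """Delta-delta-Ct fold change of a sample relative to a control,
    assuming ~100% amplification efficiency for both primer pairs."""
    ddct = (ct_gene - ct_gapdh) - (ct_gene_ctrl - ct_gapdh_ctrl)
    return 2.0 ** (-ddct)

# Hypothetical Ct values: IFN-beta in infected KO cells vs infected WT cells
print(fold_change(24.1, 18.0, 27.3, 18.1))  # ~8.6-fold higher in the KO
```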
MLV-GFP pseudovirus production and entry assay

MLV-GFP pseudovirus was produced by transfecting 1 μg of MLV Gag-pol plasmid, 1 μg of GFP plasmid, 0.3 μg of Influenza PR8 HA plasmid, and 1.2 μg of NA plasmid or 1.2 μg of MLV-Env plasmid into 293T cells seeded on a 6-well plate at 0.5 × 10^6 cells per well. Virus was harvested and filtered 48 h post-transfection. To test for entry, A549 cells were spinoculated with the pseudovirus at 2000 rpm for 30 min. Before cell transduction, pseudovirus was incubated with 1 μg/ml TPCK-treated trypsin for 1 h at room temperature and then mixed with trypsin-neutralizing solution. GFP expression was monitored 48 h post-spinoculation by FACS.

Influenza A virus binding assay

Cells were seeded on 6-well plates and inoculated with Influenza A PR8 virus at MOI 100 for 30 min at 4 °C. Cells were then washed twice with ice-cold PBS and stained for surface HA using anti-influenza A HA antibody (AB1074).

Measuring level of cell surface sialic acid

Cells were stained with Sambucus nigra lectin (SNA) (Vector Laboratories Inc.) according to the manufacturer's protocol. Briefly, cells were incubated with 10 μg/ml FITC-conjugated lectin at room temperature for 30 min. They were then washed twice in PBS and analyzed by FACS.

Fluorescence in situ hybridization (FISH)

1 × 10^5 cells were seeded on a chambered cover glass (VWR, Nunc Lab-Tek 2 wells) pre-treated with 0.1 mg/ml poly-D-lysine. The cells were infected with Influenza A PR8 virus the following day for 4 h at 37 °C, and then fixed and stained with Stellaris Quasar 570 RNA FISH probes against Influenza A PR8 NP RNA according to the manufacturer's protocol (LGC Biosearch Technologies). Images were taken on the Olympus FV1200 IX83 confocal microscope and the percentage of RNA+ cells relative to the total number of cells was quantified.

Confocal microscopy

Cells were imaged on the Olympus FV1200 IX83 laser scanning confocal microscope equipped with a ×40 objective and LD559, LD635 and LD405 lasers (Olympus Life Science). Images were taken using the Olympus FV software and analyzed using ImageJ. For imaging of lysotracker red, Lysosensor blue, Oregon Green Dextran, Rab7 and LAMP1, 1 × 10^5 A549/NHLF cells were seeded onto chambered cover glass (VWR, Nunc Lab-Tek 4 wells) pre-treated with 0.1 mg/ml poly-D-lysine the day before. They were treated with 100 nM lysotracker dye for 1 h at 37 °C, followed by fixation with 4% paraformaldehyde and permeabilization with 0.1% Saponin. The cells were blocked with PBS with 1% BSA and 0.1% Tween20 for 1 h at room temperature and stained with anti-Rab7 and anti-LAMP1 antibodies overnight at 4 °C. The cells were then stained with secondary Alexa-Fluor488-conjugated goat anti-mouse IgG antibody and DAPI for 1 h at room temperature. Images were acquired with a ×40 objective using the setup described above. For visualization of influenza NP localization within the cells, A549 cells were infected with Influenza A PR8 virus at MOI 200 for 2 h at 37 °C. Infected cells were then fixed with 4% paraformaldehyde and stained with FITC anti-influenza A NP antibody overnight at 4 °C. The next day, cells were washed, stained with DAPI for 1 h at room temperature, and images were acquired as described above.

Measuring lysosomal degradation of DQ-BSA

1 × 10^5 A549 cells were seeded on 12-well plates and incubated with 20 μg/ml DQ Green BSA (Thermofisher, D12050), 100 nM Lysotracker Red and DAPI for 1 h at 37 °C. The cells were then washed in PBS and fixed in 4% paraformaldehyde.
Images were acquired with a ×40 objective using the setup described earlier.

Luciferase reporter assay for influenza A virus replication

To measure viral polymerase activity, we utilized a vRNA-luciferase reporter system. Briefly, A549 cells were transfected with a vRNA reporter plasmid expressing firefly luciferase under a viral UTR. The cells were also transfected with influenza A virus PA, PB1, PB2 and NP expression plasmids, together with a Renilla luciferase control. Twenty-four hours post-transfection, cells were lysed and mixed with Dual-Glo substrate (Promega) according to the manufacturer's protocol. Luminescence was measured and quantified using a Synergy H1 multi-mode microplate reader (BioTek).

In vivo cross-linking coupled immunoprecipitation with anti-eIF4E antibody

Cells were harvested and cross-linked with 0.3% formaldehyde in culture media for 10 min at 37 °C to enable high-stringency washes of the in vivo protein-RNA complexes63,85. Cells were washed three times with PBS and then fractionated into nuclear and cytoplasmic fractions. Extracts from the two fractions were combined and treated with Turbo DNase I (Ambion) and RNase inhibitor (NEB) prior to pre-clearing using protein-G agarose to remove non-specific contaminants that bind agarose. Anti-eIF4E (Cell Signaling) was used for immunoprecipitation. The immunoprecipitates were subjected to heat inactivation at 56 °C for 15 min before RNA isolation with 3 volumes of Trizol (Invitrogen).

Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Data availability

Source data underlying Figs. 3A–F, 4A–B, 5C–F, 6A and C, 7A–F, and Supplementary Figs. 5A–E, 6A and C, 7A–B, 8A–C, and 10C are provided as a Source Data file. RNAseq data for Fig. 7d, Supplementary Fig. 9B and Supplementary Fig. 10B have been uploaded to the NCBI Gene Expression Omnibus (GEO) database (accession number: GSE141171). Interactive results from MAIC can be viewed at http://baillielab.net/maic/flu

Code availability

A repository of the software developed for this project can be downloaded at https://github.com/baillielab/maic

References

Shrestha, S. S. et al. Estimating the burden of 2009 pandemic influenza A (H1N1) in the United States (April 2009–April 2010). Clin. Infect. Dis. 52, S75–S82 (2011).
Simonsen, L. et al. Global mortality estimates for the 2009 influenza pandemic from the GLaMOR project: a modeling study. PLoS Med. 10, e1001558 (2013).
Gao, R. et al. Human infection with a novel avian-origin influenza A (H7N9) virus. N. Engl. J. Med. 368, 1888–1897 (2013).
Webster, R. G. & Govorkova, E. A. H5N1 influenza–continuing evolution and spread. N. Engl. J. Med. 355, 2174–2177 (2006).
Yen, H. L. & Webster, R. G. Pandemic influenza as a current threat. Curr. Top. Microbiol. Immunol. 333, 3–24 (2009).
Bright, R. A., Shay, D. K., Shu, B., Cox, N. J. & Klimov, A. I. Adamantane resistance among influenza A viruses isolated early during the 2005-2006 influenza season in the United States. J. Am. Med. Assoc. 295, 891–894 (2006).
Novel Swine-Origin Influenza A (H1N1) Virus Investigation Team et al. Emergence of a novel swine-origin influenza A (H1N1) virus in humans. N. Engl. J. Med. 360, 2605–2615 (2009).
Nicoll, A., Ciancio, B. & Kramarz, P. Observed oseltamivir resistance in seasonal influenza viruses in Europe: interpretation and potential implications. Euro Surveill. 13, pii: 8025 (2008).
Hayden, F. G. et al. Baloxavir marboxil for uncomplicated influenza in adults and adolescents. N. Engl. J. Med. 379, 913–923 (2018).
Takashita, E. et al. Susceptibility of influenza viruses to the novel cap-dependent endonuclease inhibitor baloxavir marboxil. Front. Microbiol. 9, 3026 (2018).
Vasin, A. V. et al. Molecular mechanisms enhancing the proteome of influenza A viruses: an overview of recently discovered proteins. Virus Res. 185, 53–63 (2014).
Baillie, J. K. Targeting the host immune response to fight infection. Science 344, 807–808 (2014).
Warfield, K. L. et al. Lack of selective resistance of influenza A virus in presence of host-targeted antiviral, UV-4B. Sci. Rep. https://doi.org/10.1038/s41598-019-43030-y (2019).
Vercauteren, K. et al. Targeting a host-cell entry factor barricades antiviral-resistant HCV variants from on-therapy breakthrough in human-liver mice. Gut https://doi.org/10.1136/gutjnl-2014-309045 (2016).
Brass, A. L. et al. The IFITM proteins mediate cellular resistance to influenza A H1N1 virus, West Nile virus, and dengue virus. Cell 139, 1243–1254 (2009).
Hao, L. et al. Drosophila RNAi screen identifies host genes important for influenza virus replication. Nature 454, 890–893 (2008).
Karlas, A. et al. Genome-wide RNAi screen identifies human host factors crucial for influenza virus replication. Nature 463, 818–822 (2010).
König, R. et al. Human host factors required for influenza virus replication. Nature 463, 813–817 (2010).
Shapira, S. D. et al. A physical and regulatory map of host-influenza interactions reveals pathways in H1N1 infection. Cell 139, 1255–1267 (2009).
Sui, B. et al. The use of random homozygous gene perturbation to identify novel host-oriented targets for influenza. Virology 387, 473–481 (2009).
Tran, A. T. et al. Knockdown of specific host factors protects against influenza virus-induced cell death. Cell Death Dis. 4, e769 (2013).
Stertz, S. & Shaw, M. L. Uncovering the global host cell requirements for influenza virus replication via RNAi screening. Microbes Infect. 13, 516–525 (2011).
Watanabe, T., Watanabe, S. & Kawaoka, Y. Cellular networks involved in the influenza virus life cycle. Cell Host Microbe 7, 427–439 (2010).
Everitt, A. R. et al. IFITM3 restricts the morbidity and mortality associated with influenza. Nature 484, 519–523 (2012).
Chou, Y.-C. et al. Variations in genome-wide RNAi screens: lessons from influenza research. J. Clin. Bioinforma. 5, 2 (2015).
König, R. et al. Global analysis of host-pathogen interactions that regulate early-stage HIV-1 replication. Cell 135, 49–60 (2008).
Brass, A. L. et al. Identification of host proteins required for HIV infection through a functional genomic screen. Science 319, 921–926 (2008).
Zhou, H. et al. Genome-scale RNAi screen for host factors required for HIV replication. Cell Host Microbe 4, 495–504 (2008).
Ma, Y. et al. CRISPR/Cas9 screens reveal Epstein-Barr virus-transformed B cell host dependency factors. Cell Host Microbe 21, 580–591.e7 (2017).
Marceau, C. D. et al. Genetic dissection of Flaviviridae host factors through genome-scale CRISPR screens. Nature 535, 159–163 (2016).
Savidis, G. et al. Identification of Zika virus and dengue virus dependency factors using functional genomics. Cell Rep. 16, 232–246 (2016).
Park, R. J. et al. A genome-wide CRISPR screen identifies a restricted set of HIV host dependency factors. Nat. Genet. 49, 193–203 (2017).
Han, J. et al. Genome-wide CRISPR/Cas9 screen identifies host factors essential for influenza virus replication. Cell Rep. 23, 596–607 (2018).
Doench, J. G. et al. Optimized sgRNA design to maximize activity and minimize off-target effects of CRISPR-Cas9. Nat. Biotechnol. 34, 184–191 (2016).
Kanehisa, M., Sato, Y., Kawashima, M., Furumichi, M. & Tanabe, M. KEGG as a reference resource for gene and protein annotation. Nucleic Acids Res. 44, D457–D462 (2016).
Tripathi, S. et al. Meta- and orthogonal integration of influenza 'oMICs' data defines a role for UBR4 in virus budding. Cell Host Microbe 18, 723–735 (2015).
Kolde, R., Laur, S., Adler, P. & Vilo, J. Robust rank aggregation for gene list integration and meta-analysis. Bioinformatics 28, 573–580 (2012).
Rimmelzwaan, G. F., Baars, M., Claas, E. C. J. & Osterhaus, A. D. M. E. Comparison of RNA hybridization, hemagglutination assay, titration of infectious virus and immunofluorescence as methods for monitoring influenza virus replication in vitro. J. Virol. Methods 74, 57–66 (1998).
Huang, I.-C. et al. Influenza A virus neuraminidase limits viral superinfection. J. Virol. 82, 4834–4843 (2008).
Luo, M. Influenza virus entry. Adv. Exp. Med. Biol. 726, 201–221 (2012).
Merkulova, M. et al. Mapping the H+ (V)-ATPase interactome: identification of proteins involved in trafficking, folding, assembly and phosphorylation. Sci. Rep. 5, 1–15 (2015).
Miles, A. L., Burr, S. P., Grice, G. L. & Nathan, J. A. The vacuolar-ATPase complex and assembly factors, TMEM199 and CCDC115, control HIF1α prolyl hydroxylation by regulating cellular iron levels. Elife 6, 1–28 (2017).
Yoshimori, T., Yamamoto, A., Moriyama, Y., Futai, M. & Tashiro, Y. Bafilomycin A1, a specific inhibitor of vacuolar-type H+-ATPase, inhibits acidification and protein degradation in lysosomes of cultured cells. J. Biol. Chem. 266, 17707–17712 (1991).
Ochiai, H., Sakai, S., Hirabayashi, T., Shimizu, Y. & Terasawa, K. Inhibitory effect of bafilomycin A1, a specific inhibitor of vacuolar-type proton pump, on the growth of influenza A and B viruses in MDCK cells. Antivir. Res. 27, 425–430 (1995).
Stauffer, S. et al. Stepwise priming by acidic pH and a high K+ concentration is required for efficient uncoating of influenza A virus cores after penetration. J. Virol. 88, 13029–13046 (2014).
McGuire, C. M. & Forgac, M. Glucose starvation increases V-ATPase assembly and activity in mammalian cells through AMP kinase and phosphatidylinositide 3-kinase/Akt signaling. J. Biol. Chem. 293, 9113–9123 (2018).
Lu, S., Sung, T., Lin, N., Abraham, R. T. & Jessen, B. A. Lysosomal adaptation: how cells respond to lysosomotropic compounds. PLoS ONE 12, e0173771 (2017).
Mauthe, M. et al. Chloroquine inhibits autophagic flux by decreasing autophagosome-lysosome fusion. Autophagy 14, 1435–1455 (2018).
Martina, J. A. & Puertollano, R. Protein phosphatase 2A stimulates activation of TFEB and TFE3 transcription factors in response to oxidative stress. J. Biol. Chem. 293, 12525–12534 (2018).
Brozzi, A., Urbanelli, L., Germain, P. L., Magini, A. & Emiliani, C. hLGDB: a database of human lysosomal genes and their regulation. Database 2013 (2013).
Skehel, J. J. & Wiley, D. C. Receptor binding and membrane fusion in virus entry: the influenza hemagglutinin. Annu. Rev. Biochem. 69, 531–569 (2000).
Marwaha, R. & Sharma, M. DQ-Red BSA trafficking assay in cultured cells to assess cargo delivery to lysosomes. Bio. Protoc. 7, e2571 (2017).
Costello, D. A., Whittaker, G. R. & Daniel, S. Variations in pH sensitivity, acid stability, and fusogenicity of three influenza virus H3 subtypes. J. Virol. 89, 350–360 (2015).
Fontana, J., Cardone, G., Heymann, J. B., Winkler, D. C. & Steven, A. C. Structural changes in influenza virus at low pH characterized by cryo-electron tomography. J. Virol. 86, 2919–2929 (2012).
Stegmann, T., Booy, F. P. & Wilschut, J. Effects of low pH on influenza virus. Activation and inactivation of the membrane fusion capacity of the hemagglutinin. J. Biol. Chem. 262, 17744–17749 (1987).
Huotari, J. & Helenius, A. Endosome maturation. EMBO J. 30, 3481–3500 (2011).
Li, S. et al. pH-controlled two-step uncoating of influenza virus. Biophys. J. 106, 1447–1456 (2014).
Bélanger, F., Stepinski, J., Darzynkiewicz, E. & Pelletier, J. Characterization of hMTr1, a human Cap1 2′-O-ribose methyltransferase. J. Biol. Chem. 285, 33037–33044 (2010).
Smietanski, M. et al. Structural analysis of human 2′-O-ribose methyltransferases involved in mRNA cap structure formation. Nat. Commun. 5, 4321 (2014).
Bouloy, M., Plotch, S. J. & Krug, R. M. Both the 7-methyl and the 2′-O-methyl groups in the cap of mRNA strongly influence its ability to act as primer for influenza virus RNA transcription. Proc. Natl Acad. Sci. USA 77, 3952–3956 (1980).
Wakai, C., Iwama, M., Mizumoto, K. & Nagata, K. Recognition of cap structure by influenza B virus RNA polymerase is less dependent on the methyl residue than recognition by influenza A virus polymerase. J. Virol. 85, 7504–7512 (2011).
Lutz, A., Dyall, J., Olivo, P. D. & Pekosz, A. Virus-inducible reporter genes as a tool for detecting and quantifying influenza A virus replication. J. Virol. Methods 126, 13–20 (2005).
Bukhari, S. I. A. et al. A specialized mechanism of translation mediated by FXR1a-associated microRNP in cellular quiescence. Mol. Cell 61, 760–773 (2016).
Züst, R. et al. Ribose 2′-O-methylation provides a molecular signature for the distinction of self and non-self mRNA dependent on the RNA sensor Mda5. Nat. Immunol. 12, 137–143 (2011).
Schuberth-Wagner, C. et al. A conserved histidine in the RNA sensor RIG-I controls immune tolerance to N1-2′O-methylated self RNA. Immunity 43, 41–52 (2015).
Loo, Y. M. & Gale, M. Immune signaling by RIG-I-like receptors. Immunity 34, 680–692 (2011).
Sabeti, P. C. et al. Genome-wide detection and characterization of positive selection in human populations. Nature 449, 913–918 (2007).
Matsuoka, Y. et al. A comprehensive map of the influenza A virus replication cycle. BMC Syst. Biol. 7, 97 (2013).
Yang, C. H. et al. Influenza A virus upregulates PRPF8 gene expression to increase virus production. Arch. Virol. 162, 1223–1235 (2017).
Jansen, J. C. et al. CCDC115 deficiency causes a disorder of Golgi homeostasis with abnormal protein glycosylation. Am. J. Hum. Genet. 98, 310–321 (2016).
Jansen, J. C. et al. TMEM199 deficiency is a disorder of Golgi homeostasis characterized by elevated aminotransferases, alkaline phosphatase, and cholesterol and abnormal glycosylation. Am. J. Hum. Genet. 98, 322–330 (2016).
Ballabio, A. et al. TFEB links autophagy to lysosomal biogenesis. Science 332, 1429–1433 (2011).
Perreira, J. M. et al. RNASEK is a V-ATPase-associated factor required for endocytosis and the replication of rhinovirus, influenza A virus, and dengue virus. Cell Rep. 12, 850–863 (2015).
Decressac, M. et al. TFEB-mediated autophagy rescues midbrain dopamine neurons from α-synuclein toxicity. Proc. Natl Acad. Sci. USA https://doi.org/10.1073/pnas.1305623110 (2013).
Rega, L. R. et al. Activation of the transcription factor EB rescues lysosomal abnormalities in cystinotic kidney cells. Kidney Int. https://doi.org/10.1016/j.kint.2015.12.045 (2016).
Pastore, N. et al. Gene transfer of master autophagy regulator TFEB results in clearance of toxic protein and correction of hepatic disease in alpha-1-anti-trypsin deficiency. EMBO Mol. Med. https://doi.org/10.1002/emmm.201202046 (2013).
Carr, C. M. & Kim, P. S. A spring-loaded mechanism for the conformational change of influenza hemagglutinin. Cell 73, 823–832 (1993).
Bouloy, M., Plotch, S. J. & Krug, R. M. Globin mRNAs are primers for the transcription of influenza viral RNA in vitro. Proc. Natl Acad. Sci. USA 75, 4886–4890 (1978).
Dias, A. et al. The cap-snatching endonuclease of influenza virus polymerase resides in the PA subunit. Nature 458, 914–918 (2009).
Daffis, S. et al. 2′-O methylation of the viral mRNA cap evades host restriction by IFIT family members. Nature 468, 452–456 (2010).
Habjan, M. et al. Sequestration by IFIT1 impairs translation of 2′O-unmethylated capped RNA. PLoS Pathog. 9, e1003663 (2013).
Zaraket, H., Bridges, O. A. & Russell, C. J. The pH of activation of the hemagglutinin protein regulates H5N1 influenza virus replication and pathogenesis in mice. J. Virol. https://doi.org/10.1128/jvi.03110-12 (2013).
Omoto, S. et al. Characterization of influenza virus variants induced by treatment with the endonuclease inhibitor baloxavir marboxil. Sci. Rep. 8, 9633 (2018).
Maydeu-Olivares, A. Thurstonian modeling of ranking data via mean and covariance structure analysis. Psychometrika https://doi.org/10.1007/BF02294299 (1999).
Truesdell, S. S. et al. MicroRNA-mediated mRNA translation activation in quiescent cells and oocytes involves recruitment of a nuclear microRNP. Sci. Rep. 2, 842 (2012).

Acknowledgements

We would like to thank the Ragon Institute and MGH Flow Cytometry cores for superb technical assistance. We would like to thank the Broad Institute Genetic Perturbation Platform for providing the AVANA-4 lentivirus library and the pXPR101 and pXPR_011-sgEGFP plasmids. This work was supported by the Broad Institute-Israel Science Foundation Partnership (NH, IG-V), NIH P50HG006193 (NH), a Wellcome Trust Intermediate Clinical Fellowship (103258/Z/13/Z), a Wellcome-Beit Prize (103258/Z/13/A), BBSRC Institute Strategic Programme Grants to the Roslin Institute (BBS/E/D/10002071, BBS/E/D/20002172, BBS/E/D/20002174), the UK Intensive Care Foundation, the Medical Research Council SHIELD grant (MR/N02995X/1), the Edinburgh Global Research Scholarship, and a Singapore Agency for Science, Technology and Research (A*STAR) National Science Scholarship.

Author affiliations

Harvard University Virology Program, Harvard Medical School, Boston, MA 02142, USA: Bing Shao Chia
Broad Institute of MIT and Harvard, 415 Main Street, Cambridge, MA 02142, USA: Bing Shao Chia, Ang Cui, Thomas Eisenhaure, Lawrence D. Schweitzer, Paul Hoover, Irit Gat-Viks, John G. Doench & Nir Hacohen
Roslin Institute, University of Edinburgh, Easter Bush, EH25 9RG, UK: Sara M. Clohisey, Bo Wang, Nicholas J. Parkinson, Nikki Smith, Tim Regan, David Farr, Andrew Law, Paul Digard & J. Kenneth Baillie
Harvard-MIT Health Sciences and Technology, Harvard Medical School, Boston, MA 02115, USA: Ang Cui
School of Molecular Cell Biology and Biotechnology, Department of Cell Research and Immunology, George S. Wise Faculty of Life Sciences, Tel Aviv University, Tel Aviv, Israel: Aharon Nachshon & Irit Gat-Viks
School of Informatics, University of Edinburgh, Edinburgh, EH8 9YL, UK: Michael U. Gutmann
Center for Cancer Research, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA: Syed Irfan Bukhari & Shobha Vasudevan
The Ragon Institute of Massachusetts General Hospital, MIT and Harvard University, Cambridge, MA, USA: Maya Sangesland & Daniel Lingwood
MRC Center for Inflammation Research, University of Edinburgh, Edinburgh, UK: David H. Dockrell
Intensive Care Unit, Royal Infirmary Edinburgh, Edinburgh, EH16 5SA, UK: J. Kenneth Baillie
Massachusetts General Hospital Cancer Center, Boston, MA 02129, USA: Nir Hacohen

Author contributions

B.L., S.M.C., J.K.B., and N.H. designed research; B.L., J.K.B., S.M.C., N.J.P., N.S., T.R., B.S.I., D.F., M.G., and M.S. conducted experiments; B.S.C., T.E., L.S., P.H., P.D., V.S., J.G.D., D.L., and D.D. contributed methods and reagents; J.K.B., A.C., and B.W. analyzed data; J.K.B. developed and implemented the meta-analysis; A.N., I.G-V., A.L. and M.G. advised on meta-analysis methodology; and B.L., J.K.B., and N.H. wrote the paper.

Correspondence to J. Kenneth Baillie or Nir Hacohen.

Peer review information: Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work.

Li, B., Clohisey, S.M., Chia, B.S. et al. Genome-wide CRISPR screen identifies host dependency factors for influenza A virus infection. Nat. Commun. 11, 164 (2020). https://doi.org/10.1038/s41467-019-13965-x
Multipliers of locally compact quantum groups via Hilbert C*-modules

Daws, M. (2010) Multipliers of locally compact quantum groups via Hilbert C*-modules. Journal of the London Mathematical Society, 84 (2). 385–407 (23). ISSN 0024-6107.

A result of Gilbert shows that every completely bounded multiplier $f$ of the Fourier algebra $A(G)$ arises from a pair of bounded continuous maps $\alpha,\beta:G \rightarrow K$, where $K$ is a Hilbert space, and $f(s^{-1}t) = (\beta(t)|\alpha(s))$ for all $s,t\in G$. We recast this in terms of adjointable operators acting between certain Hilbert C$^*$-modules, and show that an analogous construction works for completely bounded left multipliers of a locally compact quantum group. We find various ways to deal with right multipliers: one of these involves looking at the opposite quantum group, and this leads to a proof that the (unbounded) antipode acts on the space of completely bounded multipliers, in a way which interacts naturally with our representation result. The dual of the universal quantum group (in the sense of Kustermans) can be identified with a subalgebra of the completely bounded multipliers, and we show how this fits into our framework. Finally, this motivates a certain way to deal with two-sided multipliers.

(c) 2010, London Mathematical Society. This is an author produced version of a paper published in the Journal of the London Mathematical Society. Uploaded in accordance with the publisher's self-archiving policy.

Published: 1 April 2010. https://doi.org/10.1112/jlms/jdr013
Discrete and Continuous Dynamical Systems, 2020, Volume 40, Issue 10: 5869–5895. doi: 10.3934/dcds.2020250

Extended symmetry groups of multidimensional subshifts with hierarchical structure

Álvaro Bustos

Departamento de Ingeniería Matemática, Facultad de Ciencias Físicas y Matemáticas, Universidad de Chile, Beauchef 851 (Of. 415), 8370456 Santiago, Región Metropolitana, Chile

Received: September 30, 2019

The author is supported by ANID-PFCHA/Doctorado Nacional/2017-21171061 (formerly CONICYT). Please check the Acknowledgments section below for further details.

Abstract: The centralizer (automorphism group) and normalizer (extended symmetry group) of the shift action inside the group of self-homeomorphisms are studied, in the context of certain $ \mathbb{Z}^d $ subshifts with a hierarchical supertile structure, such as bijective substitutive subshifts and the Robinson tiling. Restrictions on these groups via geometrical considerations are used to characterize explicitly their structure: nontrivial extended symmetries can always be described via relabeling maps and rigid transformations of the Euclidean plane permuting the coordinate axes. The techniques used also carry over to the well-known Robinson tiling, both in its minimal and non-minimal versions.

Keywords: Symbolic dynamics, automorphism groups, extended symmetries, substitutions, Robinson tiling.

Mathematics Subject Classification: 37B10, 37B51, 20B27, 52C23.

Figure 1. An example of applying a rectangular substitution to a pattern

Figure 4. Points from the two-dimensional Thue-Morse substitution. The first two configurations correspond to (the central pattern of) two points $ x, y\in\mathsf{X}_{\theta_{\text TM}} $ matching exactly in one half-plane, as in Lemma 4.4. The third configuration is an "illegal" point $ z\in\mathsf{X}_{\theta_{\text TM}}^*\setminus\mathsf{X}_{\theta_{\text TM}} $ from the extended substitutive subshift. The associated seeds and substitution rule are shown below

Figure 2. $ 2^n\times 2^n $ grids associated with the iterates of a primitive substitution $ \theta $ in a point from a substitutive subshift. The corresponding substitution is indicated on the right

Figure 3. In the figure, we see how $ x|_{K_{\boldsymbol{p}}} = \theta^m(a) $ (for some $ a\in\mathcal{A} $) determines $ f(x)|_{K_{\boldsymbol{p}}^{\circ r}} $ and, in particular, $ f(x)|_{I_{\boldsymbol{p}}} $. Since the substitution is bijective, this forces $ f(x)|_{L_{\boldsymbol{p}}} $ to equal $ \theta^m(b) $ for some $ b\in\mathcal{A} $ which depends solely on $ a $

Figure 5. The situation in the proof of Lemma 4.6.
As the side length of the rectangles associated with the substitution increases exponentially, the inner product $ \langle\boldsymbol{v}, \boldsymbol{w}\rangle $ which determines whether $ \boldsymbol{w} $ belongs to $ S $ or $ S' $ (or neither) takes sufficiently many different (integer) values inside any of these rectangles to ensure that at least one such rectangle intersects both $ S $ and $ S' $

Figure 6. The five types of Robinson tiles, resulting in an alphabet of $ 28 $ symbols after applying all possible rotations and reflections. The third tile is usually called a cross

Figure 7. The formation of a second order supertile of size $ 3\times 3 $

Figure 8. A fragment of a point from the Robinson shift, distinguishing the four supertiles involved, the vertical and horizontal strips of tiles separating each supertile and the $ 2\mathbb{Z}\times 2\mathbb{Z} $ sublattice that contains only crosses. Note that the tiles in the vertical strip separating the supertiles are copies of the first tile of Figure 6 with the same orientation

Figure 9. Two possible ways in which the tiling from Figure 8 exhibits fracture-like behavior, resulting in valid points from $ X_{\text Rob} $

Figure 10. The substructure of a point of $ X_{\text Rob} $ in terms of $ n $-th order supertiles. Note how all supertiles overlap either $ S^+ $ or $ S^- $

Figure 11. How a shift by $ k_1\boldsymbol{q} $ makes the arrangement of supertiles in $ S^+ $ not match with the corresponding tiles in $ S^- $

Figure 12. The relabeling map $ \mathfrak{R} $ which replaces each tile with its corresponding rotation by $ \frac{1}{2}\pi $
Solomyak, Nonperiodicity implies unique composition for self-similar translationally finite tilings, Discrete Comput. Geom., 20 (1998), 265-279. doi: 10.1007/PL00009386. Access History Figures(12) HTML views(196) PDF downloads(202) Cited by(0) Other Articles By Authors Export File RIS(for EndNote,Reference Manager,ProCite) Citation and Abstract DownLoad: Full-Size Img PowerPoint An example of applying a rectangular substitution to a pattern Points from the two-dimensional Thue-Morse substitution. The first two configurations correspond to (the central pattern of) two points $ x, y\in\mathsf{X}_{\theta_{\text TM}} $ matching exactly in one half-plane, as in Lemma 4.4. The third configuration is an "illegal" point $ z\in\mathsf{X}_{\theta_{\text TM}}^*\setminus\mathsf{X}_{\theta_{\text TM}} $ from the extended substitutive subshift. The associated seeds and substitution rule are shown below $ 2^n\times 2^n $ grids associated with the iterates of a primitive substitution $ \theta $ in a point from a substitutive subshift. The corresponding substitution is indicated on the right In the figure, we see how $ x|_{K_{\boldsymbol{p}}} = \theta^m(a) $ (for some $ a\in\mathcal{A} $) determines $ f(x)|_{K_{\boldsymbol{p}}^{\circ r}} $ and, in particular, $ f(x)|_{I_{\boldsymbol{p}}} $. Since the substitution is bijective, this forces $ f(x)|_{L_{\boldsymbol{p}}} $ to equal $ \theta^m(b) $ for some $ b\in\mathcal{A} $ which depends solely on $ a $ The situation in the proof of Lemma 4.6. As the side length of the rectangles associated with the substitution increases exponentially, the inner product $ \langle\boldsymbol{v}, \boldsymbol{w}\rangle $ which determines whether $ \boldsymbol{w} $ belongs to $ S $ or $ S' $ (or neither) takes sufficiently many different (integer) values inside any of these rectangles to ensure that at least one such rectangle intersects both $ S $ and $ S' $ The five types of Robinson tiles, resulting in an alphabet of $ 28 $ symbols after applying all possible rotations and reflections. The third tile is usually called a cross The formation of a second order supertile of size $ 3\times 3 $ A fragment of a point from the Robinson shift, distinguishing the four supertiles involved, the vertical and horizontal strips of tiles separating each supertile and the $ 2\mathbb{Z}\times 2\mathbb{Z} $ sublattice that contains only crosses. Note that the tiles in the vertical strip separating the supertiles are copies of the first tile of Figure 6 with the same orientation Two possible ways in which the tiling from Figure 8 exhibits fracture-like behavior, resulting in valid points from $ X_{\text Rob} $ The substructure of a point of $ X_{\text Rob} $ in terms of $ n $-th order supertiles. Note how all supertiles overlap either $ S^+ $ or $ S^- $ How a shift by $ k_1\boldsymbol{q} $ makes the arrangement of supertiles in $ S^+ $ not match with the corresponding tiles in $ S^- $ The relabeling map $ \mathfrak{R} $ which replaces each tile with its corresponding rotation by $ \frac{1}{2}\pi $ Site map Copyright © 2023 American Institute of Mathematical Sciences
Active Calculus, by Matthew Boelkins

Section 1.2 The notion of limit

Motivating Questions. What is the mathematical notion of limit and what role do limits play in the study of functions? What is the meaning of the notation \(\lim_{x \to a} f(x) = L\text{?}\) How do we go about determining the value of the limit of a function at a point? How do we manipulate average velocity to compute instantaneous velocity?

In Section 1.1 we used a function, \(s(t)\text{,}\) to model the location of a moving object at a given time. Functions can model other interesting phenomena, such as the rate at which an automobile consumes gasoline at a given velocity, or the reaction of a patient to a given dosage of a drug. We can use calculus to study how a function value changes in response to changes in the input variable.
Think about the falling ball whose position function is given by \(s(t) = 64 - 16t^2\text{.}\) Its average velocity on the interval \([1,x]\) is given by \begin{equation*} AV_{[1,x]} = \frac{s(x) - s(1)}{x-1} = \frac{(64-16x^2) - (64-16)}{x-1} = \frac{16 - 16x^2}{x-1}\text{.} \end{equation*} Note that the average velocity is a function of \(x\text{.}\) That is, the function \(g(x) = \frac{16 - 16x^2}{x-1}\) tells us the average velocity of the ball on the interval from \(t = 1\) to \(t = x\text{.}\) To find the instantaneous velocity of the ball when \(t = 1\text{,}\) we need to know what happens to \(g(x)\) as \(x\) gets closer and closer to \(1\text{.}\) But also notice that \(g(1)\) is not defined, because it leads to the quotient \(0/0\text{.}\)

This is where the notion of a limit comes in. By using a limit, we can investigate the behavior of \(g(x)\) as \(x\) gets arbitrarily close, but not equal, to \(1\text{.}\) We first use the graph of a function to explore points where interesting behavior occurs.

Preview Activity 1.2.1. Suppose that \(g\) is the function given by the graph below. Use the graph in Figure 1.2.1 to answer each of the following questions.
Determine the values \(g(-2)\text{,}\) \(g(-1)\text{,}\) \(g(0)\text{,}\) \(g(1)\text{,}\) and \(g(2)\text{,}\) if defined. If the function value is not defined, explain what feature of the graph tells you this.
For each of the values \(a = -1\text{,}\) \(a = 0\text{,}\) and \(a = 2\text{,}\) complete the following sentence: "As \(x\) gets closer and closer (but not equal) to \(a\text{,}\) \(g(x)\) gets as close as we want to ."
What happens as \(x\) gets closer and closer (but not equal) to \(a = 1\text{?}\) Does the function \(g(x)\) get as close as we would like to a single value?

Figure 1.2.1. Graph of \(y = g(x)\) for Preview Activity 1.2.1.

Subsection 1.2.1 The Notion of Limit

Limits give us a way to identify a trend in the values of a function as its input variable approaches a particular value of interest. We need a precise understanding of what it means to say "a function \(f\) has limit \(L\) as \(x\) approaches \(a\text{.}\)" To begin, think about a recent example. In Preview Activity 1.2.1, we saw that as \(x\) gets closer and closer (but not equal) to 0, \(g(x)\) gets as close as we want to the value 4. At first, this may feel counterintuitive, because the value of \(g(0)\) is \(1\text{,}\) not \(4\text{.}\) But limits describe the behavior of a function arbitrarily close to a fixed input, and the value of the function at the fixed input does not matter. More formally,¹ we say the following.

Definition 1.2.2. Given a function \(f\text{,}\) a fixed input \(x = a\text{,}\) and a real number \(L\text{,}\) we say that \(f\) has limit \(L\) as \(x\) approaches \(a\), and write \begin{equation*} \lim_{x \to a} f(x) = L \end{equation*} provided that we can make \(f(x)\) as close to \(L\) as we like by taking \(x\) sufficiently close (but not equal) to \(a\text{.}\) If we cannot make \(f(x)\) as close to a single value as we would like as \(x\) approaches \(a\text{,}\) then we say that \(f\) does not have a limit as \(x\) approaches \(a\text{.}\)

Example 1.2.3. For the function \(g\) pictured in Figure 1.2.1, we make the following observations: \begin{equation*} \lim_{x \to -1} g(x) = 3, \ \lim_{x \to 0} g(x) = 4, \ \text{and} \ \lim_{x \to 2} g(x) = 1\text{.} \end{equation*} When working from a graph, it suffices to ask if the function approaches a single value from each side of the fixed input.
The function value at the fixed input is irrelevant. This reasoning explains the values of the three limits stated above. However, \(g\) does not have a limit as \(x \to 1\text{.}\) There is a jump in the graph at \(x = 1\text{.}\) If we approach \(x = 1\) from the left, the function values tend to get close to 3, but if we approach \(x = 1\) from the right, the function values get close to 2. There is no single number that all of these function values approach. This is why the limit of \(g\) does not exist at \(x = 1\text{.}\)

For any function \(f\text{,}\) there are typically three ways to answer the question "does \(f\) have a limit at \(x = a\text{,}\) and if so, what is the limit?" The first is to reason graphically as we have just done with the example from Preview Activity 1.2.1. If we have a formula for \(f(x)\text{,}\) there are two additional possibilities:
Evaluate the function at a sequence of inputs that approach \(a\) on either side (typically using some sort of computing technology), and ask if the sequence of outputs seems to approach a single value.
Use the algebraic form of the function to understand the trend in its output values as the input values approach \(a\text{.}\)
The first approach produces only an approximation of the value of the limit, while the latter can often be used to determine the limit exactly.

Example 1.2.4. Limits of Two Functions. For each of the following functions, we'd like to know whether or not the function has a limit at the stated \(a\)-values. Use both numerical and algebraic approaches to investigate and, if possible, estimate or determine the value of the limit. Compare the results with a careful graph of the function on an interval containing the points of interest.
\(f(x) = \frac{4-x^2}{x+2}\text{;}\) \(a = -1\text{,}\) \(a = -2\)
\(g(x) = \sin\left(\frac{\pi}{x}\right)\text{;}\) \(a = 3\text{,}\) \(a = 0\)

a. We first construct a graph of \(f\) along with tables of values near \(a = -1\) and \(a = -2\text{.}\)

Table 1.2.5. Table of \(f\) values near \(x=-1\text{.}\)
\(x\)         \(f(x)\)
\(-0.9\)      \(2.9\)
\(-0.99\)     \(2.99\)
\(-0.999\)    \(2.999\)
\(-0.9999\)   \(2.9999\)

Figure 1.2.7. Plot of \(f(x)\) on \([-4,2]\text{.}\)

From Table 1.2.5, it appears that we can make \(f\) as close as we want to 3 by taking \(x\) sufficiently close to \(-1\text{,}\) which suggests that \(\lim_{x \to -1} f(x) = 3\text{.}\) This is also consistent with the graph of \(f\text{.}\) To see this a bit more rigorously and from an algebraic point of view, consider the formula for \(f\text{:}\) \(f(x) = \frac{4-x^2}{x+2}\text{.}\) As \(x \to -1\text{,}\) \((4-x^2) \to (4 - (-1)^2) = 3\text{,}\) and \((x+2) \to (-1 + 2) = 1\text{,}\) so as \(x \to -1\text{,}\) the numerator of \(f\) tends to 3 and the denominator tends to 1, hence \(\lim_{x \to -1} f(x) = \frac{3}{1} = 3\text{.}\)

The situation is more complicated when \(x \to -2\text{,}\) because \(f(-2)\) is not defined. If we try to use a similar algebraic argument regarding the numerator and denominator, we observe that as \(x \to -2\text{,}\) \((4-x^2) \to (4 - (-2)^2) = 0\text{,}\) and \((x+2) \to (-2 + 2) = 0\text{,}\) so as \(x \to -2\text{,}\) the numerator and denominator of \(f\) both tend to 0. We call \(0/0\) an indeterminate form. This tells us that there is somehow more work to do.
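Before simplifying algebraically, the numerical approach described above can be carried out directly. The following short sketch is illustrative only; the function name f is ours, and any language with floating-point arithmetic would do.

```python
# Evaluate f(x) = (4 - x^2)/(x + 2) at inputs approaching -2 from the left,
# mirroring the table-of-values strategy described above.
def f(x):
    return (4 - x**2) / (x + 2)

for x in [-1.9, -1.99, -1.999, -1.9999]:
    print(x, f(x))
# The outputs are 3.9, 3.99, 3.999, 3.9999 (up to floating point error),
# suggesting that f(x) approaches 4 as x -> -2, in line with the algebra below.
```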
From Table 1.2.6 and Figure 1.2.7, it appears that \(f\) should have a limit of \(4\) at \(x = -2\text{.}\) To see algebraically why this is the case, observe that \begin{align*} \lim_{x \to -2} f(x) &= \lim_{x \to -2} \frac{4-x^2}{x+2}\\ &= \lim_{x \to -2} \frac{(2-x)(2+x)}{x+2}\text{.} \end{align*} It is important to observe that, since we are taking the limit as \(x \to -2\text{,}\) we are considering \(x\) values that are close, but not equal, to \(-2\text{.}\) Because we never actually allow \(x\) to equal \(-2\text{,}\) the quotient \(\frac{2+x}{x+2}\) has value 1 for every possible value of \(x\text{.}\) Thus, we can simplify the most recent expression above, and find that \begin{equation*} \lim_{x \to -2} f(x) = \lim_{x \to -2} 2-x\text{.} \end{equation*} This limit is now easy to determine, and its value clearly is \(4\text{.}\) Thus, from several points of view we've seen that \(\lim_{x \to -2} f(x) = 4\text{.}\)

b. Next we turn to the function \(g\text{,}\) and construct two tables and a graph.

Table 1.2.8. Table of \(g\) values near \(x=3\text{.}\)
\(x\)         \(g(x)\)
\(2.9\)       \(0.88351\)
\(2.99\)      \(0.86777\)
\(2.999\)     \(0.86620\)
\(2.9999\)    \(0.86604\)

Table 1.2.9. Table of \(g\) values near \(x=0\text{.}\)
\(x\)         \(g(x)\)
\(-0.1\)      \(0\)
\(-0.01\)     \(0\)
\(-0.001\)    \(0\)
\(-0.0001\)   \(0\)
\(0.1\)       \(0\)
\(0.01\)      \(0\)
\(0.001\)     \(0\)
\(0.0001\)    \(0\)

Figure 1.2.10. Plot of \(g(x)\) on \([-4,4]\text{.}\)

First, as \(x \to 3\text{,}\) it appears from the table values that the function is approaching a number between \(0.86601\) and \(0.86604\text{.}\) From the graph it appears that \(g(x) \to g(3)\) as \(x \to 3\text{.}\) The exact value of \(g(3) = \sin(\frac{\pi}{3})\) is \(\frac{\sqrt{3}}{2}\text{,}\) which is approximately 0.8660254038. This is convincing evidence that \begin{equation*} \lim_{x \to 3} g(x) = \frac{\sqrt{3}}{2}\text{.} \end{equation*} As \(x \to 0\text{,}\) we observe that \(\frac{\pi}{x}\) does not behave in an elementary way. When \(x\) is positive and approaching zero, we are dividing by smaller and smaller positive values, and \(\frac{\pi}{x}\) increases without bound. When \(x\) is negative and approaching zero, \(\frac{\pi}{x}\) decreases without bound.
In this sense, as we get close to \(x = 0\text{,}\) the inputs to the sine function are growing rapidly in magnitude, and this leads to increasingly rapid oscillations in the graph of \(g\) between \(1\) and \(-1\text{.}\) If we plot the function \(g(x) = \sin\left(\frac{\pi}{x}\right)\) with a graphing utility and then zoom in on \(x = 0\text{,}\) we see that the function never settles down to a single value near the origin, which suggests that \(g\) does not have a limit at \(x = 0\text{.}\)

How do we reconcile the graph with the right-hand table above, which seems to suggest that the limit of \(g\) as \(x\) approaches \(0\) may in fact be \(0\text{?}\) The data misleads us because of the special nature of the sequence of input values \(\{0.1, 0.01, 0.001, \ldots\}\text{.}\) When we evaluate \(g(10^{-k})\text{,}\) we get \(g(10^{-k}) = \sin\left(\frac{\pi}{10^{-k}}\right) = \sin(10^k \pi) = 0\) for each positive integer value of \(k\text{.}\) But if we take a different sequence of values approaching zero, say \(\{0.3, 0.03, 0.003, \ldots\}\text{,}\) then we find that \begin{equation*} g(3 \cdot 10^{-k}) = \sin\left(\frac{\pi}{3 \cdot 10^{-k}}\right) = \sin\left(\frac{10^k \pi}{3}\right) = -\frac{\sqrt{3}}{2} \approx -0.866025\text{,} \end{equation*} since for each positive integer \(k\) we have \(10^k \equiv 4 \pmod{6}\text{,}\) so the angle \(\frac{10^k \pi}{3}\) is coterminal with \(\frac{4\pi}{3}\text{.}\) That sequence of function values suggests that the value of the limit is \(-\frac{\sqrt{3}}{2}\text{.}\) Clearly the function cannot have two different values for the limit, so \(g\) has no limit as \(x \to 0\text{.}\)

An important lesson to take from Example 1.2.4 is that tables can be misleading when determining the value of a limit. While a table of values is useful for investigating the possible value of a limit, we should also use other tools to confirm the value.

Activity 1.2.2. Estimate the value of each of the following limits by constructing appropriate tables of values. Then determine the exact value of the limit by using algebra to simplify the function. Finally, plot each function on an appropriate interval to check your result visually.
\(\displaystyle \lim_{x \to 1} \frac{x^2 - 1}{x-1}\)
\(\displaystyle \lim_{x \to 0} \frac{(2+x)^3 - 8}{x}\)
\(\displaystyle \lim_{x \to 0} \frac{\sqrt{x+1} - 1}{x}\)

Recall that our primary motivation for considering limits of functions comes from our interest in studying the rate of change of a function. To that end, we close this section by revisiting our previous work with average and instantaneous velocity and highlighting the role that limits play.

Subsection 1.2.2 Instantaneous Velocity

Suppose that we have a moving object whose position at time \(t\) is given by a function \(s\text{.}\) We know that the average velocity of the object on the time interval \([a,b]\) is \(AV_{[a,b]} = \frac{s(b)-s(a)}{b-a}\text{.}\) We define the instantaneous velocity at \(a\) to be the limit of average velocity as \(b\) approaches \(a\text{.}\) Note particularly that as \(b \to a\text{,}\) the length of the time interval gets shorter and shorter (while always including \(a\)).
We will write \(IV_{t=a}\) for the instantaneous velocity at \(t = a\text{,}\) and thus \begin{equation*} IV_{t=a} = \lim_{b \to a} AV_{[a,b]} = \lim_{b \to a} \frac{s(b)-s(a)}{b-a}\text{.} \end{equation*} Equivalently, if we think of the changing value \(b\) as being of the form \(b = a + h\text{,}\) where \(h\) is some small number, then we may instead write \begin{equation*} IV_{t=a} = \lim_{h \to 0} AV_{[a,a+h]} = \lim_{h \to 0} \frac{s(a+h)-s(a)}{h}\text{.} \end{equation*} Again, the most important idea here is that to compute instantaneous velocity, we take a limit of average velocities as the time interval shrinks.

Activity 1.2.3. Consider a moving object whose position function is given by \(s(t) = t^2\text{,}\) where \(s\) is measured in meters and \(t\) is measured in minutes.
Determine the most simplified expression for the average velocity of the object on the interval \([3, 3+h]\text{,}\) where \(h > 0\text{.}\)
Determine the average velocity of the object on the interval \([3,3.2]\text{.}\) Include units on your answer.
Determine the instantaneous velocity of the object when \(t = 3\text{.}\) Include units on your answer.

The closing activity of this section asks you to make some connections among average velocity, instantaneous velocity, and slopes of certain lines.

Activity 1.2.4. For the moving object whose position \(s\) at time \(t\) is given by the graph in Figure 1.2.11, answer each of the following questions. Assume that \(s\) is measured in feet and \(t\) is measured in seconds.

Figure 1.2.11. Plot of the position function \(y = s(t)\) in Activity 1.2.4.

Use the graph to estimate the average velocity of the object on each of the following intervals: \([0.5,1]\text{,}\) \([1.5,2.5]\text{,}\) \([0,5]\text{.}\) Draw each line whose slope represents the average velocity you seek.
How could you use average velocities or slopes of lines to estimate the instantaneous velocity of the object at a fixed time?
Use the graph to estimate the instantaneous velocity of the object when \(t = 2\text{.}\) Should this instantaneous velocity at \(t = 2\) be greater or less than the average velocity on \([1.5,2.5]\) that you computed in (a)? Why?

Subsection 1.2.3 Summary

Limits enable us to examine trends in function behavior near a specific point. In particular, taking a limit at a given point asks if the function values nearby tend to approach a particular fixed value.
We read \(\lim_{x \to a} f(x) = L\text{,}\) as "the limit of \(f\) as \(x\) approaches \(a\) is \(L\text{,}\)" which means that we can make the value of \(f(x)\) as close to \(L\) as we want by taking \(x\) sufficiently close (but not equal) to \(a\text{.}\)
To find \(\lim_{x \to a} f(x)\) for a given value of \(a\) and a known function \(f\text{,}\) we can estimate this value from the graph of \(f\text{,}\) or we can make a table of function values for \(x\)-values that are closer and closer to \(a\text{.}\) If we want the exact value of the limit, we can work with the function algebraically to understand how different parts of the formula for \(f\) change as \(x \to a\text{.}\)
We find the instantaneous velocity of a moving object at a fixed time by taking the limit of average velocities of the object over shorter and shorter time intervals containing the time of interest.

1.2.4 Exercises

1. Limits on a piecewise graph. Use the figure below, which gives a graph of the function \(f(x)\text{,}\) to give values for the indicated limits. If a limit does not exist, enter none.
(a) \(\lim\limits_{x \rightarrow -1} f(x)\) =
(b) \(\lim\limits_{x \rightarrow 0} f(x)\) =
(c) \(\lim\limits_{x \rightarrow 1} f(x)\) =
(d) \(\lim\limits_{x \rightarrow 4} f(x)\) =

2. Estimating a limit numerically. Use a graph to estimate the limit \begin{equation*} \lim_{\theta \rightarrow 0} \frac{\sin(6\theta)}{\theta}. \end{equation*} Note: \(\theta\) is measured in radians. All angles will be in radians in this class unless otherwise specified. \(\lim\limits_{\theta \rightarrow 0} \frac{\sin(6\theta)}{\theta} =\)

3. Limits for a piecewise formula. For the function \begin{equation*} f(x) = \begin{cases} x^2 - 4, & 0\le x < 4\\ 4, & x = 4\\ 3x, & 4 < x \end{cases} \end{equation*} use algebra to find each of the following limits:
\(\lim\limits_{x\to 4^{+}}\, f(x) =\)
\(\lim\limits_{x\to 4^{-}}\, f(x) =\)
\(\lim\limits_{x\to 4}\, f(x) =\)
(For each, enter DNE if the limit does not exist.) Sketch a graph of \(f(x)\) to confirm your answers.

4. Evaluating a limit algebraically. Evaluate the limit \begin{equation*} \lim_{ x \rightarrow -7 } \frac{x^2 - 49}{x + 7} \end{equation*} If the limit does not exist enter DNE. Limit =

Consider the function whose formula is \(f(x) = \frac{16-x^4}{x^2-4}\text{.}\)
What is the domain of \(f\text{?}\)
Use a sequence of values of \(x\) near \(a = 2\) to estimate the value of \(\lim_{x \to 2} f(x)\text{,}\) if you think the limit exists. If you think the limit doesn't exist, explain why.
Use algebra to simplify the expression \(\frac{16-x^4}{x^2-4}\) and hence work to evaluate \(\lim_{x \to 2} f(x)\) exactly, if it exists, or to explain how your work shows the limit fails to exist. Discuss how your findings compare to your results in (b).
True or false: \(f(2) = -8\text{.}\) Why?
True or false: \(\frac{16-x^4}{x^2-4} = -4-x^2\text{.}\) Why? How is this equality connected to your work above with the function \(f\text{?}\)
Based on all of your work above, construct an accurate, labeled graph of \(y = f(x)\) on the interval \([1,3]\text{,}\) and write a sentence that explains what you now know about \(\lim_{x \to 2} \frac{16-x^4}{x^2-4}\text{.}\)

Let \(g(x) = -\frac{|x+3|}{x+3}\text{.}\)
What is the domain of \(g\text{?}\)
Use a sequence of values near \(a = -3\) to estimate the value of \(\lim_{x \to -3} g(x)\text{,}\) if you think the limit exists. If you think the limit doesn't exist, explain why.
Use algebra to simplify the expression \(\frac{|x+3|}{x+3}\) and hence work to evaluate \(\lim_{x \to -3} g(x)\) exactly, if it exists, or to explain how your work shows the limit fails to exist. Discuss how your findings compare to your results in (b). (Hint: \(|a| = a\) whenever \(a \ge 0\text{,}\) but \(|a| = -a\) whenever \(a < 0\text{.}\))
True or false: \(g(-3) = -1\text{.}\) Why?
True or false: \(-\frac{|x+3|}{x+3} = -1\text{.}\) Why? How is this equality connected to your work above with the function \(g\text{?}\)
Based on all of your work above, construct an accurate, labeled graph of \(y = g(x)\) on the interval \([-4,-2]\text{,}\) and write a sentence that explains what you now know about \(\lim_{x \to -3} g(x)\text{.}\)

For each of the following prompts, sketch a graph on the provided axes of a function that has the stated properties.

Figure 1.2.12. Axes for plotting \(y = f(x)\) in (a) and \(y = g(x)\) in (b).

(a) \(y = f(x)\) such that
\(f(-2) = 2\) and \(\lim_{x \to -2} f(x) = 1\)
\(f(1)\) is not defined and \(\lim_{x \to 1} f(x) = 0\)
\(f(2) = 1\) and \(\lim_{x \to 2} f(x)\) does not exist.
\(y = g(x)\) such that \(g(-2) = 3\text{,}\) \(g(-1) = -1\text{,}\) \(g(1) = -2\text{,}\) and \(g(2) = 3\) At \(x = -2, -1, 1\) and \(2\text{,}\) \(g\) has a limit, and its limit equals the value of the function at that point. \(g(0)\) is not defined and \(\lim_{x \to 0} g(x)\) does not exist. A bungee jumper dives from a tower at time \(t=0\text{.}\) Her height \(s\) in feet at time \(t\) in seconds is given by \(s(t) = 100\cos(0.75t) \cdot e^{-0.2t}+100\text{.}\) Write an expression for the average velocity of the bungee jumper on the interval \([1,1+h]\text{.}\) Use computing technology to estimate the value of the limit as \(h \to 0\) of the quantity you found in (a). What is the meaning of the value of the limit in (b)? What are its units? What follows here is not what mathematicians consider the formal definition of a limit. To be completely precise, it is necessary to quantify both what it means to say "as close to \(L\) as we like" and "sufficiently close to \(a\text{.}\)" That can be accomplished through what is traditionally called the epsilon-delta definition of limits. The definition presented here is sufficient for the purposes of this text.
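For part (b) of the bungee jumper exercise above, "computing technology" can be as simple as the following sketch; it is illustrative only, and the function names s and AV are ours.

```python
import math

def s(t):
    # Bungee jumper's height in feet at time t in seconds.
    return 100 * math.cos(0.75 * t) * math.exp(-0.2 * t) + 100

def AV(h):
    # Average velocity on the interval [1, 1 + h].
    return (s(1 + h) - s(1)) / h

for h in [0.1, 0.01, 0.001, 0.0001]:
    print(h, AV(h))
# The printed values stabilize near -53.8, suggesting an instantaneous
# velocity of about -53.8 feet per second at t = 1.
```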
Existence of a function

I came across this question: Does there exist a function $f: \mathbb{R}^+ \to \mathbb{R}^+$ such that $f(x+y)>f(x)(1+yf(x))$ for all $x,y\in \mathbb{R}^+$? I didn't know how to begin on it.

functional-equations | edited Aug 1 '11 at 5:44 | picakhu

try $y=0$ in the equation... – marwalix Aug 1 '11 at 5:32
@marwalix, please read the question carefully. – picakhu Aug 1 '11 at 5:34
@picakhu, I think marwalix's point is that if you substitute y=0 into the inequality it becomes f(x) > f(x) which is a contradiction, so that there can be no such function... – Ben Blum-Smith Aug 1 '11 at 5:39
you can try $y=0$ because $y$ does not necessarily belong to the domain of $f$. $x+0$ belongs to the domain whenever $x$ belongs to the domain and in the right hand term you only see $f(x)$. I read the question very carefully – marwalix Aug 1 '11 at 5:40
@Srivatsan: it seems all you can show is that if such f exists, it cannot be differentiable, right? We can tell that, at least within each integer unit, f is increasing: $f(x+1)>f(x)(1+f(x))$, so that $f(x+1)-f(x)>f(x)^2>0$, and f is then increasing in $[n,n+1]$, for all $n$ – gary Aug 1 '11 at 5:56

Here's a slightly different approach. Suppose for contradiction that such an $f$ exists. In fact we will reach the same contradiction even if $f$ only satisfies $f(x+y) \geq f(x)(1+yf(x))$ for $x,y>0$. Define a sequence $x_0 < x_1 < x_2 < \ldots$ via the recurrence $$x_0 := 2011 \text{ and } x_{n+1} := x_n + 1/f(x_n) \text{ for } n>0.$$ Making repeated use of the bound $f(x + 1/f(x)) \geq f(x)(1+f(x)/f(x)) = 2f(x)$, we find that $f(x_n) \geq 2^n f(x_0)$ for all $n$. Taking the reciprocal of the latter inequality and summing a geometric series enables us to establish $$ x_n = x_0 + \sum_{i=0}^{n-1} (x_{i+1} - x_i) = x_0 + \sum_{i=0}^{n-1} 1/f(x_i) \leq x_0 + \sum_{i=0}^{n-1} 1/(2^i f(x_0)) < x_0 + 2/f(x_0) =: M.$$ But now, the sequence $x_0,x_1,\ldots$ is bounded (by $M$) while the sequence $f(x_0),f(x_1),\ldots$ diverges to infinity. It follows that there is some integer $N$ such that $f(x_N) > f(M)$ even though $x_N < M$. However, it has been observed such an $f$ must be monotone, so we have a contradiction. – Mike F

Much more succinct, nice! (I wish I could upvote this; apparently I exhausted my votes for today :-() – Srivatsan Aug 1 '11 at 15:03
@Mike, how does one come up with such an elegant solution? What is the motivation to try something like $x_{n+1}=x_n+1/f(x_n)$? – picakhu Aug 1 '11 at 15:31
@picakhu Indeed. While my proof divides up the interval into $\epsilon$-sized intervals (so that one can approximate a finite Riemann sum), the idea of using adaptive increments $x \leftarrow x+1/f(x)$ has vastly improved the proof. It makes sense in retrospect, but it's a beautiful idea. – Srivatsan Aug 1 '11 at 17:31
@Srivatsan, your proof seems to be utilizing some traditional ideas. i.e. consider small $\epsilon$, and then you bump into a recursion which you manage to play with and tweak. However to realize that one can find a series $x_0,x_1,...$ such that $f(x_i)\geq g(i) f(x_0)$ has to be the crux. How one gets to that eludes me. Also, note that its a nice telescope that Mike saw would be useful. – picakhu Aug 1 '11 at 17:36
@picakhu: I'm glad you're happy with it!
It occurred to me late at night, a mosquito had woken me up >:[, that the inequality could be interpreted as saying "increasing $x$ by $y$ results in multiplying the value of $f$ at $x$ by at least $1+yf(x)$". Then I just worked out what $y$ would be needed if one wished to at least double the value, and the rest was easy! – Mike F Aug 2 '11 at 21:36

There exists no such $f$. My solution is formally not based on analysis, but it is inspired by my analysis "solution" in the comments. Hopefully there's no bug here :-).

Pick a sufficiently small $\epsilon > 0$. Then, for $n \in \mathbb N_{0}$, we have: $$ f(1+(n+1)\epsilon) - f(1+n\epsilon) > \epsilon f(1+ n\epsilon)^2. $$ I find it easier to work with a related sequence $u_n$ defined as $u_n = \epsilon f(1+n\epsilon)$. Plugging this in the above equation, we get the pleasant recursive inequality $u_{n+1} > u_n + u_n^2$, with the base condition $u_0 = \epsilon f(1)$.

Proposition. If $u_k \geq \eta$, then $u_{k + \lceil 1/\eta \rceil} \geq 2 \eta$.
Proof. We have $u_{k+1} - u_k > \eta^2$, $u_{k+2}-u_{k+1} > \eta^2$, and so on. (I am implicitly using the fact that $u_n$ is monotonic.) Adding all these $\lceil 1/\eta \rceil$ inequalities, we get the claim. $\Box$

Number of iterations before reaching $1$. For $\eta \leq 1$, we have the simplification $\lceil 1/\eta \rceil \leq 2/\eta$. Also I find it convenient to use the informal language "$u$-value at iteration $i$" to denote $u_i$. The above observation says that if the $u$-value at the current iteration is at least $\eta \leq 1$, then after at most $2/\eta$ iterations, the $u$-value becomes at least $2\eta$. Now, if $2\eta \leq 1$, then we stop; else, after an additional $2/(2\eta)$ iterations, the $u$-value is at least $2^2\eta$. Using this argument repeatedly, the number of iterations before the $u$-value becomes at least $1$ is at most $$ \frac{2}{\eta} + \frac{2}{2\eta} + \frac{2}{2^2 \eta} + \ldots $$ (this is actually a finite sum with $\approx \log(1/\eta)$ terms), which is smaller than the sum of the corresponding infinite geometric series, namely $4/\eta$.

Final contradiction. Now, since $u_0 = \epsilon f(1) \stackrel{def}{=} \eta$, we have $$ 1 \leq u_{\frac{4}{\epsilon f(1)}} = \epsilon \cdot f\left(1+\epsilon \cdot \frac{4}{\epsilon f(1)} \right) = \epsilon \cdot f\left(1+\frac{4}{f(1)} \right). $$ This is a contradiction since the $\epsilon > 0$ in the above argument was arbitrary, whereas the second factor is a fixed positive quantity depending only on $f$.

Note. While no $f$ exists satisfying the requirements of the problem, we can do better if we relax the domain to some interval of the form $(0,A)$. In this case, just the function $f(x) = \frac{1}{A-x}$ works. – Srivatsan

Looks good, I was working on something similar. I think you're missing an $n$ in your first inequality? – Mike F Aug 1 '11 at 7:40
@Mike corrected, thanks! – Srivatsan Aug 1 '11 at 7:45

Another approach: Rewrite the inequality as $(*): f(x+y)-f(x)>yf(x)^2$. So we see that $f$ is a strictly increasing function. Fix $x$ and let $y\rightarrow+\infty$, we also see that $f$ is unbounded. Hence we can pick a number $a$ such that $f(a)>1$. Now, for any $x\ge 0$, define $g(x)=f(x+a)-f(a)$. Clearly $g(0)=0$ and $g$ is strictly increasing. Also, it is easy to show that $(**): g(x+y) - g(x)>yg(x)^2$ whenever $x\ge 0$ and $y>0$. So, the inequalities for $f$ and $g$ are similar, except that we allow $x=0$ in the inequality for $g$.
However, the function $g$ has a nicer property -- by putting $x=a$ in $(*)$, we have $g(y)>y$ for all $y>0$. Let $b>0$ and $h=b/n$. By $(**)$, we get $g(ih)-g((i-1)h)>hg((i-1)h)^2$. Summing up for $i=1,2,...,n$ and letting $h\rightarrow 0$, we see that $g(b) >\int_0^b g(u)^2 du$. As $g$ is an increasing function, the Riemann integral exists. Now, applying the inequality $g(u)>u$ to the integrand, we get $$g(b)>\int_0^b g(u)^2 du > \int_0^b u^2 du = \frac{b^3}{3}.$$ Applying this to the integrand, we get $$g(b)>\int_0^b g(u)^2 du > \int_0^b \left(\frac{u^3}{3}\right)^2 du = \frac{b^7}{3^2 \times 7}.$$ Continuing in this manner, we get $$ g(b) > \frac{b^3}{3},\ \frac{b^7}{3^2 \times 7},\ \frac{b^{15}}{3^4 \times 7^2 \times 15}, ... $$ The $n$-th term of the above sequence is of order $O(b^{2^{n+1}}/2^{n^3})$. Therefore, when $b$ is sufficiently large, the value of $g(b)$ will blow up, i.e. $f(a+b)$ does not exist.

The final step above suggests that when the domain of $f$ is restricted to a small neighbourhood of zero, $g$ may not blow up [edit: or even the neighbourhood is so small that $a$ and hence $g$ are not defined] and hence a solution for $f$ can exist. For example, if $f$ is only defined on some $(0, b)$, we may take $f(x)=e^{\alpha x}$ as a solution provided that $b$ is small and $\alpha > e^{\alpha b}$.

A nice proof! Besides you seem to have uncovered another $f(x)$ that works for $(0, b)$ for small $b$. Wonder where this function is hiding in the other proofs. – Srivatsan Aug 1 '11 at 17:07
I'd noticed something somewhat relevant. If $f$ satisfies the given condition, then for $x,y >0$ and $n$ a positive integer we have $f(x+y)/f(x)=\prod_{k=0}^{n-1}f(x+\frac{(k+1)y}{n})/f(x+\frac{ky}{n})>\prod_{i=0}^{n-1} (1+\frac{y}{n}f(x+\frac{ky}{n}))\geq(1+\frac{yf(x)}{n})^n$. Taking $n \to \infty$ gives $f(x+y)\geq f(x)e^{f(x)y}$. – Mike F Aug 2 '11 at 21:25
Supposing $f$ is continuous at $y$ and taking $x\to 0$ from above shows $f(y) \geq A e^{Ay}$ where $A := \lim_{x \to 0^+} f(x)$. We can drop the assumption that $f$ is continuous at $y$ by considering an increasing sequence of points $y_n$ at which $f$ is continuous with $y_n \to y$. This is of course possible since $f$ is monotone and so has at most countably many discontinuities. The conclusion is that $f(x) \geq Ae^{Ax}$ holds for all $x>0$ so the "partial examples" in your answer are minimal in some sense. – Mike F Aug 2 '11 at 21:29
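The iteration bound used in the second answer above is easy to check numerically. The sketch below is illustrative only (the function name steps_to_one is our own); it iterates the recurrence $u_{n+1} = u_n + u_n^2$ and compares the number of steps needed to reach $1$ with the $4/\eta$ bound:

```python
def steps_to_one(u0):
    # Iterate u <- u + u^2 until u >= 1 and count the steps.
    u, n = u0, 0
    while u < 1:
        u += u * u
        n += 1
    return n

for u0 in [0.1, 0.01, 0.001]:
    print(u0, steps_to_one(u0), "bound:", 4 / u0)
# In each case the step count stays well below 4/u0, consistent with
# the "number of iterations before reaching 1" estimate above.
```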
Grothendieck-Teichmüller Groups, Deformation and Operads

[GDOW02] Higher Structure 2013: Operads and Deformation Theory

A Dold-Kan-type correspondence for superalgebras of differentiable functions and a "differential graded" approach to derived differential geometry

D. Roytenberg (Universiteit Utrecht), Tuesday 2nd April 2013, 15:00 to 16:00

The commutative algebra appropriate for differential geometry is provided by the algebraic theory of $C^\infty$-algebras -- an enhancement of the theory of commutative algebras which admits all $C^\infty$ functions of $n$ variables (rather than just the polynomials) as its $n$-ary operations. Derived differential geometry requires a homotopy version of these algebras for its underlying commutative algebra. We present a model for the latter based on the notion of a "differential graded structure" on a superalgebra of differentiable functions, understood -- following Severa -- as a (co)action of the monoid of endomorphisms of the odd line. This view of a differential graded structure enables us to construct, in a conceptually transparent way, a Dold-Kan-type correspondence relating our approach with models based on simplicial $C^\infty$-algebras, generalizing a classical result of Quillen for commutative and Lie algebras. It may also shed new light on Dold-Kan-type correspondences in other contexts (e.g. operads and algebras over them). A similar differential graded approach exists for every geometry whose ground ring contains the rationals, such as real analytic or holomorphic. This talk is partly based on joint work with David Carchedi (arXiv:1211.6134 and arXiv:1212.3745).
Quantification of the effects of fur, fur color, and velocity on Time-Of-Flight technology in dairy production

Jennifer Salau, Ulrike Bauer, Jan H Haas, Georg Thaller, Jan Harms and Wolfgang Junge

© Salau et al.; licensee Springer. 2015. Accepted: 24 February 2015

Abstract

With increasing herd sizes, camera-based monitoring solutions rise in importance. 3D cameras, for example Time-Of-Flight (TOF) cameras, measure depth information. This additional information (3D data) could be beneficial for monitoring in dairy production. In previous studies regarding TOF technology, only standing cows were recorded to avoid motion artifacts. Therefore, necessary conditions for a TOF camera application in dairy cows are examined in this study. For this purpose, two cow models with plaster and fur surface, respectively, were recorded at four controlled velocities to quantify the effects of movement, fur color, and fur. Comparison criteria concerning image usability, pixel-wise deviation, and precision in coordinate determination were defined. Fur and fur color showed large effects (η² = 0.235 and η² = 0.472, respectively), which became even more considerable when the models were moving. The velocity of recorded animals must therefore be controlled when using TOF cameras. As another main result, body parts which lie in the middle of the cow model's back can be determined neglecting the effect of velocity or fur. With this in mind, further studies may obtain sound results using TOF technology in dairy production.

Keywords: Dairy cow; Automated monitoring

Background

Multi-disciplinary approaches and technological solutions will be characterizing concepts in the agricultural science of the next decade. Especially solutions to monitor animal health in terms of body condition changes and lameness gain more and more importance, as these are meaningful issues in herd health management and herd productivity (Collard et al. 2000; Booth et al. 2004). There have been several camera-based studies during the last years reaching high rates of correct classification in lameness detection. In (Song et al. 2008; Pluk et al. 2012), and (Poursaberi et al. 2011) walking cows had been recorded using digital 2D cameras in side view position, and methods concerning hooves, legs' angles, and back posture were used in the evaluation of cows' gait, respectively. Moreover, various 2D-camera-based studies on automated body condition scoring have been presented. In (Azzaro et al. 2011) cow shapes were reconstructed using linear and polynomial kernel principal component analysis and the body condition score (BCS) was estimated. BCS prediction models based on five anatomical points were presented in (Bercovich et al. 2012). Segmentation is always a difficult part of preprocessing when 2D digital images are used (Hertem et al. 2013), because changes in light conditions and scenery affect segmentation algorithms and complicate the definition of a common image background for all pictures. For this reason thermal images were considered for BCS determination in (Halachmi et al. 2013). BCS was assessed by fitting a parabola to the cow shape and full automation was reached. 3D cameras are another approach to overcome segmentation problems. As the pixels' relative distances from the camera are known, the separation between fore- and background is easier. Furthermore, the usage of 2D data forces the projection of a three-dimensional scenery onto a plane.
Objects and their movement through three-dimensional space can only be described accurately when spatial anomalies like distances diagonal or parallel to the camera's line of sight are considered. Consequently, in (Krukowski 2009) images from a Time-Of-Flight (TOF) 3D camera were analyzed with regard to BCS determination. The rear view of dairy cows in standstill was recorded with a manually guided camera. In (Salau et al. 2014) and (Weber et al. 2014) a TOF-based system with automated calibration, animal identification and body trait gathering was introduced. The study was able to estimate the backfat thickness (BFT) using the characteristics extracted from the depth images. A different type of 3D camera was used in (Viazzi et al. 2014) and (Hertem et al. 2014). The Microsoft Kinect sensor (Microsoft 2010) works with the 3D measurement principle "Structured Light" (Fofi et al. 2004). The Kinect was used for lameness detection via back posture extraction in (Viazzi et al. 2014), and algorithms and results were compared to those obtained from 2D video recordings as presented in (Poursaberi et al. 2011). As the Kinect camera's usage turned out to be promising, in (Hertem et al. 2014) the algorithm was improved and the classification performance was optimized. Digital cameras are prone to error when used outdoors or in barn environments, because of sunlight conditions, dirt, fur-covered surfaces, and the animals' movement. For a successful application of 3D cameras in monitoring solutions, their sensitivity towards fur, different fur colors, and animal movement should be analyzed. During data collection for (Weber et al. 2014), it was found that fur and fur color changes cause imprecise TOF depth measurements. In addition, evaluations needed to be restricted to recordings of standing cows, as motion artifacts occurred. The dependence on the projected infrared pattern causes some limitations of "Structured Light" depth measurement for its part. Depth values can only be calculated from constellations of light dots, not from a single dot, which causes difficulties in measuring thin objects (Lau 2013). Additionally, no depth value can be calculated between the light dots, which leads to a coarser depth resolution with increasing distance from the camera (Andersen et al. 2012). Furthermore, (Hansard et al. 2012) stated that material properties strongly correlate with depth accuracy, and that both measurement principles had difficulties with various surfaces. This study did not compare the capabilities of Kinect and TOF depth sensors, because there have been detailed publications on this (e.g. (Langmann et al. 2012), where a TOF camera with a sensor similar to that used in the SR4K was studied). The next generation (Microsoft 2014) of the Microsoft Kinect depth sensor is indeed a TOF camera. It was not available for data collection during this study. Therefore, the present study quantified quality loss due to fur (color) and movement concerning TOF camera recordings. Indoor recordings of cow models were used to eliminate the effect of sunlight, and the software described in (Salau et al. 2014) was applied to them. The aim was to create a basis for a TOF camera application in moving dairy cows.

Results

All the criteria showed the same differences and significant effects, independently of whether they were extracted from the original or the mirrored images (for explanations on the mirroring see section 'Material and methods', 'Comparison criteria and statistical methods').
Only the data extracted from the original images is presented.

Proportion of high quality images

As the ratio of high quality images (\(N_{\text{velocity}}\)) to recorded images (\(C_{\text{velocity}}\)), the HQIratio (section 'Material and methods', 'Comparison criteria and statistical methods', Proportion of high quality images) served as a measure for the usability of the recorded images:
$$ \text{HQIratio}_{\text{velocity}}=\frac{N_{\text{velocity}}}{C_{\text{velocity}}}. \qquad (1) $$
Table 1 presents the numbers of recordings, the numbers of images that passed the quality tests, and the HQIratios for both models and all velocities.

Table 1. The numbers of recorded images (\(C_{\text{velocity}}\)), the number of images that passed all quality tests that had been integrated in the developed software (\(N_{\text{velocity}}\)), and the ratios \(\text{HQIratio}_{\text{velocity}}=\frac{N_{\text{velocity}}}{C_{\text{velocity}}}\) for the plaster cast as well as the fur-covered model and all velocities (standstill, 10 cm/s, 20 cm/s, 30 cm/s).

For the plaster cast, the most significant decrease happened during the transition from standstill to movement, where the HQIratio dropped by ≈22% from 0.87 to 0.68. With the acceleration from 10 cm/s to 20 cm/s, HQIratio dropped from 0.68 to 0.66, which was a decrease of ≈3%. In comparison with the final velocity of 30 cm/s, HQIratio fell by an additional ≈6% to 0.62. The HQIratios of the fur-covered model dropped by 66% when the model started to move. The acceleration afterwards caused additional decreases in HQIratio by ≈82% (from 0.34 to 0.06), when speeding up from 10 cm/s to 20 cm/s, and 75% (from 0.06 to 0.015), when the final velocity of 30 cm/s was set.

The polynomials of degree 2 (\(P_{\text{plaster}}\), \(P_{\text{fur}}\)) and the Gaussian exponential functions (\(g_{\text{plaster}}\), \(g_{\text{fur}}\)) that fit the vectors (HQIratio\(_0\), HQIratio\(_{10}\), HQIratio\(_{20}\), HQIratio\(_{30}\)) best in a least squares sense are given by
$$ P_{\text{plaster}}(x) = 0.037x^{2} - 0.261x + 1.087, \quad (\text{RMSD}=0.0432,\ \mathrm{R}^{2}=0.95), \qquad (2) $$
$$ P_{\text{fur}}(x) = 0.153x^{2} - 1.09x + 1.93, \quad (\text{RMSD}=0.0297,\ \mathrm{R}^{2}=0.99), \qquad (3) $$
$$ g_{\text{plaster}}(x) = 27.75\cdot 10^{29}\cdot\exp\left(-\left(\frac{x+1252.1}{149.69}\right)^{2}\right), \quad (\text{RMSD}=0.078,\ \mathrm{R}^{2}=0.83), \qquad (4) $$
$$ g_{\text{fur}}(x) = 1.545\cdot\exp\left(-\left(\frac{x+0.1617}{1.761}\right)^{2}\right), \quad (\text{RMSD}=0.0095,\ \mathrm{R}^{2}=0.99) \qquad (5) $$
for the plaster cast and the fur-covered model, respectively. All fits had a single degree of freedom; the other goodness-of-fit statistics are stated in brackets behind the approximating functions. The Gaussian exponential approximation of the plaster cast's HQIratio (Equation 4) shows considerably inferior goodness-of-fit statistics compared to all other approximations: its R² value of 0.83 and RMSD = 0.078 stand against R² values of 0.99 and RMSD ≤ 0.0432 for the other fits. Both approximations of the fur-covered model's HQIratios were suitable referring to the goodness-of-fit statistics, but the Gaussian exponential fit comes with a three times smaller root-mean-square deviation. The polynomial fit (dotted purple line in Figure 1) shows a local minimum between the original values belonging to 20 cm/s and 30 cm/s. In Figure 1 all approximations are displayed; in both models the inferior one is illustrated as a dotted line.
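A minimal sketch of how such fits can be reproduced is given below. It is illustrative only: it uses the plaster cast's HQIratio values as quoted above (0.87, 0.68, 0.66, 0.62), assumes NumPy and SciPy are available, and encodes the four velocity levels as indices 1 to 4, which is an assumption about the parametrization rather than a detail stated in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

x = np.array([1.0, 2.0, 3.0, 4.0])        # standstill, 10, 20, 30 cm/s (assumed encoding)
hqi = np.array([0.87, 0.68, 0.66, 0.62])  # plaster cast HQIratios from the text

# Degree 2 polynomial fit, analogous to P_plaster.
poly_coeffs = np.polyfit(x, hqi, deg=2)

# Gaussian exponential fit K * exp(-((x - L)/M)^2), analogous to g_plaster.
def gauss(x, K, L, M):
    return K * np.exp(-((x - L) / M) ** 2)

gauss_params, _ = curve_fit(gauss, x, hqi, p0=[1.0, 0.0, 3.0])
print("polynomial coefficients:", poly_coeffs)
print("Gaussian parameters K, L, M:", gauss_params)
```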
Figure 1. Behavior of HQIratio with increasing velocity in comparison between models. HQIratio is the quotient of the number of high quality images to the number of recorded images. The circles belong to the actual HQIratio values (olive: fur-covered model, red: plaster cast). For both models two types of functions have been fitted to the original values in a least squares sense: a polynomial of degree two (purple: fur-covered model, green: plaster cast) and a Gaussian exponential function (cyan: fur-covered model, blue: plaster cast). The approximation that showed less goodness-of-fit is illustrated as a dotted line, respectively.

Pixelwise differences in standstill

For both cow models the fluctuation criterion SumDiff,
$$ \text{SumDiff} := \frac{1}{N_{0}-1}\sum_{i=1}^{N_{0}-1} \left|\text{image}_{i+1}-\text{image}_{i}\right|, \qquad (6) $$
and the pixel-wise calculated standard deviation pwStd (section 'Material and methods', 'Comparison criteria and statistical methods', Pixelwise differences in standstill) showed significant differences in medians between pixels belonging to "Interior" or "Boundary" (Table 2). The effect sizes for the criterion pwStd exceeded the effect sizes for SumDiff. According to (Cohen 1988), effect sizes in SumDiff were small (η² = 0.013 for the plaster cast and η² = 0.017 for the fur-covered model), while effect sizes for pwStd were medium (η² = 0.111 for the plaster cast and η² = 0.096 for the fur-covered model). The cow model significantly affected both criteria within both regions. The effect could be considered large within "Interior" (η² = 0.232 with SumDiff, η² = 0.235 with pwStd) but very small within "Boundary" (η² = 0.006). Additionally, the fur color had a significant effect on both criteria (Table 3). Both criteria showed large effect sizes (η² = 0.472).

Table 2. Descriptive statistics of pixel-wise added differences (SumDiff) and pixel-wise standard deviation (pwStd) in depth values for the plaster cast as well as the fur-covered model recorded in standstill. The area of both models had been divided into "Interior" and "Boundary"; the numbers of pixels belonging to each group are given in columns 2 and 5. Since the data (SumDiff, pwStd) is skewed, the median is preferable as a measure of center; nevertheless, the means are given for the sake of completeness. The differences in medians of SumDiff and pwStd are significant (p=0.02) both between the regions "Interior" and "Boundary" for the two models and between models for the regions "Interior" and "Boundary". Effect sizes η² are given in rows 13 and 14 for the grouping after "Interior"/"Boundary" and in rows 15 and 16 for the grouping after models, respectively.

Table 3. Descriptive statistics of pixel-wise added depth value differences (SumDiff) and standard deviation (pwStd) for the fur-covered model recorded in standstill to compare between black and white fur. The numbers of pixels belonging to "Interior White" and "Interior Black" are given in column 2. Since the data is skewed, the median is preferable as a measure of center; the means are given for the sake of completeness. The differences in medians of SumDiff and pwStd between "Interior White" and "Interior Black" are significant (p=0.001). Effect sizes η² are given in row 4.
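A minimal sketch of the two criteria follows, assuming the recordings are available as a stack of depth images with shape (number of frames, height, width); the array and function names are ours, and 176 x 144 pixels is the SR4K sensor resolution.

```python
import numpy as np

def sum_diff(stack):
    # Pixel-wise mean absolute difference of consecutive depth images,
    # i.e., 1/(N-1) * sum over i of |image_{i+1} - image_i|.
    return np.mean(np.abs(np.diff(stack, axis=0)), axis=0)

def pw_std(stack):
    # Pixel-wise standard deviation over all N depth images.
    return np.std(stack, axis=0)

# Dummy stack of 50 frames (height 144, width 176) in place of real
# standstill recordings.
stack = np.random.rand(50, 144, 176)
print(sum_diff(stack).shape, pw_std(stack).shape)  # both (144, 176)
```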
Precision of coordinate determination

Table 4 shows the development of the precision criterion RpV,
$$ \text{RpV}_{\text{velocity}} := \frac{\max(\text{X-coordinates})- \min(\text{X-coordinates})}{N_{\text{velocity}}}, \qquad (7) $$
with increasing velocity for every considered body part and both models (section 'Material and methods', 'Comparison criteria and statistical methods', Precision of coordinate determination). Additionally, the goodness-of-fit statistics of the polynomial approximations of the vectors (RpV\(_0\), RpV\(_{10}\), RpV\(_{20}\), RpV\(_{30}\)) are given. Except for the fit to RpV values belonging to BB30 measured with the fur-covered model, the coefficients of determination range from R² = 0.92 to R² = 0.98. The RMSD varies between 0.001 and 0.018 for the plaster cast and between 0.092 and 0.998 for the fur-covered model, again except for the point BB30. The goodness-of-fit statistics for BB30 measured with the fur-covered model are noticeable, as RMSD = 0.002 and R² = 0.57 are significantly lower than the values within the fits belonging to the fur-covered model. Furthermore, the R² value is significantly lower than the corresponding values of all other approximations.

Table 4. Range per number of values calculated for X-coordinates at different velocities, \(\text{RpV}_{\text{velocity}} = \frac{\max(\text{coord.})-\min(\text{coord.})}{N_{\text{velocity}}}\), where \(N_{\text{velocity}}\) is the number of images with determined X-coordinates at the corresponding velocity (standstill, 10 cm/s, 20 cm/s, 30 cm/s). The seventh column contains the quadratic coefficients of the polynomial approximation of the vectors (RpV\(_0\), RpV\(_{10}\), RpV\(_{20}\), RpV\(_{30}\)) for all considered body parts (abbreviated in column 2). The medians of the quadratic coefficients differ significantly between plaster cast and fur-covered model (p=0.05, median\(_{\text{plaster}}\) = -0.005, median\(_{\text{fur}}\) = 0.41). The last two columns contain the goodness-of-fit statistics root mean square deviation (RMSD) and coefficient of determination (R²). All fits had a single degree of freedom.

The quadratic coefficients of the RpV approximations belonging to the plaster cast were close to zero (median\(_{\text{plaster}}\) = -0.005). For the fur-covered model they ranged from 0.05 to 1.15 (median\(_{\text{fur}}\) = 0.41). The imprecision criterion RpV grew significantly (p=0.05) faster with increasing velocity for the fur-covered model. The size of the model's effect was very large (η² = 0.846).

Discussion

This study provided four possible measures to quantify the effects of fur in contrast to a homogeneous plaster surface, fur color, and velocity.

Proportion of high quality images (HQIratio)

Both surface materials showed loss in image quality measured via HQIratio due to animal movement. This was to be expected, as TOF cameras are prone to motion artifacts. As explained in section 'Material and methods', 'Time-Of-Flight Technology', the depth values were calculated using four signals \(S_1,\ldots,S_4\). Motion artifacts occur when objects move significantly during the acquisition of \(S_1,\ldots,S_4\) (Hansard et al. 2012). However, the behavior in decrease of image quality differed significantly between the models. In this study, no velocities greater than 30 cm/s were considered. For such speeds, no images of the fur-covered model would have passed the quality tests implemented in the software (see section 'Material and methods', 'Software').
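To make the role of the four signals concrete, the following sketch shows the standard four-bucket demodulation used in continuous-wave TOF cameras. It is textbook TOF theory (cf. Hansard et al. 2012) rather than code from the study's software, and the 30 MHz modulation frequency is only an approximate value for the SR4K.

```python
import numpy as np

C = 299_792_458.0   # speed of light in m/s
F_MOD = 30e6        # modulation frequency, approximately that of the SR4K

def tof_depth(s1, s2, s3, s4):
    # Phase shift of the reflected signal, recovered from the four samples.
    phase = np.arctan2(s4 - s2, s1 - s3) % (2 * np.pi)
    # Distance corresponding to the phase; unambiguous up to C / (2 * F_MOD).
    return C * phase / (4 * np.pi * F_MOD)

print(tof_depth(1.0, 0.2, 0.1, 0.9))  # about 0.53 m for these sample values
# If the scene moves while S1..S4 are being acquired, the four samples no
# longer describe the same surface point, so the recovered phase, and with it
# the depth, is corrupted: the motion artifacts discussed here.
```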
Even at 30 cm/s only four usable images remained for that model, as can be seen from Table 1. It was not expected that usable images of the model at a cow's assumed normal walking pace of about 111 cm/s (≈4 km/h) could be acquired. To quantify the differences, approximating functions for the vectors (HQIratio\(_0\), HQIratio\(_{10}\), HQIratio\(_{20}\), HQIratio\(_{30}\)) were determined. As the velocity was constantly increased by 10 cm/s per step, a quadratic behavior of the acceleration was to be expected. Therefore, approximations of the form \(\alpha x^{2}+\beta x+\gamma\) were calculated. The HQIratio values related to the fur-covered model (Table 1, rightmost column) indicated a faster decrease. Therefore, the behavior of the HQIratio\(_{\text{velocity}}\) vectors was in addition approximated by a Gaussian exponential function \(K\cdot\exp\left(-\left(\frac{x-L}{M}\right)^{2}\right)\), and goodness-of-fit statistics of the approximating functions were compared. The minimal value of the polynomial fit to the fur-covered model's HQIratios (Equation 3) was lower than the value for 30 cm/s. The polynomial approximation's goodness-of-fit statistics were, nevertheless, quite as suitable as the ones belonging to the exponential approximation (Equation 5). This fact and the position of the polynomial's minimum between the third and last original value could be explained by the limited number of HQIratio values that had been considered. The approximation model had no original value beyond 30 cm/s to predict the decay, but the course could be described very well within the smaller velocities. Looking at the plaster cast's approximations, the polynomial (Equation 2) is superior to the exponential fit (Equation 4). This was mainly caused by the HQIratio belonging to 10 cm/s, as it is considerably smaller than the HQIratio in standstill, but nearly equal to the HQIratio calculated for 20 cm/s. On the one hand, an unobserved effect during the recording of the plaster cast at velocity 10 cm/s might have caused this outlier. On the other hand, the image quality might show a polynomial instead of an exponential decrease with increasing velocity due to the homogeneity of the plaster cast's surface. The fact that the model was moving at all seemed more meaningful than the actual velocity. A considerable amount of high quality images could be gathered even at the highest considered speed. With the fur-covered model, on the contrary, every step in acceleration caused a substantial additional loss in image quality. This indicated that the velocity would have to be kept as low as possible when moving cows are to be recorded with the SR4K. To quantify the effect of the surface material in motion, coefficients of approximations of the same type had to be compared between models. The Gaussian exponential approximation for the plaster cast should not be used in this comparison, because its goodness-of-fit statistics were clearly inferior. Concerning the first acceleration steps, the degree 2 polynomials were good fits for both models. As the coefficient of the \(x^{2}\) term has the most impact on a polynomial's growth, the much faster decrease caused by the fur-covered surface in contrast to the plaster surface could be quantified by the quotient of the quadratic coefficients of \(P_{\text{fur}}\) and \(P_{\text{plaster}}\). That gives \(\frac{0.1532}{0.0368}\approx 4.16\).

Pixelwise differences in standstill (SumDiff, pwStd)

SumDiff (Equation 6) and pwStd were measures for pixel-wise deviation in depth values.
Pixelwise differences in standstill (SumDiff, pwStd)

SumDiff (Equation 6) and pwStd were measures for pixel-wise deviation in depth values. As only recordings in standstill had been used for their calculation, these comparison criteria were independent of velocity and allowed analyzing the differences between the models that were caused strictly by the surface material. Mixed phases are produced when infrared light with different phase shifts is observed by one pixel. As an implication, the depth values are calculated from a superposition of multiple reflected signals. Such multipath errors had been expected to be a problem at the cow models' boundaries, as this error generally increases as the object surface's normal deviates from the optical axis of the camera (Hansard et al. 2012). Therefore, the cow area was split up into the regions "Boundary" and "Interior" to reach better comparability. The grouping by region within models and the grouping by model within regions affected both criteria significantly. Especially the size of the model effect within "Interior" was large (η² = 0.23). This could be interpreted as a quantification of the effect of a fur surface on TOF depth measurement precision. Due to the structure of fur, an augmented refraction of light occurs, and less accurate TOF depth measurement was to be expected. Pixelwise deviation increased at the edge of the cow model's area. Within "Boundary", only a small model effect could be observed. A plausible explanation was that for both models the depth measurement within "Boundary" was already less accurate due to mixed phases, and therefore the surface structure hardly had an impact, whereas the accurate depth measurement within "Interior" was strongly affected by the fur. Additionally, black and white fur were distinguished within "Interior" of the fur-covered model. The fur color also had a significant effect in both criteria, and the effect sizes were very large (η² = 0.472), probably caused by the different absorption coefficients of black and white fur. The absorption coefficient is the quotient of the electromagnetic radiation a body absorbs and the electromagnetic radiation it is exposed to; it ranges between 0 and 1. The exact absorption coefficients for white and black fur were not determined in this study, but assuming a higher absorption coefficient for black fur is reasonable. For example, surfaces of carbon black and of white marble have absorption coefficients of ≈0.96 and ≈0.46, respectively (Baehr and Stephan 2004). The infrared signal reflected from the black fur had lost more intensity when it returned to the sensor inside the TOF camera than the signal reflected from the white fur (MESA-Imaging 2013a). Therefore, the depth measurement varied in quality.
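To make the assumed mechanism concrete with the cited proxy values (carbon black and white marble; the fur's true coefficients were not measured in this study), the fraction of the incident infrared signal available for reflection is, in a very rough model that ignores transmission and scattering, one minus the absorption coefficient:
$$ r=1-a, \qquad r_{\text{black}}\approx 1-0.96=0.04, \qquad r_{\text{white}}\approx 1-0.46=0.54, \qquad \frac{r_{\text{white}}}{r_{\text{black}}}\approx 13.5. $$
Under this rough model, the signal returning from a white spot could be an order of magnitude stronger than that from a black spot, which is consistent with the observed differences in measurement quality.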
As the fur needed to be glued to the model to avoid measurements becoming unrepeatable due to displacements of the coat, only one coat was tested. The effect sizes might depend on the specific texture of this real cow fur. Then again, using different coats might have caused difficulties in distinguishing between the effects of the animal and of the fur color. As pwStd is based on quadratic differences, in contrast to the absolute differences used to calculate SumDiff, larger differences in depth value gain more weight in pwStd than in SumDiff. That might explain the smaller differences in medians and the weaker effects on SumDiff compared to pwStd in the comparisons between the models or the regions. Smoothing the images was not considered in this study, as the effects on the original recordings were of interest. Specific smoothing could be a possibility in an image processing application to handle the differences between black and white fur, at the risk of losing information about the surface shape.

Precision of coordinate determination (RpV)

RpV (Equation 7) was a measure of imprecision concerning the determination of X-coordinates. The original software, applied to cows in an electronic feeding dispenser, had shown a 1.5% error rate (Salau et al. 2014) in the detection of the ischial tuberosities, dishes of the rump, and tail. RpV had been calculated for all velocities with regard to the automatically determined X-coordinates of these five body parts and additionally for BB30. In both cases, RpV rose while the models accelerated. However, for the plaster cast only a linear growth could be observed, as the quadratic coefficients were all close to zero, whereas RpV increased quadratically for the fur-covered model. Therefore, the fur affected the loss of precision due to velocity very strongly (η² = 0.846). Indeed, the imprecision in coordinate determination in standstill was higher for the plaster cast than for the fur-covered model. But differences in the geometrical shape of the models could not be excluded as reasons for the more erroneous coordinate determination concerning the plaster cast. A noticeable fact was that BB30 showed not only the smallest RpV values for all velocities in both models, but also a large difference in quadratic coefficients compared to the other body parts in the case of the fur-covered model. This indicated that this body part could be determined most accurately and exhibited the least loss of precision due to velocity. A reason might be that, of all considered body parts, only BB30 lay in "Interior" instead of "Boundary", where the TOF depth measurement was more reliable. It has to be taken into account that the coefficient of determination for the approximation with a quadratic polynomial corresponding to BB30 on the fur-covered model was inferior compared to the approximations belonging to the other body parts. It could be questioned whether the calculated quadratic coefficient was meaningful.

Discussing TOF usage

In dairy production, fur surfaces have to be dealt with, and the fur color cannot be influenced. It therefore had to be analyzed how the application of TOF technology could still lead to dependable results. Dairy cows' BFT was successfully estimated from TOF recordings in (Weber et al. 2014), with the limitation that the cows were only recorded in standstill. Furthermore, traits were only extracted from one-dimensional sections through the recorded cow surfaces and not from two-dimensional areas on the surfaces. The reason was that differences in depth measurement between black and white fur could be corrected in a more controlled way when only one dimension was considered. Principal descriptors for, e.g., body condition scoring are located in the tail head area (Ferguson et al. 1994), where the effects of fur and velocity turned out to be strong. Thus, it had to be expected that assessing BCS or BFT from TOF recordings of moving animals would be erroneous. The restriction that traits from "Boundary" can only be analyzed in recordings collected during feeding or milking is a serious limitation for a monitoring system. However, the effects of fur and velocity were noticeably smaller in "Interior"; hence, the TOF camera might be applicable for the determination of the backbone in moving cows and for lameness detection via back posture analysis as in (Hertem et al. 2014).
Yet, it was questionable whether a TOF camera would be a superior choice for dairy farming applications, as the Kinect was cheaper, did not show differences in depth measurement between black and white fur, and produced few motion artifacts. With regard to the latter, it should be mentioned that real-time preprocessing methods to compensate motion artifacts in TOF recordings have been introduced (Hoegg et al. 2013). Considering the effect of fur again, Kinect's and SR4K's performance in measuring stuffed animals, small fur-covered animal models, and other test objects had been examined in (Hansard et al. 2012). It has to be mentioned that synthetic fur's structure differs from that of real fur. But with both fur-covered test objects, the RMSDs of depth accuracies between Kinect and SR4K were comparable. The Kinect's performance over all test objects turned out to be worse than that of the SR4K. The next generation of the Kinect is a TOF camera, but it is equipped with a novel image sensor (Lau 2013). Every pixel is divided in half, the pixel halves absorb reflected light alternately, and the absorbing time of the first half is aligned with the pulsing of the laser. During the time the first half is rejecting incoming light, the second half is absorbing, and the laser is off. Consequently, the distribution of received photons among both pixel halves changes with the distance between camera and object and is used to calculate depth values. Dispensing with the control signals S1,…,S4 could limit motion artifacts, because there are fewer opportunities for the object to move during the calculation. If proportions of light are absorbed by the object's surface and do not return to the sensor, both pixel halves are affected equally, and the distribution is not altered. This would reduce black-white differences in depth measurement significantly, as they were a consequence of the absorption coefficients of black and white fur, respectively. The next Kinect was not available while the data collection for this study was carried out, but it promises to be an affordable alternative to both the Kinect and the current generation of TOF depth sensors.

Conclusions

This study introduced criteria to quantify the effects of fur and animal movement. The experimental indoor test scenario included two cow models with fur and plaster surface, respectively. According to the criteria concerning pixel-wise deviation (SumDiff, pwStd), the effect of the fur surface, in contrast to a more homogeneous surface, on TOF measurement was large. Additionally, crucial differences related to fur color were observed, as the criteria medians were two to three times higher with black than with white fur. In any application of TOF cameras, the velocity of the recorded animals needs to be controlled, because in the analysis of moving models the impact of the fur surface became even more decisive: with increasing velocity the proportion of high quality images (HQIratio) dropped four times faster due to fur, and furthermore, the fur caused a quadratic loss of precision in coordinate determination (RpV) in contrast to a linear behavior without fur. The latter was a problem especially at the edge of the cow model's area, i.e. the tail head region. However, coordinate determination was sound in the middle of the cow's back and hardly affected by velocity or fur. It was briefly discussed whether TOF depth sensors could compete with the Microsoft Kinect 3D camera when it comes to studies dealing with traits from the cow area's interior.
Finally, an outlook on the next Kinect camera generation was given. Its new type of TOF sensor seems to be a noticeable improvement over both current TOF sensors and the Kinect.

Material and methods

Time-Of-Flight Technology

The SR4K (Mesa Imaging AG) emits infrared light (modulated signal frequency f = 30 MHz), which is reflected by the object. Four phase control signals S1,…,S4, each with a 90 degree phase delay from the previous one, control the collection of electrons from the detected reflected infrared signal. Let Q1,…,Q4 represent the amounts of electric charge for S1,…,S4, respectively. Using the four phase algorithm, the phase difference t_d is estimated as \(t_{d} = \text {arctan}\frac {Q_{3} - Q_{4}}{Q_{1} - Q_{2}}\). The distance d between object and camera is calculated from the phase shift with the formula \(d = \frac {c}{2 f}\frac {t_{d}}{2\pi }\), where c and f denote the speed of light and the signal frequency, respectively (Hansard et al. 2012). The camera's range is 0.8 to 5 m (MESA-Imaging 2013a). Its accuracy of measurement over this calibrated range is 1 cm (for the central 11×11 pixels). It records up to 54 images per second with a resolution of 176×144 pixels, depending on the exposure time, and has a 43.6° horizontal and 34.6° vertical field of view. The SR4K provides distance and (x,y,z) coordinate data, amplitudes, confidence maps as an estimate of reliability, and SwissRanger streams (srs) consisting of sequences of images as output, according to the user's choice. The camera was used with default settings.
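A minimal MATLAB sketch of the four phase algorithm described above; the charge values are invented for illustration, and atan2 is used instead of arctan to keep the phase in the correct quadrant.

c = 299792458;                 % speed of light in m/s
f = 30e6;                      % SR4K modulation frequency in Hz

Q1 = 1032; Q2 = 987;           % illustrative electric charges for S1,...,S4
Q3 = 1120; Q4 = 955;

t_d = atan2(Q3 - Q4, Q1 - Q2); % phase difference in radians
d   = c/(2*f) * t_d/(2*pi);    % distance in meters

% Note: c/(2*f) is approximately 5 m, matching the camera's 5 m range.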
Recorded cow models

Two cow models were recorded with an SR4K TOF camera in September 2012 at the Institute for Agricultural Engineering and Animal Husbandry of the Bavarian State Research Center for Agriculture (BSRCfA) in Grub (Germany). Recordings (details in 'Installation and recording') of both models were taken from top view in standstill and in motion. A model of a cow's lower back was built at BSRCfA from solid, synthetic material using CNC (computer numerical control) carving (width: 0.5 m, length: 0.5 m, height: 0.15-0.22 m). It had a tail, ischial tuberosities and a lower backbone, but no hipbones, and was not modeled after a real cow (Figure 2, left). A black and white real Holstein-Friesian's fur was permanently glued to the model. The fur-covered model was firmly mounted on a board (0.6×0.6 m²) with wooden beams (0.05×0.05 m², height: 0.25 m) at the corners. Since the fur could not be removed from the model without destroying it, no other furs were used for testing. Originally, this model was intended for another purpose within a study related to body condition determination (Weber et al. 2014; Salau et al. 2014). Hipbones were not included in the model because they were irrelevant then. The model nevertheless shows most of the points of interest (ischial tuberosities, dishes of the rump, tail, backbone) that could be determined by the software described in (Salau et al. 2014). Therefore, the model was reused in this context.

Figure 2: The two recorded cow models. Left: fur-covered model; Right: plaster cast.

Later, since no fur-less version of the model was available, a plaster cast was taken from a Holstein-Friesian cow's lower back to obtain a portable model of a real cow's shape as a negative control for fur (Figure 2, right). This was done at the research farm Karkendamm of Christian-Albrechts-University (CAU) in Kiel (Germany). The lower back was greased with petrolatum to protect the fur and skin of the animal. Afterwards, this area was uniformly covered with several layers of wet plaster bandages. The covered area included the base of the tail, the ischial tuberosities, the lower backbone and the lower back; the hipbones were not included. The bandages reached approximately 15 cm down the animal's side. A blow-dryer was used to hasten the drying before the plaster cast (length: 0.56 m, width: 0.55 m, height: 0.16-0.22 m) was lifted off the cow.

Installation and recording

A metallic frame with two horizontal running rails was built by BSRCfA (length: 3 m, width: 1.04 m, height: 0.8 m). A wooden plate (1.14×1.14 m²) was placed on the running rails; it could be towed by a rope stretched up to a motor-driven impeller wheel. The motor could be regulated with a control panel that also displayed the velocity. At the end of the running rails the plate was stopped automatically, and the moving direction could be changed by a switch. The construction was supported by a vertical frame (width: 1.27 m, height: 2.13 m) above whose center line the TOF camera was attached in top view (Figure 3).

Figure 3: Installation for recording at controlled velocities. Left: framework with the fur-covered model on a wooden plate placed on running rails, SR4K mounted in top view; Right: motor (background), impeller wheel and rope used to tow the wooden plate.

Recording took place indoors at BSRCfA to exclude the influences of direct sunlight and insects. In Grub, a 2-core system with 3.43 GB RAM running the recording software (see section 'Software') developed at the Institute of Animal Breeding and Husbandry of CAU was used. SwissRanger streams of both models were recorded in standstill and at the controlled velocities 10 cm/s, 20 cm/s and 30 cm/s. The camera stayed in a fixed position with 1.28 m distance between the sensor and the wooden plate. Additionally, streams of the wooden plate without any model on it were recorded to capture the completely empty scenery.

Software

Originally developed software

At CAU, software was developed to record cows in an electronic feeding dispenser and automatically extract body traits (Salau et al. 2014). The software first calculated scenery information from a number of images of the completely empty scenery. It could then decide automatically whether an image showed a cow's lower back. These images were segmented and stored for further processing; all others were deleted. Subsequently, the body parts ischial tuberosities, base of the tail, dishes of the rump, hipbones, and backbone were determined automatically. The software tested the segmentation results and the coordinates of the body parts directly after their calculation (for details see (Salau et al. 2014)). Images failing any test were deleted.

Necessary software modifications made in this study

The streams recorded from the cow models were used as a virtual camera and analyzed with this software. As the models differed from real cows, some slight modifications to the software were necessary: The basis for the automated decision that an image showed a cow's lower back had been that the area covered by the cow's body extended beyond the lower image border. As the models did not reach the image border, an additional rectangle (Figure 4, middle) was temporarily added to every image to close the gap and to avoid all images being deleted. After successful segmentation the rectangle was removed again (Figure 4, right). All three illustrations were prepared using the MATLAB function imagesc and its default color scale.
Figure 4: Left: original depth image, showing the fur-covered model (mounted on a board) on the wooden plate. Middle and right: the subsequent image processing steps. Middle: a rectangle has been added to the depth image; this modification of the original software was necessary to prevent the image from being deleted, because the models, in contrast to real cows, did not reach the image's lower edge. Afterwards, the automated segmentation has set all background to zero (blue). Right: the backbone (black line), the ischial tuberosities, dishes of the rump and tail (white dots), and BB30 (the point on the backbone in a 30 pixel radius from the tail, white rectangle) have been determined automatically.

The cow was separated from the background by subtracting the averaged empty scenery and using the height differences between the cow and the floor. Both models reached a maximum height of 0.22 m above the wooden plate (which served as the floor in the present scenario), so the tolerances had to be adapted. In the case of the fur-covered model, the position of the wooden beams (section 'Material and methods', 'Recorded cow models'; Figure 4, left) relative to the model had to be specified manually once, and their removal had to be added to the automatic segmentation. As hipbones were not included in either model, their automatic determination had to be removed from the software. Instead, the point on the backbone in a 30 pixel radius from the tail (BB30) was determined, in order to include a measurement point in the comparison that was not positioned at the edge of the cow area (Figure 4, right).

Comparison criteria and statistical methods

MATLAB presents the images as matrices with 176 rows and 144 columns. Counting rows and columns starts at the left upper corner. The vertical midline runs between columns 72 and 73. The algorithms of the software developed in (Salau et al. 2014) work through the images row-wise and column-wise from the left upper corner. To exclude this running direction as a reason for left-right differences in the extracted comparison criteria, the analysis was repeated with all images mirrored on the vertical line between columns 72 and 73. As the camera stayed in a fixed position during recording, the number of images showing the cow model decreased with increasing velocity (Table 1). These numbers will be called C0 for the recording in standstill and C10, C20, and C30 for the recordings at 10 cm/s, 20 cm/s, and 30 cm/s, respectively. Both models were recorded for four minutes in standstill. For all velocities, both cow models were recorded passing the camera five times. As explained in the section 'Originally developed software', various quality tests had been integrated in the software, and all images failing any of these tests were deleted. The numbers of output images after applying the quality tests will be called N0, N10, N20, and N30. The quotient
$$\text{HQIratio}_{\text{velocity}}=\frac{N_{\text{velocity}}}{C_{\text{velocity}}} $$
(compare Equation 1) is the ratio of high quality images in relation to recorded images. For both models, the behavior of HQIratio_velocity with increasing velocity was analyzed by approximating the vector (HQIratio0, HQIratio10, HQIratio20, HQIratio30) with a quadratic polynomial α∗x²+β∗x+γ on the one hand and with a Gaussian exponential function \(K*exp\left (-\frac {(x-L)}{M}\right)\) on the other hand.
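A small MATLAB sketch of Equation 1; the image counts below are invented and only mimic the qualitative behavior of Table 1.

C = [3000 520 260 170];   % hypothetical numbers of images showing the model
N = [2850 420 120  20];   % hypothetical numbers of images passing all tests

HQIratio = N ./ C;        % one HQIratio per velocity (0, 10, 20, 30 cm/s)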
For every approximation, the root mean square deviation RMSD, the coefficient of determination R², and the degrees of freedom were calculated as goodness-of-fit statistics. The RMSD in general is the sample standard deviation between the actually observed values y_t and the values \(\widehat {y}_{t}\) calculated by the approximation, \(\text {RMSD}=\sqrt {\frac {1}{n}*\sum _{t=1}^{n} \left (y_{t}-\widehat {y}_{t}\right)^{2}}\), where n denotes the sample size. For all approximations, the MATLAB Curve Fitting Toolbox (The MathWorks Inc 2014a) was used. Considering only the images in standstill, the pixel-wise deviation in depth values was analyzed for both models. For this purpose, the pixel-wise absolute differences of every two consecutive images were taken, summed up, and divided by the number of summands (N0 as defined in section 'Material and methods', 'Comparison criteria and statistical methods', Proportion of high quality images):
$$\text{SumDiff}\, := \,\frac{1}{N_{0}-1}*\sum_{i=1}^{N_{0}-1} \left|\text{image}_{i+1}-\text{image}_{i}\right| $$
(compare Equation 6). This resulted in a matrix containing the value of the criterion SumDiff for every pixel. Additionally, for every pixel the standard deviation (pwStd) of the depth values was calculated. While pwStd uses the quadratic deviation around the mean depth value, SumDiff takes the variation from image to image, neglecting the pixel-wise mean of the depth values. Each image could be split up into foreground (covered by the cow model) and background (set to zero). Abrupt changes in the distance between recorded object and camera lead to more possible ways for the infrared light to be reflected and return to the sensor, and thus to less accurate depth measurement. The problem of erratic depth values along steep edges is a well known problem of TOF cameras; compare (Langmann et al. 2012). The pixel-wise deviation in depth value was thus expected to be larger for pixels at the edge of the cow area. Visual inspection of the depth maps revealed that the main reflections or peaks in depth measurement occurred in an only one to two pixel wide area between background and cow area. Therefore, the foreground was split up into the disjoint areas "Boundary" and "Interior" (Figure 5). If a pixel's neighborhood of radius one intersected with both the foreground and the background, the pixel was considered "Boundary" (425 pixels for the fur-covered model, 505 pixels for the plaster cast). If the neighborhood was fully included in the foreground, the pixel was considered "Interior" (7920 pixels for the fur-covered model, 10903 pixels for the plaster cast). All images were tested using this one definition of "Boundary"; the effect of different boundary definitions was not tested. In the analysis of the fur-covered model, "Interior" was additionally partitioned into the disjoint areas "Interior Black" (6362 pixels) and "Interior White" (1558 pixels). A gray scale image of the amplitude map was used to distinguish between black and white fur (Figure 6). All pixels with a gray scale value ≥25 were considered to belong to the white spot.

Figure 5: "Interior" and "Boundary". All pixels with a neighborhood of radius 1 that intersected with the background (black) and the cow area (gray) belonged to "Boundary". All other pixels of the cow area were defined to be "Interior". Left: fur-covered model; Right: plaster cast.

Figure 6: Distinction between black and white fur. Left: segmented gray scale image of the fur-covered model; Right: the white spot is defined as all pixels with gray scale value ≥25.
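The criteria and region masks described above can be sketched in MATLAB as follows. Here imgs is assumed to be a 176×144×N0 stack of segmented standstill depth images (background set to zero) and amp a matching gray scale amplitude image; both names are hypothetical, and the radius-1 neighborhood is assumed to be the 3×3 window.

N0      = size(imgs, 3);
SumDiff = sum(abs(diff(imgs, 1, 3)), 3) / (N0 - 1);  % Equation 6, per pixel
pwStd   = std(imgs, 0, 3);                           % pixel-wise standard deviation

fg = imgs(:,:,1) > 0;              % foreground: pixels covered by the cow model
ero      = imerode(fg, ones(3));   % needs the Image Processing Toolbox
Boundary = fg & ~ero;              % radius-1 neighborhood meets the background
Interior = ero;                    % neighborhood fully inside the foreground

White = Interior & (amp >= 25);    % white spot, threshold as in Figure 6
Black = Interior & (amp <  25);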
The Wilcoxon rank-sum test is a nonparametric version of the classical t-test. It compares the medians of the sample groups by examining the ranks of the data's scores within both groups' observations. The values of SumDiff and pwStd were considerably skewed and thus not normally distributed. Additionally, the grouping into "Boundary" and "Interior" (likewise "Interior Black" and "Interior White") naturally led to unequal group sizes. Therefore, for both models, Wilcoxon rank-sum tests instead of t-tests were performed to examine whether the pixel's position in "Boundary" or "Interior" had a significant effect. Furthermore, all SumDiff and pwStd data collected from the regions "Interior" and "Boundary" was grouped by model, respectively, and Wilcoxon rank-sum tests were performed to look into the effect of the cow model on the pixel-wise deviation. Concerning the fur-covered model, additional Wilcoxon rank-sum tests (level of significance p=0.02) were performed to analyze the effect of the fur color. All group medians were calculated. In case significance was given, the ranked data was used to calculate the effect size \(\eta ^{2} = \frac {SS\,due\,to\,grouping\,variable}{total\,sum\,of\,squares\,(SS)}\), which is the proportion of variance in the data explained by the grouping. For all statistical calculations, the MATLAB Statistics Toolbox (The MathWorks 2014b) was used. The software automatically detected six points: the two ischial tuberosities, the two dishes of the rump, the tail, and the point on the backbone in a 30 pixel radius from the tail (BB30, Figure 4, right). It was analyzed how strongly velocity affects the precision of the coordinate determination. The X-coordinates were detected automatically in standstill and in motion. Within each velocity, their deviation was measured using the criterion RpV defined in Equation 7, and those RpV values were compared between velocities. Due to the deviation in depth values, the X-coordinates of the body parts were naturally subject to a 1 to 2 pixel fluctuation; this was analyzed with the criteria SumDiff and pwStd. As this analysis was to consider solely the fluctuation due to errors in the automatic determination of body parts, the standard deviation of the X-coordinates was not used as a criterion in this comparison. Instead, the quotient of the X-coordinates' range divided by the number of values (RpV: Range per number of Values) was taken as a measure of imprecision:
$${\small\begin{aligned} \text{RpV}_{\text{velocity}}\, := \,\frac{\max(\text{X-coordinates})-\min(\text{X-coordinates})}{N_{\text{velocity}}} \end{aligned}} $$
(compare Equation 7) with velocities 0, 10, 20, 30 cm/s. Therefore, for each of the six considered body parts, a vector (RpV0, RpV10, RpV20, RpV30) was calculated using the X-coordinates extracted from both models, respectively. These vectors were approximated with quadratic polynomials α∗x²+β∗x+γ using the MATLAB Curve Fitting Toolbox (The MathWorks Inc 2014a), and root mean square deviations RMSD, degrees of freedom, and coefficients of determination R² were calculated as goodness-of-fit statistics. The quadratic coefficients α describe the polynomials' growth behavior. To examine the cow model's effect on the growth of imprecision, a Wilcoxon rank-sum test was performed on the quadratic coefficients; the medians belonging to each cow model were calculated, and the effect size η² was determined. A sketch of these computations follows.
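A minimal MATLAB sketch of the test and the effect size computation; the coefficient vectors are invented placeholders, and ranksum and tiedrank belong to the Statistics Toolbox.

aPlaster = [-0.010 -0.007 -0.005 -0.003 0.001 0.002];  % hypothetical alphas
aFur     = [ 0.050  0.310  0.410  0.580 0.370 1.150];  % hypothetical alphas

p = ranksum(aPlaster, aFur);   % Wilcoxon rank-sum test on the two groups

% Effect size eta^2 on the ranked data: SS due to grouping / total SS
scores = [aPlaster aFur];
group  = [zeros(size(aPlaster)) ones(size(aFur))];
r      = tiedrank(scores);
SStot  = sum((r - mean(r)).^2);
SSgrp  = sum(group == 0) * (mean(r(group == 0)) - mean(r))^2 + ...
         sum(group == 1) * (mean(r(group == 1)) - mean(r))^2;
eta2   = SSgrp / SStot;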
Except for the recordings in standstill, the models were moving vertically through the camera's field of view. This implies that the Y-coordinates changed from image to image, which led to several sources of imprecision concerning the analysis of Y-coordinates. This analysis is therefore excluded from the main article and presented in Additional file 1.

Declaration of adherence to ethical guidelines

The authors declare that the plaster cast was taken strictly following international animal welfare guidelines. The institutions the authors are affiliated with do not have research ethics committees or review boards. The cast was taken in a completely noninvasive manner. The cow was not forced into an unnatural body posture and was fastened for no longer than one hour. Feed was provided during the procedure. No corrosive, burning, unpleasant, extremely hot or cold substances were used.

Abbreviations

TOF: Time-Of-Flight, a principle of depth measurement (see (Hansard et al. 2012))
η²: measure for the size of the effect of grouping data by a certain criterion, calculated from the analysis of variance's SS: \(\eta ^{2} = \frac {\mathrm {SS\,\,due\,\,to\,\,grouping}}{\mathrm {total\,\,SS}}\)
SS: sum of squares, \(SS=\sum _{t=1}^{n} \left (y_{t}-\widehat {y}_{t}\right)^{2}\), with y_t observed values, \(\widehat {y}_{t}\) estimated values, n sample size
BCS: body condition score
BFT: back fat thickness
SR4K: SwissRanger 4000, TOF camera produced by Mesa Imaging AG (MESA-Imaging 2013b)
HQIratio: quotient of the number of images that passed the quality tests by the number of images showing the cow model: HQIratio_velocity = N_velocity / C_velocity
P_plaster, P_fur, g_plaster, g_fur: approximating polynomials and Gaussian exponential functions for HQIratio (Equations 2, 3, 4, and 5)
RMSD: root mean square deviation, \(\text {RMSD}=\sqrt {\frac {1}{n}*\sum _{t=1}^{n} \left (y_{t}-\widehat {y}_{t}\right)^{2}}\), with y_t observed values, \(\widehat {y}_{t}\) estimated values, n sample size
R²: coefficient of determination
SumDiff: sum of the pixel-wise absolute differences of every two consecutive images, divided by the number of summands (Equation 6)
pwStd: pixel-wise standard deviation
RpV: quotient of the X-coordinates' range divided by the number of values (Equation 7)
BB30: point on the cow models' backbones in a radius of 30 pixels from the tail
t_d: phase delay
S1,…,S4: phase control infrared signals emitted by the SR4K to estimate the phase delay t_d
Q1,…,Q4: amount of electrical charge for S1,…,S4, respectively
d: distance between object and TOF camera
f (MHz): modulated signal frequency used by the SR4K
srs: SwissRanger stream, data format generated by the SR4K
BSRCfA: Bavarian State Research Center for Agriculture in Grub (Bayerische Landesanstalt für Landwirtschaft 2015) (Germany)
CNC: computerized numerical control
CAU: Christian-Albrechts-University Kiel (Christian-Albrechts-Universität zu Kiel 2015) (Germany)
C0, C10, C20, C30: numbers of images showing the cow model recorded at 0, 10, 20, 30 cm/s
N0, N10, N20, N30: numbers of images recorded at 0, 10, 20, 30 cm/s that passed the quality tests
α∗x²+β∗x+γ: (approximating) quadratic polynomial
\(K*exp\left (-\frac {(x-L)}{M}\right)\): (approximating) Gaussian exponential function

Acknowledgements

We gratefully acknowledge the Federal Office for Agriculture and Food and the Federal Ministry of Food, Agriculture and Consumer Protection for financial support of the project "Entwicklung und Bewertung eines automatischen optischen Sensorsystems zur Körperkonditionsüberwachung bei Milchkühen".
Furthermore, gratitude is expressed to our cooperation partners, the Institute for Agricultural Engineering and Animal Husbandry and the Institute for Animal Nutrition and Feed Management of the Bavarian State Research Center for Agriculture, and GEA Farm Technologies.

Additional file

Additional file 1: Precision of automatically determined Y-coordinates at different velocities. Supplementary material to "Quantification of the effects of fur, fur color, and velocity on Time-Of-Flight technology in dairy production", provided as PDF.

Authors' contributions

JS developed and implemented all image processing procedures and performed the statistical analysis. UB recorded the cow models and delivered feedback during software development in terms of test recordings, statistical quality tests and interim analyses. JHH wrote all parts of the software handling the camera settings, the acquisition and saving of data, and automated recording. WJ calculated pixel-wise coefficients of determination concerning the camera's depth values in advance, provided HF cows, and supported the software development with statistical quality tests. JH developed the recording setting in Grub and initiated and supervised the construction of the fur-covered model and the framework. GT revised the manuscript and provided valuable food for thought. All authors read and approved the final manuscript.

Author affiliations

Institute of Animal Breeding & Husbandry of Christian-Albrechts-University, Olshausenstraße 40, Kiel, 24098, Germany
Institute for Agricultural Engineering & Animal Husbandry of Bavarian State Research Center for Agriculture, Prof.-Dürrwaechter-Platz 2, Poing-Grub, 85586, Germany

References

Andersen M, Jensen T, Lisouski P, Mortensen A, Hansen M, Gregersen T, Ahrent P (2012) Kinect depth sensor evaluation for computer vision applications. Technical Report ECE-TR6, Department of Engineering, Aarhus University, Denmark.
Azzaro G, Caccamo M, Ferguson J, Battiato S, Farinella G, Guarnera G, Puglisi G, Petriglieri R, Licitra G (2011) Objective estimation of body condition score by modeling cow body shape from digital images. J Dairy Sci 94(4): 2126–2137. http://www.sciencedirect.com/science/article/pii/S0022030211001846
Baehr HD, Stephan K (2004) Wärme- und Stoffübertragung. 4th edn. Springer, Berlin.
Bayerische Landesanstalt für Landwirtschaft (2015). www.lfl.bayern.de
Bercovich A, Edan Y, Alcahantis V, Moallem U, Parmet Y, Honig H, Maltz E, Antler A, Halachmi I (2012) Automatic cow's body condition scoring. http://www2.atb-potsdam.de/cigr-imageanalysis/images/images12/tabla_137_C0565.pdf
Booth C, Warnick L, Gröhn Y, Maizon D, Guard C, Janssen D (2004) Effect of lameness on culling in dairy cows. J Dairy Sci 87(12): 4115–4122. http://www.sciencedirect.com/science/article/pii/S0022030204735547
Christian-Albrechts-Universität zu Kiel (2015). http://www.uni-kiel.de
Cohen J (1988) Statistical power analysis for the behavioral sciences. 2nd edn. Lawrence Erlbaum Associates, Hillsdale.
Collard B, Boettcher P, Dekkers J, Petitclerc D, Schaeffer L (2000) Relationships between energy balance and health traits of dairy cattle in early lactation. J Dairy Sci 83(11): 2683–2690. http://www.sciencedirect.com/science/article/pii/S0022030200751629
Ferguson JD, Galligan DT, Thomsen N (1994) Principal descriptors of body condition score in Holstein cows.
J Dairy Sci 77: 2695–2703. http://www.sciencedirect.com/science/article/pii/S002203029477212X
Fofi D, Sliwa T, Voisin Y (2004) A comparative survey of invisible structured light. In: SPIE electronic imaging – machine vision applications in industrial inspection XII, 90–97, San Jose, USA. http://fofi.pagesperso-orange.fr/Downloads/Fofi_EI2004.pdf
Halachmi I, Klopcic M, Polak P, Roberts D, Bewley J (2013) Automatic assessment of dairy cattle body condition score using thermal imaging. Comput Electron Agr 99: 35–40. http://www.sciencedirect.com/science/article/pii/S0168169913001907
Hansard M, Lee S, Choi O, Horaud R (2012) Time-of-Flight cameras – principles, methods and applications. Springer, London.
Hertem TV, Alchanatis V, Antler A, Maltz E, Halachmi I, Schlageter-Tello A, Lokhorst C, Viazzi S, Romanini C, Pluk A, Bahr C, Berckmans D (2013) Comparison of segmentation algorithms for cow contour extraction from natural barn background in side view images. Comput Electron Agr 91: 65–74. http://www.sciencedirect.com/science/article/pii/S016816991200275X
Hertem TV, Viazzi S, Steensels M, Maltz E, Antler A, Alchanatis V, Schlageter-Tello AA, Lokhorst K, Romanini EC, Bahr C, Berckmans D, Halachmi I (2014) Automatic lameness detection based on consecutive 3D-video recordings. Biosyst Eng 119: 108–116. http://www.sciencedirect.com/science/article/pii/S1537511014000142
Hoegg T, Lefloch D, Kolb A (2013) Real-time motion artifact compensation for PMD-ToF images. In: Time-of-Flight and depth imaging. Sensors, algorithms, and applications (Dagstuhl 2012 seminar on Time-of-Flight imaging and GCPR 2013 workshop on imaging new modalities), 273–288. Springer, Berlin, Heidelberg.
Krukowski M (2009) Automatic determination of body condition score of dairy cows from 3D images. Master's thesis, KTH Computer Science and Communication, Stockholm.
Langmann B, Hartmann K, Loffeld O (2012) Depth camera technology comparison and performance evaluation. In: Proceedings of the 1st international conference on pattern recognition applications and methods, 438–444. SciTePress. doi:10.5220/0003778304380444
Lau D (2013) The science behind Kinects or Kinect 1.0 versus 2.0. http://www.gamasutra.com/blogs/DanielLau/20131127/205820/The_Science_Behind_Kinects_or_Kinect_10_versus_20.php. Accessed 22 Aug 2014.
MESA-Imaging (2013a) SR4000 user manual, version 2.0. http://www.mesa-imaging.ch/prodview4k.php. Download: 18th of March.
MESA-Imaging (2013b). www.mesa-imaging.ch
Microsoft (2010) PrimeSense supplies 3-D-sensing technology to "Project Natal" for Xbox 360. www.microsoft.com/en-us/news/press/2010/mar10/03-31primesensepr.aspx. Accessed 2 June 2014.
Microsoft (2014) Kinect for Windows. http://www.microsoft.com/en-us/kinectforwindows
Pluk A, Bahr C, Poursaberi A, Maertens W, van Nuffel A, Berckmans D (2012) Automatic measurement of touch and release angles of the fetlock joint for lameness detection in dairy cattle using vision techniques. J Dairy Sci 95(4): 1738–1748. http://www.sciencedirect.com/science/article/pii/S0022030212001397
Poursaberi A, Pluk A, Bahr C, Berckmanns D, Veermae I, Kokin E, Pokalainen V (2011) Online lameness detection in dairy cattle using Body Movement Pattern (BMP). In: Intelligent Systems Design and Applications (ISDA) 2011, 11th International Conference. IEEE.
doi:10.1109/ISDA.2011.6121743
Salau J, Haas J, Junge W, Bauer U, Harms J, Bieletzki S (2014) Feasibility of automated body trait determination using the SR4K Time-Of-Flight camera in cow barns. SpringerPlus 3(225). http://www.springerplus.com/content/3/1/225
Song X, Leroy T, Vranken E, Maertens W, Sonck B, Berckmans D (2008) Automatic detection of lameness in dairy cattle – vision-based trackway analysis in cow's locomotion. Comput Electron Agr 64: 39–44. http://www.sciencedirect.com/science/article/pii/S0168169908001440
The MathWorks Inc (2014a) Curve Fitting Toolbox user's guide, MATLAB. http://www.mathworks.com/help/pdf_doc/curvefit/curvefit.pdf
The MathWorks Inc (2014b) Statistics Toolbox user's guide, MATLAB. http://www.mathworks.com/help/pdf_doc/stats/stats.pdf
Viazzi S, Bahr C, Hertem TV, Schlageter-Tello A, Romanini C, Halachmi I, Lokhorst C, Berckmans D (2014) Comparison of a three-dimensional and two-dimensional camera system for automated measurement of back posture in dairy cows. Comput Electron Agr 100: 139–147. http://www.sciencedirect.com/science/article/pii/S0168169913002755
Weber A, Salau J, Haas J, Junge W, Bauer U, Harms J, Suhr O, Schönrock K, Rothfuß H, Bieletzki S, Thaller G (2014) Estimation of backfat thickness using extracted traits from an automatic 3D optical system in lactating Holstein-Friesian cows. Livest Sci 165: 129–137. http://www.sciencedirect.com/science/article/pii/S1871141314001747

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.
Periodicity and stability analysis of impulsive neural network models with generalized piecewise constant delays

Kuo-Shou Chiu
Departamento de Matemática, Facultad de Ciencias Básicas, Universidad Metropolitana de Ciencias de la Educación, José Pedro Alessandri 774, Santiago, Chile

Received August 2020; Revised December 2020; Early access February 2021; Published February 2022

Fund Project: This research was in part supported by PGI 03-2020 DIUMCE.

Abstract: In this paper, the global exponential stability and periodicity are investigated for impulsive neural network models with Lipschitz continuous activation functions and a generalized piecewise constant delay. Sufficient conditions for the existence and uniqueness of periodic solutions of the model are established by applying a fixed point theorem and the method of successive approximations. By constructing suitable differential inequalities with generalized piecewise constant delay, some sufficient conditions for the global exponential stability of the model are obtained. The method, which does not make use of Lyapunov functionals, is simple and valid for the periodicity and stability analysis of impulsive neural network models with variable and/or deviating arguments, and the results extend some previous ones. Typical numerical examples with simulations are utilized to illustrate the validity and the reduced conservatism of the theoretical results. The paper ends with a brief conclusion.

Keywords: Impulsive neural networks, piecewise constant delay of generalized type, periodic solutions, asymptotic stability, global exponential stability, Gronwall integral inequality.

Mathematics Subject Classification: Primary: 92B20, 34A37; Secondary: 34A36, 34K13, 34D23, 34K20.

Citation: Kuo-Shou Chiu. Periodicity and stability analysis of impulsive neural network models with generalized piecewise constant delays. Discrete & Continuous Dynamical Systems - B, 2022, 27 (2): 659-689. doi: 10.3934/dcdsb.2021060
Figure 1a. Some trajectories uniformly convergent to the unique exponentially stable $\pi$/2-periodic solution of the ICNN models with IDEGPCD system (33)
Figure 1b. Phase plots of state variables ($x_1$, $x_2$, $x_3$) in the ICNN models with IDEGPCD system (33) with the initial condition (7, 6, 3)
Figure 1c. Phase plots of state variables ($x_1$, $x_2$, $x_3$) in the ICNN models with IDEGPCD system (33) with the initial condition (6.7897, 6.0565, 4.6992)
Figure 1d. Phase plots of state variables ($t$, $x_1$, $x_2$) in the ICNN models with IDEGPCD system (33)
Figure 1e. Phase plots of state variables ($t$, $x_1$, $x_3$) in the ICNN models with IDEGPCD system (33)
Figure 1f. Phase plots of state variables ($t$, $x_2$, $x_3$) in the ICNN models with IDEGPCD system (33)
Figure 2a. $\pi$/2-periodic solution of the CNN models with DEGPCD system (33a) for $t\in [0, 6\pi]$ with the initial value (4.9228, 4.5238, 3.6121)
Figure 2b. Trajectories uniformly convergent to the unique exponentially stable $\pi$/2-periodic solution of the CNN models with DEGPCD system (33a) with the initial value (5.0, 4.3, 3.65)
Figure 2c. Phase plots of state variables ($x_1$, $x_2$, $x_3$) in the CNN models with DEGPCD system (33a) with the initial condition (4.9228, 4.5238, 3.6121)
Figure 3a. Some trajectories uniformly convergent to the unique $1$-periodic solution of the ICNN models with IDEGPCD system (37)
Figure 3b. Exponential convergence of two trajectories towards a $1$-periodic solution of the ICNN models with IDEGPCD system (37). Initial conditions: ($i$) (3, 6) in red and ($ii$) (4, 6) in blue
Figure 3c. Phase plots of state variables ($t$, $x_1$, $x_2$) in the ICNN models with IDEGPCD system (37)
Figure 4a. Unique asymptotically stable solution of the CNN models with DEGPCD system (37a)
Figure 4b. Unique asymptotically stable solution of the CNN models with DEGPCD system (37a)
Figure 4c. Some trajectories uniformly convergent to the unique asymptotically stable solution of the CNN models with DEGPCD system (37a)
CommonCrawl
Last edited by Mikaktilar Sunday, February 9, 2020 | History 5 edition of Calculus and analytic geometry found in the catalog. Calculus and analytic geometry Douglas F. Riddle Published 1979 by Wadsworth Pub. Co. in Belmont, Calif . Calculus., Geometry, Analytic. Statement Douglas F. Riddle. LC Classifications QA303 .R53 1979 Pagination 972 p. in various pagings : OCLC/WorldCa 4504468 Anton enjoys traveling and photography. He has published numerous research papers in functional analysis, approximation theory, and topology, as well as pedagogical papers. He was well liked at MIT, and was invited to join the faculty after his teaching fellowship ended. He is best known for his textbooks in mathematics, which are among the most widely used in the world. Calculus and Analytic Geometry is specifically written for the students so it is known as the best course book as well as taught in various universities. It provides solid grounds and the concise information necessary to learn mathematics, science, and engineering problems. The couple lived in Pullman, Washington for a year; Thomas worked at a local shoe store to save money for further graduate education. Although not published in his lifetime, a manuscript form of Ad locos planos et solidos isagoge Introduction to Plane and Solid Loci was circulating in Paris injust prior to the publication of Descartes' Discourse. He is currently a faculty consultant for the Educational testing Service Advanced Placement Calculus Test, a board member of the North Carolina, Association of Advanced Placement Mathematics Teachers, and is actively involved in nurturing mathematically talented high school students through leadership in the Charlotte Mathematics Club. Irl C. In he joined the Mathematics Department at Drexel University, where he taught full time until Finney is a very important book in undergraduate programs. When he is not pursuing mathematics, Professor Bivens enjoys juggling, swimming, walking, and spending time with his son Robert. This monograph will be a useful resource for undergraduate students of mathematics and algebra. A typical academic year sees him teachingcourses in calculus, topology, and geometry. Probability and Statistics. The loneliness of the poet/housewife Supply Estimates 1992-93 for the year ending 31 March 1993. Profiles--agricultural sciences Sun, stand thou still Drug testing in hair Construction forecasts 1996-1997-1998 Cervantes, a biography Offending women H.L.M. Natural history of Hertfordshire Senator from Michigan Paper respecting the slave-trade Mars / Venus Affair The substance of a thanksgiving sermon Witchcraft in Northamptonshire. New complete arithmetic on the inductive method Economics in ship design and operation Jump Start Your Career - The Nuts and Bolts for Entering The Workforce for the First Time Who killed the pinup queen? Calculus and analytic geometry book The reader is then introduced to derivatives and applications of differentiation; exponential and trigonometric functions; and techniques and applications of integration. Calculus and Analytic Geometry[ edit ] InAddison-Wesley was then a new publishing company specializing in textbooks and technical literature. It is providing problem-solving strategies for helping the students that can give a better hold and understanding of various new problem-solving techniques in Calculus. Subsequent chapters deal with inverse functions, plane analytic geometry, and approximation as well as convergence, and power series. 
Forrelaxation, Calculus and analytic geometry book plays basketball, juggles, and travels. Since that time he has been an adjunct professor at Drexel and has devoted the majority of his time to textbook writing and activities for mathematical associations. They lived in a tent with a wooden floor and a coal stove. From tohe served on the board of governors of the Mathematical Association of America and was the group's first vice president from to A sabbatical in took him to Swarthmore College as a visiting associate professor. You may also like to download Calculus by Howard Anton 10th Edition. He has published numerous articles on undergraduate mathematics, as well as research papers in his specialty, differential geometry. Later, after much experience in the use of the topic, an appropriate amount of theory is presented. In the early 's he worked for Burroughs Corporation and Avco Corporation at Cape Canaveral, Florida, where he was involved with the manned space program. The management was unhappy with the calculus textbook they were then publishing, so they approached Thomas, asking if he could revise the book. Commitment to education[ edit ] Thomas became involved with math and science education in America's primary and secondary schools some years before the Soviet Union launched Sputnik. Subsequent chapters deal with inverse functions, plane analytic geometry, and approximation as well as convergence, and power series. During the Second World WarThomas was involved in early computation systems and programmed the differential analyzer to calculate firing tables for the Navy. The couple lived in Pullman, Washington for a year; Thomas worked at a local shoe store to save money for further graduate education. When he is not pursuing mathematics, Professor Bivens enjoys juggling, swimming, walking, and spending time with his son Robert. Later life[ edit ] Jane Thomas died in from breast cancer. From tohe served on the executive committee of the mathematics division of the American Society for Engineering Education. InThomas married Thais Erving; she died inalso from breast cancer. Calculus and Analytic Geometry ninth Edition by Thomas and Finney is a very comprehensive book for those who want to learn Calculus from scratch. Davis received his B. It was Leonhard Euler who first applied the coordinate method in a systematic study of space curves and surfaces. Professor Davis has held several offices in the Southeastern section of the MAA, including chair and secretary-treasurer. Calculus and Analytic Geometry. There are currently more than one hundred versions of his books, including translations into Spanish, Arabic, Portuguese, Italian, Indonesian, French, Japanese, Chinese, Hebrew, and German. The initial approach to each topic is intuitive, numerical, and motivated by examples, with theory kept to a bare minimum. Thomas, George B. Similarly, Euclidean space is given coordinates where every point has three coordinates. After his stepmother Lena died from complications due to childbirth, the father and son moved to the Spokane Valley in Washington State, where they both attended Spokane University. Initially the work was not well received, due, in part, to the many gaps in arguments and complicated equations.Jan 01, · Buy a cheap copy of Calculus and Analytic Geometry book by Ross L. Finney. The tenth edition of this clear, precise calculus text with superior applications sets the standard in calculus. The tenth edition of this proven text was carefully Free shipping over $/5(5). 
Chapter Real numbers, limits and continuity Notes of the book Calculus with Analytic Geometry written by Dr. S. M. Yusuf and Prof. Muhammad Amin, published by Ilmi Kitab Khana, Lahore - PAKISTAN. The notes of this chapter is written by Prof. Muhammad Farooq, Punjab College, Sargodha.$\mathbb{R}$$\mathbb{R}$$\mathbb{R}$. In classical mathematics, analytic geometry, also known as coordinate geometry or Cartesian geometry, is the study of geometry using a coordinate system. This contrasts with synthetic geometry. Analytic geometry is widely used in physics and engineering, and also in aviation, rocketry, space science, and spaceflight. Student Solutions Manual, Volume 2, to accompany Calculus and Analytic Geometry by Stein,Sherman; Barcellos,Anthony and a great selection of related books, art and collectibles available now at atlasbowling.com Calculus with Analytic Geometry. Murray H. Protter, Philip E. Protter. Jones & Bartlett I would like to request that this book be available in book stores especially in Philippines because there lots of students would like to use this as reference in the mathematics course for masteral and undergrad students. Mathematics / Calculus 5/5(5). Calculus with Analytic Geometry presents the essentials of calculus with analytic geometry. The emphasis is on how to set up and solve calculus problems, that is, how to apply calculus. The initial approach to each topic is intuitive, numerical, and motivated by examples, with theory kept to a . atlasbowling.com - Calculus and analytic geometry book © 2020
CommonCrawl
Ranking the effectiveness of worldwide COVID-19 government interventions

Nils Haug, Lukas Geyrhofer, Alessandro Londei, Elma Dervic, Amélie Desvars-Larrive, Vittorio Loreto, Beate Pinior, Stefan Thurner & Peter Klimek

Nature Human Behaviour volume 4, pages 1303–1312 (2020)

Assessing the effectiveness of non-pharmaceutical interventions (NPIs) to mitigate the spread of SARS-CoV-2 is critical to inform future preparedness response plans. Here we quantify the impact of 6,068 hierarchically coded NPIs implemented in 79 territories on the effective reproduction number, Rt, of COVID-19. We propose a modelling approach that combines four computational techniques merging statistical, inference and artificial intelligence tools. We validate our findings with two external datasets recording 42,151 additional NPIs from 226 countries. Our results indicate that a suitable combination of NPIs is necessary to curb the spread of the virus. Less disruptive and costly NPIs can be as effective as more intrusive, drastic, ones (for example, a national lockdown). Using country-specific 'what-if' scenarios, we assess how the effectiveness of NPIs depends on the local context such as timing of their adoption, opening the way for forecasting the effectiveness of future interventions.

In the absence of vaccines and antiviral medication, non-pharmaceutical interventions (NPIs) implemented in response to (emerging) epidemic respiratory viruses are the only option available to delay and moderate the spread of the virus in a population1. Confronted with the worldwide COVID-19 epidemic, most governments have implemented bundles of highly restrictive, sometimes intrusive, NPIs. Decisions had to be taken under rapidly changing epidemiological situations, despite (at least at the very beginning of the epidemic) a lack of scientific evidence on the individual and combined effectiveness of these measures2,3,4, degree of compliance of the population and societal impact. Government interventions may cause substantial economic and social costs5 while affecting individuals' behaviour, mental health and social security6. Therefore, knowledge of the most effective NPIs would allow stakeholders to implement, judiciously and in a timely manner, a specific sequence of key interventions to combat a resurgence of COVID-19 or any other future respiratory outbreak.

Because many countries rolled out several NPIs simultaneously, the challenge arises of disentangling the impact of each individual intervention. To date, studies of the country-specific progression of the COVID-19 pandemic7 have mostly explored the independent effects of a single category of interventions. These categories include travel restrictions2,8, social distancing9,10,11,12 and personal protective measures13. Additionally, modelling studies typically focus on NPIs that directly influence contact probabilities (for example, social distancing measures18, social distancing behaviours12, self-isolation, school closures, bans on public events20 and so on).
Some studies focused on a single country or even a town14,15,16,17,18, while other research combined data from multiple countries but pooled NPIs into rather broad categories15,19,20,21, which eventually limits the assessment of specific, potentially critical, NPIs that may be less costly and more effective than others. Despite their widespread use, relative ease of implementation, broad choice of available tools and their importance in developing countries where other measures (for example, increases in healthcare capacity, social distancing or enhanced testing) are difficult to implement22, little is currently known about the effectiveness of different risk-communication strategies. An accurate assessment of communication activities requires information on the targeted public, means of communication and content of the message.

Using a comprehensive, hierarchically coded dataset of 6,068 NPIs implemented in March–April 2020 (when most European countries and US states experienced their first infection waves) in 79 territories23, here we analyse the impact of government interventions on Rt using harmonized results from a multi-method approach consisting of (1) a case-control analysis (CC), (2) a step function approach to LASSO time-series regression (LASSO), (3) random forests (RF) and (4) transformers (TF). We contend that the combination of four different methods, spanning statistical, inference and artificial intelligence classes of tools, also allows assessment of the structural uncertainty of individual methods24. We also investigate country-specific control strategies as well as the impact of selected country-specific metrics.

All the above approaches (1–4) yield comparable rankings of the effectiveness of different categories of NPIs across their hierarchical levels. This remarkable agreement allows us to identify a consensus set of NPIs that lead to a significant reduction in Rt. We validate this consensus set using two external datasets covering 42,151 measures in 226 countries. Furthermore, we evaluate the heterogeneity of the effectiveness of individual NPIs in different territories. We find that the time of implementation, previously implemented measures, different governance indicators25, as well as human and social development affect the effectiveness of NPIs in countries to varying degrees.

Results

Our main results are based on the Complexity Science Hub COVID-19 Control Strategies List (CCCSL)23. This dataset provides a hierarchical taxonomy of 6,068 NPIs, coded on four levels, including eight broad themes (level 1, L1) divided into 63 categories of individual NPIs (level 2, L2) that include >500 subcategories (level 3, L3) and >2,000 codes (level 4, L4). We first compare the results for NPI effectiveness rankings for the four methods of our approach (1–4) on L1 (themes) (Supplementary Fig. 1). A clear picture emerges where the themes of social distancing and travel restrictions are top ranked in all methods, whereas environmental measures (for example, cleaning and disinfection of shared surfaces) are ranked least effective.

We next compare results obtained on L2 of the NPI dataset, that is, using the 46 NPI categories implemented more than five times. The methods largely agree on the list of interventions that have a significant effect on Rt (Fig. 1 and Table 1). The individual rankings are highly correlated with each other (P = 0.0008; Methods). Six NPI categories show significant impacts on Rt in all four methods.
In Supplementary Table 1 we list the subcategories (L3) belonging to these consensus categories.

Fig. 1: Change in Rt (ΔRt) for 46 NPIs at L2, as quantified by CC analysis, LASSO and TF regression. The left-hand panel shows the combined 95% confidence intervals of ΔRt for the most effective interventions across all included territories. The heatmap in the right-hand panel shows the corresponding Z-scores of measure effectiveness as determined by the four different methods. Grey indicates no significantly positive effect. NPIs are ranked according to the number of methods agreeing on their impacts, from top (significant in all methods) to bottom (ineffective in all analyses). L1 themes are colour-coded as in Supplementary Fig. 1.

Table 1 Comparison of effectiveness rankings on L2

A normalized score for each NPI category is obtained by rescaling the result within each method to range between zero (least effective) and one (most effective) and then averaging this score. The maximal (minimal) NPI score is therefore 100% (0%), meaning that the measure is the most (least) effective measure in each method. We show the normalized scores for all measures in the CCCSL dataset in Extended Data Fig. 1, for the CoronaNet dataset in Extended Data Fig. 2 and for the WHO Global Dataset of Public Health and Social Measures (WHO-PHSM) in Extended Data Fig. 3.

Among the six full-consensus NPI categories in the CCCSL, the largest impacts on Rt are shown by small gathering cancellations (83%, ΔRt between −0.22 and −0.35), the closure of educational institutions (73%, and estimates for ΔRt ranging from −0.15 to −0.21) and border restrictions (56%, ΔRt between −0.057 and −0.23). The consensus measures also include NPIs aiming to increase healthcare and public health capacities (increased availability of personal protective equipment (PPE): 51%, ΔRt −0.062 to −0.13), individual movement restrictions (42%, ΔRt −0.08 to −0.13) and national lockdown (including stay-at-home order in US states) (25%, ΔRt −0.008 to −0.14).

We find 14 additional NPI categories that reach consensus in three of our methods. These include mass gathering cancellations (53%, ΔRt between −0.13 and −0.33), risk-communication activities to inform and educate the public (48%, ΔRt between −0.18 and −0.28) and government assistance to vulnerable populations (41%, ΔRt between −0.17 and −0.18). Among the least effective interventions we find: government actions to provide or receive international help, measures to enhance testing capacity or improve case detection strategy (which can be expected to lead to a short-term rise in cases), tracing and tracking measures as well as land border and airport health checks and environmental cleaning.

In Fig. 2 we show the findings on NPI effectiveness in a co-implementation network. Nodes correspond to categories (L2) with size being proportional to their normalized score. Directed links from i to j indicate a tendency that countries implement NPI j after they have implemented i. The network therefore illustrates the typical NPI implementation sequence in the 56 countries and the steps within this sequence that contribute most to a reduction in Rt. For instance, there is a pattern where countries first cancel mass gatherings before moving on to cancellations of specific types of small gatherings, where the latter associates on average with more substantial reductions in Rt.
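The construction of this network is described in Methods. As a rough, self-contained illustration of the underlying idea, the sketch below derives time-ordered co-implementation edges from a toy implementation table; the records, column names and the simple mean-lag rule are ours for illustration and are not part of the actual CCCSL pipeline.

```python
# Minimal sketch of a time-ordered NPI co-implementation network.
# Toy records and the mean-lag rule are illustrative only.
from itertools import combinations

import networkx as nx
import pandas as pd

records = pd.DataFrame(
    [   # (territory, NPI category, epidemic age of implementation)
        ("A", "Mass gathering cancellation", 2),
        ("A", "Small gathering cancellation", 7),
        ("A", "National lockdown", 12),
        ("B", "Mass gathering cancellation", 4),
        ("B", "Small gathering cancellation", 9),
        ("C", "Small gathering cancellation", 3),
        ("C", "National lockdown", 10),
    ],
    columns=["territory", "npi", "day"],
)

G = nx.DiGraph()
for i, j in combinations(records["npi"].unique(), 2):
    # Territories that implemented both NPIs of the pair
    both = records[records["npi"].isin([i, j])].pivot(
        index="territory", columns="npi", values="day"
    ).dropna()
    if both.empty:
        continue
    delta = (both[j] - both[i]).mean()  # is j adopted later than i on average?
    if delta > 0:
        G.add_edge(i, j, mean_lag_days=delta)
    elif delta < 0:
        G.add_edge(j, i, mean_lag_days=-delta)

for u, v, d in G.edges(data=True):
    print(f"{u} -> {v} (mean lag {d['mean_lag_days']:.1f} days)")
```

An edge therefore encodes only the typical order of adoption across territories, not any causal claim.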
Education and active communication with the public is one of the most effective 'early measures' (implemented around 15 days before 30 cases were reported and well before most other measures). Most social distancing (that is, closure of educational institutions), travel restriction measures (that is, individual movement restrictions like curfew and national lockdown) and measures to increase the availability of PPE are typically implemented within the first 2 weeks after reaching 30 cases, with varying impacts on Rt; see also Fig. 1.

Fig. 2: Time-ordered NPI co-implementation network across countries. Nodes are categories (L2), with colours indicating the theme (L1) and size being proportional to the average effectiveness of the intervention. Arrows from nodes i to j denote that those countries which have already implemented intervention i tend to implement intervention j later in time. Nodes are positioned vertically according to their average time of implementation (measured relative to the day where that country reached 30 confirmed cases), and horizontally according to their L1 theme. The stacked histogram on the right shows the number of implemented NPIs per time period (epidemic age) and theme (colour). v.p., vulnerable populations; c.e., certain establishments; quarantine f., quarantine facilities.

Within the CC approach, we can further explore these results on a finer hierarchical level. We show results for 18 NPIs (L3) of the risk-communication theme in Supplementary Information and Supplementary Table 2. The most effective communication strategies include warnings against travel to, and return from, high-risk areas (ΔRt^CC = −0.14 (1); the number in parentheses denotes the standard error) and several measures to actively communicate with the public. These include encouraging, for example, staying at home (ΔRt^CC = −0.14 (1)), social distancing (ΔRt^CC = −0.20 (1)), workplace safety measures (ΔRt^CC = −0.18 (2)), self-initiated isolation of people with mild respiratory symptoms (ΔRt^CC = −0.19 (2)) and information campaigns (ΔRt^CC = −0.13 (1)) (through various channels including the press, flyers, social media or phone messages).

Validation with external datasets

We validate our findings with results from two external datasets (Methods). In the WHO-PHSM dataset26 we find seven full-consensus measures (agreement on significance by all methods) and 17 further measures with three agreements (Extended Data Fig. 4). These consensus measures show a large overlap with those (three or four matches in our methods) identified using the CCCSL, and include top-ranked NPI measures aiming at strengthening the healthcare system and testing capacity (labelled as 'scaling up'), for example, increasing the healthcare workforce, purchase of medical equipment, testing, masks, financial support to hospitals, increasing patient capacity, increasing domestic production of PPE.
Other consensus measures consist of social distancing measures ('cancelling, restricting or adapting private gatherings outside the home', adapting or closing 'offices, businesses, institutions and operations', 'cancelling, restricting or adapting mass gatherings'), measures for special populations ('protecting population in closed settings', encompassing long-term care facilities and prisons), school closures, travel restrictions (restricting entry and exit, travel advice and warning, 'closing international land borders', 'entry screening and isolation or quarantine') and individual movement restriction ('stay-at-home order', which is equivalent to confinement in the WHO-PHSM coding). 'Wearing a mask' exhibits a significant impact on Rt in three methods (ΔRt between −0.018 and −0.12). The consensus measures also include financial packages and general public awareness campaigns (as part of 'communications and engagement' actions). The least effective measures include active case detection, contact tracing and environmental cleaning and disinfection.

The CCCSL results are also compatible with findings from the CoronaNet dataset27 (Extended Data Figs. 5 and 6). Analyses show four full-consensus measures and 13 further NPIs with an agreement of three methods. These consensus measures include heterogeneous social distancing measures (for example, restriction and regulation of non-essential businesses, restrictions of mass gatherings), closure and regulation of schools, travel restrictions (for example, internal and external border restrictions), individual movement restriction (curfew), measures aiming to increase the healthcare workforce (for example, 'nurses', 'unspecified health staff') and medical equipment (for example, PPE, 'ventilators', 'unspecified health materials'), quarantine (that is, voluntary or mandatory self-quarantine and quarantine at a government hotel or facility) and measures to increase public awareness ('disseminating information related to COVID-19 to the public that is reliable and factually accurate'). Twenty-three NPIs in the CoronaNet dataset do not show statistical significance in any method, including several restrictions and regulations of government services (for example, for tourist sites, parks, public museums, telecommunications), hygiene measures for public areas and other measures that target very specific populations (for example, certain age groups, visa extensions).

Country-level approach

A sensitivity check of our results with respect to the removal of individual continents from the analysis also indicates substantial variations between world geographical regions in terms of NPI effectiveness (Supplementary Information). To further quantify how much the effectiveness of an NPI depends on the particular territory (country or US state) where it has been introduced, we measure the heterogeneity of NPI rankings in different territories through an entropic approach in the TF method (Methods). Figure 3 shows the normalized entropy of each NPI category versus its rank. A value of entropy close to zero implies that the corresponding NPI has a similar rank relative to all other NPIs in all territories: in other words, the effectiveness of the NPI does not depend on the specific country or state. On the other hand, a high value of the normalized entropy signals that the performance of each NPI depends largely on the geographical region.

Fig. 3: Normalized entropies versus rank for all NPIs at level L2.
Each NPI is colour coded according to its theme of belonging (L1), as indicated in the legend. The blue curve represents the same information obtained from a reshuffled dataset of NPIs.

The values of the normalized entropies for many NPIs are far from one, and are also below the corresponding values obtained through temporal reshuffling of NPIs in each country. The effectiveness of many NPIs therefore is, first, significant and, second, depends on the local context (combination of socio-economic features and NPIs already adopted) to varying degrees. In general, social distancing measures and travel restrictions show a high entropy (effectiveness varies considerably across countries) whereas case identification, contact tracing and healthcare measures show substantially less country dependence.

We further explore this interplay of NPIs with socio-economic factors by analysing the effects of demographic and socio-economic covariates, as well as indicators for governance and human and economic development in the CC method (Supplementary Information). While the effects of most indicators vary across different NPIs at rather moderate levels, we find a robust tendency that NPI effectiveness correlates negatively with indicator values for governance-related accountability and political stability (as quantified by World Governance Indicators provided by the World Bank).

Because the heterogeneity of the effectiveness of individual NPIs across countries points to a non-independence among different NPIs, the impact of a specific NPI cannot be evaluated in isolation. Since it is not possible in the real world to change the sequence of NPIs adopted, we resort to 'what-if' experiments to identify the most likely outcome of an artificial sequence of NPIs in each country. Within the TF approach, we selectively delete one NPI at a time from all sequences of interventions in all countries and compute the ensuing evolution of Rt compared to the actual case. To quantify whether the effectiveness of a specific NPI depends on its epidemic age of implementation, we study artificial sequences of NPIs constructed by shifting the selected NPI to other days, keeping the other NPIs fixed. In this way, for each country and each NPI, we obtain a curve of the most likely change in Rt versus the adoption time of the specific NPI.

Figure 4 shows an example of the results for a selection of NPIs (see Supplementary Information for a more extensive report on other NPIs). Each curve shows the average change in Rt versus the adoption time of the NPI, averaged over the countries where that NPI has been adopted. Figure 4a refers to the national lockdown (including stay-at-home order implemented in US states). Our results show a moderate effect of this NPI (low change in Rt) as compared to other, less drastic, measures. Figure 4b shows NPIs with the pattern 'the earlier, the better'. For those measures ('closure of educational institutions', 'small gatherings cancellation', 'airport restrictions' and many more shown in Supplementary Information), early adoption is always more beneficial. In Fig. 4c, 'enhancing testing capacity' and 'surveillance' exhibit a negative impact (that is, an increase) on Rt, presumably related to the fact that more testing allows for more cases to be identified. Finally, Fig. 4d, showing 'tracing and tracking' and 'activate case notification', demonstrates an initially negative effect that turns positive (that is, toward a reduction in Rt).
Refer to Supplementary Information for a more comprehensive analysis of all NPIs.

Fig. 4: Change in Rt as a function of the adoption time of selected NPIs, averaged over countries where those NPIs had been adopted. a, National lockdown (including stay-at-home order in US states). b, A selection of three NPIs displaying 'the earlier the better' behaviour, that is, their impact is enhanced if implemented at earlier epidemic ages. c, Enhance laboratory testing capacity and Surveillance. d, Tracing and tracking and Activate case notification. Negative (positive) values indicate that the adoption of the NPI has reduced (increased) the value of Rt. Shaded areas denote s.d.

Discussion

Our study dissects the entangled packages of NPIs23 and quantifies their effectiveness. We validate our findings using three different datasets and four independent methods. Our findings suggest that no NPI acts as a silver bullet on the spread of COVID-19. Instead, we identify several decisive interventions that significantly contribute to reducing Rt below one and that should therefore be considered as efficiently flattening the curve facing a potential second COVID-19 wave, or any similar future viral respiratory epidemics.

The most effective NPIs include curfews, lockdowns and closing and restricting places where people gather in smaller or large numbers for an extended period of time. This includes small gathering cancellations (closures of shops, restaurants, gatherings of 50 persons or fewer, mandatory home working and so on) and closure of educational institutions. While in previous studies, based on smaller numbers of countries, school closures had been reported to have little effect on the spread of COVID-19 (refs. 19,20), more recent evidence has been in favour of the importance of this NPI28,29; school closures in the United States have been found to reduce COVID-19 incidence and mortality by about 60% (ref. 28). This result is also in line with a contact-tracing study from South Korea, which identified adolescents aged 10–19 years as more likely to spread the virus than adults and children in household settings30.

Individual movement restrictions (including curfew, the prohibition of gatherings and movements for non-essential activities or measures segmenting the population) were also amongst the top-ranked measures. However, such radical measures have adverse consequences. School closure interrupts learning and can lead to poor nutrition, stress and social isolation in children31,32,33. Home confinement has strongly increased the rate of domestic violence in many countries, with a huge impact on women and children34,35, while it has also limited the access to long-term care such as chemotherapy, with substantial impacts on patients' health and survival chances36,37. Governments may have to look towards less stringent measures, encompassing maximum effective prevention but enabling an acceptable balance between benefits and drawbacks38.

Previous statistical studies on the effectiveness of lockdowns came to mixed conclusions. Whereas a relative reduction in Rt of 5% was estimated using a Bayesian hierarchical model19, a Bayesian mechanistic model estimated a reduction of 80% (ref. 20), although some questions have been raised regarding the latter work because of biases that overemphasize the importance of the most recent measure that had been implemented24. The susceptibility of other modelling approaches to biases resulting from the temporal sequence of NPI implementations remains to be explored.
Our work tries to avoid such biases by combining multiple modelling approaches and points to a mild impact of lockdowns due to an overlap with effects of other measures adopted earlier and included in what is referred to as 'national (or full) lockdown'. Indeed, the national lockdown encompasses multiple NPIs (for example, closure of land, sea and air borders, closure of schools, non-essential shops and prohibition of gatherings and visiting nursing homes) that countries may have already adopted in parts. From this perspective, the relatively attenuated impact of the national lockdown is explained as the little delta after other concurrent NPIs have been adopted. This conclusion does not rule out the effectiveness of an early national lockdown, but suggests that a suitable combination (sequence and time of implementation) of a smaller package of such measures can substitute for a full lockdown in terms of effectiveness, while reducing adverse impacts on society, the economy, the humanitarian response system and the environment6,39,40,41.

Taken together, the social distancing and movement-restriction measures discussed above can therefore be seen as the 'nuclear option' of NPIs: highly effective but causing substantial collateral damage to society, the economy, trade and human rights4,39.

We find strong support for the effectiveness of border restrictions. The role of travelling in the global spread of respiratory diseases proved central during the first SARS epidemic (2002–2003)42, but travelling restrictions show a large impact on trade, economy and the humanitarian response system globally41,43. The effectiveness of social distancing and travel restrictions is also in line with results from other studies that used different statistical approaches, epidemiological metrics, geographic coverage and NPI classification2,8,9,10,11,13,19,20.

We also find a number of highly effective NPIs that can be considered less costly. For instance, we find that risk-communication strategies feature prominently amongst consensus NPIs. This includes government actions intended to educate and actively communicate with the public. The effective messages include encouraging people to stay at home, promoting social distancing and workplace safety measures, encouraging the self-initiated isolation of people with symptoms, travel warnings and information campaigns (mostly via social media). All these measures are non-binding government advice, contrasting with the mandatory border restriction and social distancing measures that are often enforced by police or army interventions and sanctions. Surprisingly, communicating on the importance of social distancing has been only marginally less effective than imposing distancing measures by law. The publication of guidelines and work safety protocols to managers and healthcare professionals was also associated with a reduction in Rt, suggesting that communication efforts also need to be tailored toward key stakeholders.

Communication strategies aim at empowering communities with correct information about COVID-19. Such measures can be of crucial importance in targeting specific demographic strata found to play a dominant role in driving the spread of COVID-19 (for example, communication strategies to target individuals aged <40 years44).

Government food assistance programmes and other financial supports for vulnerable populations have also turned out to be highly effective.
Such measures therefore not only impact the socio-economic sphere45 but also have a positive effect on public health. For instance, facilitating people's access to tests or allowing them to self-isolate without fear of losing their job or part of their salary may help in reducing Rt.

Some measures are ineffective in (almost) all methods and datasets, for example, environmental measures to disinfect and clean surfaces and objects in public and semi-public places. This finding is at odds with current recommendations of the WHO (World Health Organization) for environmental cleaning in non-healthcare settings46. However, environmental measures (for example, cleaning of shared surfaces, waste management, approval of a new disinfectant, increased ventilation) are seldom reported by governments or the media and are therefore not collected by NPI trackers, which could lead to an underestimation of their impact. These results call for a closer examination of the effectiveness of such measures.

We also find no evidence for the effectiveness of social distancing measures in regard to public transport. While infections on buses and trains have been reported47, our results may suggest a limited contribution of such cases to the overall virus spread, as previously reported48. A heightened public risk awareness associated with commuting (for example, people being more likely to wear face masks) might contribute to this finding49. However, we should note that measures aiming at limiting crowding or increasing distancing on public transport have been highly diverse (from complete cancellation of all public transport to increase in the frequency of traffic to reduce traveller density) and could therefore lead to widely varying effectiveness, also depending on the local context.

The effectiveness of individual NPIs is heavily influenced by governance (Supplementary Information) and local context, as evidenced by the results of the entropic approach. This local context includes the stage of the epidemic, socio-economic, cultural and political characteristics and other NPIs previously implemented. The fact that gross domestic product is overall positively correlated with NPI effectiveness whereas the governance indicator 'voice and accountability' is negatively correlated might be related to the successful mitigation of the initial phase of the epidemic in certain south-east Asian and Middle East countries showing authoritarian tendencies. Indeed, some south-east Asian government strategies heavily relied on the use of personal data and police sanctions whereas the Middle East countries included in our analysis reported low numbers of cases in March–April 2020.

By focusing on individual countries, the what-if experiments using artificial country-specific sequences of NPIs offer a way to quantify the importance of this local context with respect to measurement of effectiveness. Our main takeaway here is that the same NPI can have a drastically different impact if taken early or later, or in a different country. It is interesting to comment on the impact that 'enhancing testing capacity' and 'tracing and tracking' would have had if adopted at different points in time. Enhancing testing capacity should display a short-term increase in Rt.
Counter-intuitively, in countries testing close contacts, tracing and tracking, if they are effective, would have a similar effect on Rt because more cases will be found (although tracing and tracking would reduce Rt in countries that do not test contacts but rely on quarantine measures). For countries implementing these measures early, indeed, we find a short-term increase in Rt (when the number of cases was sufficiently small to enable tracing and testing of all contacts). However, countries implementing these NPIs later did not necessarily find more cases, as shown by the corresponding decrease in Rt. We focus on March and April 2020, a period in which many countries had a sudden surge in cases that overwhelmed their tracing and testing capacities, which rendered the corresponding NPIs ineffective.

Assessment of the effectiveness of NPIs is statistically challenging, because measures were typically implemented simultaneously and their impact might well depend on the particular implementation sequence. Some NPIs appear in almost all countries, whereas others appear in only a few, meaning that we could miss some rare but effective measures due to a lack of statistical power. While some methods might be prone to overestimation of the effects from an NPI due to insufficient adjustments for confounding effects from other measures, other methods might underestimate the contribution of an NPI by assigning its impact to a highly correlated NPI. As a consequence, estimates of ΔRt might vary substantially across different methods whereas agreement on the significance of individual NPIs is much more pronounced.

The strength of our study, therefore, lies in the harmonization of these four independent methodological approaches combined with the usage of an extensive dataset on NPIs. This allows us to estimate the structural uncertainty of NPI effectiveness, that is, the uncertainty introduced by choosing a certain model structure likely to affect other modelling works that rely on a single method only. Moreover, whereas previous studies often subsumed a wide range of social distancing and travel restriction measures under a single entity, our analysis contributes to a more fine-grained understanding of each NPI.

The CCCSL dataset features non-homogeneous data completeness across the different territories, and data collection could be biased by the data collector (native versus non-native) as well as by the information communicated by governments (see also ref. 23). The WHO-PHSM and CoronaNet databases contain a broad geographic coverage whereas CCCSL focuses mostly on developed countries. Moreover, the coding system presents certain drawbacks, notably because some interventions could belong to more than one category but are recorded only once. Compliance with NPIs is crucial for their effectiveness, yet we assumed a comparable degree of compliance by each population. We tried to mitigate this issue by validating our findings on two external databases, even if these are subject to similar limitations. We did not perform a formal harmonization of all categories in the three NPI trackers, which limits our ability to perform full comparisons among the three datasets. Additionally, we neither took into account the stringency of NPI implementation nor the fact that not all methods were able to describe potential variations in NPI effectiveness over time, besides the dependency on the epidemic age of its adoption.
The time window is limited to March–April 2020, when the structure of NPIs is highly correlated due to simultaneous implementation. Future research should consider expanding this window to include the period when many countries were easing policies, or maybe even strengthening them again after easing, as this would allow clearer differentiation of the correlated structure of NPIs because they tended to be released, and implemented again, one (or a few) at a time.

To compute Rt, we used time series of the number of confirmed COVID-19 cases50. This approach is likely to over-represent patients with severe symptoms and may be biased by variations in testing and reporting policies among countries. Although we assume a constant serial interval (average timespan between primary and secondary infection), this number shows considerable variation in the literature51 and depends on measures such as social distancing and self-isolation.

In conclusion, here we present the outcome of an extensive analysis on the impact of 6,068 individual NPIs on the Rt of COVID-19 in 79 territories worldwide. Our analysis relies on the combination of three large and fine-grained datasets on NPIs and the use of four independent statistical modelling approaches. The emerging picture reveals that no one-size-fits-all solution exists, and no single NPI can decrease Rt below one. Instead, in the absence of a vaccine or efficient antiviral medication, a resurgence of COVID-19 cases can be stopped only by a suitable combination of NPIs, each tailored to the specific country and its epidemic age. These measures must be enacted in the optimal combination and sequence to be maximally effective against the spread of SARS-CoV-2 and thereby enable more rapid reopening.

We showed that the most effective measures include closing and restricting most places where people gather in smaller or larger numbers for extended periods of time (businesses, bars, schools and so on). However, we also find several highly effective measures that are less intrusive. These include land border restrictions, governmental support to vulnerable populations and risk-communication strategies. We strongly recommend that governments and other stakeholders first consider the adoption of such NPIs, tailored to the local context, should infection numbers surge (or surge a second time), before choosing the most intrusive options. Less drastic measures may also foster better compliance from the population.

Notably, the simultaneous consideration of many distinct NPI categories allows us to move beyond the simple evaluation of individual classes of NPIs to assess, instead, the collective impact of specific sequences of interventions. The ensemble of these results calls for a strong effort to simulate what-if scenarios at the country level for planning the most probable effectiveness of future NPIs, and, thanks to the possibility of going down to the level of individual countries and country-specific circumstances, our approach is the first contribution toward this end.

Methods

NPI data
We use the publicly available CCCSL dataset on NPIs23, in which NPIs are categorized using a four-level hierarchical coding scheme. L1 defines the theme of the NPI: 'case identification, contact tracing and related measures', 'environmental measures', 'healthcare and public health capacity', 'resource allocation', 'returning to normal life', 'risk communication', 'social distancing' and 'travel restriction'. Each L1 (theme) is composed of several categories (L2 of the coding scheme) that contain subcategories (L3), which are further subdivided into group codes (L4); an illustrative sketch of this nesting follows below.
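To make the four-level scheme concrete, the nesting can be pictured as in the following sketch; the L1 themes and L2 categories are taken from the text, whereas the L3 subcategories and L4 group codes shown are hypothetical placeholders rather than verbatim CCCSL codes.

```python
# Illustrative nesting of the four-level CCCSL coding scheme.
# L1 themes and L2 categories follow the text; L3/L4 entries are
# hypothetical placeholders, not verbatim CCCSL codes.
taxonomy = {
    "Social distancing": {                        # L1: theme
        "Closure of educational institutions": {  # L2: category
            "Close secondary schools": [          # L3: subcategory (placeholder)
                "Close secondary schools nationwide",  # L4: group code (placeholder)
            ],
        },
    },
    "Travel restriction": {
        "Border restriction": {
            "Close land border": [
                "Close land border to neighbouring countries",
            ],
        },
    },
}

def iter_codes(tree, path=()):
    """Yield every (L1, L2, L3, L4) path in the taxonomy."""
    for key, sub in tree.items():
        if isinstance(sub, dict):
            yield from iter_codes(sub, path + (key,))
        else:  # a list of L4 group codes
            for leaf in sub:
                yield path + (key, leaf)

for code in iter_codes(taxonomy):
    print(" > ".join(code))
```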
The dataset covers 56 countries; data for the United States are available at the state level (24 states), making a total of 79 territories. In this analysis, we use a static version of the CCCSL, retrieved on 17 August 2020, presenting 6,068 NPIs. A glossary of the codes, with a detailed description of each category and its subcategories, is provided on GitHub. For each country, we use the data until the day for which the measures have been reliably updated. NPIs that have been implemented in fewer than five territories are not considered, leading to a final total of 4,780 NPIs of 46 different L2 categories for use in the analyses.

Second, we use the CoronaNet COVID-19 Government Response Event Dataset (v.1.0)27 that contains 31,532 interventions and covers 247 territories (countries and US states) (data extracted on 17 August 2020). For our analysis, we map their columns 'type' and 'type_sub_cat' onto L1 and L2, respectively. Definitions for the entire 116 L2 categories can be found on the GitHub page of the project. Using the same criterion as for the CCCSL, we obtain a final total of 18,919 NPIs of 107 different categories.

Third, we use the WHO-PHSM dataset26, which merges and harmonizes the following datasets: ACAPS41, Oxford COVID-19 Government Response Tracker52, the Global Public Health Intelligence Network (GPHIN) of Public Health Agency of Canada (Ottawa, Canada), the CCCSL23, the United States Centers for Disease Control and Prevention and HIT-COVID53. The WHO-PHSM dataset contains 24,077 interventions and covers 264 territories (countries and US states; data extracted on 17 August 2020). Their encoding scheme has a heterogeneous coding depth and, for our analysis, we map 'who_category' onto L1 and either take 'who_subcategory' or a combination of 'who_subcategory' and 'who_measure' as L2. This results in 40 measure categories. A glossary is available at: https://www.who.int/emergencies/diseases/novel-coronavirus-2019/phsm. The CoronaNet and WHO-PHSM datasets also provide information on the stringency of the implementation of a given NPI, which we did not use in the current study.

To estimate Rt and growth rates of the number of COVID-19 cases, we use time series of the number of confirmed COVID-19 cases in the 79 territories considered50. To control for weekly fluctuations, we smooth the time series by computing the rolling average using a Gaussian window with a standard deviation of 2 days, truncated at a maximum window size of 15 days.

Regression techniques
We apply four different statistical approaches to quantify the impact of an NPI, M, on the reduction in Rt (Supplementary Information).

Case-control analysis
CC analysis considers each single category (L2) or subcategory (L3) M separately and evaluates in a matched comparison the difference, ΔRt, in Rt between all countries that implemented M (cases) and those that did not (controls) during the observation window. The matching is done on epidemic age and the time of implementation of any response. The comparison is made via a linear regression model adjusting for (1) epidemic age (days after the country has reached 30 confirmed cases), (2) the value of Rt before M takes effect, (3) total population, (4) population density, (5) the total number of NPIs implemented and (6) the number of NPIs implemented in the same category as M. With this design, we investigate the time delay of τ days between implementation of M and observation of ΔRt, as well as additional country-based covariates that quantify other dimensions of governance and human and economic development. Estimates for ΔRt are averaged over delays between 1 and 28 days.
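The following minimal sketch illustrates this matched regression design for a single NPI M on synthetic data, with the case/control indicator and the six covariates listed above entering an ordinary least-squares model via statsmodels; it conveys the design only and is not the actual analysis pipeline.

```python
# Sketch of the case-control comparison for one NPI M: regress the
# change in Rt on a case/control indicator while adjusting for the
# covariates (1)-(6) listed above. Synthetic data for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "implemented_M": rng.integers(0, 2, n),   # case (1) versus control (0)
    "epidemic_age": rng.integers(0, 60, n),   # days since 30 confirmed cases
    "rt_before": rng.normal(1.5, 0.3, n),     # Rt before M takes effect
    "log_population": rng.normal(16.0, 1.5, n),
    "log_density": rng.normal(4.0, 1.0, n),
    "n_npis_total": rng.integers(0, 40, n),
    "n_npis_same_category": rng.integers(0, 5, n),
})
# Toy outcome: cases experience a reduction of 0.15 in Rt plus noise.
df["delta_rt"] = -0.15 * df["implemented_M"] + rng.normal(0, 0.1, n)

model = smf.ols(
    "delta_rt ~ implemented_M + epidemic_age + rt_before"
    " + log_population + log_density + n_npis_total"
    " + n_npis_same_category",
    data=df,
).fit()
# The coefficient of implemented_M estimates the Delta Rt attributable to M.
print(model.params["implemented_M"], model.bse["implemented_M"])
```

In the actual analysis this regression is repeated for each delay τ and the resulting estimates are averaged, as described above.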
Step function Lasso regression
In this approach we assume that, without any intervention, the reproduction factor is constant and deviations from this constant result from a delayed onset by τ days of each NPI on L2 (categories) of the hierarchical dataset. We use a Lasso regularization approach combined with a meta parameter search to select a reduced set of NPIs that best describe the observed ΔRt. Estimates for the change ΔRt attributable to NPI M are obtained from country-wise cross-validation.

RF regression
We perform an RF regression, where the NPIs implemented in a country are used as predictors for Rt, time-shifted τ days into the future. Here, τ accounts for the time delay between implementation and onset of the effect of a given NPI. Similar to the Lasso regression, the assumption underlying the RF approach is that, without changes in interventions, the value of Rt in a territory remains constant. However, contrary to the two methods described above, RF represents a nonlinear model, meaning that the effects of individual NPIs on Rt do not need to add up linearly. The importance of an NPI is defined as the decline in predictive performance of the RF on unseen data if the data concerning that NPI are replaced by noise, also called permutation importance.

Transformer modelling
Transformers54 have been demonstrated as models suitable for dynamic discrete element processes such as textual sequences, due to their ability to recall past events. Here we extended the transformer architecture to approach the continuous case of epidemic data by replacing the probabilistic output layer with a linear combination of transformer output, whose input is identical to that for RF regression, along with the values of Rt. The best-performing network (least mean-squared error in country-wise cross-validation) is identified as a transformer encoder with four hidden layers of 128 neurons, an embedding size of 128, eight heads, one output described by a linear output layer and 47 inputs (corresponding to each category and Rt). To quantify the impact of measure M on Rt, we use the trained transformer as a predictive model and compare simulations without any measure (reference) to those where one measure is presented at a time to assess ΔRt. To reduce the effects of overfitting and multiplicity of local minima, we report results from an ensemble of transformers trained to similar precision levels.

Estimation of Rt
We use the R package EpiEstim55 with a sliding time window of 7 days to estimate the time series of Rt for every country. We choose an uncertain serial interval following a probability distribution with a mean of 4.46 days and a standard deviation of 2.63 days56.
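The published estimates rely on the EpiEstim R package; purely as an illustration, and keeping to a single language for the sketches in this text, the following Python fragment re-implements the core of the Cori et al. estimator55 with a 7-day window and a discretized gamma serial interval (mean 4.46 days, s.d. 2.63 days) on a synthetic incidence curve. EpiEstim additionally propagates the uncertainty of the serial interval and reports credible intervals, which this sketch omits.

```python
# Sketch of the Cori et al. estimator for Rt: 7-day sliding window,
# discretized gamma serial interval (mean 4.46 d, s.d. 2.63 d).
# Synthetic incidence; not a replacement for EpiEstim (R).
import numpy as np
from scipy import stats

def discretized_serial_interval(mean=4.46, sd=2.63, n_days=20):
    shape, scale = (mean / sd) ** 2, sd**2 / mean
    cdf = stats.gamma.cdf(np.arange(n_days + 1), a=shape, scale=scale)
    w = np.diff(cdf)          # w[k] ~ P(serial interval is about k+1 days)
    return w / w.sum()

def estimate_rt(incidence, window=7, a_prior=1.0, b_prior=5.0):
    w = discretized_serial_interval()
    T = len(incidence)
    # Total infectiousness: Lambda[t] = sum_k incidence[t-k] * w[k-1]
    lam = np.array([
        sum(incidence[t - k] * w[k - 1]
            for k in range(1, min(t, len(w)) + 1))
        for t in range(T)
    ])
    rt = np.full(T, np.nan)
    for t in range(window, T):
        i_sum = incidence[t - window + 1 : t + 1].sum()
        lam_sum = lam[t - window + 1 : t + 1].sum()
        # Posterior mean of Rt under a Gamma(a_prior, b_prior) prior
        rt[t] = (a_prior + i_sum) / (1.0 / b_prior + lam_sum)
    return rt

# Synthetic, already-smoothed incidence curve (in the paper, raw case
# counts are first smoothed with a Gaussian window, s.d. 2 days).
incidence = np.round(50 * np.exp(0.08 * np.arange(60))).astype(float)
print(estimate_rt(incidence)[-5:])
```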
Ranking of NPIs
For each of the methods, we rank the NPI categories in descending order according to their impact, that is, the estimated degree to which they lower Rt (CC, Lasso regression and TF) or their feature importance (RF). To compare rankings, we count how many of the 46 NPIs considered are classified as belonging to the top x ranked measures in all methods, and test the null hypothesis that this overlap has been obtained from completely independent rankings. The P value is then given by the complementary cumulative distribution function for a binomial experiment with 46 trials and success probability q = (x/46)^4; that is, if K of the 46 NPIs fall into the top x in all four rankings,

$$P=\sum _{k=K}^{46}\binom{46}{k}{q}^{k}{(1-q)}^{46-k}.$$

We report the median P value obtained over all x ≤ 10 to ensure that the results are not dependent on where we impose the cut-off for the classes.

Co-implementation network
If there is a statistical tendency that a country implementing NPI i also implements NPI j later in time, we draw a directed link from i to j. Nodes are placed on the y axis according to the average epidemic age at which the corresponding NPI is implemented; they are grouped on the x axis by their L1 theme. Node colours correspond to themes. The effectiveness scores for all NPIs are re-scaled between zero and one for each method; node size is proportional to the re-scaled scores, averaged over all methods.

Entropic country-level approach
Each territory can be characterized by its socio-economic conditions and the unique temporal sequence of NPIs adopted. To quantify the NPI effect, we measure the heterogeneity of the overall rank of an NPI amongst the countries that have taken that NPI. To compare countries that have implemented different numbers of NPIs, we consider the normalized rankings where the ranking position is divided by the number of elements in the ranking list (that is, the number of NPIs taken in a specific country). We then bin the interval [0, 1] of the normalized rankings into ten sub-intervals and compute for each NPI the entropy of the distribution of occurrences of that NPI in the different normalized rankings per country:

$$S(\mathrm{NPI})=-\frac{1}{\log (10)}\sum_{i}P_{i}\log (P_{i}),$$

where Pi is the probability that the NPI considered appeared in the ith bin in the normalized rankings of all countries. To assess the confidence of these entropic values, results are compared with expectations from a temporal reshuffling of the data. For each country, we keep the same NPIs adopted but reshuffle the time stamps of their adoption.
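A compact sketch of this entropy computation is given below; the input, assumed to be the already-computed normalized ranking positions of one NPI across countries, is synthetic.

```python
# Sketch of the normalized rank entropy S(NPI) defined above: bin the
# normalized ranking positions of one NPI across countries into ten
# bins and compute the log10-normalized Shannon entropy.
import numpy as np

def rank_entropy(normalized_ranks, n_bins=10):
    counts, _ = np.histogram(normalized_ranks, bins=n_bins, range=(0.0, 1.0))
    p = counts / counts.sum()
    p = p[p > 0]              # 0 * log(0) = 0 by convention
    return -np.sum(p * np.log(p)) / np.log(n_bins)

# Synthetic normalized rank (rank / number of NPIs) of one NPI per country:
concentrated = [0.05, 0.08, 0.11, 0.07, 0.09]  # similar rank everywhere
dispersed = [0.05, 0.35, 0.55, 0.75, 0.95]     # rank depends on the country
print(rank_entropy(concentrated))  # low entropy
print(rank_entropy(dispersed))     # high entropy
```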
Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Data availability
The CCCSL dataset can be downloaded from http://covid19-interventions.com/. The CoronaNet data can be found at https://www.coronanet-project.org/. The WHO-PHSM dataset is available at https://www.who.int/emergencies/diseases/novel-coronavirus-2019/phsm. Snapshots of the datasets used in our study are available in the following github repository: https://github.com/complexity-science-hub/ranking_npis.

Code availability
Custom code for the analysis is available in the following github repository: https://github.com/complexity-science-hub/ranking_npis.

References
1. Qualls, N. L. et al. Community mitigation guidelines to prevent pandemic influenza – United States, 2017. MMWR Recomm. Rep. 66, 1–34 (2017).
2. Tian, H. et al. An investigation of transmission control measures during the first 50 days of the COVID-19 epidemic in China. Science 368, 638–642 (2020).
3. Chen, S. et al. COVID-19 control in China during mass population movements at New Year. Lancet 395, 764–766 (2020).
4. Lee, K., Worsnop, C. Z., Grépin, K. A. & Kamradt-Scott, A. Global coordination on cross-border travel and trade measures crucial to COVID-19 response. Lancet 395, 1593–1595 (2020).
5. Chakraborty, I. & Maity, P. Covid-19 outbreak: migration, effects on society, global environment and prevention. Sci. Total Environ. 728, 138882 (2020).
6. Pfefferbaum, B. & North, C. S. Mental health and the COVID-19 pandemic. N. Engl. J. Med. 383, 510–512 (2020).
7. COVID-19 dashboard by the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University of Medicine (Johns Hopkins University of Medicine, accessed 4 June 2020); https://coronavirus.jhu.edu/map.html
8. Chinazzi, M. et al. The effect of travel restrictions on the spread of the 2019 novel coronavirus (COVID-19) outbreak. Science 368, 395–400 (2020).
9. Arenas, A., Cota, W., Granell, C. & Steinegger, B. Derivation of the effective reproduction number R for COVID-19 in relation to mobility restrictions and confinement. Preprint at medRxiv https://doi.org/10.1101/2020.04.06.20054320 (2020).
10. Wang, J., Tang, K., Feng, K. & Lv, W. When is the COVID-19 pandemic over? Evidence from the stay-at-home policy execution in 106 Chinese cities. Preprint at SSRN https://doi.org/10.2139/ssrn.3561491 (2020).
11. Soucy, J.-P. R. et al. Estimating effects of physical distancing on the COVID-19 pandemic using an urban mobility index. Preprint at medRxiv https://doi.org/10.1101/2020.04.05.20054288 (2020).
12. Anderson, S. C. et al. Estimating the impact of Covid-19 control measures using a Bayesian model of physical distancing. Preprint at medRxiv https://doi.org/10.1101/2020.04.17.20070086 (2020).
13. Teslya, A. et al. Impact of self-imposed prevention measures and short-term government intervention on mitigating and delaying a COVID-19 epidemic. PLoS Med. https://doi.org/10.1371/journal.pmed.1003166 (2020).
14. Kraemer, M. U. et al. The effect of human mobility and control measures on the COVID-19 epidemic in China. Science 368, 493–497 (2020).
15. Prem, K. & Liu, Y. et al. The effect of control strategies to reduce social mixing on outcomes of the COVID-19 epidemic in Wuhan, China: a modelling study. Lancet Public Health 5, e261–e270 (2020).
16. Gatto, M. et al. Spread and dynamics of the COVID-19 epidemic in Italy: effects of emergency containment measures. Proc. Natl Acad. Sci. USA 117, 10484–10491 (2020).
17. Lorch, L. et al. A spatiotemporal epidemic model to quantify the effects of contact tracing, testing, and containment. Preprint at arXiv https://arxiv.org/abs/2004.07641 (2020).
18. Dehning, J. & Zierenberg, J. et al. Inferring change points in the spread of COVID-19 reveals the effectiveness of interventions. Science 369, eabb9789 (2020).
19. Banholzer, N. et al. Impact of non-pharmaceutical interventions on documented cases of COVID-19. Preprint at medRxiv https://doi.org/10.1101/2020.04.16.20062141 (2020).
20. Flaxman, S. et al. Estimating the effects of non-pharmaceutical interventions on COVID-19 in Europe. Nature 584, 257–261 (2020).
21. Hsiang, S. et al. The effect of large-scale anti-contagion policies on the COVID-19 pandemic. Nature 584, 262–267 (2020).
22. Nachega, J., Seydi, M. & Zumla, A. The late arrival of coronavirus disease 2019 (Covid-19) in Africa: mitigating pan-continental spread. Clin. Infect. Dis. 71, 875–878 (2020).
23. Desvars-Larrive, A. et al. A structured open dataset of government interventions in response to COVID-19. Sci. Data 7, 285 (2020).
24. Bryant, P. & Elofsson, A. The limits of estimating COVID-19 intervention effects using Bayesian models. Preprint at medRxiv https://doi.org/10.1101/2020.08.14.20175240 (2020).
25. Protecting People and Economies: Integrated Policy Responses to COVID-19 (World Bank, 2020); https://openknowledge.worldbank.org/handle/10986/33770
26. Tracking Public Health and Social Measures: A Global Dataset (World Health Organization, 2020); https://www.who.int/emergencies/diseases/novel-coronavirus-2019/phsm
27. Cheng, C., Barceló, J., Hartnett, A. S., Kubinec, R. & Messerschmidt, L. COVID-19 government response event dataset (CoronaNet v.1.0). Nat. Hum. Behav. 4, 756–768 (2020).
Acknowledgements

We thank A. Roux for her contribution to the coding of the interventions recorded in the dataset used in this study. We thank D. Garcia, V. D. P. Servedio and D. Hofmann for their contribution in the early stage of this work. N.H. thanks L. Haug for helpful discussions. This work was funded by the Austrian Science Promotion Agency, the FFG project (no. 857136), the WWTF (nos. COV 20-001, COV 20-017 and MA16-045), Medizinisch-Wissenschaftlichen Fonds des Bürgermeisters der Bundeshauptstadt Wien (no. CoVid004) and the project VET-Austria, a cooperation between the Austrian Federal Ministry of Social Affairs, Health, Care and Consumer Protection, the Austrian Agency for Health and Food Safety and the University of Veterinary Medicine, Vienna. The funders had no role in the conceptualization, design, data collection, analysis, decision to publish or preparation of the manuscript.

Author information

These authors contributed equally: Nils Haug, Lukas Geyrhofer, Alessandro Londei.

Medical University of Vienna, Section for Science of Complex Systems, CeMSIIS, Vienna, Austria: Nils Haug, Elma Dervic, Stefan Thurner & Peter Klimek
Complexity Science Hub Vienna, Vienna, Austria: Nils Haug, Lukas Geyrhofer, Elma Dervic, Amélie Desvars-Larrive, Vittorio Loreto, Beate Pinior, Stefan Thurner & Peter Klimek
Sony Computer Science Laboratories, Paris, France: Alessandro Londei & Vittorio Loreto
Unit of Veterinary Public Health and Epidemiology, Institute of Food Safety, Food Technology and Veterinary Public Health, University of Veterinary Medicine, Vienna, Austria: Amélie Desvars-Larrive & Beate Pinior
Physics Department, Sapienza University of Rome, Rome, Italy: Vittorio Loreto
Santa Fe Institute, Santa Fe, NM, USA: Stefan Thurner

Contributions

N.H., L.G., A.L., V.L. and P.K. conceived and performed the analyses. V.L., S.T. and P.K. supervised the study. E.D. contributed additional tools.
N.H., L.G., A.L., A.D.-L., B.P. and P.K. wrote the first draft of the paper. A.D.-L. supervised data collection on NPIs. All authors discussed the results and contributed to revision of the final manuscript.

Correspondence to Peter Klimek.

Competing interests

The authors declare no competing interests.

Peer review information: Peer review reports are available. Primary handling editor: Stavroula Kousta.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1 Main results for the CCCSL dataset. Normalised scores (relative effect within a method) of the NPI categories in CCCSL, averaged over the four different approaches.

Extended Data Fig. 2 Main results for the CoronaNet dataset. Normalised scores (relative effect within a method) of the NPI categories in CoronaNet, averaged over the four different approaches. Full names of the abbreviated L2 categories can be looked up in SI; Supplementary Table 3.

Extended Data Fig. 3 Main results for the WHO-PHSM dataset. Normalised scores (relative effect within a method) of the NPI categories in WHO-PHSM, averaged over the four different approaches. Full names of the abbreviated L2 categories can be looked up in SI; Supplementary Table 4.

Extended Data Fig. 4 Measure effectiveness in the WHO-PHSM dataset. Analogue to Fig. 1 of the main text if the analysis is done on the WHO-PHSM dataset. Full names of the abbreviated L2 categories can be looked up in SI; Supplementary Table 4.

Extended Data Fig. 5 Measure effectiveness in the CoronaNet dataset (part 1). Analogue to Fig. 1 of the main text if the analysis is done on the CoronaNet dataset (continued in Extended Data Fig. 6). Full names of the abbreviated L2 categories can be looked up in SI; Supplementary Table 3.

Extended Data Fig. 6 Measure effectiveness in the CoronaNet dataset (part 2). Analogue to Fig. 1 of the main text if the analysis is done on the CoronaNet dataset (continued from Extended Data Fig. 5). Full names of the abbreviated L2 categories can be looked up in SI; Supplementary Table 3.

Supplementary information

Supplementary Methods, Supplementary Results, Supplementary Discussion, Supplementary Figs. 1–26 and Supplementary Tables 1–6.

Haug, N., Geyrhofer, L., Londei, A. et al. Ranking the effectiveness of worldwide COVID-19 government interventions. Nat Hum Behav 4, 1303–1312 (2020). https://doi.org/10.1038/s41562-020-01009-0
Language, Statistics, & Category Theory, Part 3

Welcome to the final installment of our mini-series on the new preprint "An Enriched Category Theory of Language," joint work with John Terilla and Yiannis Vlassopoulos. In Part 2 of this series, we discussed a way to assign sets to expressions in language — words like "red" or "blue" — which served as a first approximation to the meanings of those expressions. Motivated by elementary logic, we then found ways to represent combinations of expressions — "red or blue" and "red and blue" and "red implies blue" — using basic constructions from category theory. I like to think of Part 2 as a commercial advertising the benefits of a category theoretical approach to language, rather than a merely algebraic one. But as we observed in Part 1, algebraic structure is not all there is to language. There's also statistics! And far from being an afterthought, those statistics play an essential role as evidenced by today's large language models discussed in Part 0. Happily, category theory already has an established set of tools that allow one to incorporate statistics in a way that's compatible with the considerations of logic discussed last time. In fact, the entire story outlined in Part 2 has a statistical analogue that can be repeated almost verbatim. In today's short post, I'll give a lightning-quick summary. It all begins with a small, yet crucial, twist.

Entropy + Algebra + Topology = ?

Today I'd like to share a bit of math involving ideas from information theory, algebra, and topology. It's all in a new paper I've recently uploaded to the arXiv, whose abstract you can see on the right. The paper is short — just 11 pages! Even so, I thought it'd be nice to stroll through some of the surrounding mathematics here. To introduce those ideas, let's start by thinking about the function $d\colon[0,1]\to\mathbb{R}$ defined by $d(x)=-x\log x$ when $x>0$ and $d(x)=0$ when $x=0$. Perhaps after getting out pencil and paper, it's easy to check that this function satisfies an equation that looks a lot like the product rule from Calculus:

$$d(xy) = x\,d(y) + d(x)\,y.$$

Functions that satisfy an equation reminiscent of the "Leibniz rule," like this one, are called derivations, which invokes the familiar idea of a derivative. The nonzero term $-x\log x$ above may also look familiar to some of you. It's an expression that appears in the Shannon entropy of a probability distribution. A probability distribution on a finite set $\{1,\ldots,n\}$ for $n\geq 1$ is a sequence $p=(p_1,\ldots,p_n)$ of nonnegative real numbers satisfying $\sum_{i=1}^np_i=1$, and the Shannon entropy of $p$ is defined to be

$$H(p) = -\sum_{i=1}^n p_i\log p_i = \sum_{i=1}^n d(p_i).$$

Now it turns out that the function $d$ is nonlinear, which means we can't pull it out in front of the summation. In other words, $H(p)\neq d(\sum_ip_i).$ Even so, curiosity might cause us to wonder about settings in which Shannon entropy is itself a derivation. One such setting is described in the paper above, which shows a correspondence between Shannon entropy and derivations of (wait for it...) topological simplices!

Part 1 of this mini-series opened with the observation that language is an algebraic structure. But we also mentioned that thinking merely algebraically doesn't get us very far. The algebraic perspective, for instance, is not sufficient to describe the passage from probability distributions on corpora of text to syntactic and semantic information in language that we see in today's large language models.
This motivated the category theoretical framework presented in a new paper I shared last time. But even before we bring statistics into the picture, there are some immediate advantages to using tools from category theory rather than algebra. One example comes from elementary considerations of logic, and that's where we'll pick up today. Let's start with a brief recap.
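As a quick numerical aside on the entropy post above: the Leibniz-type identity for $d(x) = -x\log x$ is easy to verify on a computer. Here is a minimal Python sketch (our own check, not code from the paper), using NumPy.

```python
import numpy as np

def d(x):
    """d(x) = -x log x for x > 0, with d(0) = 0."""
    x = np.asarray(x, dtype=float)
    safe = np.where(x > 0, x, 1.0)           # avoid log(0); d(0) = 0 by definition
    return np.where(x > 0, -x * np.log(safe), 0.0)

rng = np.random.default_rng(0)
x, y = rng.random(1000), rng.random(1000)

# The product-rule-like identity d(xy) = x d(y) + d(x) y holds on [0, 1].
assert np.allclose(d(x * y), x * d(y) + d(x) * y)

# Shannon entropy is the coordinate-wise sum of d over a distribution p...
p = np.array([0.5, 0.25, 0.25])
print(d(p).sum())    # H(p) ≈ 1.0397 nats

# ...but d is nonlinear, so H(p) differs from d applied to the sum of p.
print(d(p.sum()))    # d(1) = 0
```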
https://akjournals.com/search?access=all&f_0=author&pageSize=10&q_0=P.+Evans&sort=relevance&t=Mathematics
Author or Editor: P. Evans
Subject: Mathematics and Statistics

Almost every sequence integrates
Acta Mathematica Hungarica, Volume 117: Issue 1-2
Authors: M. Evans and P. Humke

The purpose of this paper is to discuss a first-return integration process which yields the Lebesgue integral of a bounded measurable function f: I → R defined on a compact interval I. The process itself, which has a Riemann flavor, uses the given function f and a sequence of partitions whose norms tend to 0. The "first-return" of a given sequence $\bar{x}$ is used to tag the intervals from the partitions. The main result of the paper is that under rather general circumstances this first-return integration process yields the Lebesgue integral of the given function f for almost every sequence

A pathological approximately smooth function

Community structure and patterns of scientific collaboration in Business and Management
Authors: T. S. Evans, R. Lambiotte, and P. Panzarasa

This paper investigates the role of homophily and focus constraint in shaping collaborative scientific research. First, homophily structures collaboration when scientists adhere to a norm of exclusivity in selecting similar partners at a higher rate than dissimilar ones.
Two dimensions on which similarity between scientists can be assessed are their research specialties and status positions. Second, focus constraint shapes collaboration when connections among scientists depend on opportunities for social contact. Constraint comes in two forms, depending on whether it originates in institutional or geographic space. Institutional constraint refers to the tendency of scientists to select collaborators within rather than across institutional boundaries. Geographic constraint is the principle that, when collaborations span different institutions, they are more likely to involve scientists that are geographically co-located than dispersed. To study homophily and focus constraint, the paper will argue in favour of an idea of collaboration that moves beyond formal co-authorship to include also other forms of informal intellectual exchange that do not translate into the publication of joint work. A community-detection algorithm for formalising this perspective will be proposed and applied to the co-authorship network of the scientists that submitted to the 2001 Research Assessment Exercise in Business and Management in the UK. While results only partially support research-based homophily, they indicate that scientists use status positions for discriminating between potential partners by selecting collaborators from institutions with a rating similar to their own. Strong support is provided in favour of institutional and geographic constraints. Scientists tend to forge intra-institutional collaborations; yet, when they seek collaborators outside their own institutions, they tend to select those who are in geographic proximity. The implications of this analysis for tie creation in joint scientific endeavours are discussed.

Approximate high order smoothness
Authors: Z. Buczolich, M. J. Evans, and P. D. Humke

Symmetric monotonicity
Authors: C. L. Belna, M. J. Evans, and P. D. Humke
== Transcript == {{transcript blurb |bloglink=https://theportal.group/into-the-impossible-eric-weinstein-geometric-unity-revealed/ |ai=[https://www.otter.ai/ Otter.ai] |source=[https://www.youtube.com/watch?v=uFirZANoiHI YouTube] |madeby=Aardvark#5610 |firsteditors= |laterrevisor=Aardvark#5610 |editors=Aardvark#5610 |furthercontributors=pyrope#5830 }} ===Introduction=== 00:00:15<br> '''Brian Keating:''' [Inaudible] And we are pleased to bring you a longtime friend of the campus, a friend of this cosmologist, a friend of physics, and that is none other than Dr. Eric Weinstein, who today is joining us from an undisclosed location, but maybe we'll get into that. We have already 114 people watching with many thumbs up. One thumbs down from my mother, Mom, how could you do that? That's wrong, Mom. Don't do that. It's just to me she said, not to you. Eric, how are you doing today? 00:00:51<br> '''Eric Weinstein:''' I'm well Brian, good to be with you. 00:00:52<br> '''Brian Keating:''' It's great to be with you. It's been four months exactly, or three months exactly, since we last conversed via this medium when we had on our mutual friends Max Tegmark and Garrett Lisi. And that was of course very enjoyable for me, to go over this, go over some of the long standing questions I've been having in this exploration of the multiverse of brilliant minds that grace me with their presence on the Into the Impossible podcast. 00:01:22<br> I have been not a stranger to the work that you've been working on. Some say it is the work of a lifetime. Some say it is revolutionary, and could have tremendous implications. Some have questions about it, because of its far reaching implications. And we're talking about universal theories of everything, perhaps a new one created by today's guest, Eric Weinstein. And that goes by the moniker Geometric Unity (GU), and I've been fascinated with this ever since I heard about it probably ten years ago, almost ten years ago now. And today, I thought it would be fun to get Eric on the show, as he has promised to at least be interested in coming on to discuss recent developments that the listenership of this fine podcast would be interested to know. And as you know Eric, we go deep. ===Should a Theory of Physics Say Anything about God?=== 00:02:15<br> So first of all, I want to say thank you, and I want to ask you what is new in the theories of everything space? In particular, we're hearing a lot of talk nowadays from people like Michio Kaku, who will be a guest on my podcast next week, about "the God equation". And my first question to you, which is always of interest to me personally, is why does a theory of physics have anything to say about God, or any relevance to God whatsoever? Before we get into the nitty gritty details. 00:02:48<br> '''Eric Weinstein:''' Well, there are two things that I think that are up, which are—one is man's rigorous attempt to understand his circumstance necessarily intersects you with God, which is a traditional explanation for why is everything here, and the other is that God sells. Part of the problem is that if you name something "the really important particle that we just discovered", that's not going to sell as many books as "the God particle". And so we want to know God's thoughts, we want God particles, and then we back away from them. We claim, "No no no, I didn't mean God particle, I meant god damn particle." 
This is a game that we play with the public, where we try to amp the public up and get them hot and bothered, and once they're sufficiently in a lather, we try to educate them about the real nature of the universe. So think about it from a computer perspective as syntactic sugar. We're pouring God all over something hoping that people will swallow it. 00:03:49<br> '''Brian Keating:''' And of course, I'm holding up on the screen right now—on my screen I'm sharing a highlighted section from that great work of literature known as A Brief History of Time. This was one of the books that got me interested in cosmology and astronomy by the late great Stephen Hawking, who passed away exactly three years ago on Einstein's birthday, on Pi Day, at least here in the United States the 14th of March. By the way, do you know what other famous figure he shares his demise date with Eric? 00:04:21<br> '''Eric Weinstein:''' I don't. 00:04:22<br> '''Brian Keating:''' A Jewish intellectual by the name of Karl Marx. So Karl had an impact on universal capitalism, and Einstein, of course, was born that day, and Hawking died that day. And in the final paragraph of the book, he says if we can discover, if we can all have part in a final theory, "Then we shall all, philosophers, scientists, and just ordinary people, be able to take part in the discussion of the question of why is it that we and the universe exist. If we find the answer to that, it would be the ultimate triumph of human reason—for then we would know the mind of God." And, as you said, these things sell. It was rumored that he said every equation cuts your audience/readership in half, every mention of God doubles it. So at some level, there's conservation here, but I was always taught— 00:05:15<br> '''Eric Weinstein:''' [Inaudible] 00:05:18<br> '''Brian Keating:''' Sorry? 00:05:18<br> '''Eric Weinstein:''' Go ahead. 00:05:19<br> '''Brian Keating:''' I was always taught physics is not for "Why" questions, and yet there it is. He's bringing up "Why" questions. What do you make of that? Can physics provide the "Why"? 00:05:30<br> '''Eric Weinstein:''' I feel like, when you are talking in these terms, you are reasonably confident that the person is not trying to read the mind of God, because one wouldn't trifle with God in such a way. I believe that in some sense, if you're really worried that the telephone is connected, you'd probably speak about this differently. You might be humorous, you might—I don't know. There's something about the fact that we talk in this way, and it feels to me like when Moses is seeing a bush that is not consumed by flame but appears to be in flame, he knows pretty well that you should be a little careful. And I just, I don't understand the impulse to constantly God-ify everything. ===Experiment and the Scientific Method=== 00:06:23<br> '''Brian Keating:''' And certainly, I'll have on Michio Kaku next week on the Into the Impossible podcast, and I hope maybe we'll get a cameo from you. But in his book, he writes something very provocative. And he says at the end of his book, he quotes those lines from Stephen Hawking, which is kind of like this infinite regress, which kind of strains credulity, so to speak. But he says, at one point he says, "It's not fair to test string theory, to ask to test string theory experimentally, because we don't know its final principles." But the same, I claim, could have been said about quantum mechanics. Do we know the final principles of quantum mechanics? 
Does that immunize it from experimental test? 00:07:10<br> '''Eric Weinstein:''' Again, these are the same questions over and over again. There's something very wrong about the simplistic nature of the scientific method, and the relationship between experiment and theory, and instance and idea. We're effectively playing through the exact same set of problems: where we hold up one theory to some sort of experimental threshold, we give a pass to another theory. We're all the time pretending that we're not actually doing what we're really doing, which is observing who believes in what theory. 00:07:42<br> One of the reasons string theory got such a boost is that the brilliance of the initial volunteers for the first string revolution around 1984 were so good that we were inclined to give them a huge pass, at least at first. And then, we have this differential application where the string theorists become paradoxically the most persnickety about what is a prediction, because they don't want to give up the fact that they aren't really making predictions. So if you, for example, predict internal quantum numbers of the next particles to be found, but you don't come up with an energy threshold, and you don't say what will invalidate your theory, they get angry. Because, in fact, what we've done is we've given them an asymmetric relationship with the scientific method through special pleading. 00:08:35<br> So we have a really unfortunate situation, which is that we have highly simplistic Popperians, highly simplistic devotees of the scientific method—and I really think that people need to go back to Dirac's 1963 Scientific American article to understand that the real issue is very weird, and we haven't really talked about it. There were three big names in the 20th century in my mind, who contributed something like physical law. And leaving Dr. Mills out of it for the moment, I would say that Einstein, Dirac, and Yang tower, not necessarily that they're the best physicists, although I think I could make a pretty good claim in all three cases, but that physical law is different than the consequences of physical law. And the people who seem to do well with physical law employ mechanisms that would drive Sabine Hossenfelder to distraction. They talk a lot about beauty, and elegance, and simplicity. And what Dirac said was don't force people who come up with new physical laws to play the game of agreement with experiment, because the instance of an idea can easily be off and not agree with experiment. 00:09:55<br> And then you have a problem whereby you're pushing people initially. The instant you open your mouth, "Say what it is that would invalidate your theory so we'll know that you're wrong if you're wrong." And I don't know who this is intended to fool. It's completely irresponsible. And what it is, is an attempt to constantly take anyone who would come forward with an idea and put them instantaneously on the defensive. I think that the right thing to do is to sit people down and say you're supposed to be adults. If we look at our history, everybody who has proposed new physical law and gotten it right had errors. Einstein didn't get the divergence free part. He was vague before that with Grossman. Famously, Dirac's theory of quantum electrodynamics took almost 20 years before the renormalization revolution supplied the ability to compute with it. We had a confusion between the bare and the dressed mass. 
And famously, the degeneracy between the electron and the proton: we had two particles that Dirac claimed to be anti-particles, because he was too timid to suggest a positron and an antiproton, which Heisenberg [inaudible] has the mass asymmetry. Yang's theory, if left massless, wouldn't come up with the right rates for beta decay if you didn't impart mass to the W and Z, to the intermediate vector bosons. 00:11:29<br> So I think that you have a situation by which new ideas are always not properly instantiated, and the community that is constantly trying to make sure that... I think that the idea is that people are foolish enough to play this game with the most aggressive members of the community, because the implication is if you won't come up with a testable prediction that invalidates your theory, you're anti scientific and we have no time for this. And so people, well like, you know, with the \(\text{SU}(5)\) theory, they immediately said okay, well it predicts proton decay. Well, grand unification is a larger idea, and some versions and instantiations do predict proton decay, and some do not. So what are you going to say about that? I think that the problem is that we're not in an adult phase where we've faced up to the fact that we have almost 50 years of stagnation, and what you're seeing with this proliferation of new claimants to have fundamental theories is, in part, that string theory has finally weakened itself, and the aging of the particular cohort—which is Baby Boomers, who are the string theory proponents—they've gotten weak enough that effectively other people feel emboldened. And I think Stephen Wolfram said this recently, that in a previous era, he would have expected to have been attacked. But we've been waiting around for so long that perhaps the political economy of unification and wild ideas has changed somewhat. ===Approaches to a Theory of Everything=== 00:13:07<br> '''Brian Keating:''' And before we get off the subject of the "Why" questions, I do like a framework that I've heard you, and almost no one besides you, portray laws of fundamental physics, and that's using the good old fashioned mechanism we were all taught in high school journalism: the "Five W" approach. And I wonder if we could start there, with why that is a good deconstructivist approach to ascertaining the realm of validity of a physical law, of a purported new theory, a theory of everything—which I dislike that moniker as you know. But nevertheless, can you talk about that framework and how, for our up and coming but bright listeners, of which there are many currently watching right now, how you approach that using the "Five W's", why that's so important, and then maybe that will segue into a description of the actual physical instantiation of that framework. 00:14:07<br> '''Eric Weinstein:''' I will point out that how has the "W" on the end. Yeah, I think that... I usually do it as "Where" and "When", "Who" and "What", "How" and "Why". And let's just say, first of all, what we generally speaking mean by a theory. What we're usually talking about is a way in which waves can propagate and interact in various media. All right? The theories of the world are theories of waves and interaction. Waves imply a medium. So the "Where and When" is sort of a particular kind of a substrate, usually, which Einstein imbued with the name spacetime, "Where" being space, "When" being time. 
The "Who" and the "What" I take to be fractional spin and integral spin particles; every particle that we know of that's fundamental is one or the other. So, let's say that the "What" is the fermionic, fractional spin particles, and the "Who" is the integral spin, generally speaking force particles, non-gravitational—but then we also have to throw in the Higgs and the metric for spin-0 and spin-2. And then there's the "How" and the "Why", the "How" would be the equations of motion, and the "Why" would be the Lagrangian that generates the equations of motion. 00:16:03<br> And so, in some sense, it's not surprising then that the theory has to conform to the basic idea of when you're trying to tell somebody something, these are the questions that we want to ask, and it's a surprisingly tight mapping. And I just find that people can better remember that, because very often what we've done is we've taught people to focus on the wrong things when we talk about fundamental physics. They're overly focused on entanglement. They're very focused on quantum measurement. They have no idea about bundles, they don't have ideas about symmetry groups or why symmetry groups are important. And so, for some reason, when people learn about theories of everything, they're very animated, but they're very animated as to on the grounds of what has sold books recently. ====Limits and Methods for Constructing the Universe==== 00:16:54<br> '''Brian Keating:''' That's right. And we have no shortage of multiverses, double-slit experiments, spooky action at a distance, and other invocations of this gentleman [Albert Einstein]. I point out that Einstein is Weinstein with[out] a W. Okay, you have a fascination with W's, obviously. So, I want to go starting with Einstein, to something that I know is very influential to you. And it's sort of a provocative question that has inspired you, apparently. And that was a question, a stylized question, posed to Ernst Strauss by Albert Einstein, regarding the amount of freedom present in our field theoretic universe. What is that question? 00:17:43<br> '''Eric Weinstein:''' Well, the question is how much freedom is there in what we take to be the Standard Model—and I'm sorry, I'm using a term of art accidentally. How much freedom is there to construct the universe? And is this one of many that could have been constructed, or is it effectively unique? And are we talking about the God concept, if you will, as a design constraint, where things are the way they are because they could not be otherwise? And I think that it's a very interesting question, because in some sense, I don't know that he meant it this way, but I took it to be a research program. 00:18:35<br> '''Brian Keating:''' And in terms of it providing this direction for you, is the question itself the research direction? Or is the overarching theme, of sort of freedom, flexibility within physical laws, the programmatic kind of marching orders that you took unto yourself? 00:18:55<br> '''Eric Weinstein:''' It's an interesting question. I mean, I think that what I don't understand is that people talk about theories of everything casually, as if a theory of everything is sort of—it may not be a very artful term. It's sort of theories of all the rules, not what can be played with once one knows all the rules. I guess what I take it to mean is that we have a problem of even conceiving of what a non-effective theory would be. Well, what is an ultimate theory? 
I mean, I think that in large measure I see two kind of canonical versions. One of them I would sort of associate with Garrett Lisi's E8 idea, although I don't believe that that works. You start with something incredibly rich that exists by necessity, like a large exceptional Lie group, or maybe a large finite group, or something that is somehow distinguished. And then you attempt to milk it for peculiarities that can be identified with our world, and that's how you get the richness of our world. Whether or not you believe in Garrett's theory, I do think it's emblematic of an approach. 00:20:08<br> Another approach is closer to embryology, where you start with something that is deceptively simple, like a single fertilized egg. Then you ask, does that attempt, in some sense, to bootstrap itself into the totality of existence? And that's much closer to what I ended up doing. I mean, I considered Garrett's E8 thing before I ever met Garrett, because E8 is spinorial, it's chiral, it has lots of stylized things that seem to fit our world, but I couldn't figure out how to really make it into a theory, and then I went the other direction. I think it's pointless to ask why is there something rather than nothing, unless I'm mistaken. I think that the point of a fundamental theory is to get the scientists to accept the initial input is so uninteresting to go beyond that they put down their pens, and the theologians and philosophers take over. 00:21:08<br> If you imagine that the initial input to the universe is just 4 dimensions, for example, I don't think that many scientists would be motivated to say "Why are there 4 dimensions?" at a scientific level, because that sort of begs the—it's not enough of a clue for anything to proceed scientifically. I mean, maybe all versions have multiple dimensions, maybe there's 17 dimensions too. So I think that in large measure, the gambit that I'm trying to follow, as misguided as it sounds, is, "Is four dimensions on its own, in the form of a manifold with a few extra mild conditions," like a single unique spin structure, something like that... Orientable. "Is a nice 4-dimensional manifold sufficient to start the universe from effectively no other major assumptions?" And that's how crazy this is. 00:22:17<br> '''Brian Keating:''' So when you say "this", we're talking about Geometric Unity. A reminder, we're talking to Eric Weinstein, Dr. Eric Weinstein, proprietor of The Portal Podcast. And you can find his YouTube channel at nobani88, which is a cryptic reference to the year I had my first kiss. I don't know why it's called that, but it should be The Portal, we'll get that fixed. Eric, if in the meantime, could you tilt your webcam down just a tiny bit, so your head is not at the bottom of the of the frame? That would make it—Yes, very good. Very nice. ====What is Fundamental?==== 00:22:47<br> So, what is fundamental? I've had these conversations just recently on my podcast with Dr. Stephen Meyer, who you know is a proponent of the intelligent design hypothesis, I'm not going to get into that. I am a critic of that, and then we are yet good friends. But he makes the case that in things like the Guth Vilenkin conjecture, or in the Lawrence Krauss universe from nothing, we always start with the laws of nature and instantiation thereof. So too with debates I've had with Sean Carroll, a friend of mine, and a greatly respected mentor in the field. 
That God could have chosen to start the universe with an empty Hilbert space is his conjecture, and therefore, there's a simpler universe than the one we inhabit. We're not going to talk about Sean necessarily, we're not going to talk about Stephen Meyer. But I want to talk about what is the fundamental element, the ylem, the thing from which emerges spacetime? Or is the spacetime, or observerse if we can go there now, is that truly fundamental, or is it emergent? What comes first, the observerse or the observer? 00:23:59<br> '''Eric Weinstein:''' Well first of all, I mean, let me just say a few words. What we're talking about is crazy. And I think it's really important to just own up to the fact that for people who want sober physics, this is probably not the channel for you today. Now... No, I mean, I take this stuff very seriously because I don't like the bullshitty aspect. And we're using April 1st as a contrivance, because I think that many people are induced to self-inhibit, because particular members of the community are incredibly aggressive in making it extremely expensive to explore ideas. And I'd like to think that living outside the community, I could start a tradition to make it at least inexpensive one day a year to throw the middle finger to those people who like to play Simon Says games, or reputational destruction games. Now— 00:25:01<br> '''Brian Keating:''' A purge. A purge for physics. 00:25:05<br> '''Eric Weinstein:''' Well, there should be many more such days, and I'd love to get there. But let's at least start with one a year. So, this is my second year round trying to hit this. Look, I believe, that at some level, that the initial ingredient may just be a 4-dimensional manifold. And then things emerge from that. A 4-dimensional manifold with a little bit of extra structure—but that's why this is crazy. 00:25:37<br> '''Brian Keating:''' So it starts from very modest inputs, and from such modest inputs comes a rather extravagant universe. Let's talk about the inputs. I don't know how closely you want to follow, if you want to share screens or anything like that, we're free to do that. What are the inputs? There are the players, the matter players, there are the gauge bosons, there are new predictions, there are new concepts that Geometric Unity has provided. And so the question, I guess first of all, is how close do we want to follow this prescription of what has been portrayed in the past? And/or do we want to talk about what is new in the preceding year since the last episode of April Fool's purge podcasting began with The Portal special episode? 00:26:34<br> '''Eric Weinstein:''' Well, it's very interesting to consider that we've had a year where there's been a fair amount of interest in it. And, let's be honest, very little of the interest has been particularly detailed. I would have thought that maybe what I said was ununderstandable. And then, oddly, a paper purporting to critique the theory managed to demonstrate that they had understood fairly well what I had said, and that it was understandable. Unfortunately, there was one named author and a [sic] imaginary friend, and I don't respond well to people posing behind pseudonyms. But part of that was constructive. And, you know—I'm attempting to share a bit of screen now. 00:27:45<br> '''Brian Keating:''' Okay. Here, I'll add that. Okay. So now it's full, you've got the full screen on the paper.
====Forgotten Problems in Physics==== 00:27:59<br> '''Eric Weinstein:''' So effectively, what I'm asking is, can a manifold \(X^4\) produce the baroque structure of the Standard Model? Now—and gravity. And if you think back to the famous mug popular in the CERN gift shop, there really isn't that much going on in the Standard Model if you group terms in particular ways. But there's a lot of weirdness. Why the Lorentz group, why \(\text{SU}(3) \times \text{SU}(2) \times \text{U}(1)\) for the internal symmetries generating the forces, why three families? I thought that something that many younger viewers may not be aware of is that things really changed around 1983, '84. If you think about the original anomaly cancellation of Green and Schwarz in 1984, I believe, you could ask what was physics like right before that moment? And I think it's absolutely shocking, because we don't realize the extent to which the string theorists really redefined what the major problems in physics were. I think most people in the post-string era somehow believe that the major issue is quantum gravity. And I don't really, I just find it astounding, because that's really what the string theorists were selling. 00:29:34<br> So this is from Murray Gell-Mann's address to the second Shelter Island conference, where they're trying to recapture the magic from Ram's Head Inn after World War II, when the young physicists were invited to—feeling that they had done well on the engineering project that was the Manhattan Project, they were buoyed in their confidence. And years later in 1983, Murray Gell-Mann says well, what are the big problems? "As usual, solving the problems of one era has shown up the critical questions for the next. The first ones that come to mind looking at the standard theory of today are," and then, I think this is absolutely shocking and indicates the extent to which the current generation has really given up on doing what we would typically have called physics, relegating the things that are relevant to the physical universe that we see usually to the realm of particle phenomenology. 00:30:32<br> Okay, so what are his big questions? Why this particular structure for the families? In particular, why flavor chiral with left- and right-handed particles being treated differently by the weak force, rather than say vectorlike ones left and right transformable into being treated the same? Next, why three families? That generalizes Rabi's famous question "Who ordered that?" as if the universe was a Jewish deli, commenting on the muon. How many sets of Higgs bosons are there? We talk about the Higgs boson, but maybe there are multiple sets and there are multiple different scales at which symmetry is broken and mass is imparted through soft mass mechanisms. Lastly, why \(\text{SU}(3) \times \text{SU}(2) \times \text{U}(1)\)? Remember, \(\text{SU}(3)\) is the color force for the strong force, but \(\text{SU}(2)\) here is weak isospin, which has not yet become the W and Z's. And this \(\text{U}(1)\) is weak hypercharge, which has not yet become electromagnetism through symmetry breaking. And in some sense, I just feel sort of sad that we don't think of these as questions because we know not to ask them. 00:31:42<br> And somehow we got convinced that we were being called to quantize gravity, not necessarily—if gravity is geometric, you could just as easily have said should we be geometrizing the quantum?
And if we geometrize the quantum, you would notice that this era would have been triumphant, because that's really what happened. We didn't do a lot of physics, but we really did put the framework of physics—that is quantum field theory, quantum measurement, classical field theory—all in very geometric frameworks. In fact, I would say that there were three really big revolutions, although we don't talk in these terms. One was the discovery by Simons and Yang of the Wu–Yang dictionary, I'm blanking on Wu's name, which Is Singer was also instrumental in taking to Oxford. Then there's the geometric quantization revolution, where the quantum was understood to be intrinsically geometric because the Heisenberg uncertainty relations should emerge from the curvature tensor of a prequantum line bundle, but the sections being the states of a vector space once polarization is taken into account. And then lastly, the geometric quantum field theory revolution, in which we came to understand the quantum field theory really isn't about the physical world, that it gets applied in one particular set of inputs to the physical world, but it's actually a mature mathematical enhancement of bordism theory from topology, strangely. So, those three major revolutions all went exactly counter to quantized gravity. They said, "Let's geometrize the quantum instead," and so they did. 00:33:26<br> '''Brian Keating:''' And did—how successful should we regard this? The resulting byproduct or lack thereof progress, lack thereof in the intervening...? 00:33:38<br> '''Eric Weinstein:''' This is very unpleasant to have to say this, but I think that we are talking about a great era with heroes. The top hero among them is undoubtedly Ed Witten. But I do believe that Yang and Simons, I think Yang and Simon's discovery of Ehresmannian bundle theory, which has a precursor—and I'm blanking on the gentleman's name (Robert Hermann), all the self published books from from the '60s. It'll come to me, but there was a man in Boston who probably got there a little bit earlier. And then I would say that you have accidental physicists. Dan Quillen, for example, did a huge amount to talk about connections on determinant bundles and the like, which come out of various quantization procedures, particularly with Berezin integration of fermion sectors. So I think that a lot of things got done to shore up what we do to mature input into a quantum theory. It just, it wasn't physics, per se. It was sort of the mathematics of physics. And I think that that was very frustrating, which is, you know, it's sort of, to physicists it's yeoman's work. They wanted to go to Stockholm, and they ended up winning the first Fields Medal won by a physicist, and I think—it's weird. It's like, what is your time? Your time is whatever it is that can be done. And they thought their time was to quantize gravity. "Well guess again," nature said, "we have something incredibly important." So I feel like I'm trying to rescue their legacy. They want to go down as string theorists for the most part. And they want to say that string theory was the most successful of any claimant, even though it wasn't very successful. And, my feeling is— 00:35:39<br> '''Brian Keating:''' Now, can you say it's not—Go ahead. 00:35:44<br> '''Eric Weinstein:''' Well, yes, I feel like we can say that it's not very successful, because they gave us the terms in which we should evaluate it. You know, I remember being told "Give us 10 years, we'll have the whole thing cleaned up. 
Don't worry your pretty little head, we'll be fine," or, "We have a finite number of theories to check." And then lo and behold, there's a continuum, or why is it called string theory when there are branes involved? And it was because if you asked once upon a time, they'd say, "Well, it's not like mathematicians think about higher-dimensional objects beyond strings." There was an explanation for why there were no branes. And, you know, that—yes, string theory has failed in its own terms. Now is it salvageable, are there pivots beyond? Yeah, sure. I'm not saying that they didn't stumble on a tremendous amount of structure, maybe that structure ultimately carries the day. But I do think that the idea that they're entitled to this many pivots without having to become self-reflective is preposterous. And I think many people feel that way, and they know that they might pay for such a statement with their career. And since I've prepaid, it falls to people like me and to you, perhaps, to say look, the string theorists weren't able to confront their failure. ====The Grand Nature of Physics==== 00:37:11<br> '''Brian Keating:''' When we talk about these things in rather, some say, grandiose terms, I think sometimes we do lose sight. 00:37:19<br> '''Eric Weinstein:''' [Inaudible] I really don't want to use the word grandiose. Like, are we going to talk about grandiose unified theory? Let's be honest about it. Physics is the most honest way to ask the most grand questions in the universe. 00:37:36<br> '''Brian Keating:''' Absolutely. 00:37:36<br> '''Eric Weinstein:''' If physics is grandiose, then we've got real problems. Then grand doesn't exist. And if grand doesn't exist, then grandiose doesn't exist. So, my feeling is no. This is the actual grand quest, and we're not going to back off it and be pussies about it. This is not grandiose, this is the real deal. 00:37:54<br> '''Brian Keating:''' I was thinking, speaking of myself, being a self-aggrandizement of seeking these ultimate questions, but we do, and I was gonna give physics a good deal of credit, because we do ask these ultimate questions. And yet, of course, day to day basis, I remember wanting to help you out as little [a] role I could play in the exposition of this magnificent opus that you're working on, and saying, "Eric, this is great, but I got a bunch of kids that I gotta go pick up." And you said "Well, maybe that's why we'll never get off the planet, because guys and girls"— 00:38:28<br> '''Eric Weinstein:''' Everybody has to pick up their dry cleaning. 00:38:30<br> '''Brian Keating:''' Every time you gotta pick up your dry cleaning—but when we lose sight of it, I find with my colleagues, and I'll speak, because I doubt many of them are listening. I really don't feel like they're that curious, intellectually. I think it is a job. I think their their job is the dry cleaning. And I can sort of prove that in some ways, because I often hear them say things like well, Eric is a showman, he's a podcaster. He's a host, and he's had training, and he's very smooth, and he can speak well. And I say "Well, do you think he do you think he emerged from the womb like that? And by the way, Mister or Missus Professor, Doctor Professor, you have got a lot of training in quantum field theory and string theory yourself. That was presumably a challenge for you. You didn't emerge womb-like, you know, from the caverns of the womb, knowing quantum field theory, so you had to work at that." So it's all about prioritization. 
Why do you think physicists aren't more troubled by the lack of progress, that our mutual friend Sabine has pointed out, in the last 50 years, at least in fundamental physics? My colleagues will rightfully point out tremendous advances in cosmological theory, in condensed matter theory, etc. But why isn't that more troubling? I think the answer is we're not that curious. You have a vision of us that's maybe more more refined than I think we deserve, and that's because you're not a professional physicist. 00:39:54<br> '''Eric Weinstein:''' Look, I feel very similar about my feelings about physics as an outsider to the way I view the UK. When I go to the UK, very often they they seem to be defeated, because they lost their empire which they should never have had in the first place. But my feeling is if you really look at the UK, it's an amazing place, and any outsider should be able to see that. I guess what I think about here is that any outsider who really takes physics seriously should be able to see that this is our premier community, intellectually. It is the most accomplished of intellectual communities. And it's also very badly behaved, and it's fallen on hard times. 00:40:39<br> It's like seeing a grand family that's forgotten itself, because it has to constantly submit to the arXiv. We now have the snarXiv, as you know. The snarXiv is filled with papers that are indistinguishable as a Turing test from arXiv papers. I think I looked for like, I don't know, the Gell-Mann–Nishijima formula on hep-th, and I realized that people really weren't doing physics. You know, there's certain things that you would have to do if you were going to do physics. I don't mean to say that no physics is going on, but my God, it's really people that just don't believe anymore. I think that when you're talking about almost 50 years of a particular kind of failure in fundamental physics, where theories and predictions effectively become accepted as being the likely explanations for the universe. We're getting to the point where everybody who's contributed to the Standard Model after this year will be over 70. ===Understanding Geometric Unity=== 00:41:44<br> '''Brian Keating:''' What do you say to the younger people who say they can't understand it, they can't comprehend Geometric Unity. Our friend Sabine, she can't understand it. Is it too complex for her? 00:41:57<br> '''Eric Weinstein:''' No, there's a bunch of different games. One game is the "I can't understand all this fancy pants stuff." Another game is "Be hyper specific so we can invalidate you." There's another game, which is, "Well, we know that you don't know quantum field theory really well, so what energy level do these things kick in at?" And I find all of this incredibly dispiriting and exhausting, because it's also transparent. We can say what Geometric Unity actually is. We can draw a picture and people can get it. And in fact, I was talking to my good buddy Joe Rogan earlier today, and a particular group of people that listen to my podcast put up a site for Joe called pullthatupjamie.com. If you want to navigate to pullthatupjamie.com—in part, this is below Sabine's level. But I'm happy to, you know, if you got her on the horn, she could understand what's being said. 00:43:09<br> '''Brian Keating:''' Yeah, I have no doubt about that. The question is, when we talk in the language of bundles, of fibers, etc, at what level do people kind of lose the physics for the geometry, for the pure mathematical? 
And I think—

00:43:26<br>
'''Eric Weinstein:''' Let's walk the first step, and then let's watch people who are technically capable claim that they can't follow what's going on, because I don't think it's true.

00:43:35<br>
'''Brian Keating:''' So...

00:43:38<br>
'''Eric Weinstein:''' You have \(X^n\) for a manifold of n dimensions. Make it orientable with a particular orientation, make it have a unique spin structure, whatever you need to do to set it up as a decent manifold. Replace that manifold, momentarily, by the bundle of all metric tensors pointwise on the same space. And that way, spacetime would be a particular section of that bundle. Let me see if I can find a...

00:44:19<br>
So the first thing is that the observerse replaces spacetime. And, again, you're not trying to kill off Einstein, you're trying to recover Einstein from a different structure. So I'm looking...

00:44:43<br>
Okay. So right here, I've got a 4-dimensional manifold. Imagine that I'm interested in looking at the bundle of all pointwise metrics, which is going to be—if the base space is 4-dimensional, make \(n = 4\)—it will be of dimension \(\frac{n^2 + 3n}{2}\). So \(4^2\) is 16, plus \(3n\), \(3 \times 4 = 12\). So \(16 + 12 = 28\), divided by 2 is 14. If you have a \((1,3)\) metric downstairs, I believe that you are naturally courting a \((7, 7)\) or \((9, 5)\) metric upstairs. And that is the first step in GU, which is that you replace a single space with one particular metric by a pair of spaces, a total space and a base space of a fiber bundle—this is in the strong form of GU—and physics mostly happens upstairs on the bundle of all metrics, not downstairs on the particular space that got you started.

00:46:02<br>
Here, \(U^4\) is an open set in \(X^4\). Okay, so effectively, what are we saying? We're saying that physics is going to dance on not only the space of four coordinates, typically \(x\), \(y\), \(z\), and \(t\), or, thinking in a coordinate-independent fashion, simply four parameters; it's also going to dance on the space of rulers and protractors at every given point. And so that structure is the beginning of GU, and then you can recover Einstein, spacetime, by simply saying that if I have a section of that bundle, that's a spacetime metric.

00:46:55<br>
'''Brian Keating:''' So when you say in the simplest form, or in the reduced form of GU, what do you mean?

00:47:03<br>
'''Eric Weinstein:''' Well, I gave three forms of GU. One form is the trivial form, in which you have the second space \(Y\) the same as the first space \(X\). That means that you can easily recover everything Einstein did as a form of Geometric Unity by trivially making the observerse irrelevant. You're just repeating the same space twice, and you've got one map between them called the identity, and now you're back in your old world. So without loss of generality, you cover that. Another one is a completely general world, which I think—What did we call it here... Well, I called the middle one the Einsteinian one, where you actually make the second space \(Y\) the space of metrics. And that's the one that I think is the most interesting, but I don't want to box myself in, because I don't want to play these games of Simon Says—"You said this," or "You said that." You know, I can play the lawyerly game as well as anyone if that's what we are really trying to do. I thought we were trying to do physics.
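The dimension count above can be made concrete in a few lines of Python. This is an illustrative sketch of the arithmetic quoted in the conversation (the function names are invented for this page, not taken from the GU draft): the fiber of pointwise metrics over an \(n\)-manifold contributes \(n\) "rulers" plus \(\binom{n}{2}\) "protractors", and adding the \(n\) base coordinates gives \(\frac{n^2 + 3n}{2}\).

<syntaxhighlight lang="python">
from math import comb

def metric_fiber_dim(n: int) -> int:
    # Pointwise metrics on an n-dim space: n diagonal "rulers"
    # plus C(n, 2) off-diagonal "protractors" = n(n+1)/2.
    return n + comb(n, 2)

def observerse_dim(n: int) -> int:
    # Base coordinates plus metric degrees of freedom: (n^2 + 3n)/2.
    return n + metric_fiber_dim(n)

n = 4
print(metric_fiber_dim(n))   # 10 = 4 rulers + 6 protractors
print(observerse_dim(n))     # 14, matching the arithmetic in the conversation
assert observerse_dim(n) == (n**2 + 3 * n) // 2
</syntaxhighlight>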
00:48:19<br>
The thing that I'm trying to get at here is that I believe you and I are somehow having a pullback of a 14-dimensional conversation right now. My guess is that there is a space, with a \((7, 7)\) metric, probably more likely than a \((9, 5)\) metric, on 14 dimensions, where not only are the waves that are relevant going over the original coordinates \(x_1\) through \(x_4\), they're also going through four ruler coordinates on the tangent bundle of the original \(x\) coordinates. So there are 4 rulers to measure the 4 directions, and then there are also going to be 6 protractors. Because if you name the directions John, Paul, George, and Ringo, you'd have John with Paul, John with George, John with Ringo, Paul with George, Paul with Ringo, George with Ringo. Right? And so, those 6 protractors are actually degrees of freedom for the fields, and the fields live on that space.

00:49:30<br>
Then the question is, why do we perceive 4 dimensions and complicated fields? And the answer is pullbacks. When you have a metric, you have a map from the base space into the total space, so Einstein—we don't think of it this way—is embedding a lifeless space which is without form, \(X^4\), into a 14-dimensional space before Geometric Unity ever even got on the scene, and giving him the ability to pull back information, which he may say is only happening on that tiny little slice, that little filament that is the 4-dimensional manifold swimming in a 14-dimensional world with a 10-dimensional normal bundle. But why not imagine that actually the fields are spread out over all 14 dimensions, and then all you're seeing is pullback information downstairs? Now the metric is doing something new that it wasn't doing before. It's pulling back data that is natural to \(Y^{14}\) as if it was natural on \(X\), but I call this invasive fields versus native fields, just because some species are invasive, and some species are endemic, or native. The interesting thing about the bundle of all spinors—sorry, the bundle of all metrics—is that it almost has a metric on it. I don't know if I've ever heard anyone mention this.

00:51:02<br>
'''Brian Keating:''' The space—repeat that. The space of all metrics almost has a metric on it?

00:51:08<br>
'''Eric Weinstein:''' Yeah, nearly.

00:51:09<br>
'''Brian Keating:''' Explain?

00:51:09<br>
'''Eric Weinstein:''' So in other words, assume that you haven't chosen a metric on \(X^4\). What you have then is a 10-dimensional subspace along the fibers, which we can call the vertical space. And that 10-dimensional space at every point upstairs—every point is, in fact, a metric downstairs, by construction. So that means it imparts a metric on 10-dimensional vectors along the fibers. Now those are symmetric 2-tensors, effectively, because it's a space of metrics. You have this really interesting space here; call that \(V\). Well, that \(V\) has a Frobenius metric, based on the particular metric at which you are looking at the tangent space, which has got a 10-dimensional subspace picked out. If you map that 10-dimensional subspace into the 14-dimensional tangent space of the manifold \(Y^{14}\), you can take a quotient and call that \(H\). And that \(H\) will also have a metric, because it's isomorphic to the dual of the pullback of the cotangent bundle downstairs. And the cotangent bundle has a metric, because the point that you picked in \(Y^{14}\) is itself a metric downstairs.
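The "almost a metric" on the vertical space \(V\) can be illustrated with one standard pairing on symmetric 2-tensors induced by a base metric \(g\), namely \(\langle A, B \rangle_g = \operatorname{tr}(g^{-1} A g^{-1} B)\). Whether this is exactly the Frobenius metric intended in the GU draft is our assumption; the sketch below only shows that such a pairing needs no choices beyond \(g\), and computes its signature over a \((1,3)\) base metric.

<syntaxhighlight lang="python">
import numpy as np
from itertools import combinations_with_replacement

def sym_basis(n):
    # A basis for symmetric n x n matrices (the vertical directions).
    basis = []
    for i, j in combinations_with_replacement(range(n), 2):
        E = np.zeros((n, n))
        E[i, j] = E[j, i] = 1.0
        basis.append(E)
    return basis

def vertical_signature(g):
    # Signature of <A, B>_g = tr(g^-1 A g^-1 B) on symmetric 2-tensors.
    g_inv = np.linalg.inv(g)
    basis = sym_basis(len(g))
    gram = np.array([[np.trace(g_inv @ A @ g_inv @ B) for B in basis]
                     for A in basis])
    eigs = np.linalg.eigvalsh(gram)
    return int((eigs > 0).sum()), int((eigs < 0).sum())

g = np.diag([1.0, -1.0, -1.0, -1.0])  # a (1,3) metric downstairs
print(vertical_signature(g))          # (7, 3) with this particular convention
</syntaxhighlight>

With this naive convention the 10-dimensional fiber comes out with signature \((7, 3)\), which is the "wrong way" gluing of the trace piece mentioned later in the conversation, so some care with that piece is evidently required.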
00:52:40<br>
So now you've got a metric on \(V\), you've got a metric on \(H^*\), and you just don't know how \(H^*\) becomes the complement to \(V\) in \(T\). That's the only piece of data you're missing for a metric. So you've got a 4-metric, you've got a 10-metric; the 10-metric is sitting inside of the tangent bundle, the 4-metric is naturally sitting inside of the cotangent bundle. They're weirdly complementary: you've got a metric on the nose but for one piece of data, which we call a connection. So up to a connection, the manifold \(Y^{14}\) has a metric on it without ever having chosen a metric, because it's made out of metric data.

00:53:21<br>
Now spinors have a really interesting property, which I would call an exponential property. That is, the spinor of a direct sum is the tensor product of the spinors on the summands.

00:53:36<br>
'''Brian Keating:''' That's not true for any spin, or is that true for any spin, or just half integer, or...?

00:53:43<br>
'''Eric Weinstein:''' Well, that's true for any—no, it's true for the spin representation. It's not true generically, for any representation. But it allows you to build the spinors on what should be the total space, because now you've got a 4-dimensional... So, I think it's here at 3.12. If the spinors of a sum are the tensor products of the spinors on the summands, and I create a new bundle, which is the 10-dimensional vertical bundle inside the tangent bundle direct sum the 4-dimensional bundle inside the cotangent bundle, then the spinors on that thing—which is isomorphic, and in fact semi-canonically isomorphic, to both the tangent bundle and the cotangent bundle, being chimeric—it's isomorphic, but it's not fully canonical. It's only semi-canonical. So spinors on that will be identifiable with the spinors on \(Y\) as soon as you have a connection that completes this and makes it fully canonically isomorphic.

00:54:49<br>
So the take-home message: there is a spin bundle up on the bundle of all metrics, which is nearly the spinors on the tangent bundle, that exists without making a metric choice. And if you're really serious about quantum gravity, you should be very freaked out about the idea that once you quantize the metric, you've got a whole lot of pain, because the electron and the hadron bundles, and all the spin-1/2 matter—the medium in which these particles are disturbances, are excitations—doesn't really exist in the absence of a metric choice. If you allow the metric to become quantum and allow it to blink out, the spin-1, spin-0, and spin-2 particles may be indeterminate between observations. But the bundle itself, the medium, is indeterminate between observations of the metric for fermions. So now you're in a really different conceptual world. Everybody should want to free fermion bundles from dependence on the metric if they're serious about letting the metric blink out in some supposed quantum gravity regime.

===Implications, Expectations, and Communication===

00:56:05<br>
'''Brian Keating:''' Let me ask you about that for a second. So it seems like this is a huge, you know, "Huge if true," I always like to say.

00:56:13<br>
'''Eric Weinstein:''' Well, we say that, but I don't know whether I just missed one hell of a meeting. I just don't understand why everybody isn't worried about—

00:56:20<br>
'''Brian Keating:''' So this is huge, right? This is, what you're saying is that you can get spinors—

00:56:24<br>
'''Eric Weinstein:''' If I haven't made a boneheaded mistake.
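The "exponential property" is easy to sanity-check at the level of dimensions, since a Dirac spinor space in \(n\) dimensions has \(2^{\lfloor n/2 \rfloor}\) components and tensor products multiply dimensions. This small check (ours) covers only the dimension count for even-dimensional summands, not the full representation-theoretic statement:

<syntaxhighlight lang="python">
def spinor_dim(n: int) -> int:
    # Dirac spinor dimension in n dimensions: 2^(n // 2).
    return 2 ** (n // 2)

# "Spinors of a direct sum are the tensor product of spinors on the summands":
# at the level of dimensions, 4 + 10 = 14 should give 4 * 32 = 128.
assert spinor_dim(4) * spinor_dim(10) == spinor_dim(14)
print(spinor_dim(4), spinor_dim(10), spinor_dim(14))  # 4 32 128
</syntaxhighlight>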
00:56:27<br>
'''Brian Keating:''' Well, this is where I'm going to. I don't think you have, but I'm just a simple experimental cosmologist, okay?

00:56:33<br>
'''Eric Weinstein:''' I'm just a podcast host.

00:56:33<br>
'''Brian Keating:''' I traffic in [the] nuts and bolts of cosmological experiments, telescopes, as you know, detectors and fields. I am out of my depth in many cases, but this struck me like a freaking thunderbolt, that you were deriving—essentially, spinors can be defined without choosing a metric. That is new. I don't think that any critic, any anonymous, pseudonymous, or anonanononymous person can really criticize that. I mean, that's just a fact. So why wouldn't physici—if it's not true, it would be, you know, almost surprising, but if it is true, why haven't physicists noticed this before, and why aren't they making a bigger deal out of it? Partially, it might be your fault, because you haven't published this.

00:57:18<br>
'''Eric Weinstein:''' Blame the victim.

00:57:21<br>
'''Brian Keating:''' Who else?

00:57:22<br>
'''Eric Weinstein:''' So what I usually hear about this is, people say, "Oh, you don't understand, Jean-Pierre Bourguignon told us how to move spinors under variation of the metric." But he's varying the metric continuously; there's always a metric present. What if there's no metric for a little while?

00:57:46<br>
'''Brian Keating:''' Which could be the universe before God intervened.

00:57:50<br>
'''Eric Weinstein:''' Are you going to do a Feynman integral over all variations of the metric? I mean, I don't know what kind of pain you're signing up for, but I'd certainly rather free—look, here's the basic statement. If we're serious about quantum gravity, we should be very serious about trying to get fermions that don't require their bundles to be dependent on the existence of a metric at all times. And I'm sure that either there's a brilliant explanation that I don't understand, and I'm eager to hear it, or it's a key sign that the community really dropped the ball. Remember, for example, the Bohm–Aharonov effect—I'm sure that when Aharonov and Bohm said, "Hey, shouldn't there be an effect of this zero field strength?" they probably thought, 'Have I lost my mind?' I'm sure that Yang and Lee, when they proposed that maybe the weak force was left-right asymmetric, probably thought, 'Are we going to be laughed at? Did we just not understand what everybody else understood?' Physics gets things really spectacularly wrong occasionally, and I'm curious to know if this is one of those moments.

00:59:04<br>
'''Brian Keating:''' Yeah. I mean, you might also say, oh, there are 26 dimensions in heterotic string theory. That can't be right. No, it's only 10, or 11, or 5-brane, M-brane theory. I want to ask another question, which is frequently used in criticisms, both anonymous and nonymous, which is that this doesn't—

00:59:23<br>
'''Eric Weinstein:''' I, can I actually, can I just say something? I really don't want to talk about anonymous trolls with PhDs criticizing the theory. And I also don't want to talk about non-constructive hit jobs on new theories. Last time I checked, physics was in a crisis that some people were admitting to and other people were sweeping under the rug.

00:59:43<br>
'''Brian Keating:''' Okay, well—

00:59:44<br>
'''Eric Weinstein:''' If you have a crisis—wait wait wait—if you have a crisis, for God's sakes [sic], open it up.
We don't need one more talk from the same crowd of people who have been keynoting every conference of note for the last 30 years, who haven't got the new ideas. Let's at least hear crazier, weirder, wilder people. And if you guys don't have the guts and courage to do it from inside the community, hear it from a podcast host.

01:00:10<br>
'''Brian Keating:''' Okay, well, this is my podcast, and I do want to respond to these criticisms, because for me, I don't find them legitimate. And you can choose to be silent, as is your wont. No, it's rare to—

01:00:23<br>
'''Eric Weinstein:''' No, I wish to punish dysfunctional cowards who attempt to snipe, pretending to be helpful. You can do better at it.

01:00:35<br>
'''Brian Keating:''' I can do better as well. But I do want to say that this is maybe a general comment, not for pseudonymous and anonymous people, bananymous. But this is a general complaint that I've heard: it has to reproduce quantum theory. And I think, forget about that with regard to GU; it could be said about other theories, loop quantum gravity, etc. First of all, I think GU does produce what we would say is a relativistic quantum field theory in the Dirac equation, which is manifestly resplendent and produced and predicted. So I don't want to hear from you just yet, Eric, I do want to get your response. But this notion that a theory of everything has to subsume everything—I said this to our mutual friend, Stephon Alexander, professor at Brown University and esteemed cosmologist, and close friend to both Eric and myself. I said, "Look, I don't think it's valid to say that any theory of everything, string theory or whatever, has to predict every manifestation of physics," and this is where I take issue, and I make truck with Professor Kaku, who says things like, "The one-inch-long God equation will predict everything." I don't think that's possible. (A) I don't think it's useful to think about the goal of physics [as being] to predict every phenomenon in physics.

01:01:54<br>
'''Eric Weinstein:''' Because it's an incautious statement. Really what you're trying to say is that there's stuff that you should be able to read off in the basic setup of the theory directly, and there's stuff that you should work your ass off in order to get from the theory. Now, you know, we don't see quarks running around free the way you might imagine, naively, you would if you were looking at the hadronic part of the Standard Model Lagrangian, and so you have to work pretty hard, I would imagine, in order to find these bound states that we call protons and neutrons, and try to understand infrared slavery, etc., etc. Now, that's part of the hazard of saying I can predict everything. No, even computationally, you don't think so. Really, it's just a question of: we should be able to recover everything that we've already done. And actually, I think that that's pretty fair.

01:02:48<br>
'''Brian Keating:''' So even—

01:02:49<br>
'''Eric Weinstein:''' I think there's a dumb way of doing it, where you try to say, "Show me this, or then you don't have anything." And I have to say, I encounter a tremendous amount of that from people who are old enough to drink, and it gives me pause as to who's raising the young. That's not the issue. The issue is, they're right, they should be saying, "Look, here's what we know how to do, and you should be looking to recover what we already know how to do from what you're saying," and I think that's actually fair. There's a question of, should you be able to do everything on day one?
Should you be able to do it when you've been cut off for 27 years, working completely on your own under totally weird circumstances, where every month you feel you get farther and farther away from the literature, and your brain hasn't spoken this language in a million years? Those are questions that I feel like—that's really sad, because people don't understand what the cost of isolation is. I do think, however, that working in a context with competent people who aren't constantly trying to rename everything after themselves—there's no question that that's a reasonable and fair thing, if we had a collegial world based on a desire to advance our understanding. And I'm happy, if I fail at that with a collection of constructive colleagues, to say that that's a black mark against the theory; that's fine.

01:04:20<br>
'''Brian Keating:''' Now, when I look at the corresponding, shall we say, implications against string theory, I would say things like the swampland, the multiverse problem—these may be issues that cause stillbirth in many people's minds. I've talked to you about Paul Steinhardt, the Einstein Professor of Natural Science at Princeton—he regards string theory as essentially bad for society, not just for physics, not just for science, but bad for society, because of the extravagance, in the truest sense of the word, in a bad sense of the word, of the multiverse and string landscape. Now I know you're shaking your head—go ahead.

01:04:58<br>
'''Eric Weinstein:''' No no no. Let me be very clear about it. We're wimping out from what needs to be said, and it's really important the community gets it right. I don't think string theory is a problem. String theory can't harm anyone. String theory doesn't—it's the string theorists, when they're in their triumphalist mode, that are in an insufferable state of being. But even then, you know, I'm sure Feynman was insufferable, and I think Murray Gell-Mann was insufferable, and Pauli was pretty insufferable. We've had insufferable members of our community for a very long time, and we should not be getting rid of insufferable people. The problem is what happens when people become insufferable and they don't constantly check in with the unforgiving nature of the universe. I mean, Pauli predicted the neutrino in an insufferable fashion.

01:05:58<br>
'''Brian Keating:''' And apologized. He apologized profusely: "I've done something which should never be done." Now, I asked you though—should string theory—let's just be neutral to GU for a second. Should the Aharonov–Bohm effect emerge from string theory? I mean, a true theory of everything, it would, right?

01:06:19<br>
'''Eric Weinstein:''' Look, and if it took a while to recover certain features of the world that you had in an effective theory—I mean, look, let's put it this way. If you look at Marshallian demand in economic theory, should you be able to predict that from the Lagrangian of the universe? No, it's in a different stratum of the world. You should be able to predict things that are within the adjacent strata of the theory, and then you might have to appeal to some higher effective theory.

01:07:00<br>
Look, I want to defend both the string theorists and string theory. These are incredibly smart people who found some real structure, and who never knew when to quit when it came to trumpeting just how much better string theory is than everything else. Even there, they had a point. They were smarter and deeper, in general, than everyone else.
They just weren't as good as they claimed to be, and they weren't as successful as they claimed to be, and what they did succeed at, they didn't want to take credit for, because it was really mathematics done in physics departments rather than physics.

01:07:34<br>
So we have a problem that, sociologically, nobody wants to say that the Institute for Advanced Study has the smartest guys around, and a lot of what they do isn't physics in standard terms—it's the mathematics of physics. These are uncomfortable truths, just the same way that it's uncomfortable that we're taking seriously somebody who's been out of the field for 27 years. But these are end times; we're having end-time conversations. I think that it's—we don't need to be mean about it. I think it just needs to be more honest.

===Concept Animations===

01:08:08<br>
'''Brian Keating:''' Okay. With that, I give some applause here. Let's see if we can hear that. [Applause sound effect] Got some applause, Eric. A smattering. That just was a smattering. I want to take a pause for the cause, and to have a pause to recognize our guest today is the esteemed Dr. Eric Weinstein, who is a seeker after truth, a seeker after my own heart in the authentic tradition of the old one—his namesake, Albert Wein... Now they say this is not a serious podcast until you break out the puppets. Now I know Rogan has a supply of bows and M16s, and all sorts of other things. I don't have any of those accoutrements; I only have my sock puppets and my gelt Nobel Prize. But I do want to say that this is a special conversation with Eric, because it really fulfills a promise that was made basically a year ago, and then again about six months ago on this podcast, which is to release a stunning amount of new technical details, and you've really surpassed that.

01:09:12<br>
Our mutual friend James Altucher, podcaster extraordinaire, says that you should never under-promise and over-deliver, you should never under-promise and under-deliver. You should over-promise and over-deliver. Meaning that if you say you're going to get it done in three months and get a million customers, you should get it done in one month and get ten million customers. Or, as Peter Thiel once said, what do you think will require ten years but could be done in six months? So, what you've done is released a tremendous amount of technical information that will be fully released at some point to the public. But also, I want to take our audience through some of these delightful animations. I put the link in the chat for now, but I'm going to share my screen right now. Hopefully you can see it as well, Eric. These are now movies I want you to animate—I'll put you in the lower corner. Let's see if I can do that; I'll do that in a second. Let's see, I'll add Eric. Nope, I'll swap these. There we go. I'm going to swap Eric, if you're willing to swap. There we go. I want to walk us through—which one of these many videos should we take a look at? I was fascinated by the Shiab, but that's just my—

01:10:27<br>
'''Eric Weinstein:''' Let's do the first three.

01:10:29<br>
'''Brian Keating:''' Okay, so the first one is called an Ein—

01:10:31<br>
'''Eric Weinstein:''' Go to Einstein's Great Insight. We're going to do this for people who are somewhat physics-minded, but who like to complain that none of this is understandable. By the way, there are some names associated with these videos. Brooke Dallas has been shepherding the project.
Brandon Stone has been incredibly helpful technically. Boqu, a mysterious German man who animates many of these things. There's a list of people who've contributed. Tim, the mirthless swagman, from Australia, a math student down there. So, what they've done is they've tried to interpret what it is that I'm saying, because I tend, because of learning issues, to not think symbolically—stop, Brian.

01:11:14<br>
'''Brian Keating:''' Yeah.

01:11:14<br>
'''Eric Weinstein:''' Let's blow that up. Full screen.

01:11:17<br>
'''Brian Keating:''' It is full screen here, yeah.

====Ship in a Bottle Animations====

01:11:19<br>
'''Eric Weinstein:''' Okay. That ship that you're seeing is called curvature. It has three masts because it has three irreducible components, usually. One mast is called Weyl curvature, one mast is called traceless Ricci curvature, and one mast is called Ricci scalar. The first greatest insight of the 20th century was the way in which we could feed back the curvature of the Levi-Civita connection into being a co-vector field on the space of all metrics. This is depicted as a boat going into a bottle that has a rather wide opening. So let's run the animation.

01:12:05<br>
'''Brian Keating:''' Okay.

01:12:06<br>
'''Eric Weinstein:''' So we've got a metric. The metric has a connection, the connection produces curvature that's Riemannian. We find that, by identities, it's got three components. It tries to go towards metrics and the Weyl curvature is snapped off. Afterwards, the scalar curvature is lowered somewhat, or adjusted, by \(\frac{R}{2} g_{\mu \nu}\): scalar curvature over 2, times the metric. And so symbolically, what we've done is we've said Einstein threw away the Weyl curvature, readjusted the Ricci scalar curvature, and fed metric information through to the Levi-Civita connection, through to the Riemann curvature tensor, and then played these projection games to feed it back to the space of metrics. And that particular combination is perpendicular to the action of the diffeomorphism group on the space of all metrics, leading to a divergence-free condition via our friend the Bianchi identity.

01:13:03<br>
Now, why can't we do that and feed this information back to the space of connections rather than the space of metrics? Because we would love to link spacetime games with gauge potential games. So, let's see whether General Relativity and gauge theory have an incompatibility problem as we try to play the same game. We start off with the Riemann curvature tensor, but now the neck is narrower. What's really going on is that this is kind of evocative of trying to feed it into the space of connections, but the gauge group acts differently on two different factors: namely, connections are ad-valued 1-forms and curvature is an ad- or Lie-algebra-valued 2-form.

01:13:57<br>
The problem here is the gauge transformations act on the Lie algebra component and don't touch the form component. But Einsteinian projection, or contraction, or summing over \(g_{\mu \nu}\) indices, is democratic: it deals simultaneously with the form piece and the Lie algebra piece. So if you treat only the Lie algebra piece under a gauge transformation and you don't touch the form piece, then contraction followed by gauge transformation will never be the same thing as gauge transformation followed by contraction.
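The three "masts" and the projection described here correspond to the textbook decomposition of the Riemann tensor, with the fed-back object being the Einstein tensor \(G_{\mu\nu} = R_{\mu\nu} - \frac{R}{2} g_{\mu\nu}\). As a quick illustration, here is our sketch of the standard component counts (valid for \(n \ge 3\)):

<syntaxhighlight lang="python">
def riemann_components(n: int) -> int:
    # Independent components of the Riemann tensor in n dimensions.
    return n**2 * (n**2 - 1) // 12

def ricci_components(n: int) -> int:
    # Ricci curvature is a symmetric 2-tensor: n(n+1)/2 components.
    return n * (n + 1) // 2

n = 4
weyl = riemann_components(n) - ricci_components(n)  # mast snapped off: 10
traceless_ricci = ricci_components(n) - 1           # second mast: 9
scalar = 1                                          # third mast, fed back as (R/2) g
print(riemann_components(n), "=", weyl, "+", traceless_ricci, "+", scalar)
# 20 = 10 + 9 + 1
</syntaxhighlight>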
And so that's the puzzle: Geometric Unity is really about the idea of trying to say maybe it's not so much quantizing gravity—maybe it's a fight between the different geometries of Riemann and Ehresmann, because gauge transformations are Ehresmannian geometry but contractions are Riemannian geometry.

01:14:57<br>
So here's a GU approach: how do you get geometric harmony between General Relativity and gauge theory when you have the ship-in-a-bottle problem? This is almost a tight analogy. You've got the curvature tensor, you apply a gauge transformation to two of the masts and you pass them through into ad-valued \((d-1)\)-forms, and then you do an inverse gauge transformation, which is exactly how you do the ship-in-the-bottle trick—by the way, Brian gave me a wonderful ship in a bottle, thank you very much—raising the mast inside. And then you can potentially, if need be, adjust one of the two masts again in order to get agreement.

01:15:40<br>
So in part, the idea is: how do you get harmony? What you need to do is to promote the gauge transformations initially to field content, in order to make sure that you're carrying around enough information, effectively, to ensure that contraction is compatible with gauge transformation. Now, that is a very tight idea of how these operators function inside of the theory.

====Gauge Theories as Calculus Done Right Animations====

01:16:18<br>
[Keating pulls up "Penrose-like steps" video]

01:16:19<br>
Well this is just—for some reason, whenever we talk about gauge theories, we don't give people very concrete examples. Many of you who are not professionals will not know what a gauge theory is. May I make a recommendation, Brian?

01:16:31<br>
'''Brian Keating:''' Yeah, of course.

01:16:33<br>
'''Eric Weinstein:''' Let's go to another animation, which is something like Gauge Theories as Calculus Done Right, and blow that up as big as it can be before starting the animation. Okay, start the animation. Let's imagine that we have a salary that is constant in dollar terms over time, [and] that somebody is facing inflationary pressures on their basket of goods. Now the question is—pause please. What we now have is a $10 an hour salary, and if we claim that it's constant, constant means derivative equals zero. But we know that it's not constant purchasing power. So we have two notions of constancy; how are they related? Let's go back to that please.

01:17:32<br>
We do a gauge transformation. And what you see—pause please. You now see that these little hash lines are the reference levels that we call a connection, and we decide that rise over run should not be measured from a naive horizontal, but should be measured instead from a custom reference level represented by the hash marks. Now, if you let it go a little bit, and then stop it—stop. Now you see that derivative equals zero, if we measure rise over run above the hash marks, is a salary that keeps pace with inflation. And the current $10 an hour is actually a negative derivative, because the rise over run is measured beneath those hash lines. That situation is actually an application of gauge theory to a very simple problem in economics, completely depicted by stretching the fibers in the x-y plane. And if you look online right now and say, "What's a gauge theory?" you'll be bamboozled by a bunch of stuff that nobody can understand unless they're actually insiders.
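The inflation example can be played out numerically. In gauge-theoretic terms the hash marks are a connection \(A(t)\), and "rise over run above the hash marks" is the covariant derivative \(Df = \frac{df}{dt} - A f\). Below is a toy sketch; it is our reading of the animation rather than code from the Malaney–Weinstein work, and the 3% rate and all other numbers are made up for illustration.

<syntaxhighlight lang="python">
import numpy as np

dt = 0.01
t = np.arange(0.0, 10.0, dt)
A = 0.03 * np.ones_like(t)  # 3% inflation rate: the "hash marks" (connection)

wage_flat = 10.0 * np.ones_like(t)               # constant in dollar terms
wage_indexed = 10.0 * np.exp(np.cumsum(A) * dt)  # keeps pace with inflation

def covariant_derivative(f, A, dt):
    # D f = df/dt - A f: rise over run measured against the hash marks.
    return np.gradient(f, dt) - A * f

# The "constant" wage has zero naive derivative but a negative covariant
# derivative (losing purchasing power); the indexed wage has D f ~ 0.
print(covariant_derivative(wage_flat, A, dt)[1:4])  # ~ -0.3 everywhere
print(np.abs(covariant_derivative(wage_indexed, A, dt)).max())  # ~ 0, up to discretization error
</syntaxhighlight>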
01:18:49<br>
So I think it's very interesting that, again, just as it was elementary to ask the question, "What happens to the fermion medium while we're blinking out the supposedly quantum metric?", why is it that we don't actually explain to anyone what a gauge transformation is, and visualize it? I'm very proud of our team for taking this very simple example and showing what a gauge field is—it's those little hash lines, effectively. Those things in higher dimensions would be the electromagnetic potential, which becomes the photon under quantization. And if you're thinking about QED (quantum electrodynamics), effectively, the electron is a function and the photon is a derivative, because what you're specifying is the levels above which you're going to measure rise over run. Now you can go back to the original floating plane.

01:19:41<br>
'''Brian Keating:''' Floating plane...

01:19:42<br>
'''Eric Weinstein:''' What you were doing before.

===Digression on Academic Misbehavior===

01:19:43<br>
'''Brian Keating:''' I just want to take a second here. This is Brian Keating now speaking. So, if you look up Juan Maldacena, you will find only one podcast that he's ever been on, and that is the Into the Impossible podcast. If you look up gauge theory and an intuitive way to understand gauge theory, something like that, you'll come up with this really brilliant economic analogy that sounds like Eric has copied from Juan Maldacena. And in fact, this came up recently, where people were talking about inflation-stabilized items and Bitcoin and so forth, and then—it was very frustrating to me, and I imagine much more so for Eric, although he doesn't have to comment, he's too much of a gentleman. This is Eric's work. This is gauge theory applied to economic transactions.

01:20:33<br>
'''Eric Weinstein:''' Eric and Pia.

01:20:34<br>
'''Brian Keating:''' Eric and Pia Malaney. Yep, Pia Malaney, of course, the beautiful, talented wife of Eric Weinstein. Eric is known as the husband of Pia Malaney, mostly. This work is brilliant and is deserving of attention in its own right, independent of the brilliance of it as an analogy to explain a very complicated subject such as gauge theory, or a very simple subject like calculus, as Eric is now explaining to us. I wanted to say that—you don't have to respond if you don't want to, Eric—I find it very frustrating when I see "Oh, Eric, you've got to learn what Maldacena said." I'm like, F you. That's very frustrating to me.

01:21:17<br>
'''Eric Weinstein:''' That's what was hurtful, because Juan knew that he had gotten this—knew about Pia Malaney, he needed to reference her. He did reference her, but in a very slight, minimal way in a [Inaudible] version.

01:21:33<br>
'''Brian Keating:''' It's a footnote. It's a footnote. He knows better than that.

01:21:37<br>
'''Eric Weinstein:''' The problem that I'm having with it is that the professional community does not understand that it has impulses that it hasn't faced, which is that it tends to brutalize those that it doesn't need to cite, that it doesn't see. It just doesn't see people. And so to have—look, I'm a huge Juan Maldacena fan, as are we all, but I'm not going to sit around and have people say, "What you really need to do is to listen to Juan Maldacena, whose brilliance knows no bounds. He did something really profound about markets and gauge theory," because, quite frankly, Pia Malaney deserved to have an entire career built around it.
I think it could easily be the most deep insight in mathematical economics in the last 25 to 50 years. Please show me another, given that the Marginal Revolution, originally, was the penetration of differential calculus into economics. Her thesis, which is largely joint work, but was not even allowed to be what it was supposed to be, rebased the field of economics on gauge theory as the correct form of calculus. I'll tell you what, I don't really want to bitch about Juan Maldacena, but what I would really love to do is to have Juan Maldacena—who showed so much excitement when I confronted him about this; he says, "Oh, you know who that is?" because he had no idea who Malaney was—it would be really great if Juan Maldacena did this work, and I won't say another word this podcast about it.

01:23:16<br>
'''Brian Keating:''' Okay. And I will say only one word, because it's my podcast, and I can do whatever the hell I want. I had on Cumrun Vafa, as you know, who wrote a book called Puzzles to Unwrap the Universe, in which he cites Juan Maldacena. I called him on that. I said this is actually original work by Pia Malaney, Eric Weinstein, and it almost doesn't matter. And I find that very frustrating, because the very same people—and you don't have to respond. Please don't respond. Again, I'm a blowhard on my own podcast. It's one of our prerogatives. We get so little of these things and treats in life. But I find it very disingenuous of the community. I love Cumrun too, but to say that "This isn't serious, Eric, you have to cite this paper, you have to put out a paper about GU, you've only done things on Joe Rogan"—I find that disingenuous. You don't have to respond. Let's go on.

01:24:09<br>
'''Eric Weinstein:''' What I will say is this. When you have gatekeepers in the form of advisors, if you have job market meetings where people wield incredible power and they hold other people's careers in the palm of their hand—if you use these places to crush people, you have no right to comment after the fact as to why these people are behaving bizarrely and strangely. Because, in essence, whether you submit things to journals and have a perfectly reasonable relationship with peer review, or whether you find that peer review is basically a tool to exclude you, and your insights, and your claims from the world, depends in large measure on who you are, where you're coming from. It's human-dependent; it's not independent of who submits and how protected they are.

01:24:58<br>
The thing that I want to get across is that the community is producing trauma in people and then claiming that it's paranoia. You have to recognize that trauma and paranoia look exactly the same when you can't see what the source of it is. If you want to understand what happened to this theory, read The Physics of Wall Street by James Weatherall, chapter 10 and the epilogue. It's rather clear about the fact that four gentlemen and one lady tried to steal a trillion dollars over 10 years by pretending to fix the CPI, because Social Security and tax brackets were indexed. They came up with a 1.1% adjustment that would be needed, and then they broke into two teams to find out exactly the 1.1% that they wanted. This was admitted to by Robert Gordon.
And the most brilliant thesis that probably came through Harvard in terms of mathematical economics was destroyed so that Daniel Patrick Moynihan and Bob Packwood could have a back-end run around the third rail of politics—which is slashing benefits and raising taxes—using economists to destroy, funnily enough, a bright, promising woman of color from the developing world in an essentially all-male field. These people should pay with their reputation.

===Concept Animations 2===

01:26:31<br>
'''Brian Keating:''' Okay, I want to lighten things up again. Let's talk about Jeffrey Epstein. No, I'm just kidding. I made you laugh, come on. That's a big accomplishment.

01:26:41<br>
'''Eric Weinstein:''' That was good. I like it.

01:26:42<br>
'''Brian Keating:''' Alright, let's look at one last video here. Let me call up a—let's go to the videotape, as they say. Go here, I'm gonna go to Safari, Rastafari... Nope, it's not coming up. Oh, maybe that's because it already thinks that we've done it; let's see here. All right, zoom out. I'm gonna...

01:27:07<br>
'''Eric Weinstein:''' Do you want to do an observerse one, down at the bottom?

01:27:12<br>
'''Brian Keating:''' Yeah, I'm trying to get up the—tell me when to stop here. Well, you can't see it, right?

01:27:20<br>
'''Eric Weinstein:''' I can't see anything.

01:27:21<br>
'''Brian Keating:''' Let me get my screen back here. Let me kill that. Let me kill... what else is going on here? Screen share, show Safari. Show primary display, secondary display, there we go. Can you see that?

01:27:40<br>
'''Eric Weinstein:''' Yeah.

01:27:41<br>
'''Brian Keating:''' Okay. So at the bottom, I see 5D Observerse, Spinor Dance... Which one would you like? Observerse 5D?

01:27:53<br>
'''Eric Weinstein:''' Let's do 5D. Yeah, I think I'll explain what they're trying to depict. It's not exactly how I would have done it, but keep in mind that these are artists who've been trying to learn what this is by bypassing typical—okay, so pause it.

01:28:15<br>
'''Brian Keating:''' Yeah.

01:28:16<br>
'''Eric Weinstein:''' Can we get rid of that bottom bar?

01:28:18<br>
'''Brian Keating:''' Yeah, it's—they need to disable it on their side, but I can kill it off here. There we go.

01:28:23<br>
'''Eric Weinstein:''' Okay. And can you blow that up? Or is that as blown up as it can go?

01:28:29<br>
'''Brian Keating:''' Let's see here. I think it's fairly blown up.

====5D Observerse Animation and Pati–Salam Connection====

01:28:33<br>
'''Eric Weinstein:''' Alright. Imagine that that torus that you see in the lower left corner of the screen is a 2-dimensional model, a toy model, of spacetime. So going around through the center is like Groundhog Day: you come back to the same place, and it's a repeating time cycle, and space is simply a circle. Now in such a world, we would normally think of quantum field theory or gravity as taking place on that object. You'd have fields, you'd have, effectively, functions called sections on that object, and what you're seeing here is something that's very hard to picture because it's 5-dimensional, but one trick here is [that] the torus has a property called parallelizability... The object on the right is a depiction of a metric. Each point that isn't on one of those two sheets is a potential metric at any given point on the torus. So in other words, if a metric is a symmetric non-degenerate 2-tensor, if you think of it as a matrix, it would be of the form \(\begin{bmatrix} x & z \\ z & y \end{bmatrix}\).
Non-degenerate means that \(xy - z^2 \ne 0\). So that's what's cutting out that variety, if you will—the zeros of the determinant would be points, given that there are 3 degrees of freedom in the metric.

01:30:21<br>
So instead of actually having a metric spacetime, GU would say: replace the torus by the entire space in that sort of hourglassy region. So the top region would be like space-space metrics, the bottom, below that sort of weird, diaphanous scarf, is time-time metrics, and the weird middle region, which is sort of around that singularity, would be space-time metrics. Every way you can stick that donut into that middle region without touching one of those two sheets is a valid spacetime metric. And what GU would do is to say: don't only dance on the points of the 2-dimensional torus—again, the surface is 2-dimensional, even though it seems to be 3-dimensional to naive investigation—you should actually have fields that are dancing on all of the points of the torus and, simultaneously, all of the points in that middle region of what we call the Diablo diagram—no, to the right. To the right. Yep.

01:31:36<br>
So every point in that region is in play, and if you mapped—imagine that the stuff in that weird hourglassy region on the far right was very warm, and on the far left was very cold. Then if you map the torus in to the far left region, it would show up as being cold. If you mapped it into the far right region, you'd see it as being very hot. So every way of mapping the torus in pulls back different information from that hourglassy region. That is in large measure, in part, one of the things that may be going on with the illusion of many worlds: what you're seeing is that the metric may be capable of pulling back data that is dancing on the space of all metrics as well as the space of all points on the original manifold \(X\). So in this case, you've got 2 degrees of freedom on the torus, you've got 3 degrees of freedom around the hourglass, and \(2 + 3 = 5\).

01:32:38<br>
Now notice that thing up in the top left, which is a ruler-protractor combination that I just gave a copy [of] to Joe Rogan. Those two sliders are recalibrations of what it means to be one unit. And that protractor is a recalibration of what you're going to define to be 90 degrees. So every way of keeping that bottom arm in a single horizontal position, moving the top arm, and moving the two sliders—that's 3 degrees of freedom in the space of metrics. So that's a different depiction of the space of metrics.

01:33:15<br>
So the big take-home from the restrictive version of GU that we're exploring here is that if you allow fields to dance on the space of metric apparatus—measurement apparatus—then the paradoxes of measurement start to make a lot more sense. You could also, potentially, try to keep the metric classical, because we have two spaces. We have a space downstairs, \(X\), which is just the torus, and we have a space upstairs, which is the torus, in this case, cross the hourglass region, as long as it doesn't touch the two sheets. So you've got a 5-dimensional manifold hovering over a 2-dimensional manifold, and fields on the 5-dimensional manifold will be perceived on the 2-dimensional manifold, when you pull them back via a particular Einsteinian spacetime, as fields on the tangent bundle of what you will call spacetime, together with fields on the normal bundle inside of the 5 dimensions.
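The three regions of the "Diablo diagram" are just the possible signatures of the \(2 \times 2\) symmetric tensor \(\begin{bmatrix} x & z \\ z & y \end{bmatrix}\), with the two sheets being the degenerate locus \(xy - z^2 = 0\). Here is a small classifier, ours and purely for illustration; which signature corresponds to "top" and "bottom" depends on the animation's conventions.

<syntaxhighlight lang="python">
def region(x: float, y: float, z: float) -> str:
    # Classify the symmetric tensor [[x, z], [z, y]] by its signature.
    det = x * y - z * z
    if det == 0:
        return "degenerate (on a sheet; not a metric)"
    if det > 0:  # definite case: the sign of x decides which kind
        return "space-space (2,0)" if x > 0 else "time-time (0,2)"
    return "space-time (1,1)"

print(region(1.0, 1.0, 0.0))    # space-space region
print(region(-1.0, -1.0, 0.0))  # time-time region
print(region(1.0, -1.0, 0.0))   # space-time region, around the singularity
</syntaxhighlight>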
01:34:18<br>
The normal bundle of a 2-dimensional manifold in a 5-dimensional space is 3-dimensional, so you're gonna see fields that look like spinors on 2 dimensions tensor spinors on 3 dimensions. If you were in 4 dimensions—make that torus in your mind represent a 4-dimensional spacetime—then that Diablo region would be a 10-dimensional region of metrics, because 4x4 matrices that are symmetric have \(\frac{4^2 + 4}{2} = 10\) different degrees of freedom. In other words, you get a 10-dimensional normal bundle.

01:34:57<br>
Now you'll notice that if you have ordinary spinors on 14-dimensional space and you pull them back via a metric, which is a mapping of 4 into 14, it looks like spinors on the 4-dimensional space tensor spinors on the 10-dimensional normal bundle. If the normal bundle inherits the Frobenius metric from the \((1,3)\) metric on \(X\), and you glue in the trace piece in the right way—well, if you glue it in the wrong way, you'd get a \((7,3)\) metric on the normal bundle. But if you glue it in the right way, you'd get a \((6,4)\) metric on the normal bundle.

01:35:35<br>
\(\text{Spin}(6,4)\) is a sort of nasty non-compact group, so you might want to break to its maximal compact subgroup, like Witten and Bar-Natan discuss. And the interesting thing about \(\text{Spin}(6,4)\) is that it has different names. By low-dimensional isomorphisms, \(\text{Spin}(6)\) is the same thing as \(\text{SU}(4)\). \(\text{Spin}(4)\) is the same thing as \(\text{SU}(2) \times \text{SU}(2)\). And \(\text{SU}(4) \times \text{SU}(2) \times \text{SU}(2)\) is the Pati–Salam theory. So you can argue that ordinary spinors on the induced metric in 14 dimensions, glued in the right way, pull back as Pati–Salam. And I don't know if anyone's ever discussed the connection between Einstein and Pati and Salam.

01:36:29<br>
'''Brian Keating:''' No. No.

01:36:31<br>
'''Eric Weinstein:''' Well no, I can't say no—I don't know of it.

01:36:33<br>
'''Brian Keating:''' I don't know, that's what I'm saying. People have brought it up, but yes, has it?

01:36:39<br>
'''Eric Weinstein:''' Has anyone? I don't know.

01:36:41<br>
'''Brian Keating:''' I don't know. Yeah.

01:36:42<br>
'''Eric Weinstein:''' So the point is that spinors on 14 look like spinors on 4 tensor spinors on some version of 10.

01:36:50<br>
'''Brian Keating:''' Yeah.

01:36:51<br>
'''Eric Weinstein:''' And whether you're talking about \(\text{Spin}(10)\) models, \(\text{SU}(5)\) models, or \(\text{SU}(4) \times \text{SU}(2) \times \text{SU}(2)\), which is \(\text{Spin}(6) \times \text{Spin}(4)\)—isn't that exactly what we see in the Standard Model? So Frank Wilczek—let me just see if I can find this beautiful quote from him, because he definitely brought this up. And what I recently did when I had him on my podcast, which we haven't released—so, if we go over to my screen share...

01:37:30<br>
'''Brian Keating:''' Give me one second. Let me do this. Here we go. And... There we go. Yep.

01:37:42<br>
'''Eric Weinstein:''' Let me read it. "A particularly intriguing feature of \(\text{SO}(10)\)," which is really \(\text{Spin}(10)\), or it could be \(\text{Spin}(6,4)\), "is its spinor representation, used to house the quarks and leptons, in which the states have a simple representation in terms of basis states labeled by a set of "+" and "-" signs. Perhaps this suggests composite structure." Now here's the sentence that just floored me.
"Alternatively, one could wonder whether the occurrence of spinors both in internal space and in space-time is more than a coincidence." And then he pulls back immediately, "These are just intriguing facts; they are not presently incorporated in any compelling theoretical framework as far as I know." Geometric Unity is that compelling framework. 01:38:26<br> '''Brian Keating:''' Awesome. Very interesting. So as we wrap up, I do want to see if there are any other videos you'd like to show that would help the reader, or again, I'm going to put this in the chat box so people can peruse it. I did put it in the actual YouTube box description, so people can find that at their leisure. Let me see here. Oh I see what's going on. ===Geometric Unity Document=== 01:38:56<br> '''Eric Weinstein:''' Well I should say that I... Look, let's be honest. I said I was going to release a document, and clearly we haven't. Okay, April Fool's. April Fool's. 01:38:57<br> '''Brian Keating:''' Uh oh, the big reveal! 01:39:17<br> '''Eric Weinstein:''' Go to geometricunity.org. 01:39:21<br> '''Brian Keating:''' geometricunity.org. 01:39:22<br> '''Eric Weinstein:''' And, yeah, call that up. And then Brian, why don't you be the first to put your email address in to request a copy? I wouldn't call it a paper, I'd call it a draft. One of the things I'm looking to do is I'm looking to get constructive feedback from people who want to help me succeed, as opposed to people who just want to be dicks and take me down, because that's just, to be honest, not very interesting to me, and I've had a little taste of that, and I'm not that interested. What I would love is to bring your positive energy. Download it, read it, recognize that more or less, I've been cobbling this together from a million and one different scraps, and that my ability to talk in this way has been degrading for years because I have no one to talk to. I'm not in a department, I'm doing this completely on my own. And I was a little bit frightened to figure out just how much I'd forgotten. 01:40:20<br> So we're still finding scraps of paper, and files on old discs, and things like that. I hope that the notation is getting more and more standard, that there are fewer errors. But there's clearly, you know, this is basically me going back to 1983, '84, and all the time in between, where mostly I didn't talk about this with anybody. And this has been really terrifying, because you know, I'm not a physicist, I don't come from this community. I revere the community. I don't think the community has been behaving well recently, I don't love saying that. But I think the community is in a desperate situation, and let's find out whether I have anything to say or I'm just blowing hot air. I'm not afraid of that. But you know, what would really be meaningful to me is for people to bring kindness, benefit of the doubt, hope, and recognition that it's pretty tough to try to do all this on your own, Be constructive and take a look. I think there are two email addresses on the paper in draft form, one for technical feedback and one for general feedback. So I hope that there's a lot of food for thought. 01:41:43<br> I do think that—let me just close this out. I think it's a coherent story. I think it's the first time I've ever heard a coherent story about how a very simple beginning would produce something that would look like our world. 
There are things that I would call predictions in it, that talk about what internal quantum numbers you would expect to find next: likely, there's much more matter, there's matter that should be dark, there's matter that might be luminous but not at the right energy level yet. You would have to, in order to compute with it, be able to figure out what fields have acquired VEVs (vacuum expectation values), and where we are in anthropic spaces in some places. But the internal coherence is much sharper than a few—you know, there are still some things that I'm trying to locate my favorite version of. One is the Shiab operator. I know how to produce Shiab operators in general, but I had a sheet of paper with—do you remember paper with feeds, with holes on either side?

01:42:44<br>
'''Brian Keating:''' Oh yeah, loose leaf—oh, feed—oh, printer paper. Printer paper.

01:42:47<br>
'''Eric Weinstein:''' Not loose leaf. Printer paper.

01:42:49<br>
'''Brian Keating:''' Dot matrix.

01:42:50<br>
'''Eric Weinstein:''' Yeah. So I did some calculations in representation theory that came up with the projections that I used to use, that I'm looking for. And the thing that I remember is that they've got yellow highlighter and these perforated holes on either side. I haven't been able to find it yet.

===Conclusion===

01:43:08<br>
So it's a very long process, taking about 37 years of speculation, sometimes more active than others, and trying to put it in one document. So I would really appreciate it if people wanted to take a gander through it. Try to see some of the ideas, and recognize that if we are going to get off this planet, with its hydrogen bombs and crazy leaders, and diversify and take some bets, rockets are not going to do it. There is no real "Mars or Bust" or "Occupy Mars" strategy. There's one quote that keeps coming back to me: "Our home is in the stars or not at all." If we're gonna sit here on a hot, crowded planet with thermonuclear weapons, maybe we have hundreds of years, but we don't have thousands. If we're going to get off this planet and go someplace interesting, we're going to have to recognize that we don't have the source code yet in Einstein, and it's very limiting, and we're going to have to actually say, "What is the source code?" And if it turns out that we can find it, we're gonna have to be good stewards, and we're not going to do the same thing that we've been doing by handing the stuff over to leaders who don't take seriously the burdens of godlike powers that we, the technical people, bestowed. So Brian, thanks for having me on, and it's a pleasure to interact with your audience.

01:44:29<br>
'''Brian Keating:''' Eric, it's a pleasure to have you on the show. As always, you're welcome back anytime. I do love the fact that you made this promise back in early or late December of 2020, that year that—may it soon be forgotten.

01:44:47<br>
'''Eric Weinstein:''' I didn't promise, I said I was gonna try. I said I was gonna try.

01:44:52<br>
'''Brian Keating:''' That's right. Well, you succeeded, you succeeded for sure, Eric. I want to thank you for your generosity of time, and spirit, and advice that you've given to me. I hope I can help to serve you in this, wherever this project may take you. It's now out of your hands, it's into the world, and it's going to hopefully sprout many, many delightful new discoveries for the benefit of all mankind, as our friend Alfred Nobel so warmly engendered upon the world. Eric, best of luck, congratulations.
We'll do a part three next year on this date, on this auspicious date, and let it forever be known as a day of famy, not infamy, for years to come in physics, if we can follow the lead of the generous, the mercurial, the genierrific Eric Weinstein. Thank you so much, Eric. Enjoy the day, and we look forward to seeing you on Joe Rogan... tomorrow? Or when will that podcast be out?

01:45:45<br>
'''Eric Weinstein:''' I think so. ''L'Shana Haba'ah'' in the electron layer.

01:45:49<br>
'''Brian Keating:''' Okay, ''inshallah''. Goodnight, everybody. Please do subscribe and like this podcast. We have Michio Kaku coming up. John Mather, winner of the 2006 Nobel Prize—[Video cuts]—magnificent ideas to the space, to make it safe for new ideas and for creativity, because we do have this one universe, this one life, and it is eminently precious. So for now, thanking you all, enjoy the rest of your evening, and thanking you, Eric. Here's a musical outro from our friend Miguel Tully, proprietor of the Yeti Tears podcast, Spotify, and YouTube channel. Good night, everybody.

[[Category:Eric Weinstein Content]]
[[Category:Geometric Unity]]
[[Category:Guest Appearances]]
[[Category:YouTube]]
[[Category:Video]]
CommonCrawl
Eur. Phys. J. C (2008) 53: 21-39 Regular Article - Experimental Physics Measurement of αs with radiative hadronic events G. Abbiendi2, C. Ainsley5, P.F. Åkesson7, G. Alexander21, G. Anagnostou1, K.J. Anderson8, S. Asai22,23, D. Axen27, I. Bailey26, E. Barberio7,49, T. Barillari32, R.J. Barlow15, R.J. Batley5, P. Bechtle25, T. Behnke25, K.W. Bell19, P.J. Bell1, G. Bella21, A. Bellerive6, G. Benelli4, S. Bethke32, O. Biebel31, O. Boeriu9, P. Bock10, M. Boutemeur31, S. Braibant2, R.M. Brown19, H.J. Burckhart7, S. Campana4, P. Capiluppi2, R.K. Carnegie6, A.A. Carter12, J.R. Carter5, C.Y. Chang16, D.G. Charlton1, C. Ciocca2, A. Csilling29, M. Cuffiani2, S. Dado20, M. Dallavalle2, A. De Roeck7, E.A. De Wolf7,52, K. Desch25, B. Dienes30, J. Dubbert31, E. Duchovni24, G. Duckeck31, I.P. Duerdoth15, E. Etzion21, F. Fabbri2, P. Ferrari7, F. Fiedler31, I. Fleck9, M. Ford15, A. Frey7, P. Gagnon11, J.W. Gary4, C. Geich-Gimbel3, G. Giacomelli2, P. Giacomelli2, M. Giunta4, J. Goldberg20, E. Gross24, J. Grunhaus21, M. Gruwé7, A. Gupta8, C. Hajdu29, M. Hamann25, G.G. Hanson4, A. Harel20, M. Hauschild7, C.M. Hawkes1, R. Hawkings7, G. Herten9, R.D. Heuer25, J.C. Hill5, D. Horváth29,36, P. Igo-Kemenes10, K. Ishii22,23, H. Jeremie17, P. Jovanovic1, T.R. Junk6,42, J. Kanzaki22,23,54, D. Karlen26, K. Kawagoe22,23, T. Kawamoto22,23, R.K. Keeler26, R.G. Kellogg16, B.W. Kennedy19, S. Kluth32, T. Kobayashi22,23, M. Kobel3,53, S. Komamiya22,23, T. Krämer25, A. Krasznahorkay Jr.30,38, P. Krieger6,45, J. von Krogh10, T. Kuhl25, M. Kupper24, G.D. Lafferty15, H. Landsman20, D. Lanske13, D. Lellouch24, J. Letts48, L. Levinson24, J. Lillich9, S.L. Lloyd12, F.K. Loebinger15, J. Lu27,35, A. Ludwig3,53, J. Ludwig9, W. Mader3,53, S. Marcellini2, A.J. Martin12, T. Mashimo22,23, P. Mättig46, J. McKenna27, R.A. McPherson26, F. Meijers7, W. Menges25, F.S. Merritt8, H. Mes6,34, N. Meyer25, A. Michelini2, S. Mihara22,23, G. Mikenberg24, D.J. Miller14, W. Mohr9, T. Mori22,23, A. Mutter9, K. Nagai12, I. Nakamura22,23,55, H. Nanjo22,23, H.A. Neal33, S.W. O'Neale1, A. Oh7, M.J. Oreglia8, S. Orito22,23, C. Pahl32, G. Pásztor4,40, J.R. Pater15, J.E. Pilcher8, J. Pinfold28, D.E. Plane7*, O. Pooth13, M. Przybycień7,47, A. Quadt32, K. Rabbertz7,51, C. Rembser7, P. Renkel24, J.M. Roney26, A.M. Rossi2, Y. Rozen20, K. Runge9, K. Sachs6, T. Saeki22,23, E.K.G. Sarkisyan7,43, A.D. Schaile31, O. Schaile31, P. Scharff-Hansen7, J. Schieck32, T. Schörner-Sadenius7,59, M. Schröder7, M. Schumacher3, R. Seuster13,39, T.G. Shears7,41, B.C. Shen4, P. Sherwood14, A. Skuja16, A.M. Smith7, R. Sobie26, S. Söldner-Rembold15, F. Spano8,57, A. Stahl13, D. Strom18, R. Ströhmer31, S. Tarem20, M. Tasevsky7,37, R. Teuscher8, M.A. Thomson5, E. Torrence18, D. Toya22,23, I. Trigger7,56, Z. Trócsányi30,38, E. Tsur21, M.F. Turner-Watson1, I. Ueda22,23, B. Ujvári30,38, C.F. Vollmer31, P. Vannerem9, R. Vértesi30,38, M. Verzocchi16, H. Voss7,50, J. Vossebeld7,41, C.P. Ward5, D.R. Ward5, P.M. Watkins1, A.T. Watson1, N.K. Watson1, P.S. Wells7, T. Wengler7, N. Wermes3, G.W. Wilson15,44, J.A. Wilson1, G. Wolf24, T.R. Wyatt15, S. Yamashita22,23, D. Zer-Zion4, L. 
Zivkovic20 and The OPAL Collaboration 1 School of Physics and Astronomy, University of Birmingham, Birmingham, B15 2TT, UK 2 Dipartimento di Fisica dell' Università di Bologna and INFN, 40126, Bologna, Italy 3 Physikalisches Institut, Universität Bonn, 53115, Bonn, Germany 4 Department of Physics, University of California, Riverside, CA, 92521, USA 5 Cavendish Laboratory, Cambridge, CB3 0HE, UK 6 Ottawa-Carleton Institute for Physics, Department of Physics, Carleton University, Ottawa, Ontario, K1S 5B6, Canada 7 CERN, European Organisation for Nuclear Research, 1211, Geneva 23, Switzerland 8 Enrico Fermi Institute and Department of Physics, University of Chicago, Chicago, IL, 60637, USA 9 Fakultät für Physik, Albert-Ludwigs-Universität Freiburg, 79104, Freiburg, Germany 10 Physikalisches Institut, Universität Heidelberg, 69120, Heidelberg, Germany 11 Department of Physics, Indiana University, Bloomington, IN, 47405, USA 12 Queen Mary and Westfield College, University of London, London, E1 4NS, UK 13 III Physikalisches Institut, Technische Hochschule Aachen, Sommerfeldstrasse 26–28, 52056, Aachen, Germany 14 University College London, London, WC1E 6BT, UK 15 School of Physics and Astronomy, Schuster Laboratory, The University of Manchester, Manchester, M13 9PL, UK 16 Department of Physics, University of Maryland, College Park, MD, 20742, USA 17 Laboratoire de Physique Nucléaire, Université de Montréal, Montréal, Québec, H3C 3J7, Canada 18 Department of Physics, University of Oregon, Eugene, OR, 97403, USA 19 Rutherford Appleton Laboratory, Chilton, Didcot, Oxfordshire, OX11 0QX, UK 20 Department of Physics, Technion-Israel Institute of Technology, Haifa, 32000, Israel 21 Department of Physics and Astronomy, Tel Aviv University, Tel Aviv, 69978, Israel 22 International Centre for Elementary Particle Physics and Department of Physics, University of Tokyo, Tokyo, 113-0033, Japan 23 Kobe University, Kobe, 657-8501, Japan 24 Particle Physics Department, Weizmann Institute of Science, Rehovot, 76100, Israel 25 Institut für Experimentalphysik, Universität Hamburg/DESY, Notkestrasse 85, 22607, Hamburg, Germany 26 Department of Physics, University of Victoria, P.O. Box 3055, Victoria, BC, V8W 3P6, Canada 27 Department of Physics, University of British Columbia, Vancouver, BC, V6T 1Z1, Canada 28 Department of Physics, University of Alberta, Edmonton, AB, T6G 2J1, Canada 29 Research Institute for Particle and Nuclear Physics, 1525, Budapest, P.O. Box 49, Hungary 30 Institute of Nuclear Research, 4001, Debrecen, P.O. Box 51, Hungary 31 Sektion Physik, Ludwig-Maximilians-Universität München, Am Coulombwall 1, 85748, Garching, Germany 32 Max-Planck-Institut für Physik, Föhringer Ring 6, 80805, München, Germany 33 Department of Physics, Yale University, New Haven , CT, 06520, USA 34 TRIUMF, Vancouver, V6T 2A3, Canada 35 University of Alberta, Edmonton, Alberta, Canada 36 Institute of Nuclear Research, Debrecen, Hungary 37 Institute of Physics, Academy of Sciences of the Czech Republic, 18221, Prague, Czech Republic 38 Department of Experimental Physics, University of Debrecen, Debrecen, Hungary 39 MPI München, München, Germany 40 Research Institute for Particle and Nuclear Physics, Budapest, Hungary 41 Dept of Physics, University of Liverpool, Liverpool, L69 3BX, UK 42 Dept. 
Physics, University of Illinois at Urbana-Champaign, Urbana-Champaign, USA 43 The University of Manchester, Manchester, M13 9PL, UK 44 Dept of Physics and Astronomy, University of Kansas, Lawrence, KS, 66045, USA 45 Dept of Physics, University of Toronto, Toronto, Canada 46 Bergische Universität, Wuppertal, Germany 47 University of Mining and Metallurgy, Cracow, Poland 48 University of California, San Diego, USA 49 The University of Melbourne, Victoria, Australia 50 IPHE Université de Lausanne, 1015, Lausanne, Switzerland 51 IEKP Universität Karlsruhe, Karlsruhe, Germany 52 Physics Department, University of Antwerpen, 2610, Antwerpen, Belgium 53 Technische Universität, Dresden, Germany 54 High Energy Accelerator Research Organisation (KEK), Tsukuba, Ibaraki, Japan 55 University of Pennsylvania, Philadelphia, Pennsylvania, USA 56 TRIUMF, Vancouver, Canada 57 Columbia University, New York, USA 58 CERN, Geneva, Switzerland 59 DESY, Hamburg, Germany

* e-mail: [email protected]

Revised: 12 October 2007

Hadronic final states with a hard isolated photon are studied using data taken at centre-of-mass energies around the mass of the Z boson with the OPAL detector at LEP. The strong coupling αs is extracted by comparing data and QCD predictions for event shape observables at average reduced centre-of-mass energies ranging from 24 GeV to 78 GeV, and the energy dependence of αs is studied. Our results are consistent with the running of αs as predicted by QCD and show that within the uncertainties of our analysis event shapes in hadronic Z decays with hard and isolated photon radiation can be described by QCD at reduced centre-of-mass energies. Combining all values from different event shape observables and energies gives αs(MZ)=0.1182±0.0015(stat.)±0.0101(syst.).

© Springer-Verlag / Società Italiana di Fisica, 2008
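The quoted value αs(MZ) = 0.1182 can be evolved to the reduced centre-of-mass energies of 24-78 GeV with the QCD renormalization group. As a rough illustration (the textbook one-loop formula, not the higher-order predictions used in the paper), the Python sketch below runs the coupling down from MZ; the input value and the five-flavour choice follow the abstract, while the sample energies are simply illustrative points in the 24-78 GeV range.

```python
import math

def alpha_s_one_loop(Q, alpha_s_mz=0.1182, mz=91.1876, nf=5):
    """One-loop QCD running coupling evolved from alpha_s(MZ)."""
    b0 = (33 - 2 * nf) / (12 * math.pi)
    return alpha_s_mz / (1 + b0 * alpha_s_mz * math.log(Q**2 / mz**2))

for Q in (24, 38, 58, 78, 91.1876):
    print(f"Q = {Q:7.1f} GeV  ->  alpha_s = {alpha_s_one_loop(Q):.4f}")
```

At one loop the coupling grows from about 0.12 at MZ to about 0.15 near 24 GeV, which is the qualitative energy dependence the measurement tests.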
Explain how (or if) a box full of photons would weigh more due to massless photons

I understand that mass-energy equivalence is often misinterpreted as saying that mass can be converted into energy and vice versa. The reality is that energy is always manifested as mass in some form, but I struggle with some cases:

Understood Nuclear Decay Example

In the case of a simple nuclear reaction, for instance, the total system mass remains the same since the mass deficit (in rest masses) is accounted for in the greater relativistic masses of the products per $E=\Delta m c^2$. When a neutron decays, you are left with a fast proton and a relativistic electron. If you could weigh those two without slowing them down, you would find they weighed as much as the original neutron.

Light in a Box

This becomes more difficult for me when moving to massless particles like photons. Photons can transmit energy from one heavy particle to another. When a photon is absorbed the relativistic mass (not the rest mass) of the (previously stationary) particle that absorbs it increases. But if my understanding is correct, the energy must still be manifested as mass somehow while the photon is in-flight, in spite of the fact that the photon does not have mass. So let's consider a box with the interior entirely lined with perfect mirrors. I have the tare weight of the box with no photons in it. When photons are present the box has an additional quantifiable amount of energy (quantified below) due to the in-flight photons. Say there are $N$ photons... obviously assume $N$ is large.

$$\Delta m = \frac{ E }{ c^2 } = \frac{ N h }{ \lambda c}$$

Interactions are limited to reflections with the wall, which manifest as a constant pressure on the walls. If I hold this box in a constant gravitational field (like the surface of Earth) then there will be a gradient in the pressure that pushes down slightly. Is this correct? Wouldn't there still technically be mass as the photons are in-flight, which would cause its own gravitational field just as all matter does? How is this all consistent with the assertion that photons are massless? Is it really correct to say that photons don't have mass? It seems to be a big stretch. Please offer a more complete and physically accurate account of this mirror-box.

energy electromagnetic-radiation mass – Alan Rominger

The statement that photons are massless means that photons do not have rest mass. In particular, this means that, in units where $c=1$, the magnitude of the photon 3-momentum must be equal to the total energy of the photons, rather than the standard relationship where $m^{2} = E^{2}-p^{2}$. But, you can create multi-photon systems where the net momentum is zero, since momentum adds as a vector. When you do this, however, since the energy of a non-bound state is always non-negative, the energies just add. So, this system looks just like the rest frame of a massive particle, which has energy associated with its mass and nothing else. The statement about gravity is a little bit more subtle, but all photon states will interact with the gravitational field, thanks to the positive results of the light-bending observations that have been made over the past century. So you don't even need a construction like this to get photons "falling" in a gravitational field. – Jerry Schirmer

Would it then be correct to say that the relativistic mass, not the rest mass, of a photon is $m=\frac{h}{\lambda c}$?
Then all statements I would make about the mirror box could be done using this. I think that $(m_0 c^2)^2 = E^2 - (p c)^2$ (this might be what you had in mind) still holds, however, since $m_0=0$. – Alan Rominger May 31 '11 at 14:46

@Zassoundtsukushi: You can do that, but it's somewhat conceptually complicated to use the term "relativistic mass" eventually--the "relativistic mass" is really just the energy, and has properties closer to an energy. – Jerry Schirmer May 31 '11 at 15:00

Yes, to the extent that your unit system allows it (like $\frac{MeV}{c^2}$ for mass) I agree that the energy of the photons can be looked at as "just the energy". My question is sufficiently answered by noting that this photon relativistic mass or energy (which one you use is semantics) exhibits all of the properties expected of that quantity of mass. This means that it is affected by gravitational fields and warps space-time itself. This is more than I previously felt comfortable claiming, but the conceptual picture here seems to be consistent. – Alan Rominger May 31 '11 at 15:45

@Zassoundsukushi: the reason why it won't work out is technical--radiation gravitates, but not in the same way as matter, and rest mass and relativistic energies don't add together in the same way when you're creating systems out of many particles. At a first glance, you're ok using them semi-interchangeably, but realize that it's complication-prone if you plan on going deeper into this stuff. – Jerry Schirmer May 31 '11 at 16:28

@Jerry "not in the same way as matter" -> how so? Gravitation couples to stress energy tensor which doesn't really differ all that much between massless and massive systems... – Marek May 31 '11 at 19:04

There are no confusions in your understanding, everything you said is correct, and it is the nontrivial content of Einstein's $E=mc^2$ paper. These systems are the reason that "relativistic mass", as introduced by Tolman, is pedagogically useful. The concept that we call "mass" in our day-to-day life is the energy of a system (in mass units), and when you only use the word mass to mean "rest-mass", the intuitive concept is changed somewhat. For the atomic fission, the fast moving fragments have energy which is equal to the initial bomb energy. If you weigh them without slowing them down (for example, if they are charged and you capture them by making them do circles in a magnetic field), the weight you would find on a scale once they are captured would increase by the relativistic mass (the energy over c^2). The photons in a spherical mirror box weigh the box down exactly as the relativistic mass of the photons inside. The pull of the Earth on these photons is on the relativistic mass. If you replaced the photons with a particle gas at the same pressure, and removed a little mass from the walls to make the total mass be positive, the gravitational field outside will be the same; this isn't true if you replace the photons with a pressureless block with a weight equal to their relativistic mass, only because the pressure contributes to the gravitational field too. – Ron Maimon

I think that there is some confusion in your understanding of relativistic physics in the statement here:

"In the case of a simple nuclear reaction, for instance, the total system mass remains the same since the mass deficit (in rest masses) is accounted for in the greater relativistic masses of the products per $E=\Delta m c^2$.
When a neutron decays, you are left with a fast proton and a relativistic electron. If you could weigh those two without slowing them down, you would find they weighed as much as the original neutron."

The correct statement is that the summed four vector of all the decay products would have the effective mass of a neutron. Masses are not conserved in relativistic physics, in an analogous way that lengths are not conserved when adding vectors in three dimensions. What is conserved is energy and momentum, a four vector whose measure is $mc$, where m is the effective mass of the system, similar to the length of a three-vector after the addition of three-vectors. When a pi0 goes into two photons, it is true that the available energy for each gamma in the centre of mass system of the pi0 is half the mass of the pi0, and the measure of the invariant mass of those two gammas will be the mass of the pi0. When more particles are involved, let's say two pi0's, then the invariant mass of the four gammas' four-momenta is not additive to two pi0 masses. It is better to forget about convoluted arguments with masses and think of four momenta when in the relativistic regime. Now a box of photons will have a four momentum sum in measure equal to $E^2/c^2 - |p|^2 = m^2c^2$ (please see the wiki link for clear terminology). If the three vector momentum sums up to zero, the effective mass of the photons in the box will be $E/c^2$. Small but there to be weighed. – anna v

There is no confusion, these statements are unambiguously 100% correct, and are the reason physicists used relativistic mass as a concept. – Ron Maimon Jul 4 '12 at 5:50
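The arithmetic of the question's formula is easy to check numerically. Here is a minimal sketch; the photon count and wavelength are made-up illustrative values, not numbers from the thread.

```python
h = 6.62607015e-34  # Planck constant, J*s
c = 2.99792458e8    # speed of light, m/s

def photon_box_mass(n_photons, wavelength_m):
    """Mass equivalent of the trapped light, Delta m = N*h/(lambda*c).
    Valid because the mirror reflections keep the net photon momentum zero,
    so the total photon energy acts as a rest mass of the combined system."""
    return n_photons * h / (wavelength_m * c)

# Example: 1e25 green photons (532 nm) add roughly 4e-11 kg to the box
print(photon_box_mass(1e25, 532e-9))
```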
BMC Health Services Research

Going beyond the mean: economic benefits of myocardial infarction secondary prevention

Viktor von Wyl ORCID: orcid.org/0000-0002-8754-97971,2, Agne Ulyte1, Wenjia Wei1, Dragana Radovanovic1, Oliver Grübner1,3, Beat Brüngger1,4, Caroline Bähler1,4, Eva Blozik3,5, Holger Dressel1 & Matthias Schwenkglenks1

BMC Health Services Research volume 20, Article number: 1125 (2020)

Using the example of secondary prophylaxis of myocardial infarction (MI), our aim was to establish a framework for assessing cost consequences of compliance with clinical guidelines, thereby taking cost trajectories and cost distributions into account. Swiss mandatory health insurance claims from 1840 persons with hospitalization for MI in 2014 were analysed. Included persons were predominantly male (74%), had a median age of 73 years, and 71% were pre-exposed to drugs for secondary prophylaxis prior to index hospitalization. Guideline compliance was defined as being prescribed recommended 4-class drug prophylaxis including drugs from the following four classes: beta-blockers, statins, aspirin or P2Y12 inhibitors, and angiotensin-converting enzyme inhibitors or angiotensin receptor blockers. Health care expenditures (HCE) accrued over 1 year after index hospitalization were compared by compliance status using two-part regression, trajectory analysis, and counterfactual decomposition analysis. Only 32% of persons received recommended 4-class prophylaxis. Compliant persons had lower HCE (− 4865 Swiss Francs [95% confidence interval − 8027; − 1703]) and were more likely to belong to the most favorable HCE trajectory (with 6245 Swiss Francs average annual HCE and comprising 78% of all studied persons). Distributional analyses showed that compliance-associated HCE reductions were more pronounced among persons with HCE above the median. Compliance with recommended prophylaxis was robustly associated with lower HCE and more favorable cost trajectories, but mainly among persons with high health care expenditures. The analysis framework is easily transferrable to other diseases and provides more comprehensive information on HCE consequences of non-compliance than mean-based regressions alone.

Unwarranted variation in health care provision, reflected by deviation from treatment recommendations, is a ubiquitous problem and is associated with inefficient resource allocation, suboptimal treatment outcomes, lower quality of care, and higher health care expenditures (HCE) [1, 2]. However, identifying which health care services are ineffective or appropriate for a specific patient is challenging. A crucial step towards improving efficiency of care is to establish a link between deviations from recommended care and inferior health and financial outcomes. Real-world studies of care provision are usually retrospective and observational and rely on secondary data sources (that is, data initially collected for other purposes), which brings about risks of biases such as residual confounding [3]. Among the potential biases described in the literature, the "healthy adherer bias" is of particular concern [4]. This bias describes the effect that healthier persons tend to adhere better to prescribed treatments, for example because they are generally more health-conscious. Therefore, compliance may appear to exert beneficial effects on specific health outcomes when such benefits are in fact driven by unmeasured comparator group differences.
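A small simulation makes the healthy adherer bias tangible. The numbers below are purely illustrative and not derived from the study data: a latent health variable drives both adherence and costs, so the naive adherent-versus-non-adherent cost gap overstates the true effect of adherence.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
health = rng.normal(size=n)  # latent health consciousness
# Healthier persons are more likely to adhere (logistic selection)
adherent = rng.random(n) < 1 / (1 + np.exp(-health))
# True adherence effect on annual costs: -1000; health independently lowers costs
costs = 10_000 - 1_000 * adherent - 2_000 * health + rng.normal(0, 500, n)
naive_gap = costs[adherent].mean() - costs[~adherent].mean()
print(f"naive gap: {naive_gap:,.0f} (true effect: -1,000)")  # roughly -2,600
```

Weighting or adjusting for the confounder (here, the latent health variable) recovers the true effect; this is the role the inverse probability weights play in the methods described below.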
Compliance has different facets: it can involve 1) prescription compliance of physicians with recommended guidelines, 2) drug refill, or 3) intake by patients. Moreover, health care needs, as well as treatment compliance, are often incompletely captured by routine databases (e.g. health insurance claims) because persons not accessing care are not recorded and standardized diagnostic information is often missing. Neither is actual drug intake by patients commonly part of administrative or health claims databases. Additionally, there is currently no established methodological framework for investigations into HCE implications of (non-)compliance with recommended health care. The limited scientific literature on the topic is dominated by mean-based methods, that is, cost outcomes are aggregated to total cost averages and analyzed in regression frameworks. While certainly valid and appropriate under clearly defined circumstances (e.g. cost-effectiveness studies nested in randomized trials), such approaches tend to discard valuable information regarding cost distribution, timing of clinical events, or the existence of subgroups "falling outside the norm". Specifically, treatment recommendation compliance may not translate into health and cost benefits over the full disease-severity spectrum, but be limited to specific subgroups such as healthier persons without co-morbidities.

Therefore, this study aimed to revisit the effect of prescription and prescription fill compliance (as covered by health insurance claims databases) on different monetary and health outcomes, using the well-described example of secondary prevention of myocardial infarction (MI). Pharmacological prevention after acute MI events is, in most circumstances, considered standard of care by major treatment recommendations [5,6,7,8]. Treatment recommendations state that prophylactic treatment should be initiated after hospital discharge, ideally including drugs from 4 classes. In particular, prophylactic treatment should contain dual antiplatelet therapy (DAPT) including aspirin and a P2Y12 inhibitor (prasugrel, ticagrelor, or clopidogrel), lipid-lowering drugs, particularly high-intensity statins (STAT), angiotensin-converting enzyme (ACE) inhibitors or angiotensin receptor blockers (ARB), and beta-blockers (BB). However, real-world observations suggest that patients are frequently also prescribed treatment combinations with fewer than four drug classes (e.g. 3-class treatments based on STAT, ACE inhibitors/ARB, and BB) [5, 9]. The efficacy of these treatment combinations - in terms of prevention of further MIs, mortality, or MI-caused re-hospitalizations - has been demonstrated by randomized controlled trials. However, the real-life effectiveness, particularly in light of the imperfect compliance to be expected from physicians (in timely prescribing of these drugs) and patients (by not taking all recommended drugs), as well as the financial outcomes of taking secondary prevention, are much less explored [10]. Several studies have reported substantial net cost benefits (that is, overall reduced health care expenditures) among persons who complied with recommendations compared with non-compliers, despite compliers having accrued higher medication expenditures [3, 11,12,13]. Moreover, secondary prophylaxis has been associated with clinical benefits [9, 14, 15].
On the basis of health insurance claims data of persons who were hospitalized for an MI event, this study evaluated health and financial outcome differences between persons who were prescribed (and have filled prescriptions for) secondary prevention of MI as recommended by guidelines compared to others who were not. The database did not allow us to determine, however, whether drugs were truly taken in by patients. Standard analyses were complemented by novel methods that cover additional outcome dimensions by looking at distributional cost differences and cost trajectory differences between compliance groups, thereby leading to further insights into the dynamics of health care provision.

Overall approach

This study evaluated the HCE implications of non-compliance with MI secondary prevention by applying recent methods from the causal inference framework to better control for healthy adherer bias (Fig. 1, analysis 1). HCE distributions (Fig. 1, analysis 2) and longitudinal perspectives (Fig. 1, analysis 3) were applied to provide a more comprehensive picture of (economic) consequences of compliance and to facilitate the investigation of subgroup-specific effects [16].

Study Flow Chart. Prior exposure was defined as having received P2Y12 inhibitors, ACE/ARB (angiotensin converting enzyme inhibitors/angiotensin-receptor blockers), Aspirin, or Beta blocker prior to Index date. Abbreviations: MI: Myocardial Infarction

Setting and location

All analyses were performed using health insurance claims data. Swiss mandatory health insurance characteristics are described elsewhere [17]. In brief, the insurance is comprehensive and reimburses costs using fixed fee (inpatient) and fee-for-service (outpatient) systems. There is a standard annual deductible of CHF 300 (that is, the insurance only covers HCE exceeding the deductible amount). Insurees can receive premium rebates for choosing higher deductibles (CHF 500 to 2500). Of note, the high deductibles and out-of-pocket payments of the Swiss system have been criticized by some studies for leading to foregone healthcare [18]. The analysis is based on anonymized, administrative claims data from mandatory health insurance, provided by Helsana Insurance Group. This health insurer covers approximately 1.2 million people (15% of the full population), representative of the Swiss population. The database also included information on enrollees' sociodemographic characteristics (including date of death), choice of insurance characteristics, as well as details on all reimbursed medical services (e.g. date of care provision, length of stay). As illustrated in Fig. 2, this analysis included persons who were insured with Helsana during 2014 and 2015. Persons were selected if they were hospitalized with an MI (index hospitalization) until Dec. 27, 2014, as indicated by the International Classification of Diseases, 10th revision (ICD-10) codes I21 (acute MI) and I22 (subsequent MI). We excluded patients with incomplete insurance coverage in 2014, asylum seekers, patients living outside Switzerland, Helsana employees, patients living in nursing homes and receiving lump-sum reimbursement (which could mask some services received), and those not surviving until the end of the assessment period (Fig. 2) [18].

Study design and definitions of timelines. Abbreviations: MI: myocardial infarction; d: days

Study perspective and time horizon

This study analysed cost consequences from the viewpoint of a Swiss health insurer.
That is, only costs that were reimbursable to insurees by mandatory health insurance were considered. The cost contributions of Swiss cantons for inpatient stays (55% of inpatient costs) were not considered because they are handled directly between cantons and insurers and are therefore largely unaffected by insurance-scheme induced (dis-)incentives for compliance.

Currency, price date, and conversion

All HCE are expressed in Swiss Francs (CHF), with 1 CHF being the equivalent of 0.92 Euros or 1 US $ (as of October 2019). Because all outcome analyses encompass a 360-day time-frame, no discount rates were applied. Study data were anonymized before analysis. According to the national regulations, ethical approval was not required for this type of study.

Outcome and explanatory variables

Economic outcome variables

This study analyzed HCE accrued during the outcome observation period (Fig. 2) from days 31 to 390 after discharge from the index hospitalization, considering all inpatient and outpatient services received during that period. HCE was categorized into outpatient treatment expenditures, inpatient care, drugs, and other costs (such as aids, home care). The primary outcome of the analyses was total HCE, but secondary analyses also looked into specific HCE categories (especially drug costs).

Health outcome variables

In addition, selected health outcomes were analyzed separately, namely deaths or hospitalizations occurring until 360 days during the outcome observation period (Fig. 2).

Main explanatory variable: compliance

The main explanatory variable was compliance with the recommended treatment, defined by reimbursement claims recorded within 30 days after the index date. The 30-day cut-off was chosen based on the reasoning that this time frame provided ample time for re-filling a prescription (as patients sometimes receive medications for a few days at hospital discharge). Treatment recommendation compliance was defined as having filled one or more prescriptions for a 4-class combination therapy including STAT, BBs, ACE, and either aspirin or P2Y12 inhibitors. In sensitivity analyses, persons who received combination therapy containing at least 3 out of the 4 classes were also considered to be treatment recommendation compliant.

Other explanatory variables

All analyses considered socio-demographic, morbidity-, and insurance-related factors. Included variables comprised age, sex, living in a French- or Italian-speaking (as opposed to German-speaking) canton, urbanity of place of living (categorized as rural, sub-urban, or urban), having a high annual deductible of >CHF 500, having at least one supplementary health insurance, being in a managed care model, having pre-existing chronic morbidities requiring regular medication intake (as identified by pharmaceutical cost groups), having used anticoagulation drugs (heparin, vitamin K antagonists, athrombin [19]), having had inpatient stays of at least 3 days or high medication expenditures of at least CHF 5000 in the year prior to the baseline hospitalization. Pharmaceutical cost groups (PCG) are a widely employed means to reliably derive the presence of certain co-morbidities on the basis of prescriptions of disease-specific drugs (e.g. against HIV) [20]. Furthermore, any use of medications used for MI prevention before the index hospitalization (screening period, Fig. 2) was recorded.
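To make the compliance definition concrete, the sketch below flags 4-class compliance from a claims table. The table layout, column names, and drug-class labels are hypothetical (the paper does not describe its data schema); the 30-day window and the class logic follow the definition above.

```python
import pandas as pd

# Hypothetical claims layout: one row per filled prescription with columns
# person_id, fill_date (datetime), drug_class in {"STAT","BB","ACE_ARB","ASA","P2Y12"}.
# index_dates: Series of index hospitalization dates, indexed by person_id.
def four_class_compliant(claims: pd.DataFrame, index_dates: pd.Series) -> pd.Series:
    df = claims.merge(index_dates.rename("index_date"),
                      left_on="person_id", right_index=True)
    window = df[(df.fill_date >= df.index_date) &
                (df.fill_date <= df.index_date + pd.Timedelta(days=30))]
    classes = window.groupby("person_id").drug_class.agg(set)
    antiplatelet = classes.apply(lambda s: bool({"ASA", "P2Y12"} & s))
    core = classes.apply(lambda s: {"STAT", "BB", "ACE_ARB"} <= s)
    # Persons with no claims in the window default to non-compliant
    return (core & antiplatelet).reindex(index_dates.index, fill_value=False)
```

The sensitivity definition (at least 3 of the 4 classes) would simply count the satisfied classes instead of requiring all of them.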
Study subjects were classified by pre-exposure based on prior use of the drugs, such as P2Y12 inhibitor, ACE, ARB, BB, but not high-intensity statins because these are frequently prescribed without a direct link to an elevated MI risk. Because the majority of included patients (71%) were pre-exposed to MI prevention drugs (potentially indicating that the index hospitalization was not the first cardiovascular event), all analyses were performed on the full sample (n = 1840) and on a subset of patients without pre-exposure (n = 542).

Three analyses were conducted to investigate different dimensions of cost differences (Fig. 1). Analysis 1 applied two-part models to annual HCE and medication-related HCE, to identify differences between the groups of compliers and non-compliers. Separate models were estimated for the two compliance definitions (4-class treatments and at least 3-class treatments in the main and sensitivity analysis, respectively). The choice of sensitivity analysis was motivated by a preliminary analysis of different exposure categories that indicated high numbers of 3-class combination prescriptions (Table 1).

Table 1 Baseline Characteristics

Two-part models consisted of a logit part that models the probability of having non-zero HCE, based on covariates x for individual i [21, 22]. The second part included a generalized linear model that estimates the distribution of non-zero HCE.

$$ E\left({HCE}_i \mid x_i\right) = \Pr\left({HCE}_i > 0 \mid x_i\right) \times E\left({HCE}_i \mid {HCE}_i > 0, x_i\right) $$

After initial explorations, a gamma distribution and log-link were chosen for this analysis [21]. The two-part regression estimates were then back-transformed into Swiss Francs [23]. Initial analyses led us to speculate that some persons may not receive prophylactic medications as recommended due to reasons that also influence the outcome of interest ("healthy adherer bias"). Moreover, a non-negligible number of persons died during the observation period. To mitigate these potential problems, inverse probability weighted models were estimated [24]. Weights were derived from a multivariable logistic regression on having drug reimbursement claims that indicate compliance with 4-class treatments (or 3- and 4-class treatments in sensitivity analysis) within 30 days of the index date as well as a separate multivariable regression model for having died after the end of the assessment period and before or at end of the observation period. Death during the observation period may have affected our analysis of cumulative HCE in two ways: 1) deceased persons contributed fewer observation months to the analysis, and 2) end-of-life costs tend to be disproportionately high. Therefore, imbalances in death rates across treatment compliance arms may translate into biased estimates of HCE differences between compliance groups. To mitigate these biases, person-specific weights were calculated as the inverse of the model-based predicted probabilities for receiving recommended drugs and for having died (whereby predictions from both regression models were multiplied to create a single weight). These combined weights were then applied to the two-part regression analysis. We hypothesized that non-compliance will lead to statistically significantly higher HCE after adjustments for group differences, healthy adherer bias and censoring due to death during the observation period (Hypothesis 1).
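As an illustration of the estimation strategy, here is a minimal two-part model sketch in Python with statsmodels. The variable names and data layout are assumptions (the paper cites the Stata twopm package [23], whose implementation is not shown here); the inverse probability weights enter as frequency weights, mirroring the weighting described above.

```python
import numpy as np
import statsmodels.api as sm

def fit_two_part(X, hce, weights=None):
    """Part 1: logit for Pr(HCE > 0); Part 2: gamma GLM with log link
    on the positive costs only. `weights` are inverse probability weights."""
    X = sm.add_constant(X)
    w = np.ones(len(hce)) if weights is None else weights
    part1 = sm.GLM((hce > 0).astype(float), X,
                   family=sm.families.Binomial(), freq_weights=w).fit()
    pos = hce > 0
    part2 = sm.GLM(hce[pos], X[pos],
                   family=sm.families.Gamma(link=sm.families.links.Log()),
                   freq_weights=w[pos]).fit()
    return part1, part2

def expected_hce(part1, part2, X_new):
    """Back-transformed expectation E(HCE | x) in CHF."""
    X_new = sm.add_constant(X_new, has_constant="add")
    return part1.predict(X_new) * part2.predict(X_new)
```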
Analysis step 2 addressed the issue of compliance effects possibly not being equal across the full HCE distribution (e.g. with an effect only visible in high-cost groups). Therefore, we applied a method for counterfactual decomposition to explore differences between the groups of compliant and non-compliant persons across the full HCE distribution [25]. This was achieved by estimating the quantile function of compliant and non-compliant persons across HCE distribution deciles (with 9 split points) and by adjusting for pre-specified covariates. HCE differences between compliant and non-compliant persons were then estimated per quantile, adjusted for population differences between the two groups [25]. Statistical significance was assessed by bootstrap-generated 95% confidence intervals. We hypothesized that HCE differences between compliant and non-compliant persons are not homogenous across the full cost spectrum (Hypothesis 2). In addition, analysis 3 compared longitudinal age- and sex-adjusted HCE trajectories between compliant and non-compliant groups, thereby only considering patients who survived the full observation period because the method does not allow for censoring [26]. Individual HCE trajectories of monthly aggregated, age- and sex-adjusted HCE were smoothed using fifth-degree polynomial linear regression and then k-means clustered, which produced a trajectory-based classification. The optimal number of trajectory groups was determined by an algorithm assessing explained variance and classification stability (supplementary Figure 1). We hypothesized that the trajectory analysis would yield groups that are associated with unfavorable HCE profiles (e.g. high overall costs, substantial spikes) and that non-compliant groups would tend to congregate more in such high-cost groups (Hypothesis 3). Hypothesis 3 was specifically tested using multinomial logistic regressions, with trajectory groups treated as outcome variables, compliance status (yes/no) as variable of interest and confounder adjustments for variables described above. As illustrated by Fig. 1, 1,840 patients had a hospitalization due to an MI and were discharged alive. Of these, 1298 (70.5%) had pre-exposure to secondary prevention drugs, possibly suggesting that the index hospitalization did not represent their first cardiovascular event. Overall, 175 of 1840 (9.5%) persons died during the outcome observation period. Table 1 illustrates further baseline characteristics, stratified by pre-exposure status. Of note, persons with pre-exposure were markedly older (median of 76 years compared to 61 years in the unexposed group), more frequently female (n = 497, 38% vs. n = 157, 29%), and had more pre-existing, medication-treated hypertension (n = 661, 51% vs. n = 48, 9%) and type 2 diabetes (n = 343, 26% vs. n = 39, 7%). Table 1 further shows data on compliance status, which – globally – varied from 54 to 66% for individual drugs. Uptake of 4- and 3-class combination prophylaxes was calculated at 32% (n = 595) and 27% (n = 486) of all patients, respectively. Median [interquartile range] HCE for the observation period were CHF 6779 [2329; 18,390] for the full population, of which a median of CHF 1633 [778; 3028] for medication expenditures. In the subpopulation of previously unexposed persons, corresponding figures for HCE and medication costs were CHF 3455 [1638; 9631] and CHF 1058 [593; 1942], respectively. 
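Two short sketches may help fix ideas for analyses 2 and 3. First, a crude per-decile counterfactual contrast: this is a conditional-quantile stand-in for the decomposition estimator of reference [25], not the authors' implementation. It fits quantile regressions in each compliance group and evaluates both on the compliers' covariates, so that population differences are held fixed.

```python
import numpy as np
import statsmodels.api as sm

def decile_gaps(X_comp, y_comp, X_non, y_non):
    """Adjusted complier-minus-non-complier HCE gap at each decile split point."""
    Xc, Xn = sm.add_constant(X_comp), sm.add_constant(X_non)
    gaps = {}
    for q in np.round(np.arange(0.1, 1.0, 0.1), 1):
        fit_c = sm.QuantReg(y_comp, Xc).fit(q=q)
        fit_n = sm.QuantReg(y_non, Xn).fit(q=q)
        # Evaluate both fits on the compliers' covariates
        gaps[q] = float(np.mean(fit_c.predict(Xc) - fit_n.predict(Xc)))
    return gaps  # negative values = lower HCE for compliers at that decile
```

Second, the trajectory step: the polynomial degree and the number of groups follow the description above, while the data layout (a persons-by-months cost matrix) and the use of scikit-learn are assumptions rather than the authors' code.

```python
import numpy as np
from sklearn.cluster import KMeans

def trajectory_groups(monthly_hce, n_groups=4, degree=5):
    """Smooth each person's monthly HCE with a degree-5 polynomial,
    then k-means cluster the smoothed curves into trajectory groups."""
    months = np.arange(monthly_hce.shape[1])
    smoothed = np.empty_like(monthly_hce, dtype=float)
    for i, y in enumerate(monthly_hce):
        coefs = np.polyfit(months, y, degree)
        smoothed[i] = np.polyval(coefs, months)
    km = KMeans(n_clusters=n_groups, n_init=10, random_state=0)
    return km.fit_predict(smoothed)
```

The resulting group labels can then be related to compliance status with a multinomial logit (for example, statsmodels' MNLogit) to reproduce the relative-risk-ratio comparison used to test Hypothesis 3.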
Description of health outcomes during outcome observation period When grouping the full sample (n = 1840, including patients with pre-exposure) by compliance with 4-class combination therapy (3- or 4-class therapy in the sensitivity analysis, respectively), the number and percentage of persons dying during the outcome observation period was n = 154, 12% (n = 121, 16%) in the group of non-compliers (supplementary Table 1). By contrast, only n = 21, 3.5% (n = 54, 5%) of compliers died during the outcome observation period. Multivariable odds ratios from logistic regression also indicated a nearly 50% lower mortality among compliant persons (odds ratio in main analysis 0.52 [95% confidence interval 0.30; 0.91]; sensitivity analysis 0.64 [0.44; 0.92], supplementary Table 1). Furthermore, the risk for re-hospitalization during the observation period was similar between compliers (main analysis: n = 217, 36.5%; sensitivity analysis: n = 397, 36.7%) and non-compliers (n = 518, 41.6% and n = 338, 44.5%, respectively, supplementary Table 1). Mean-based comparisons Unadjusted total HCE differences for compliant and non-compliant persons are shown in Table 2, stratified by pre-exposure status to prophylactic drugs. Crude average HCE were CHF -5260 lower in persons receiving 4-class treatment. Moreover, the table presents results from standard two-part and inverse probability weighted models. In the main analysis, persons receiving 4-class combination treatment had overall HCE that were CHF -2144 lower compared to non-compliant persons in the unweighted, basic model. When applying inverse probability weights derived from multivariable logistic regression (supplementary Table 2), this difference increased to CHF -4865 and reached statistical significance, but was still smaller than the crude average difference. Estimated differences were somewhat lower when only analyzing persons without pre-exposure, with a crude average difference of CHF -2935, CHF -1708 in the unweighted and CHF -4048 in the weighted model, and did not reach statistical significance. HCE differences were nominally of similar (full population) or even larger (unexposed population) magnitude when considering the use of 3- or 4-class combination treatments as compliant (sensitivity analysis), but also did not reach statistical significance (Table 2). Table 2 Mean-based comparisons of HCE between compliers to recommended secondary prevention (column headings) and non-compliers Moreover, medication costs did not differ statistically significantly between compliance groups, but were nominally higher for compliers in the subgroup of persons without prior pre-exposure to prophylactic drugs (supplementary Table 3). Counterfactual distributional differences The results from the distributional analyses are presented in Fig. 3 and Table 3. Unadjusted distributions of HCE are displayed as split points for the HCE deciles in the second column of Table 3. For example, in the full population, the HCE deciles ranged from CHF 55 in the lowest to CHF 42,046 for the highest decile. Health care expenditure differences between compliers and non-compliers across the full health care expenditure distribution. 
The decomposition analysis took the following potential confounders into account: age, sex, living in a French-speaking or Italian-speaking canton, degree of urbanity of place of living, having a high deductible, participating in a managed care model, having at least one supplementary insurance, having had an inpatient stay in the screening period (other than the index hospitalization for myocardial infarction), having had high medication expenditures of at least CHF 5000 in the screening period, number of pharmaceutical cost groups (which are drug-prescription based indicators for co-morbidities). Percentiles represent the 9 points in the HCE distribution that split the full sample into 10 equally large parts (deciles).

Table 3 Distributional cost composition analysis

The counterfactual decomposition analysis was used to investigate HCE differences between compliance groups over the full HCE distribution (stratified by deciles), taking into account important confounders. In the main analysis, compliance-linked HCE differences were non-zero (indicating lower costs in compliant persons) and reached statistical significance from the fifth decile upward in the distribution (ranging from CHF 995 in the fifth to CHF 3516 in the ninth decile, left column in Table 3). The sensitivity analysis yielded qualitatively similar results that did not reach statistical significance. HCE differences were smaller among persons without pre-exposure but nominally also indicated smaller HCE in compliant persons in the half of the HCE distribution above the median.

Cost trajectories

The cost trajectory analysis was used to group patients into distinct classes, based on longitudinal HCE patterns over a one-year period after the end of the assessment phase (Fig. 4). The k-means based algorithm identified four robust groups, which were reflected both in the full and the previously unexposed groups. Parametrizations with fewer or more groups were inferior with respect to percent correctly classified and residual sums of squares (supplementary Figure 1).

Cost trajectories. Numbers in figure legend indicate: trajectory group number, proportion of the analyzed sample, average total health care expenditures over full 12-month period (standard deviation)

In Fig. 4, the y-axis reflects average monthly HCE, the x-axis illustrates months since start of the outcome observation period. The analyses of the full and unexposed samples each suggested the presence of one large group (n = 1302, 78% and n = 429, 83%) with comparatively low average costs of CHF 6245 (full sample) and CHF 3851 (unexposed group) annually. The second largest groups comprised n = 245, 15% and n = 62, 12% of persons in the respective samples, but with substantially higher annual HCE of CHF 26,504 and CHF 19,584 and occasional peaks. The HCE spikes observed in group 2 (as well as in the high-cost groups 3 and 4) were largely driven by inpatient hospitalization costs (supplementary Figures 2 & 3), which constituted a high fraction of all HCE in all groups but the low-cost group 1. On the basis of these findings, we evaluated our hypothesis that taking 4-class combination treatment (resp. 3- or 4-class therapies in the sensitivity analysis) as recommended may be linked to a higher probability of belonging to the more favorable low-cost group 1.
The main analysis (Table 4) of the full population provided evidence for the hypothesis, as suggested by the overall p-value below the specified threshold of 0.05 and decreasing relative risk ratios below 1 (0.89, 0.71, and 0.20 for trajectory groups 2 to 4 when using the low cost group as reference, respectively). The latter indicates that, after controlling for important confounders, individuals taking 4-class combinations have a decreasing probability of belonging to one of the high-cost groups. Repeating the analysis for the sample of previously unexposed individuals, as well as when applying the sensitivity analysis definition of compliance, yielded inconclusive results, however, mostly because numbers in high-cost trajectory groups were quite small.

Table 4 Comparison of cost trajectory groups

Summary

Table 5 illustrates the conclusions from the individual analyses. Recommended 4-class MI prevention was associated with robust HCE reductions in the full analysis sample, whilst compliance benefits among persons without prior exposure to MI prevention drugs were less clear, also due to a lack of statistical power.

Table 5 Summary of results

Using reimbursement claims data from 1840 Swiss insurees, this analysis attempted to dissect economic consequences of non-compliance with pharmacological secondary prevention after a myocardial infarction. Using different methods to mitigate the healthy adherer bias and to explore effects of compliance across the full HCE distribution, we observed robust benefits of compliance for patients receiving 4-class combination treatment. As shown in Table 5, compliance with 4-class combination treatment was generally associated with statistically significant HCE reductions and more favorable cost trajectory outcomes when analyzing all insurees. Moreover, compliance also went along with a markedly decreased risk for death during the observation period (that is, up to 390 days after index hospitalization discharge), but not for hospital readmission. However, our sample included over 70% of persons who likely had already experienced cardiovascular health problems prior to the index hospitalization and were pre-exposed to drugs for MI prevention. Therefore, we also performed sub-analyses on persons without pre-exposure, thereby assuming that the index hospitalization reflected the first myocardial infarction event. Although conclusions remained qualitatively the same (except in the trajectory analysis), the results no longer reached statistical significance, potentially due to the smaller sample size. Overall, the findings fall well in line with previous research in the same field suggesting health benefits [9, 27] and lower health care expenditures for persons with a myocardial infarction who comply with secondary prevention recommendations [3, 11, 12, 28]. Although conducted in a Swiss setting, the findings are likely transferrable to target populations with similar demographics in other social health insurance schemes (e.g. Germany or the Netherlands). From a methodological standpoint, the application of novel analytic approaches highlighted two important issues. First, the healthy adherer bias was indeed a confounding force and needs to be dealt with adequately in future studies. In particular, inverse probability weighted HCE difference estimates tended to be substantially larger than unweighted estimates, not least because a substantial number of persons died during the observation period, which was accounted for by our formulation of inverse probability weights.
Of note, because our analysis omitted cantonal cost contributions to inpatient stay HCE, the estimates may even underestimate the actual HCE difference between compliant and non-compliant groups. Second, the distributional analyses revealed that HCE differences between compliers and non-compliers grew larger as the overall level of HCE increased. Along the same lines, compliers had - after confounder adjustment - a greater likelihood of a more favorable HCE trajectory over 1 year after the assessment period than non-compliers (Table 4). These additional results complement the analysis of HCE means based on two-part regressions and suggest that cost benefits of compliance with 4-class combination therapy may in fact be driven by a prevention of costly complications (as indicated by a lower probability for compliers to belong to a high cost trajectory, as well as the increasing cost difference between compliers and non-compliers with rising overall HCE). Because inpatient hospitalizations are a major driver of HCE and hospitalizations were more frequently observed in the non-compliant group, one could speculate that better compliance with recommended secondary MI prophylaxis may translate into a lower risk for worsening of cardiovascular problems (possibly requiring hospital interventions). However, our database included limited information to substantiate this hypothesis, and these aspects clearly warrant further investigations as they may shed further light on the mechanisms of compliance-associated cost benefits. The present analysis is – to our knowledge – one of the first to demonstrate a more differentiated cost effect of secondary MI prevention. Overall, the mix of methods we utilized has the potential to shed further light on the distribution and dynamics of HCE consequences of non-compliance. These methods have data requirements similar to those of standard multivariable analyses, can be implemented in standard software packages, and will easily translate to other disease domains as well. Furthermore, the findings have potential policy implications. Our results suggest that better compliance with secondary prevention treatments for myocardial infarction may lead to lower health care expenditures. If further corroborated, this would imply that compliance with secondary MI prevention guidelines should be actively encouraged and monitored. Some limitations need to be mentioned. Because of the observational nature of the analysis there remains a risk for residual confounding; all the more because the administrative database used only contains limited clinical information. Given this residual risk of bias, the observed cost differences between compliers and non-compliers should still not be considered causal, although they fall well within ranges observed by other studies. Further limitations are the restricted outcome observation period, the unavailability of information on drug intake by patients, as well as the static compliance definition based on a single time-point. Future analyses should, for example, include more detailed information on comorbidities, perform investigations into long-term outcomes and potentially develop more refined, dynamic measures of treatment compliance. By using novel analytical methods to examine distributions and trajectories of health care expenditures this study found that compliance with recommended secondary prevention consisting of 4-class combinations after myocardial infarction was associated with lower health care costs.
The inclusion of methods for investigating the full dynamics and distribution of health care expenditures offers potential for more personalized cost-benefit analyses. The data underlying this study cannot be shared publicly because they are the property of Helsana (https://www.helsana.ch/en/helsana-group), and have restricted public access on grounds of patient privacy. The data are managed by Helsana and subsets of the database are available for researchers after request and under specific conditions. Data are available from Helsana ([email protected]) for researchers who meet the criteria for access to confidential data. Helsana will consider the possibilities of the research proposal and decide to grant access if the research questions can be answered with use of the Helsana data. When requests are granted, data are accessible only in a secure environment. Wennberg JE. Time to tackle unwarranted variations in practice. BMJ. 2011;342:d1513 Epub 2011/03/19. Wennberg JE. Practice variation: implications for our health care system. Manag Care. 2004;13(9 Suppl):3–7 Epub 2004/10/21. Roebuck MC, Liberman JN, Gemmill-Toyama M, Brennan TA. Medication adherence leads to lower health care use and costs despite increased drug spending. Health Aff. 2011;30(1):91–9 Epub 2011/01/07. Brookhart MA, Patrick AR, Dormuth C, Avorn J, Shrank W, Cadarette SM, et al. Adherence to lipid-lowering therapy and the use of preventive health services: an investigation of the healthy user effect. Am J Epidemiol. 2007;166(3):348–54 Epub 2007/05/17. Ibanez B, James S, Agewall S, Antunes MJ, Bucciarelli-Ducci C, Bueno H, et al. 2017 ESC guidelines for the management of acute myocardial infarction in patients presenting with ST-segment elevation: the task force for the management of acute myocardial infarction in patients presenting with ST-segment elevation of the European Society of Cardiology (ESC). Eur Heart J. 2017;39(2):119–77. Roffi M, Patrono C, Collet JP, Mueller C, Valgimigli M, Andreotti F, et al. 2015 ESC guidelines for the management of acute coronary syndromes in patients presenting without persistent ST-segment elevation: task force for the Management of Acute Coronary Syndromes in patients presenting without persistent ST-segment elevation of the European Society of Cardiology (ESC). Eur Heart J. 2016;37(3):267–315 Epub 2015/09/01. Hamm CW, Bassand JP, Agewall S, Bax J, Boersma E, Bueno H, et al. ESC guidelines for the management of acute coronary syndromes in patients presenting without persistent ST-segment elevation: the task force for the management of acute coronary syndromes (ACS) in patients presenting without persistent ST-segment elevation of the European Society of Cardiology (ESC). Eur Heart J. 2011;32(23):2999–3054 Epub 2011/08/30. Task Force on the Management of ST-Segment Elevation Acute Myocardial Infarction of the European Society of Cardiology (ESC), Steg PG, James SK, Atar D, Badano LP, Blomstrom-Lundqvist C, et al. ESC guidelines for the management of acute myocardial infarction in patients presenting with ST-segment elevation. Eur Heart J. 2012;33(20):2569–619 Epub 2012/08/28. Huber CA, Meyer MR, Steffel J, Blozik E, Reich O, Rosemann T. Post-myocardial infarction (MI) care: medication adherence for secondary prevention after MI in a large real-world population. Clin Ther. 2019;41(1):107–17 Epub 2018/12/29. Choudhry NK, Avorn J, Glynn RJ, Antman EM, Schneeweiss S, Toscano M, et al. Full coverage for preventive medications after myocardial infarction. N Engl J Med. 2011;365(22):2088–97 Epub 2011/11/15. Stuart BC, Dai M, Xu J, Loh FH, Dougherty JS.
Does good medication adherence really save payers money? Med Care. 2015;53(6):517–23 Epub 2015/05/12. Cutler RL, Fernandez-Llimos F, Frommer M, Benrimoj C, Garcia-Cardenas V. Economic impact of medication non-adherence by disease groups: a systematic review. BMJ Open. 2018;8(1):e016982 Epub 2018/01/24. Yang Z, Howard DH, Will J, Loustalot F, Ritchey M, Roy K. Association of Antihypertensive Medication Adherence with Healthcare use and Medicaid Expenditures for acute cardiovascular events. Med Care. 2016;54(5):504–11 Epub 2016/04/15. Peterson GM, Jackson SL. Patient compliance in the prevention and treatment of cardiovascular disease: Patient Compliance. Aldershot: Routledge; 2016. p. 61–74. Hou Y, Yue Y, Zhao M, Jiang S. Prevalence and association of medication nonadherence with major adverse cardiovascular events in patients with myocardial infarction. Medicine. 2019;98(44):e17826 Epub 2019/11/07. Figueroa JF, Joynt Maddox KE, Beaulieu N, Wild RC, Jha AK. Concentration of potentially preventable spending among high-cost Medicare subpopulations: an observational study. Ann Intern Med. 2017;167(10):706–13. Biller-Andorno N, Zeltner T. Individual responsibility and community solidarity--the Swiss health care system. N Engl J Med. 2015;373(23):2193–7 Epub 2015/12/03. Wolff H, Gaspoz JM, Guessous I. Health care renunciation for economic reasons in Switzerland. Swiss Med Wkly. 2011;141:w13165 Epub 2011/02/22. Wilkins B, Hullikunte S, Simmonds M, Sasse A, Larsen P, Harding SA. Improving the prescribing gap for guideline recommended medications post myocardial infarction. Heart Lung Circ. 2019;28(2):257–62 Epub 2018/03/11. Lamers LM, Vliet R. Health-based risk adjustment improving the pharmacy-based cost group. Eur J Health Econ. 2003;4(2):107–14. Diehr P, Yanez D, Ash A, Hornbrook M, Lin DY. Methods for analyzing health care utilization and costs. Annu Rev Public Health. 1999;20(1):125–44. Schwenkglenks M, Preiswerk G, Lehner R, Weber F, Szucs TD. Economic efficiency of gate-keeping compared with fee for service plans: a Swiss example. J Epidemiol Community Health. 2006;60(1):24–30 Epub 2005/12/20. Belotti F, Deb P, Manning WG, Norton EC. twopm: Two-part models. Stata J. 2015;15(1):3–20. Hernan MA, Hernandez-Diaz S, Robins JM. A structural approach to selection bias. Epidemiology. 2004;15(5):615–25 Epub 2004/08/17. Chernozhukov V, Fernández-Val I, Melly B. Inference on counterfactual distributions. Econometrica. 2013;81(6):2205–68. von Wyl V, Telser H, Weber A, Fischer B, Beck K. Cost trajectories from the final life year reveal intensity of end-of-life care and can help to guide palliative care interventions. BMJ Support Palliat Care. 2018;8(3):325–34 Epub 2015/10/17. Roebuck MC, Kaestner RJ, Dougherty JS. Impact of medication adherence on health services utilization in medicaid. Med Care. 2018;56(3):266–73. Lloyd JT, Maresh S, Powers CA, Shrank WH, Alley DE. How much does medication nonadherence cost the Medicare fee-for-service program? Med Care. 2019;57(3):218–24 Epub 2019/01/25. The study was funded by the Swiss National Science Foundation (SNSF) National Research Program "Smarter Health Care" (NRP 74), as part of project number 26, grant number 407440_167349. 
Department of Epidemiology, Epidemiology, Biostatistics and Prevention Institute, University of Zurich, Hirschengraben 84, 8001, Zurich, Switzerland: Viktor von Wyl, Agne Ulyte, Wenjia Wei, Dragana Radovanovic, Oliver Grübner, Beat Brüngger, Caroline Bähler, Holger Dressel & Matthias Schwenkglenks
Institute for Implementation Science in Health Care, University of Zurich, Universitätsstrasse 84, 8006, Zurich, Switzerland: Viktor von Wyl
Department of Geography, University of Zurich, Winterthurerstrasse 190, 8057, Zurich, Switzerland: Oliver Grübner & Eva Blozik
Department of Health Sciences, Helsana Group, Zürichstrasse 130, 8600, Dübendorf, Switzerland: Beat Brüngger & Caroline Bähler
Institute of Primary Care, University of Zurich and University Hospital Zurich, Pestalozzistrasse 24, 8091, Zürich, Switzerland: Eva Blozik
M.S., V.vW. and H.D. developed the underlying study program. A.U., W.W., B.B., E.B., C.B., O.G. did data preparation and data management. V.vW. designed the study, performed statistical analysis and wrote the main manuscript text. V.vW. has full access to all the study data and takes responsibility for the integrity and accuracy of the data analysis. A.U., W.W., B.B., E.B., C.B., O.G., D.R., M.S., V.vW. and H.D. gave input on study design, interpreted the statistical results, and critically revised the manuscript. All authors have read and approved the final manuscript.
Correspondence to Viktor von Wyl.
Beat Brüngger, Caroline Bähler, and Eva Blozik are employed by Helsana Insurance Group. Matthias Schwenkglenks receives grants from Helsana Insurance Group, via employment institution. All other authors report no conflicts of interest.
Additional file 1: Supplementary figure legend 1. Indicators for trajectory model choice with respect to hypothesized number of distinct trajectory groups. The blue line indicates, for the validation data, the overlap between the training-based predictor and an independent k-means classification. Overall, classification overlap was in the order of 90% to 95% and peaked when four k-means groups were chosen. Supplementary figure legend 2. Cost decomposition by groups derived from the trajectory analysis, full sample. Supplementary figure legend 3. Cost decomposition by groups derived from the trajectory analysis, unexposed persons only. Supplementary Table 1. Comparison of clinical outcomes between compliers and non-compliers during observation period. Supplementary Table 2. Factors associated with compliance to 4-class secondary myocardial infarction prophylaxis (main analysis) or 3- or 4-class prophylaxis (sensitivity analysis). The results from the multivariable logistic regression model were used to calculate the inverse probability weights. Confidence intervals printed in bold face do not include 1, which indicates statistical significance at the 5% level. Supplementary Table 3. Comparison of medication expenditures between compliers and non-compliers.
von Wyl, V., Ulyte, A., Wei, W. et al. Going beyond the mean: economic benefits of myocardial infarction secondary prevention. BMC Health Serv Res 20, 1125 (2020). https://doi.org/10.1186/s12913-020-05985-x
Mössbauer spectroscopy of a monolayer of single molecule magnets
Alberto Cini1, Matteo Mannini (ORCID: orcid.org/0000-0001-7549-2124)2, Federico Totti (ORCID: orcid.org/0000-0003-4752-0495)2, Maria Fittipaldi1, Gabriele Spina1, Aleksandr Chumakov (ORCID: orcid.org/0000-0002-0755-0422)3, Rudolf Rüffer3, Andrea Cornia (ORCID: orcid.org/0000-0001-9765-3128)4 & Roberta Sessoli (ORCID: orcid.org/0000-0003-3783-2700)2
Nature Communications volume 9, Article number: 480 (2018)
Subjects: Magnetic materials; Molecular self-assembly; Surface spectroscopy
The use of single molecule magnets (SMMs) as cornerstone elements in spintronics and quantum computing applications demands that magnetic bistability is retained when molecules are interfaced with solid conducting surfaces. Here, we employ synchrotron Mössbauer spectroscopy to investigate a monolayer of a tetrairon(III) (Fe4) SMM chemically grafted on a gold substrate. At low temperature and zero magnetic field, we observe the magnetic pattern of the Fe4 molecule, indicating slow spin fluctuations compared to the Mössbauer timescale. Significant structural deformations of the magnetic core, induced by the interaction with the substrate, as predicted by ab initio molecular dynamics, are also observed. However, the effects of the modifications occurring at the individual iron sites partially compensate each other, so that slow magnetic relaxation is retained on the surface. Interestingly, these deformations escaped detection by conventional synchrotron-based techniques, like X-ray magnetic circular dichroism, thus highlighting the power of synchrotron Mössbauer spectroscopy for the investigation of hybrid interfaces.
Single molecule magnets (SMMs) are a very appealing class of nanomagnetic objects with potential application for spintronics1,2,3 and quantum computing4,5,6. Their properties depend on the combination of a large molecular spin and an easy-axis magnetic anisotropy, which results in a double-well energy potential that opposes the reversal of the magnetization7. Organization of these molecules on surfaces was the focus of considerable effort as a prerequisite to single molecule addressing8,9,10. In such studies, it was demonstrated that magnetic bistability persists and can even be enhanced on a surface11, 12.
Especially efficient in boosting the memory effect of both molecules13 and individual atoms14 is deposition on thin insulating layers (e.g., MgO) rather than directly on the metal surface. Despite these remarkable results, the factors controlling magnetic bistability on surfaces remain still unclear. This is in part due to the limited number of experimental techniques that are sensitive enough to detect the magnetic properties of a monolayer (or less) of magnetic molecules. Most investigations rely on the use of X-ray absorption and magnetic circular dichroism (XMCD)15 with synchrotron radiation, which has exceptional surface sensitivity and selectivity to the magnetism of the probed elements. In silico methods can also be of considerable aid in predicting the fine evolution of geometrical and magnetic structure of SMMs on a surface16. In such a framework, Mössbauer spectroscopy, beyond having an outstanding sensitivity to the coordination environment of the probed atom, is able to investigate the spin dynamics over timescales (1–1000 ns) much shorter than those accessible by XMCD. The technique was previously adopted to study the relaxation behavior of many SMM materials containing 57Fe as Mössbauer active nucleus17, 18. However, the standard use of radioactive sources limits the application domain of Mössbauer spectroscopy to bulk samples19. This limitation was overcome by the use of time-domain nuclear resonant scattering of synchrotron radiation at grazing angle: an example was the study of monolayers of metallic 57Fe grown on W(100) or embedded in layered systems20,21,22. However, with this technique complex spectra are expected for samples containing inequivalent iron sites and characterized by several electronic levels. Moreover, the radiation intensity customarily used in time-domain spectroscopy may damage molecular or biological compounds. Here, we show how energy-domain Mössbauer spectroscopy based on the high brilliance of synchrotron light can be used to probe the magnetism of a molecular monolayer, providing an unprecedentedly detailed picture of molecule-surface interaction effects on structural, electronic, and magnetic properties of the adsorbed SMMs. Our study focused on the tetrairon(III) (Fe4) SMMs, whose propeller-like molecular structure is sufficiently robust to withstand chemisorption on surfaces, processing by thermal sublimation23, and the construction of single-molecule devices24. Indeed, Fe4 complexes were the first SMMs to show magnetic hysteresis when deposited as a monolayer on a gold surface11, 23. They have a ground state with a total spin S = 5, while the first excited states are two S = 4 manifolds, which lie about 60 K higher in energy and are degenerate in case of perfect three-fold symmetry. A moderate easy-axis magnetic anisotropy, directed perpendicular to the plane of the four Fe3+ ions, splits the ground spin manifold of about 15 K, which also corresponds to the height of the energy barrier for the reversal of the magnetization. Here a monolayer of Fe4 SMMs tethered to a polycrystalline gold substrate was investigated by synchrotron-based Mössbauer spectroscopy, which revealed magnetic features in zero field typical of SMM behavior. Thanks to the outstanding sensitivity of this spectroscopy to the coordination environment of the probed atom, we have also evidenced that molecules on surface undergo significant structural deformations, which are undetectable by X-ray absorption techniques and scanning probe methods25. 
As predicted by ab initio molecular dynamics (AIMD) calculations26, these modifications affect differently the four iron atoms. However, such local distortions partially compensate each other and the molecules retain on the surface an S = 5 ground state and a slow spin dynamics comparable to that of the bulk phase, thus justifying the magnetic robustness of this class of SMMs. Synchrotron Mössbauer Spectroscopy of Fe4 SMMs In this work, we studied a Au-supported self-assembled monolayer of the same Fe4 derivative (see Fig. 1) previously employed in XMCD investigations27. Its complete formula is [Fe4(L)2(dpm)6], where Hdpm is dipivaloylmethane and H3L is 7-(acetylthio)-2,2-bis(hydroxymethyl)heptan-1-ol, a thioacetyl-functionalized tripodal ligand that promotes chemisorption on gold substrates (see Fig. 1). The compound was characterized elsewhere27 and was here prepared as a 95% 57Fe-enriched sample (see Methods section and Supplementary Methods). A monolayer deposit was obtained by chemisorption using a wet chemistry protocol (see Methods section) that guarantees the formation of a dense monolayer with no physisorbed material28. As a bulk-phase reference, a dropcast sample of the same compound, with an inhomogeneous thickness of about 100 ± 50 nm, was prepared. Structure of the investigated systems and experimental setup. a Structure of Fe4 SMM in the crystalline phase (color code: Fe atoms in green and violet, O in red, S in yellow, C as pale gray sticks, H atoms omitted for clarity); b scheme of the magnetic core, where the dominating antiferromagnetic interactions (in red) lead to an S = 5 ground state; c view of the structure of the Fe4 SMM tethered to Au(111) through the deprotected thioacetyl termination, as obtained by ab initio molecular dynamics calculations26. Hydrogen atoms and tert-butyl groups on dpm– ligands have been omitted for clarity; d scheme of the experimental setup of the Synchrotron Mössbauer Source at the ID18 beamline of ESRF The Synchrotron Mössbauer Source (SMS) available at ID18, the nuclear resonance beamline29 of the European Synchrotron Radiation Facility (ESRF), was used to record Mössbauer spectra of both dropcast and monolayer Fe4 samples. The characteristic device of the SMS is an iron borate (57FeBO3) crystal, kept at a temperature close to its Néel temperature (see Fig. 1d). By means of a pure nuclear reflection by the crystal, the synchrotron light coming out from the monochromators is filtered into an 57Fe-resonant narrow single line30, 31. In contrast to common radioactive sources, the radiation generated by the SMS is a needle-like collimated beam with small (~mm) size, which can be further focused to spot sizes of micrometric lateral dimensions31. This allows working in grazing incidence geometry to achieve the necessary sensitivity to investigate ultra-thin films. Furthermore, the radiation is fully polarized and recoilless, and sufficiently high photon fluxes can be obtained (see Supplementary Methods for details). Mössbauer spectra were recorded by collecting the scattered radiation resulting from the reflection on the sample surface at a grazing incidence (see Fig. 1d). Due to the structure of our samples (low-Z film on a high-Z substrate, Z being the atomic number), the grazing incidence reflection occurs from the surface of the substrate, and the molecular layer produces absorption lines. 
In the case of small grazing angles, the spectra can be treated as those obtained in a standard setup in transmission geometry, provided that the effective thickness of the samples is multiplied by a factor 2/(sinθ), where θ is the grazing angle between the surface of the sample and the direction of the incoming radiation. In our experimental conditions, working at θ ∼0.1° provides a 1100-fold amplification factor, which is essential for studying molecular monolayers by SMS. Mössbauer spectra of the dropcast and monolayer samples were recorded in the temperature ranges 2.2–40 and 2.2–11 K, respectively, and are shown in Fig. 2. At the lowest explored temperature, the spectrum of the dropcast sample (Fig. 2a) exhibits a six-line pattern, as expected for a Fe-containing sample experiencing slow spin fluctuations compared to the Mössbauer timescale. However, the lines towards the extremes of the spectrum are further split, indicating that inequivalent iron sites are present in the structure; this is in line with previous reports on standard Mössbauer characterization of unfunctionalized Fe4 clusters in the bulk phase18. Mössbauer spectra of the dropcast and monolayer samples for various temperatures. a Experimental spectra (black lines) of the Fe4 dropcast sample and best fit curves (red lines). b Experimental spectra (black lines) of the Fe4 monolayer sample and best fit curves (red lines). The velocity axis values are relative to the α-Fe standard With increasing temperature, distortions of the spectrum become evident. Increased thermal fluctuations and the population of excited spin levels cause a progressive collapse of the spectrum into a single central structure at 40 K. The monolayer's spectra are qualitatively similar, though about two orders of magnitude less intense (Fig. 2b), in agreement with the reduced number of 57Fe4 molecules probed by the radiation, (see Supplementary Figs.1–4 for a direct comparison between selected spectra of the two samples). Fitting of the Mössbauer spectra Given that the Fe4 molecule has total spin state S = 5, we expect each iron atom to contribute to the spectra with six hyperfine-split lines per |±M S 〉 doublet (M S = 1, 2, … 5) plus two quadrupole-split lines arising from the |M S = 0〉 state, thus yielding 4 (6 × 5 + 2) = 128 lines. Some of these transitions may coincide because of the molecular symmetry. In particular, while the central ion (Fe1) experiences a quite distinct coordination environment, the other iron atoms reside in approximately the same coordination environment. Despite this idealized three-fold symmetry, Fe4 derivatives in the solid state often possess a binary symmetry axis lying along two iron atoms. Therefore, for a quantitative analysis of the dropcast sample spectra we considered that the Mössbauer absorption cross-section is given by the superposition of the contributions coming from three inequivalent iron ions, with intensity ratio 1:1:2. Each contribution was evaluated as described in earlier Mössbauer bulk-phase experiments17, 18 and characterized by a number of parameters (see Methods and Supplementary Note 1). These include hyperfine interactions for each iron site (the isomer shift with respect to α-Fe, the quadrupole shift, and the hyperfine fields due to the iron electronic spin), magnetic anisotropy and magnetic exchange interactions governing the transition rates between different spin sublevels of the system, as well as sample-specific parameters, like thickness and texture. 
The latter is a parameter quantifying a possible macroscopic orientation of the molecules on the surface. In particular, the possibility to sense a texture, which has a signature in the relative intensities of the lines, is based on the orientation of the quantization axis of the molecule with respect to the γ-ray polarization. For the dropcast sample, the agreement between the experimental and calculated data (Fig. 2a) is reasonably good in the whole temperature range. In Fig. 3a, the cross-section contributions of each Fe3+ site to the spectrum taken at 2.2 K are shown, with best fit parameters reported in Table 1 (the values obtained from the fits of higher temperature spectra are reported in Supplementary Tables 1–3, while residuals are shown in Supplementary Figs. 5 and 6). Deconvolution of Mössbauer spectra in individual Fe3+ contributions. a Mössbauer spectrum at 2.2 K of the dropcast sample (black line) and best fit curve (red line); the cross-sections of the three inequivalent Fe3+ sites expected in a twofold-symmetric molecule are shown in green, magenta, and blue and were calculated with the parameters listed in Table 1 (columns from left to right, respectively). b Mössbauer spectrum at 2.2 K of the monolayer sample (black line) and best fit curve (red line); the cross-sections of the four inequivalent Fe3+ sites (green, orange and blue/magenta) were calculated with the parameters listed in Table 1 (columns from left to right, respectively) Table 1 Mössbauer parameters extracted from the fitting of the spectra at T = 2.2 K and from ab initio calculations As expected, the Mössbauer parameters associated to three out of the four Fe3+ ions are very similar to each other and their contributions overlap extensively, in agreement with an almost ideal three-fold symmetry of the cluster. The contribution with a significantly different quadrupole interaction and a smaller hyperfine field can be thus safely assigned to the central Fe3+ ion on the basis of its different coordination environment. In general, the hyperfine field values are proportional to the number of unpaired 3d electrons for each iron ion (i = 1–4) and thus to the spin projection along the local direction of the quantization axis. However, exchange interactions introduce spin fluctuations at frequencies higher than the top limit of the Mössbauer time window, so that the measured hyperfine fields are proportional to the time average of the spin projections, \(\left\langle {s_z^i} \right\rangle\). The latter are affected by both magnetic exchange interactions and magnetic anisotropies, as most simply described by the spin Hamiltonian: $$H = \mathop {\sum}\nolimits_{i,j = 1,4}^{i < j} {J_{ij}} {\mathbf{s}}_i \cdot {\mathbf{s}}_j + \mathop {\sum}\nolimits_{i,j = 1,4}^{i < j} {{\mathbf{s}}_i} \cdot \overline {d_{ij}} \cdot {\mathbf{s}}_j + \mathop {\sum}\nolimits_{i = 1,4} {{\mathbf{s}}_i} \cdot \overline {D_i} \cdot {\mathbf{s}}_i$$ The first term represents the isotropic exchange interaction, the second one contains the anisotropic part of intramolecular interactions, which are mainly of dipolar origin in assemblies of high spin 3d5 ions, while the third one takes into account single ion magnetic anisotropy. Nearest-neighbour antiferromagnetic exchange interactions between the central (i = 1) and peripheral (i = 2, 3, and 4) ions provide the leading term responsible for the ground S = 5 molecular state. 
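To make this leading term concrete, the sketch below (Python with NumPy; the coupling strength and the tiny level-selecting bias are illustrative choices, not the fitted values, and peripheral-peripheral couplings are neglected) diagonalizes the four-spin exchange Hamiltonian of the Fe4 star exactly, which is feasible because the product space of four s = 5/2 ions has dimension 6^4 = 1296. It recovers the S = 5 ground state and the local spin projections discussed in the next paragraph.

```python
import numpy as np

# Single-spin operators for spin s in the |s, m> basis (m = s, s-1, ..., -s).
def spin_ops(s):
    d = int(round(2 * s)) + 1
    m = s - np.arange(d)
    sz = np.diag(m).astype(complex)
    sp = np.zeros((d, d), dtype=complex)          # raising operator S+
    for k in range(d - 1):
        sp[k, k + 1] = np.sqrt(s * (s + 1) - m[k + 1] * (m[k + 1] + 1))
    return (sp + sp.conj().T) / 2, (sp - sp.conj().T) / 2j, sz

s, N = 2.5, 4                                     # four high-spin Fe(III) ions
d = int(round(2 * s)) + 1
sx, sy, sz = spin_ops(s)

# Embed a single-site operator at site i in the 6^4 = 1296-dimensional space.
def embed(op, i):
    out = np.array([[1.0 + 0j]])
    for j in range(N):
        out = np.kron(out, op if j == i else np.eye(d))
    return out

Sx = [embed(sx, i) for i in range(N)]
Sy = [embed(sy, i) for i in range(N)]
Sz = [embed(sz, i) for i in range(N)]

# Leading term of Eq. (1): AFM exchange J1 > 0 between the central ion (site 0)
# and the three peripheral ions; peripheral-peripheral terms neglected here.
J1 = 1.0                                          # illustrative, arbitrary units
H = J1 * sum(Sx[0] @ Sx[j] + Sy[0] @ Sy[j] + Sz[0] @ Sz[j] for j in (1, 2, 3))
H = H - 1e-6 * sum(Sz)                            # tiny bias to pick the M_S = +5 member

E, V = np.linalg.eigh(H)
g = V[:, 0]                                       # ground state
S2 = sum(A @ A for A in (sum(Sx), sum(Sy), sum(Sz)))
S_tot = 0.5 * (-1 + np.sqrt(1 + 4 * (g.conj() @ S2 @ g).real))
print(f"ground-state total spin: {S_tot:.3f}")                    # -> 5.000
print(f"<s_z> central ion:  {(g.conj() @ Sz[0] @ g).real:+.4f}")  # -> -2.0833
print(f"<s_z> peripheral:   {(g.conj() @ Sz[1] @ g).real:+.4f}")  # -> +2.3611
```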
This state features a ferrimagnetic-like arrangement of the spins, with the central and peripheral spins pointing in opposite directions (Fig. 1b). Because of magnetic anisotropy, the total spin vector in the ground state lies preferably collinear to the idealized three-fold molecular axis and the S = 5 multiplet is thus split approximately according to the giant-spin Hamiltonian7: $$H_{S = 5} = D\left[ {S_z^2 - \frac{1}{3}S\left( {S + 1} \right)} \right] + E\left( {S_x^2 - S_y^2} \right)$$ where D < 0 and E are the second-order axial and transverse magnetic anisotropy parameters, respectively. Assuming three-fold symmetry, i.e. E = 0, the spin projection values within the ground M S = ±5 doublet are \(\left\langle {s_z^1} \right\rangle = \mp 2.0833\) and \(\left\langle {s_z^2} \right\rangle = \left\langle {s_z^3} \right\rangle = \left\langle {s_z^4} \right\rangle = \pm 2.3611\) (in ħ units). The experimental hyperfine fields for the central and peripheral ions in the dropcast sample (Table 1) are in good agreement with the so-called 22\(\left\langle {s_z^i} \right\rangle\)rule for the Fermi contact field32 (Supplementary Fig. 7). This indicates that the Mössbauer spectrum at 2.2 K probes intact Fe4 complexes in their M S = ±5 ground doublet. Moreover, below 40 K, the thermal evolution of the spectra (Fig. 2a) likely reflects the contribution of excited states within the S = 5 manifold as determined by two main, and somehow distinguishable, mechanisms. Indeed, on increasing temperature, the thermal population of states with |M S | < 5 increases the number of thermally accessible transitions with a corresponding change in the overall intensity of the lines. On the contrary, the increased transition rate between different states causes changes in shape and shifts the lines towards the center of the spectrum, as illustrated in Supplementary Note 2. Inspection of Fig. 2a shows that the thermal evolution of the spectra is dominated by the population of excited spin states below 11 K (slow relaxation regime), and by the increased transition rates above 11 K (intermediate relaxation regime). The latter is evident from the collapse of the spectrum towards a central line superimposed to a V-shaped baseline (see Supplementary Note 2 and Supplementary Fig. 8 in which the two different behaviors are simulated at 11 K). As a consequence of this, the physical parameters describing both effects could be extracted from the fit of the spectra. The best fit values, D = −0.605(7) K and E = 0.003(1) K, well agree with those resulting from magnetic data for the bulk crystalline phase27. The transition rates among the spin states of the S = 5 manifold are determined both by the E term, promoting the tunnelling between the two potential wells, and by the interaction of the molecular spin with the thermal bath. To take this last interaction into account, a spin–bath interaction term H sb has to be included in Eq. 2: $$H_{sb} = F_bQ_S(S_x,S_y,S_z)$$ H sb is expressed as the product of two operators (Q S and F b ), which act on the molecular spin and lattice states, respectively. More details on these two operators can be found in Supplementary Notes 3 and 4. Considering that the matrix elements of Q S between two given spin states do not depend on temperature, the thermal dependence of the matrix elements of H sb is linked only to the thermal dependence of the matrix elements of F b and thus to the physical features of the thermal bath and to the specific interaction scheme17. 
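As a quick numerical cross-check of the static part, Eq. (2) can be diagonalized as an 11 x 11 matrix with the best-fit dropcast values quoted above (D = -0.605 K, E = 0.003 K); a minimal sketch, again assuming only NumPy:

```python
import numpy as np

# S = 5 giant-spin Hamiltonian of Eq. (2) with the best-fit dropcast values.
S, D, E = 5, -0.605, 0.003        # D and E in kelvin (k_B = 1)
dim = 2 * S + 1
m = S - np.arange(dim)
Sz = np.diag(m).astype(complex)
Sp = np.zeros((dim, dim), dtype=complex)
for k in range(dim - 1):
    Sp[k, k + 1] = np.sqrt(S * (S + 1) - m[k + 1] * (m[k + 1] + 1))
Sx, Sy = (Sp + Sp.conj().T) / 2, (Sp - Sp.conj().T) / 2j

H = D * (Sz @ Sz - S * (S + 1) / 3 * np.eye(dim)) + E * (Sx @ Sx - Sy @ Sy)
levels = np.sort(np.linalg.eigvalsh(H))
print(np.round(levels - levels[0], 2))      # near-doublets up to the M_S = 0 top
print(f"anisotropy barrier: {levels[-1] - levels[0]:.2f} K")   # ~|D| S^2 = 15.1 K
```

The barrier of about 15 K matches the splitting of the S = 5 manifold quoted in the introduction. With the static splitting in hand, the remaining ingredient is the spin–bath coupling of Eq. (3), which sets the transition rates.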
In particular, the parameters determining the thermal dependence of the spectra are proportional to the Fourier transform of the correlation function of F b evaluated at the transition frequencies. This quantity can be evaluated at zero frequency in the usual approximation where the linewidth of bath states is larger than the energy separation between the electronic spin states. More details are provided in Supplementary Note 4, where both direct (single-phonon) and indirect (multi-phonon) processes are considered. In the approximation introduced above, the spin dynamics depends on the transition rates between the electronic spin states of the molecule. To analyse the spin dynamics as a function of temperature, the transition rate W5−4 between the M S = ±5 and M S = ±4 states was chosen as a fit parameter. All other transition rates between two generic states i and j were derived according to the relations W_j−i = W_5−4 |⟨i|Q_S|j⟩|²/|⟨5|Q_S|4⟩|² and W_i−j = W_j−i exp((E_i − E_j)/(k_B T)). In Fig. 4, we present the temperature dependence of W5−4, together with a fit of the data up to 11 K with a function of the form: $$W_{5 - 4} = A/(\exp \left( {B/k_BT} \right) - 1),$$ which is representative of a direct process. This analysis of low-temperature transition rates yields B = 6.8(4) K and A = 2.8(3) × 10⁶ Hz. For temperatures >11 K, the increase of W5−4 is more rapid than predicted by Eq. (4), suggesting two-phonon processes. These can be taken into account using the expression: $$W_{5 - 4} = A\exp \left( {B/k_BT} \right)/(\exp \left( {B/k_BT} \right) - 1)^2$$ which describes a Raman process due to optical modes. Equation (5) coincides with Eq. (4) for T ≪ B, while for T > B it gives a T² dependence (see Supplementary Note 4)19. The fit, shown in Fig. 4, was performed keeping the B parameter fixed at the value extracted from the low-temperature data. Equation (5) is qualitatively in agreement with the experimental data in a wide range of temperatures, suggesting that the spin dynamics is related to an optical mode with an energy of about 7 K (or, equivalently, a frequency of about 0.1 THz). A rapid increase of the transition rate is observed at the highest investigated temperature (40 K), where other total spin states start to be significantly populated. Spin dynamics in the dropcast and monolayer samples. Transition rate W5−4 between the states M S = ±5 and M S = ±4 extracted from the fits of the Mössbauer spectra of the dropcast sample (red squares) and of the monolayer sample (black dots) as a function of temperature. The magenta line represents the fit of the data for the dropcast sample at T ≤ 11 K assuming a direct process as in Eq. (4), while the cyan line corresponds to the more general function reported in Eq. (5). Passing to the monolayer sample, the analysis of the spectra was performed following the same strategy used for the dropcast sample. From the fitting of the monolayer spectrum taken at the lowest temperature, a surface density of ∼3 57Fe ions per nm² was estimated. This value agrees, within the experimental uncertainty of the technique, with the one evaluated by an STM study of Fe4 molecules evaporated on a gold single crystal (∼2 57Fe ions per nm²)23. Therefore, our study represents the first successful investigation of a monolayer of molecules by means of Mössbauer spectroscopy. The best fit calculated curves are shown in Fig. 2b for each sampled temperature, while residuals are presented in Supplementary Figs. 9 and 10.
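Returning to the rate laws, Eqs. (4) and (5) are straightforward to compare numerically with the fitted parameters quoted above (A = 2.8 × 10⁶ Hz, B = 6.8 K); a small sketch (Python with NumPy):

```python
import numpy as np

# Direct (Eq. 4) and optical-mode Raman (Eq. 5) rate laws with the best-fit
# dropcast values quoted above: A = 2.8e6 Hz, B = 6.8 K (k_B absorbed into B).
A, B = 2.8e6, 6.8

def W_direct(T):
    return A / np.expm1(B / T)                 # -> A*exp(-B/T) for T << B

def W_raman(T):
    x = B / T
    return A * np.exp(x) / np.expm1(x) ** 2    # -> W_direct for T << B, ~A*(T/B)^2 for T >> B

for T in (2.2, 5.0, 11.0, 20.0, 40.0):
    print(f"T = {T:4.1f} K: direct {W_direct(T):.2e} Hz, Raman {W_raman(T):.2e} Hz")
# Upward rates follow from detailed balance: W_i-j = W_j-i * exp((E_i - E_j)/(k_B T)).
```

The two expressions agree at low temperature and separate above about 11 K, mirroring the behaviour of the data in Fig. 4.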
The individual Fe3+ cross-section contributions to the spectrum taken at 2.2 K are shown in Fig. 3b, with best fit parameters reported in Table 1. To allow for a possible reduction of symmetry of the cluster, four non-equivalent Fe sites were considered. However, two contributions are identical within error (blue and magenta lines in Fig. 3b) and only three non-equivalent Fe sites are thus reported in Table 1. The parameters in Table 1 were kept fixed in the analysis of the spectra acquired at higher temperatures, while the transition rate (W5−4) and the D parameter were left free to vary. The texture value extracted from each spectrum revealed no preferential orientation of the molecules on the substrate. In fact, imposing the same oriented grafting as detected by X-ray absorption experiments on Au(111)27 fails to satisfactorily reproduce the data (see Supplementary Note 5 and Supplementary Fig. 11). This discrepancy is however not unexpected, since a reconstructed, atomically flat Au(111) surface, known to promote the preferential orientation detected by X-ray natural linear dichroism27, could not be used for the Mössbauer experiments (see Methods section). As a distinctive feature of the monolayer's spectra, all the lines of each sextet are characterized by approximately the same linewidth, although larger than found in the dropcast sample. Two mechanisms can be responsible for the observed line broadening. Faster transition rates between spin levels broaden the lines, but, at the same time, promote a collapse towards the center of the spectrum, which is not experimentally observed in the 2.2 K spectrum. Moreover, to reproduce the temperature dependence of the spectra an unphysical decrease of transition rates upon increasing temperature would be required. An alternative source of broadening is the presence of an inhomogeneous distribution of electric parameters (quadrupole shift and/or isomer shift). A significant distribution in hyperfine fields can be safely ruled out as it would affect the external lines approximately six times more than the internal ones (see Supplementary Note 2 for details). The presence of inhomogeneous broadening in the electric parameters was thus taken into account by assuming Gaussian lines with standard deviation of the order of 0.2–0.3 mm s⁻¹, as estimated from the lowest temperature spectrum, which is less affected by spin dynamics. If the observed broadening is ascribed to a distribution of the ligand contributions to the quadrupole interaction, the ligand donor atoms must undergo displacements of the order of 0.04 Å from their average positions. Furthermore, all the magnetic hyperfine field values are significantly smaller than those found in the dropcast sample. As a final point, the spin dynamics of the grafted Fe4 molecules was addressed by investigating the temperature evolution of the spectra (see Fig. 2b). Despite the much worse signal-to-noise ratio, it is evident that, on increasing the temperature, the external lines lose intensity faster than in the dropcast sample. This suggests that the |D| value is reduced upon grafting, as directly confirmed by fitting the Mössbauer spectrum recorded at the lowest temperature (D = −0.49(6) K, with the E/D ratio held fixed at the value found in the dropcast sample to avoid overparametrization in the fitting of poorly resolved spectra). At the same time, the transition rate W5−4 extracted from the fit of the spectra (see Fig. 4) is comparable with that found for the dropcast sample.
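A sketch of how such an inhomogeneous distribution acts on a single line: smearing the line centre with a Gaussian of the quoted 0.2–0.3 mm s⁻¹ scale, while the Lorentzian (dynamical) width is unchanged, yields a Voigt profile. SciPy's voigt_profile is assumed to be available here, and the Lorentzian width is an illustrative value, not the fitted one.

```python
import numpy as np
from scipy.special import voigt_profile

# A Gaussian spread of the electric parameters smears the line centre while the
# Lorentzian width from the spin dynamics stays fixed -> Voigt line shape.
gamma = 0.10                       # illustrative Lorentzian HWHM, mm/s
v = np.linspace(-2.0, 2.0, 4001)   # velocity grid, mm/s

for sigma in (0.0, 0.25):          # sigma = 0: pure Lorentzian; 0.25: broadened
    line = voigt_profile(v, sigma, gamma)
    above = v[line >= 0.5 * line.max()]
    print(f"sigma = {sigma:.2f} mm/s -> FWHM ~ {above[-1] - above[0]:.2f} mm/s")
```

This broadening is static: it widens all lines of a sextet by roughly the same amount, consistent with the observation above, and leaves the extracted transition rates well defined.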
Thus, Mössbauer data of the monolayer sample indicate that the deposition on a surface does not significantly modify the spin dynamics, as suggested by previous XMCD evidence27.
Ab initio calculations
Qualitative and quantitative analyses of the recorded Mössbauer spectra point to significant alterations induced by the chemical grafting on the surface. AIMD calculations, recently performed by some of us on the same molecule grafted on the Au(111) surface26, can provide a valuable insight into the observed differences. Indeed, the picture emerging from the AIMD investigation was that van der Waals interactions between the gold substrate and the aliphatic tether of the tripodal ligand cause significant distortions of the magnetic core. Such interactions induce several accessible energy minima in the potential energy surface of the system. The predicted scenario closely matches that emerging from the Mössbauer spectra of the monolayer sample. Remarkably, the standard deviation of Fe–O bond lengths, estimated over eight different AIMD trajectories or walkers (see Supplementary Table 4), has the order of magnitude (10⁻² Å) required to account for the observed line broadening through a distribution of quadrupole shifts. A pronounced variation of Fe–O bond lengths among the different walkers is observed for the central site (Fe1) and for the Fe atom closer to the aliphatic chain tethered to the gold substrate, namely Fe3 in Fig. 1c. Moreover, AIMD calculations showed that both single ion magnetic anisotropies and exchange interactions are affected by the grafting process (see Supplementary Table 5 and ref. 26). However, these modifications partially compensate each other so that the molecule retains an S = 5 ground state and an easy axis magnetic anisotropy. Focusing now on the hyperfine field experienced by the 57Fe nuclei, the process of chemical grafting on the gold surface can affect both the individual \(\left\langle {s_z^i} \right\rangle\) values, by altering intramolecular exchange interactions in Eq. (1), and the extent of electron delocalization on the nuclei. For the eight investigated AIMD trajectories26 the local z components in the M S = ±5 doublet are on average smaller in absolute value than in the bulk phase (Supplementary Fig. 7). The difference is more clearly visible on the central site Fe1 (2.087 vs. 2.137 ħ) than on peripheral sites (2.361 vs. 2.378 ħ). Furthermore, Fe3 shows the largest spread of \(\left\langle {s_z^i} \right\rangle\) values (±5%) over the eight walkers. In order to assign the observed hyperfine fields to specific Fe3+ ions, we assumed that the hyperfine field is an increasing function of \(\left| {\left\langle {s_z^i} \right\rangle } \right|\) averaged over the eight walkers. A linear trend was obtained (Supplementary Fig. 7), the slope of which is similar to that of the bulk sample, further indicating that Fe4 complexes retain their magnetic features on the surface. However, all values seem to be shifted on average by 6 T towards smaller fields. To shed some light on this aspect, further ab initio calculations of the isotropic hyperfine magnetic fields were performed. We considered the optimized geometry in the crystalline phase and two of the geometries obtained by AIMD trajectories after extrapolating the Fe4 complex from the underlying gold surface. The computed hyperfine magnetic fields, reported in Table 1, are dominated by the Fermi contact term.
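For orientation, the Fermi-contact expectation can be estimated directly from the bulk-phase spin projections using the 22⟨s_z⟩ rule introduced earlier; a short check in plain Python, with the values taken from the text:

```python
# Fermi-contact estimate via the empirical 22<s_z> rule (bulk-phase projections
# from the text): B_hf ~ 22 T per unit of time-averaged spin projection.
for site, sz in (("central Fe1", 2.0833), ("peripheral Fe2-Fe4", 2.3611)):
    print(f"{site}: B_hf ~ {22 * sz:.1f} T")
# -> ~45.8 T and ~51.9 T; the monolayer values lie ~6 T below this bulk trend.
```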
The time-demanding relativistic spin orbit coupling (SOC) contributions were not computed as they are expected to be negligible for high spin Fe3+. In agreement with Mössbauer spectra, the lowest hyperfine magnetic field is computed for the central iron in both bulk-phase and on-surface molecules (see Table 1), primarily as a consequence of the different \(\left\langle {s_z^i} \right\rangle\). The hyperfine fields are on average reduced upon grafting. The strongest reduction (−2.5 T) is calculated for the central iron atom, but is much smaller than found experimentally. The above described ab initio calculations only include surface-induced structural deformations, but neglect electronic effects of the metal surface. Their inclusion is not possible with the same accuracy, therefore the local spin densities were computed by pseudo-potential Gaussian Plane Waves (GPW-DFT) calculations by considering both the Fe4 molecule and 314 gold atoms of the underlying slab. To avoid spurious effect due to the use of pseudopotentials, these spin densities were compared to the ones obtained, at the same level of calculation, on molecules with the same geometry, but removing the gold slab—also referred to as extrapolated geometry. The results, averaged over the eight investigated AIMD trajectories, are reported in Supplementary Table 6 (for the individual values see Supplementary Table 7). They indicate that a small, but sizable reduction of the spin density occurs due to a purely electronic effect of the gold substrate, thereby hinting to a decreased polarization of core s orbitals of iron ions. The expected reduction of hyperfine fields, reported in Table 1, is of the order of 2 T for Fe1 and significantly smaller for the remaining ions. Though a similar investigation considering all-electron basis is not affordable, the observed reduction of hyperfine magnetic fields on the surface can be ascribed to both geometrical and electronic perturbations induced by the grafting process. Synchrotron Mössbauer spectroscopy applied here to a monolayer of molecules has provided unprecedented insights in the process of chemical absorption of a complex molecule on a surface. This has been achieved thanks to the largely increased sensitivity—without significant loss in spectral resolution—that characterizes the SMS as compared with a conventional Mössbauer setup. The resulting scenario is that significant deformations of the magnetic core are induced by the interaction with the substrate, even in the relatively well-protected and rigid Fe4 structure. These deformations emerge clearly from state-of-the-art ab initio molecular dynamics modelling, but are undetectable by conventional synchrotron based magneto-optical techniques, such as XMCD, as well as by more local probes, like scanning probe techniques. In the latter case, in fact, the tip of the microscope mechanically induces deformation, thus altering the molecular structure25. In surface-supported Fe4 complexes, SMM behavior is retained with comparable magnetization dynamics to the bulk phase. This observation can be reconciled with the evidences of surface-induced structural modifications because local distortions at different iron sites are calculated to partially compensate each other. This makes the Fe4 class of molecules particularly well suited for organization on surface and insertion in single molecule devices. 
In this sense, synchrotron Mössbauer spectroscopy could be employed in the investigation of other types of molecules assembled in monolayer architectures in view of the rational design of hybrid interfaces33 and single molecule spintronic devices. The synthesis of [57Fe4(L)2(dpm)6] was based on the previously reported27 preparation of [Fe4(L)2(dpm)6] and is described in detail in Supplementary Methods, while the mass spectra of 57Fe enriched vs. natural samples are reported in Supplementary Fig. 12. For the preparation of the monolayer sample a 20 nm thick polycrystalline gold film evaporated on silicon with a 5 nm Ti decoupling layer (SSENS, Enschede, NL) was immersed in a 2 mM solution of [57Fe4(L)2(dpm)6] in dichloromethane and incubated under controlled atmosphere for 20 h. This type of substrate, selected for its flatness and rigidity, is not compatible with hydrogen flame annealing, used to promote the surface reconstruction of Au(111) and the consequent preferential orientation of the Fe4 molecules in the monolayer27. After incubation the slide was washed with pure dichloromethane in order to remove any physisorbed overlayers, dried under argon atmosphere and directly mounted on the SMS sample holder. No XAS/XMCD data are available for a monolayer assembled on the substrates employed here. However, we recall that Fe4 molecules chemisorbed on well reconstructed Au(111) surfaces or tethered to Au nanoparticles34 both give XAS/XMCD spectra that do not differ appreciably from those recorded in the bulk phase. The dropcast film was prepared starting from the same kind of substrate and a freshly prepared 2 mM solution of [57Fe4(L)2(dpm)6]. Mössbauer experiments Mössbauer spectra were measured at ID18, the nuclear resonance beamline29 of the European Synchrotron radiation facility (ESRF), taking advantage of the Synchrotron Mössbauer Source (SMS)30, 31. For the present study an acceptable compromise between linewidth and acquisition time was achieved by setting the linewidth at half maximum (FWHM) at a value approximately three times larger than for a radioactive source. In these conditions, an intensity of about 1.5 × 104 photons per second was obtained on the 14.4 keV γ-line. The FWHM was evaluated by taking a Mössbauer spectrum of a single-line absorber (K2Mg57Fe(CN)6) of known thickness before and after each Mössbauer measurement of the samples. Moreover, the isomer shift of the SMS line, as measured from the single-line spectra, was of 0.709 mm s−1 with respect to conventional α-Fe. Mössbauer spectra were recorded by collecting the radiation reflected by the sample surface in a grazing incidence geometry. For both samples an incidence angle θ ∼ 0.1° (with a spot size of ca 18 μm in both dimensions) was chosen, after measuring the reflectivity of the samples as a function of θ. Mössbauer spectra were measured at different temperatures in the range 2.2–40 K and 2.2–11 K for the dropcast and the monolayer samples, respectively, using the superconducting He-exchange gas cryo-magnetic system. Mössbauer spectra were recorded without applying any external magnetic field. The contribution of Fe impurities in beryllium collimating lenses of the beamline, as detected in an empty-can Mössbauer spectrum, i.e., with no mounted sample (see Supplementary Fig. 13), was subtracted from the experimental spectra. Fitting method The fitting of Mössbauer spectra required the determination of the absorption cross-section of the samples (further details are provided in Supplementary Note 1). 
In general, the total cross-section is the superposition of a number of contributions equal to the number of inequivalent Fe sites; however, symmetry considerations can reduce the number of distinct contributions and consequently change their relative intensities; for example, for the Fe4 dropcast sample three contributions were used, with relative intensities in the ratio 1:1:2. The hyperfine Hamiltonians acting on the inequivalent iron nuclear sites are characterized by different static components (i.e., magnetic fields, electric quadrupole tensors, and isomer shifts), but share the same dynamical parameters, which reflect the interaction of the molecular total spin states with the thermal bath. When the molecular spin undergoes transitions between its states, the spins at the individual iron ions change their z-components simultaneously; consequently, the iron nuclei are subjected to stochastic changes in the hyperfine field magnitude, though the ratios between the hyperfine fields at the different iron sites remain unchanged. To evaluate the various contributions to the Mössbauer cross-section, Liouville super-operators associated with the spin and the spin-thermal bath Hamiltonians acting on all the possible transitions between the hyperfine states, were introduced (further details are provided in Supplementary Notes 2 and 3). The procedure required the evaluation of the eigenvectors and left and right eigenvalues of a set of non-Hermitian matrices of order 88. Moreover, broadenings of the cross-section contributions were introduced to describe inhomogeneous Gaussian distributions of hyperfine electric parameters. The resulting total cross-section as a function of the energy expressed in mm s−1 (σ(ω)) was then inserted into the transmission integral function in order to take into account the dependence of the Mössbauer spectra on the sample effective thickness. The complete expression used to fit the spectra was $$Y\left( v \right) = N_b(v)\left\{ {1 - {\int}_{\!\!\!\! - \infty }^\infty {L_2^S} (\omega - v,{{\varGamma }}_S)\left[ {1 - {\mathrm{exp}}( - t_a^{\mathrm{SMS}}\sigma (\omega ))} \right]\mathrm{d}\omega } \right\}$$ where v and N b (v) are the transducer velocity and the spectrum baseline, respectively35. Moreover \(L_2^S(\omega - v,{ {\varGamma }}_S)\) is the squared Lorentzian distribution31, centered at v and having Γ S as FWHM, used to describe the source line shape. Finally, \(t_a^{\mathrm{SMS}}\) is the effective thickness of the sample, obtained multiplying the sample effective thickness in a conventional Mössbauer setup by the factor 2/(sinθ), where θ is the grazing angle between the surface of the sample and the direction of the incoming radiation. The correctness of the procedure was checked by evaluating the number of iron ions per cm2 in the dropcast sample. Considering the mean superficial density of the molecule23, a sample thickness of the order of ∼ 100 nm was calculated in agreement with the nominal thickness of the dropcast sample. It is worth noting that, because the sensed Mössbauer thickness is the result of the multiplication by the above-mentioned factor, only synchrotron light at grazing incidence could succeed in detecting Mössbauer spectra not only of the monolayer but also of such a thin dropcast sample. 
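Two ingredients of this procedure, the grazing-incidence amplification entering t_a^SMS and the saturation of the transmission integral with effective thickness, can be illustrated on a schematic single-line absorber. In the sketch below (Python with NumPy) the source has the squared-Lorentzian shape of Eq. (6); all linewidths and thicknesses are illustrative assumptions rather than the fitted values.

```python
import numpy as np

# Grazing-incidence amplification entering t_a^SMS:
theta = np.radians(0.1)
print(f"2/sin(theta) = {2 / np.sin(theta):.0f}")     # ~1146, the ~1100-fold factor

# Schematic transmission integral of Eq. (6) for a single-line toy absorber.
def squared_lorentzian(x, gamma_s):
    f = (gamma_s**2 / (x**2 + gamma_s**2)) ** 2       # SMS source line shape
    return f / (f.sum() * (x[1] - x[0]))              # normalise on the uniform grid

def xsec(omega, gamma_a=0.097):
    return gamma_a**2 / (omega**2 + gamma_a**2)       # unit-peak Lorentzian line

def spectrum(v_grid, t_eff, gamma_s=0.15):
    omega = np.linspace(-6.0, 6.0, 6001)              # mm/s energy grid
    dw = omega[1] - omega[0]
    absorbed = 1.0 - np.exp(-t_eff * xsec(omega))     # saturating absorber response
    return np.array([1.0 - np.sum(squared_lorentzian(omega - u, gamma_s) * absorbed) * dw
                     for u in v_grid])

v = np.linspace(-2.0, 2.0, 161)
for t_eff in (0.2, 2.0, 20.0):
    print(f"t_eff = {t_eff:5.1f}: line depth {1.0 - spectrum(v, t_eff).min():.2f}")
# The line depth saturates with effective thickness, which is why the full
# transmission integral, not a thin-absorber approximation, is needed for t_a^SMS.
```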
A rather good agreement between experimental and best-fit data was found in the whole temperature range for the dropcast sample (χ2 values are comprised between 472 and 3908 for the 512 points); in spite of the worse signal-to-noise ratio, a quite good agreement was found for the monolayer sample too (χ2 values are comprised between 502 and 586 for the 512 points). The ORCA 3.0.3 program package was used to calculate the hyperfine tensors at the 57Fe nuclei. Calculations were made on one of the four crystallographically-independent molecules contained in the unit cell of the optimized structure of Fe4 and on the final geometry of two out of eight AIMD trajectories (walkers 1 and 3) considered in a previous work26. The optimized geometries were obtained for the M S = 5 Broken Symmetry state. We have limited the calculations to only two walkers because of the demanding computation time for such a large number of atoms (270). However, to avoid any loss of generality, we have chosen walkers 1 and 3 because they present very similar exchange parameters, but a different sign of the single-ion D value for Fe1. The gradient-corrected (GGA) exchange-correlation functional PBE36 was used with def2-TZVP basis set for all the elements and the RI approximation for the Coulomb operator integral evaluation employing the def2-TZVP/J auxiliary basis set37. It is known that the use of pure Gaussian atomic basis leads to an inaccurate descriptions of the electron density at the 57Fe nucleus38. For such a reason we used the core-polarized CP(PPP)39, 40 basis function for the iron atoms. Relativistic effects were not included in the computations, since it was shown that they do not improve the quality of the computed Mössbauer parameters40. The integral accuracy parameter was increased to 7.0 at the Fe center in order to provide more accurate core properties. The computed isotropic hyperfine fields were scaled by a factor of 1.81 as suggested for B3LYP in 40. In support to this choice, we have observed that an empiric factor 1.79 (hence very close to the B3LYP one) is needed to minimize the differences between the experimental and computed hyperfine fields for the central iron ion, chosen as reference. The Mulliken population analysis was performed on optimized structures obtained by AIMD (see ref. 26) using molecular optimized basis sets (DZVP-MOLOPT-SR-GTH) along with Goedecker–Teter–Hutter pseudopotentials. The TPSS functional together with Grimme's D3 corrections was used to account for the dispersion forces. The auxiliary plane wave basis set was needed for the representation of the electronic density in the reciprocal space and the efficient solution of Poisson's equation. We truncated the plane wave basis set at 400Ry. All relevant data, including ASCII files of the recorded Mössbauer spectra, are available from the authors on request. Sanvito, S. Molecular spintronics. Chem. Soc. Rev. 40, 3336–3355 (2011). Bogani, L. & Wernsdorfer, W. Molecular spintronics using single-molecule magnets. Nat. Mater. 7, 179–186 (2008). Urdampilleta, M., Klyatskaya, S., Cleuziou, J. P., Ruben, M. & Wernsdorfer, W. Supramolecular spin valves. Nat. Mater. 10, 502–506 (2011). Leuenberger, M. N. & Loss, D. Quantum computing in molecular magnets. Nature 410, 789–793 (2001). Thiele, S. et al. Electrically driven nuclear spin resonance in single-molecule magnets. Science 344, 1135–1138 (2014). Shiddiq, M. et al. Enhancing coherence in molecular spin qubits via atomic clock transitions. Nature 531, 348–351 (2016). 
Gatteschi, D., Sessoli, R. & Villain, J. Molecular Nanomagnets. (Oxford University Press, Oxford, 2006). Cornia, A. & Mannini, M. Molecular Nanomagnets and Related Phenomena (Springer, Berlin Heidelberg, 2014). Dreiser, J. Molecular lanthanide single-ion magnets: from bulk to submonolayers. J. Phys. Condens. Matter 27, 183203 (2015). Holmberg, R. J. & Murugesu, M. Adhering magnetic molecules to surfaces. J. Mater. Chem. C 3, 11986–11998 (2015). Mannini, M. et al. Magnetic memory of a single-molecule quantum magnet wired to a gold surface. Nat. Mater. 8, 194–197 (2009). Mannini, M. et al. Magnetic behaviour of TbPc2 single-molecule magnets chemically grafted on silicon surface. Nat. Commun. 5, 4582 (2014). Wäckerlin, C. et al. Giant hysteresis of single-molecule magnets adsorbed on a nonmagnetic insulator. Adv. Mater. 28, 5195–5199 (2016). Natterer, F. D. et al. Reading and writing single-atom magnets. Nature 543, 226–228 (2017). van der Laan, G. & Figueroa, A. I. X-ray magnetic circular dichroism-A versatile tool to study magnetism. Coord. Chem. Rev. 277, 95–129 (2014). Caneschi, A., Gatteschi, D. & Totti, F. Molecular magnets and surfaces: a promising marriage. A DFT insight. Coord. Chem. Rev. 289–290, 357–378 (2015). Caneschi, A. et al. Study of the spin dynamics in an iron cluster nanomagnet by means of Mössbauer spectroscopy. J. Phys. Condens. Matter 11, 3395–3403 (1999). Cianchi, L., Del Giallo, F., Spina, G., Reiff, W. & Caneschi, A. Spin dynamics study of magnetic molecular clusters by means of Mössbauer spectroscopy. Phys. Rev. B 65, 64415 (2002). Gutlich, P., Eckhard, B. & Trautwein, A. X. Mössbauer spectroscopy and transition metal chemistry: fundamentals and applications. (Springer-Verlag, Berlin Heidelberg, 2011). Rohlsberger, R. et al. Nanoscale magnetism probed by nuclear resonant scattering of synchrotron radiation. Phys. Rev. B 67, 245412 (2003). Rohlsberger, R., Klein, T., Schlage, K., Leupold, O. & Ruffer, R. Coherent X-ray scattering from ultrathin probe layers. Phys. Rev. B 69, 235412 (2004). Slezak, M. et al. Perpendicular magnetic anisotropy and noncollinear magnetic structure in ultrathin Fe films on W(110). Phys. Rev. B 87, 134411 (2013). Malavolti, L. et al. Magnetic bistability in a submonolayer of sublimated Fe4 single-molecule magnets. Nano Lett. 15, 535–541 (2015). Zyazin, A. S. S. et al. Electric field controlled magnetic anisotropy in a single molecule. Nano Lett. 10, 3307–3311 (2010). Burgess, J. A. J. et al. Magnetic fingerprint of individual Fe4 molecular magnets under compression by a scanning tunnelling microscope. Nat. Commun. 6, 8216 (2015). Lunghi, A., Iannuzzi, M., Sessoli, R. & Totti, F. Single molecule magnets grafted on gold: magnetic properties from ab initio molecular dynamics. J. Mater. Chem. C 3, 7294–7304 (2015). Mannini, M. et al. Quantum tunnelling of the magnetization in a monolayer of oriented single-molecule magnets. Nature 468, 417–421 (2010). Totaro, P. et al. Tetrairon(III) single-molecule magnet monolayers on gold: insights from ToF-SIMS and Isotopic Labeling. Langmuir 30, 8645–8649 (2014). Rüffer, R. & Chumakov, A. I. Nuclear resonance beamline at ESRF. Hyperfine Interact. 97–98, 589–604 (1996). Smirnov, G. V., van Bürck, U., Chumakov, A. I., Baron, A. Q. R. & Rüffer, R. Synchrotron Mössbauer source. Phys. Rev. B 55, 5811–5815 (1997). Potapkin, V. et al. The 57Fe synchrotron Mössbauer source at the ESRF. J. Synchrotron Radiat. 19, 559–569 (2012). Greenwood, N. N. & Gibb, T. C. Mössbauer spectroscopy. 
(Springer Netherlands, Dordrecht, 1971). Cinchetti, M., Dediu, V. A. & Hueso, L. E. Activating the molecular spinterface. Nat. Mater. 16, 507–515 (2017). Perfetti, M. et al. Grafting single molecule magnets on gold nanoparticles. Small 10, 323–329 (2014). Chen, L. & De-Ping, Y. Mössbauer effect in lattice dynamics: experimental techniques and applications. (Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim, 2007). Perdew, J. P., Burke, K. & Ernzerhof, M. Generalized gradient approximation made simple. Phys. Rev. Lett. 77, 3865–3868 (1996). Weigend, F. Accurate Coulomb-fitting basis sets for H to Rn. Phys. Chem. Chem. Phys. 8, 1057–1065 (2006). Römelt, M., Ye, S. & Neese, F. Calibration of modern density functional theory methods for the prediction of 57Fe Mössbauer isomer shifts: meta-GGA and double-hybrid functionals. Inorg. Chem. 48, 784–785 (2009). Neese, F. Prediction and interpretation of the 57Fe isomer shift in Mössbauer spectra by density functional theory. Inorg. Chim. Acta 337, 181–192 (2002). Sinnecker, S., Slep, L. D., Bill, E. & Neese, F. Performance of nonrelativistic and quasi-relativistic hybrid DFT for the prediction of electric and magnetic hyperfine parameters in 57Fe Mössbauer spectra. Inorg. Chem. 44, 2245–2254 (2005).
This work was in part supported by the European Research Council through the Advanced Grant MolNanoMaS (no. 267746) and by European Union's Horizon2020 Research and Innovation Programme under grant agreement No. 737709 (FEMTOTERABYTE, http://www.physics.gu.se/femtoterabyte). MF is grateful for financial support by ECRF (no. 2014.0708). We thank Dr. Alessandro Lunghi for fruitful discussion.
Department of Physics and Astronomy and INSTM Research Unit, University of Florence, 50019, Sesto Fiorentino, Italy: Alberto Cini, Maria Fittipaldi & Gabriele Spina
Department of Chemistry 'Ugo Schiff' and INSTM Research Unit, University of Florence, 50019, Sesto Fiorentino, Italy: Matteo Mannini, Federico Totti & Roberta Sessoli
ESRF-The European Synchrotron, CS40220, 38043, Grenoble Cedex 9, France: Aleksandr Chumakov & Rudolf Rüffer
Department of Chemical and Geological Sciences and INSTM Research Unit, University of Modena and Reggio Emilia, 41125, Modena, Italy: Andrea Cornia
A.Co., M.M., and R.S. planned the investigation. A.Co. synthesized the material while molecular films were prepared by M.M. A.Ci., A.Co., and M.M. performed the experiments with the assistance of A.Ch. and R.R. Mössbauer data were analysed by A.Ci., M.F., and G.S. with the support of A.Co. and R.S.; F.T. performed the ab initio studies. All authors contributed to the discussion and to the drafting of the manuscript.
Correspondence to Gabriele Spina, Andrea Cornia or Roberta Sessoli.
The authors declare no competing financial interests.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cini, A., Mannini, M., Totti, F. et al. Mössbauer spectroscopy of a monolayer of single molecule magnets. Nat Commun 9, 480 (2018).
https://doi.org/10.1038/s41467-018-02840-w
Measurement of the Crab Nebula spectrum over three decades in energy with the MAGIC telescopes (1406.6892) MAGIC Collaboration: J. Aleksić, L. A. Antonelli, J. Becerra González, B. Biasuzzi, F. Borracci, E. Colombo, P. Da Vela, B. De Lotto, D. Dominis Prester, D. Elsaesser, K. Frantzen, M. Garczarczyk, A. González Muñoz, Y. Hanabata, J. Hose, M. L. Knoetig, J. Kushida, E. Lindfors, R. López-Coto, M. Makariev, K. Mannheim, M. Martínez, R. Mirzoyan, V. Neustroev, R. Orito, D. Paneque, M. Persic, I. Puljak, J. Rodriguez Garcia, K. Satalecka, T. Schweizer, I. Snidaric, T. Steinbring, H. Takami, M. Teshima, T. Toyama (for the MAGIC collaboration), D. Horns IFAE, Campus UAB, E-08193 Bellaterra, Spain, Università di Udine, INFN Trieste, I-33100 Udine, Italy, INAF National Institute for Astrophysics, I-00136 Rome, Italy, Università di Siena, INFN Pisa, I-53100 Siena, Italy, Croatian MAGIC Consortium, Rudjer Boskovic Institute, University of Rijeka, University of Split, HR-10000 Zagreb, Croatia, Max-Planck-Institut für Physik, D-80805 München, Germany, Inst. de Astrofísica de Canarias, E-38200 La Laguna, Tenerife, Spain, University of Łódź, PL-90236 Lodz, Poland, Deutsches Elektronen-Synchrotron ETH Zurich, CH-8093 Zurich, Switzerland, Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas, E-28040 Madrid, Spain, Institute of Space Sciences, E-08193 Barcelona, Spain, Technische Universität Dortmund, D-44221 Dortmund, Germany, Unitat de Física de les Radiacions, Departament de Física, CERES-IEEC, Universitat Autònoma de Barcelona, E-08193 Bellaterra, Spain, Universitat de Barcelona, ICC, IEEC-UB, E-08028 Barcelona, Spain, Japanese MAGIC Consortium, Division of Physics, Astronomy, Kyoto University, Japan, Finnish MAGIC Consortium, Tuorla Observatory, University of Turku, Department of Physics, University of Oulu, Finland, Università di Pisa, INFN Pisa, I-56126 Pisa, Italy, ICREA, Institute of Space Sciences, E-08193 Barcelona, Spain, Università dell'Insubria, INFN Milano Bicocca, Como, I-22100 Como, Italy, now at: NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA, Department of Physics, Department of Astronomy, University of Maryland, College Park, MD 20742, USA, now at Ecole polytechnique fédérale de Lausanne Now at Institut für Astro- und Teilchenphysik, Leopold-Franzens- Universität Innsbruck, A-6020 Innsbruck, Austria, , Turku, Finland, now at Astrophysics Science Division, Bhabha Atomic Research Centre, Mumbai 400085, India, Institut für ExperimentalPhysik, Univ. Hamburg, D-22761, Hamburg, Germany, Stockholm University, Oskar Klein Center for Cosmoparticle Physics, SE-106 91, Stockholm, Sweden) July 22, 2015 astro-ph.HE The MAGIC stereoscopic system collected 69 hours of Crab Nebula data between October 2009 and April 2011. Analysis of this data sample using the latest improvements in the MAGIC stereoscopic software provided an unprecedented precision of spectral and night-by-night light curve determination at gamma rays. We derived a differential spectrum with a single instrument from 50 GeV up to almost 30 TeV with 5 bins per energy decade. At low energies, MAGIC results, combined with Fermi-LAT data, show a flat and broad Inverse Compton peak. The overall fit to the data between 1 GeV and 30 TeV is not well described by a log-parabola function. We find that a modified log-parabola function with an exponent of 2.5 instead of 2 provides a good description of the data ($\chi^2=35/26$). 
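For orientation, the position of the Inverse Compton peak quoted next follows from an elementary property of a log-parabola: in x = log10(E/E0), the SED log10(E^2 dN/dE) is quadratic in x and peaks at x = (2 - alpha)/(2 beta). A minimal sketch (Python with NumPy; the spectral parameters are illustrative placeholders, not the MAGIC best-fit values):

```python
import numpy as np

# Log-parabola dN/dE = f0 * (E/E0)^(-alpha - beta*log10(E/E0)).
# In x = log10(E/E0) the SED, log10(E^2 dN/dE), is quadratic in x and peaks at
# x_peak = (2 - alpha) / (2 * beta)  (for beta > 0).
def sed_peak_energy(alpha, beta, E0_TeV=1.0):
    return E0_TeV * 10 ** ((2.0 - alpha) / (2.0 * beta))

alpha, beta = 2.47, 0.24            # illustrative placeholder values only
print(f"SED peak at ~{1e3 * sed_peak_energy(alpha, beta):.0f} GeV")

# Sanity check on a numerical grid:
x = np.linspace(-3.0, 2.0, 5001)    # log10(E / 1 TeV)
sed = (2.0 - alpha) * x - beta * x**2          # log10(E^2 dN/dE) up to a constant
print(f"grid maximum at log10(E/TeV) = {x[np.argmax(sed)]:.3f}, "
      f"analytic {(2.0 - alpha) / (2.0 * beta):.3f}")
```

The measured peak of 53 GeV reported below instead comes from the joint MAGIC and Fermi-LAT fit with the modified log-parabola described above.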
Using systematic uncertainties of the MAGIC and Fermi-LAT measurements, we determine the position of the Inverse Compton peak to be at $(53 \pm 3_{\mathrm{stat}}{}^{+31}_{-13}{}_{\mathrm{syst}})$ GeV, which is the most precise estimate to date and is dominated by systematic effects. There is no hint of integral flux variability on daily scales at energies above 300 GeV when systematic uncertainties are included in the flux measurement. We consider three state-of-the-art theoretical models to describe the overall spectral energy distribution of the Crab Nebula. The constant B-field model cannot satisfactorily reproduce the VHE spectral measurements presented in this work, having particular difficulty reproducing the broadness of the observed IC peak. Most probably this implies that the assumption of the homogeneity of the magnetic field inside the nebula is incorrect. On the other hand, the time-dependent 1D spectral model provides a good fit to the new VHE results when considering an 80 $\mu$G magnetic field. However, it fails to match the data when including the morphology of the nebula at lower wavelengths.
The major upgrade of the MAGIC telescopes, Part I: The hardware improvements and the commissioning of the system (1409.6073)
MAGIC Collaboration: J. Aleksic, L. A. Antonelli, J. A. Barrio, B. Biasuzzi, S. Bonnefoy, A. Carosi, D. Corti, A. De Angelis, C. Delgado Mendez, M. Doro, D. Fidalgo, C. Fruck, D. Garrido Terrats, A. Gonzalez Munoz, Y. Hanabata, D. Hrupec, M. L. Knoetig, J. Kushida, N. Lewandowska, M. Lopez, I. Lozano, N. Mankuzhiyil, M. Martinez, R. Mirzoyan, M. Negrello, K. Nishijima, M. Palatiello, X. Paredes-Fortuny, P. G. Prada Moroni, M. Ribo, K. Saito, J. Schlammer, J. Sitarek, A. Stamerra, H. Takami, D. Tescaro, T. Toyama, R. Zanin (Universita di Udine, INFN Trieste, I-33100 Udine, Italy INAF National Institute for Astrophysics, I-00136 Rome, Italy Universita di Siena, INFN Pisa, I-53100 Siena, Italy Croatian MAGIC Consortium, Rudjer Boskovic Institute, University of Rijeka, University of Split, HR-10000 Zagreb, Croatia Universidad Complutense, E-28040 Madrid, Spain Inst. de Astrofisica de Canarias, E-38200 La Laguna, Tenerife, Spain University of Lodz, PL-90236 Lodz, Poland, Deutsches Elektronen-Synchrotron, D-15738 Zeuthen, Germany Universitat Wurzburg, D-97074 Wurzburg, Germany Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas, E-28040 Madrid, Spain Institute of Space Sciences, E-08193 Barcelona, Spain Universita di Padova, INFN, I-35131 Padova, Italy Technische Universitat Dortmund, D-44221 Dortmund, Germany Unitat de Fisica de les Radiacions, Departament de Fisica, CERES-IEEC, Universitat Autonoma de Barcelona, E-08193 Bellaterra, Spain Universitat de Barcelona, ICC, IEEC-UB, E-08028 Barcelona, Spain Japanese MAGIC Consortium, Division of Physics, Astronomy, Kyoto University, Japan Finnish MAGIC Consortium, Tuorla Observatory, University of Turku, Department of Physics, University of Oulu, Finland Inst. for Nucl. Research, Nucl. Energy, BG-1784 Sofia, Bulgaria Universita di Pisa, INFN Pisa, I-56126 Pisa, Italy ICREA, Institute of Space Sciences, E-08193 Barcelona, Spain Universita dell Insubria, INFN Milano Bicocca, Como, I-22100 Como, Italy European Gravitational Observatory, I-56021 S.
Stefano a Macerata, Italy Universita di Siena, INFN Siena, I-53100 Siena, Italy now at: NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA, Department of Physics, Department of Astronomy, University of Maryland, College Park, MD 20742, USA now at Ecole polytechnique federale de Lausanne Now at Institut fur Astro- und Teilchenphysik, Leopold-Franzens- Universitat Innsbruck, A-6020 Innsbruck, Austria now at Finnish Centre for Astronomy with ESO now at Astrophysics Science Division, Bhabha Atomic Research Centre, Mumbai 400085, India also at INAF-Trieste) April 30, 2015 astro-ph.IM The MAGIC telescopes are two Imaging Atmospheric Cherenkov Telescopes (IACTs) located on the Canary island of La Palma. The telescopes are designed to measure Cherenkov light from air showers initiated by gamma rays in the energy regime from around 50 GeV to more than 50 TeV. The two telescopes were built in 2004 and 2009, respectively, with different cameras, triggers and readout systems. In the years 2011-2012 the MAGIC collaboration undertook a major upgrade to make the stereoscopic system uniform, improving its overall performance and easing its maintenance. In particular, the camera, the receivers and the trigger of the first telescope were replaced and the readout of the two telescopes was upgraded. This paper (Part I) describes the details of the upgrade as well as the basic performance parameters of MAGIC such as raw data treatment, dead time of the system, linearity in the electronic chain and sources of noise. In Part II, we describe the physics performance of the upgraded system. MAGIC detection of short-term variability of the high-peaked BL Lac object 1ES 0806+524 (1504.06115) MAGIC Collaboration: J. Aleksić, L. A. Antonelli, J. Becerra González, B. Biasuzzi, F. Borracci, P. Colin, P. Da Vela, B. De Lotto, D. Dominis Prester, D. Eisenacher, D. Fidalgo, D. Galindo, D. Garrido Terrats, S. R. Gozzini, J. Herrera, H. Kellermann, J. Krause, N. Lewandowska, M. López, I. Lozano, K. Mannheim, D. Mazin, A. Moralejo, A. Niedzwiecki, K. Nishijima, M. Palatiello, X. Paredes-Fortuny, P. G. Prada Moroni, M. Ribó, K. Saito, T. Schweizer, I. Snidaric, M. Strzys, P. Temnikov, D. F. Torres, R. Zanin, D. Gasparrini, J. Richards Università di Udine, INFN Trieste, I-33100 Udine, Italy, INAF National Institute for Astrophysics, I-00136 Rome, Italy, Università di Siena, INFN Pisa, I-53100 Siena, Italy, Croatian MAGIC Consortium, Rudjer Boskovic Institute, University of Rijeka, University of Split, HR-10000 Zagreb, Croatia, Max-Planck-Institut für Physik, D-80805 München, Germany, Inst. de Astrofísica de Canarias, E-38200 La Laguna, Tenerife, Spain, University of Łódź, PL-90236 Lodz, Poland, Deutsches Elektronen-Synchrotron ETH Zurich, CH-8093 Zurich, Switzerland, Universität Würzburg, D-97074 Würzburg, Germany, Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas, E-28040 Madrid, Spain, Institute of Space Sciences, E-08193 Barcelona, Spain, Università di Padova, INFN, I-35131 Padova, Italy, Technische Universität Dortmund, D-44221 Dortmund, Germany, Unitat de Física de les Radiacions, Departament de Física, CERES-IEEC, Universitat, Universitat de Barcelona, ICC, IEEC-UB, E-08028 Barcelona, Spain, Japanese MAGIC Consortium, KEK, Department of Physics, Hakubi Center, Kyoto University, Tokai University, The University of Tokushima, ICRR, The University of Tokyo, Japan, Finnish MAGIC Consortium, Tuorla Observatory, University of Turku, Department of Physics, University of Oulu, Finland, Inst. for Nucl. 
Research, Nucl. Energy, BG-1784 Sofia, Bulgaria, Università di Pisa, INFN Pisa, I-56126 Pisa, Italy, ICREA, Institute of Space Sciences, E-08193 Barcelona, Spain, Università dell'Insubria, INFN Milano Bicocca, Como, I-22100 Como, Italy, now at NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA, Department of Physics, Department of Astronomy, University of Maryland, College Park, MD 20742, USA, , Lausanne, Switzerland, now at Institut für Astro- und Teilchenphysik, Leopold-Franzens- Universität Innsbruck, A-6020 Innsbruck, Austria, now at Finnish Centre for Astronomy with ESO also at INAF-Trieste, also at ISDC - Science Data Center for Astrophysics, 1290, Versoix Department of Physics, Astronomy, the Bartol Research Institute, University of Delaware, Newark, DE 19716, USA, INAF-IRA Bologna, Via Gobetti 101, I-40129, Bologna, Italy, Science Data Center, I-00044 Frascati Istituto Nazionale di Astrofisica - Osservatorio Astronomico di Roma, I-00040 Monte Porzio Catone Aalto University Metsähovi Radio Observatory, Metsähovintie 114, FI-02540 Kylmälä, Finland, Cahill Center for Astronomy, Astrophysics, California Institute of Technology, Pasadena CA, 91125, USA, National Radio Astronomy Observatory, P.O. Box 0, Socorro, NM 87801, USA, Department of Physics, Purdue University, 525 Northwestern Ave, West Lafayette, IN 47907, USA) April 27, 2015 astro-ph.GA, astro-ph.HE The high-frequency-peaked BL Lac (HBL) 1ES 0806+524 (z = 0.138) was discovered in VHE $\gamma$ rays in 2008. Until now, the broad-band spectrum of 1ES 0806+524 has been only poorly characterized, in particular at high energies. We analysed multiwavelength observations from $\gamma$ rays to radio performed from 2011 January to March, which were triggered by the high activity detected at optical frequencies. These observations constitute the most precise determination of the broad-band emission of 1ES 0806+524 to date. The stereoscopic MAGIC observations yielded a $\gamma$-ray signal above 250 GeV of $(3.7 \pm 0.7)$ per cent of the Crab Nebula flux with a statistical significance of 9.9 $\sigma$. The multiwavelength observations showed significant variability in essentially all energy bands, including a VHE $\gamma$-ray flare that lasted less than one night, which provided unprecedented evidence for short-term variability in 1ES 0806+524. The spectrum of this flare is well described by a power law with a photon index of $2.97 \pm 0.29$ between $\sim$150 GeV and 1 TeV and an integral flux of $(9.3 \pm 1.9)$ per cent of the Crab Nebula flux above 250 GeV. The spectrum during the non-flaring VHE activity is compatible with the only available VHE observation performed in 2008 with VERITAS when the source was in a low optical state. The broad-band spectral energy distribution can be described with a one-zone Synchrotron Self Compton model with parameters typical for HBLs, indicating that 1ES 0806+524 is not substantially different from the HBLs previously detected. Probing the very-high-energy gamma-ray spectral curvature in the blazar PG 1553+113 with the MAGIC telescopes (1408.1975) MAGIC Collaboration: J. Aleksić, L. A. Antonelli, J. Becerra González, B. Biasuzzi, F. Borracci, E. Colombo, P. Da Vela, B. De Lotto, D. Dominis Prester, D. Elsaesser, K. Frantzen, M. Garczarczyk, A. González Muñoz, Y. Hanabata, W. Idec, K. Kodani, A. La Barbera, S. Lombardi, A. López-Oramas, K. Mallot, M. Mariotti, J. M. Miranda, D. Nakajima, K. Nishijima, M. Palatiello, X. Paredes-Fortuny, P. G. Prada Moroni, M. Ribó, T. Saito, V. Scapin, J. 
Sitarek, T. Steinbring, H. Takami, M. Teshima, T. Toyama, R. Zanin (for the MAGIC Collaboration), F. D'Ammando, S. Buson (for the Fermi-LAT Collaboration), A. Lähteenmäki, T. Hovatta (Università di Udine, INFN Trieste, I-33100 Udine, Italy, INAF National Institute for Astrophysics, I-00136 Rome, Italy, Università di Siena, INFN Pisa, I-53100 Siena, Italy, Croatian MAGIC Consortium, Rudjer Boskovic Institute, University of Rijeka, University of Split, HR-10000 Zagreb, Croatia, Max-Planck-Institut für Physik, D-80805 München, Germany, Inst. de Astrofísica de Canarias, E-38200 La Laguna, Tenerife, Spain, University of Łódź, PL-90236 Lodz, Poland, Deutsches Elektronen-Synchrotron, D-15738 Zeuthen, Germany, ETH Zurich, CH-8093 Zurich, Switzerland, Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas, E-28040 Madrid, Spain, Institute of Space Sciences, E-08193 Barcelona, Spain, Technische Universität Dortmund, D-44221 Dortmund, Germany, Unitat de Física de les Radiacions, Departament de Física, CERES-IEEC, Universitat Autònoma de Barcelona, E-08193 Bellaterra, Spain, Universitat de Barcelona, ICC, IEEC-UB, E-08028 Barcelona, Spain, Japanese MAGIC Consortium, Department of Physics, Kyoto University, Japan, Finnish MAGIC Consortium, Tuorla Observatory, University of Turku, Department of Physics, University of Oulu, Finland, Inst. for Nucl. Research, Nucl. Energy, BG-1784 Sofia, Bulgaria, Università di Pisa, INFN Pisa, I-56126 Pisa, Italy, ICREA, Institute of Space Sciences, E-08193 Barcelona, Spain, Università dell'Insubria, INFN Milano Bicocca, Como, I-22100 Como, Italy, now at NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA, Department of Physics, Department of Astronomy, University of Maryland, College Park, MD 20742, USA, now at Ecole polytechnique fédérale de Lausanne, Lausanne, Switzerland, now at Institut für Astro- und Teilchenphysik, Leopold-Franzens-Universität Innsbruck, A-6020 Innsbruck, Austria, now at Finnish Centre for Astronomy with ESO, also at INAF-Trieste, also at ISDC - Science Data Center for Astrophysics, 1290, Versoix, INAF-IRA, I-40129 Bologna, Italy, Aalto University Metsähovi Radio Observatory, Metsähovintie 114, 02540, Kylmälä, Finland, Aalto University Department of Radio Science, Engineering, Espoo, Finland, Cahill Center for Astronomy & Astrophysics, Caltech, 1200 E. California Blvd, Pasadena, CA, 91125, U.S.A., Department of Physics, Purdue University, 525 Northwestern Ave, West Lafayette, IN 47907, USA, * Corresponding authors)
April 13, 2015 astro-ph.HE
PG 1553+113 is a very-high-energy (VHE, $E>100\,\mathrm{GeV}$) $\gamma$-ray emitter classified as a BL Lac object. Its redshift is constrained by intergalactic absorption lines in the range $0.4<z<0.58$. The MAGIC telescopes have monitored the source's activity since 2005. In early 2012, PG 1553+113 was found in a high state, and later, in April of the same year, the source reached its highest VHE flux state detected so far. Simultaneous observations carried out in X-rays during 2012 April show similar flaring behaviour. In contrast, the $\gamma$-ray flux at $E<100\,\mathrm{GeV}$ observed by Fermi-LAT is compatible with steady emission. In this paper, a detailed study of the flaring state is presented. The VHE spectrum shows clear curvature, being well fitted either by a power law with an exponential cut-off or by a log-parabola. The hypothesis of a simple power-law fit to the observed shape of the PG 1553+113 VHE $\gamma$-ray spectrum is rejected with high significance (fit probability $P = 2.6 \times 10^{-6}$).
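A fit probability of this kind is the chi-square survival function evaluated at the best-fit statistic. A minimal sketch follows; the chi-square and degrees-of-freedom values below are illustrative, not the published fit.

from scipy.stats import chi2

def fit_probability(chi2_value, ndof):
    # Probability of obtaining a chi-square at least this large if the model were correct.
    return chi2.sf(chi2_value, ndof)

# Illustrative numbers only: a statistic of ~55 with 14 degrees of freedom yields
# a probability of order 1e-6, i.e. the fitted model is strongly disfavored.
print(fit_probability(55.0, 14))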
The observed curvature is compatible with the extragalactic background light (EBL) imprint predicted by current generation EBL models assuming a redshift $z\sim0.4$. New constraints on the redshift are derived from the VHE spectrum. These constraints are compatible with previous limits and suggest that the source is most likely located around the optical lower limit, $z=0.4$, based on the detection of Ly$\alpha$ absorption. Finally, we find that the synchrotron self-Compton (SSC) model gives a satisfactory description of the observed multi-wavelength spectral energy distribution during the flare. The major upgrade of the MAGIC telescopes, Part II: A performance study using observations of the Crab Nebula (1409.5594) MAGIC Collaboration: J. Aleksic, L. A. Antonelli, J. A. Barrio, B. Biasuzzi, S. Bonnefoy, A. Carosi, D. Corti, A. De Angelis, C. Delgado Mendez, M. Doro, D. Fidalgo, C. Fruck, D. Garrido Terrats, A. Gonzalez Munoz, Y. Hanabata, D. Hrupec, M. L. Knoetig, J. Kushida, N. Lewandowska, M. Lopez, I. Lozano, N. Mankuzhiyil, M. Martinez, R. Mirzoyan, V. Neustroev, K. Noda, M. Palatiello, X. Paredes-Fortuny, E. Prandini, J. Rodriguez Garcia, K. Satalecka, J. Schlammer, J. Sitarek, A. Stamerra, H. Takami, T. Terzic, D. F. Torres, H. Wetteskind IFAE, Campus UAB, E-08193 Bellaterra, Spain INAF National Institute for Astrophysics, I-00136 Rome, Italy Universita di Siena, INFN Pisa, I-53100 Siena, Italy Croatian MAGIC Consortium, Rudjer Boskovic Institute, University of Rijeka, University of Split, HR-10000 Zagreb, Croatia Max-Planck-Institut fur Physik, D-80805 Munchen, Germany Inst. de Astrofisica de Canarias, E-38200 La Laguna, Tenerife, Spain University of Lodz, PL-90236 Lodz, Poland, D-15738 Zeuthen, Germany Universitat Wurzburg, D-97074 Wurzburg, Germany Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas, E-28040 Madrid, Spain Institute of Space Sciences, E-08193 Barcelona, Spain Universita di Padova, INFN, I-35131 Padova, Italy Technische Universitat Dortmund, D-44221 Dortmund, Germany Unitat de Fisica de les Radiacions, Departament de Fisica, CERES-IEEC, Universitat Autonoma de Barcelona, E-08193 Bellaterra, Spain Universitat de Barcelona, ICC, IEEC-UB, E-08028 Barcelona, Spain Japanese MAGIC Consortium, Division of Physics, Astronomy, Kyoto University, Japan Finnish MAGIC Consortium, Tuorla Observatory, University of Turku, Department of Physics, University of Oulu, Finland Inst. for Nucl. Research, Nucl. Energy, BG-1784 Sofia, Bulgaria ICREA, Institute of Space Sciences, E-08193 Barcelona, Spain Universita dell Insubria, INFN Milano Bicocca, Como, I-22100 Como, Italy European Gravitational Observatory, I-56021 S. Stefano a Macerata, Italy Universita di Siena, INFN Siena, I-53100 Siena, Italy now at: NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA, Department of Physics, Department of Astronomy, University of Maryland, College Park, MD 20742, USA, Lausanne, Switzerland Now at Institut fur Astro- und Teilchenphysik, Leopold-Franzens- Universitat Innsbruck, A-6020 Innsbruck, Austria deceased, Turku, Finland now at Astrophysics Science Division, Bhabha Atomic Research Centre, Mumbai 400085, India Feb. 20, 2015 astro-ph.IM MAGIC is a system of two Imaging Atmospheric Cherenkov Telescopes located in the Canary island of La Palma, Spain. During summer 2011 and 2012 it underwent a series of upgrades, involving the exchange of the MAGIC-I camera and its trigger system, as well as the upgrade of the readout system of both telescopes. 
We use observations of the Crab Nebula taken at low and medium zenith angles to assess the key performance parameters of the MAGIC stereo system. For low zenith angle observations, the standard trigger threshold of the MAGIC telescopes is ~50 GeV. The integral sensitivity for point-like sources with a Crab Nebula-like spectrum above 220 GeV is (0.66 ± 0.03)% of the Crab Nebula flux in 50 h of observations. The angular resolution, defined as the sigma of a two-dimensional Gaussian distribution, is < 0.07 degree at those energies, while the energy resolution is 16%. We also re-evaluate the effect of the systematic uncertainty on the data taken with the MAGIC telescopes after the upgrade. We estimate that the systematic uncertainties can be divided into the following components: < 15% in energy scale, 11-18% in flux normalization and ±0.15 in the energy spectrum power-law slope.
The 2009 multiwavelength campaign on Mrk 421: Variability and correlation studies (1502.02650)
MAGIC Collaboration: J. Aleksić, L. A. Antonelli, J. A. Barrio, E. Bernardini, S. Bonnefoy, E. Carmona, J. L. Contreras, F. Dazzi, C. Delgado Mendez, D. Dorner, D. Elsaesser, K. Frantzen, D. Garrido Terrats, A. González Muñoz, D. Hadasch, W. Idec, J. Kushida, E. Lindfors, R. López-Coto, M. Makariev, K. Mannheim, M. Martínez, J. M. Miranda, D. Nakajima, R. Orito, D. Paneque, S. Partini, E. Prandini, W. Rhode, S. Rügamer, V. Scalzotto, S. N. Shore, D. Sobczynska, J. Storz, P. Temnikov, J. Thaele, M. Uellenbeck, R. Zanin, M. Beilicke, V. Bugaev, E. Collins-Hughes, S. Federici, P. Fortin, S. T. Griffiths, G. Hughes, M. Kertzman, F. Krennrich, S. McArthur, R. A. Ong, A. Popkow, P. T. Reynolds, G. H. Sembroski, I. Telezhinsky, M. Theiling, S. P. Wakely, A. Wilhelm, External collaborators: M. Villata, M. F. Aller, O. M. Kurtanidze, V. M. Larionov, E. Angelakis, P. Cassaro, T. P. Krichbaum, J. W. Moody, C. Pace, J. L. Richards, M. Tornikoski (IFAE, Edifici Cn., Campus UAB, E-08193 Bellaterra, Spain, Università di Udine, INFN Trieste, I-33100 Udine, Italy, INAF National Institute for Astrophysics, I-00136 Rome, Italy, Università di Siena, INFN Pisa, I-53100 Siena, Italy, Croatian MAGIC Consortium, Rudjer Boskovic Institute, University of Rijeka, University of Split, HR-10000 Zagreb, Croatia, Max-Planck-Institut für Physik, D-80805 München, Germany, Universidad Complutense, E-28040 Madrid, Spain, Inst. de Astrofísica de Canarias, E-38200 La Laguna, Tenerife, Spain, University of Lodz, PL-90236 Lodz, Poland, Deutsches Elektronen-Synchrotron, D-15738 Zeuthen, Germany, Universität Würzburg, D-97074 Würzburg, Germany, Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas, E-28040 Madrid, Spain, Technische Universität Dortmund, D-44221 Dortmund, Germany, Inst. de Astrofísica de Andalucía, E-18080 Granada, Spain, Università di Padova, INFN, I-35131 Padova, Italy, Università dell'Insubria, Como, I-22100 Como, Italy, Unitat de Física de les Radiacions, Departament de Física, CERES-IEEC, Universitat Autònoma de Barcelona, E-08193 Bellaterra, Spain, Institut de Ciències de l'Espai, E-08193 Bellaterra, Spain, Finnish MAGIC Consortium, Tuorla Observatory, University of Turku, Department of Physics, University of Oulu, Finland, Japanese MAGIC Consortium, Division of Physics, Astronomy, Kyoto University, Japan, Inst. for Nucl. Research, Nucl.
Energy, BG-1784 Sofia, Bulgaria, Universitat de Barcelona Università di Pisa, , INFN Pisa, I-56126 Pisa, Italy, now at Ecole polytechnique fédérale de Lausanne now at Department of Physics & Astronomy, UC Riverside, CA 92521, USA, now at Finnish Centre for Astronomy with ESO also at INAF-Trieste, also at Instituto de Fisica Teorica, UAM/CSIC, E-28049 Madrid, Spain, now at Stockholm University, Fysikum, Oskar Klein Centre, AlbaNova, SE-106 91 Stockholm, Sweden, now at GRAPPA Institute, University of Amsterdam, 1098XH Amsterdam, Netherlands, now at Stockholm University, Department of Astronomy, Oskar Klein Centre, AlbaNova, SE-106 91 Stockholm, Sweden, Physics Department, McGill University, Montreal, QC H3A 2T8, Canada, Department of Physics, Washington University, St. Louis, MO 63130, USA, Fred Lawrence Whipple Observatory, Harvard-Smithsonian Center for Astrophysics, Amado, AZ 85645, USA, School of Physics, University College Dublin, Belfield, Dublin 4, Ireland, Institute of Physics, Astronomy, University of Potsdam, 14476 Potsdam-Golm, Germany, Astronomy Department, Adler Planetarium, Astronomy Museum, Chicago, IL 60605, USA, Department of Physics, Astronomy, Purdue University, West Lafayette, IN 47907, USA, School of Physics, Astronomy, University of Minnesota, Minneapolis, MN 55455, USA, Department of Physics, Astronomy, Iowa State University, Ames, IA 50011, USA, Department of Astronomy, Astrophysics, 525 Davey Lab, Pennsylvania State University, University Park, PA 16802, USA, Santa Cruz Institute for Particle Physics, Department of Physics, University of California, Santa Cruz, CA 95064, USA, Department of Physics, Astronomy, University of Iowa, Van Allen Hall, Iowa City, IA 52242, USA, Department of Physics, Astronomy, the Bartol Research Institute, University of Delaware, Newark, DE 19716, USA, Physics Department, Columbia University, New York, NY 10027, USA, Department of Physics, Astronomy, DePauw University, Greencastle, IN 46135-0037, USA, Department of Physics, Astronomy, University of Utah, Salt Lake City, UT 84112, USA, School of Physics, National University of Ireland Galway, University Road, Galway, Ireland, Enrico Fermi Institute, University of Chicago, Chicago, IL 60637, USA, School of Physics, Center for Relativistic Astrophysics, Georgia Institute of Technology, 837 State Street NW, Atlanta, GA 30332-0430, Department of Life, Physical Sciences, Galway-Mayo Institute of Technology, Dublin Road, Galway, Ireland, Department of Physics, Astronomy, Barnard College, Columbia University, NY 10027, USA, Department of Physics, Astronomy, University of California, Los Angeles, CA 90095, USA, Instituto de Astronomia y Fisica del Espacio, Casilla de Correo 67 - Sucursal 28, Department of Applied Physics, Instrumentation, Cork Institute of Technology, Bishopstown, Cork, Ireland, Argonne National Laboratory, 9700 S. Cass Avenue, Argonne, IL 60439, USA, Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138, USA, INAF, Osservatorio Astronomico di Torino, I-10025 Pino Torinese Department of Astronomy, University of Michigan, Ann Arbor, MI 48109-1042, USA, Graduate Institute of Astronomy, National Central University, Jhongli 32054, Taiwan, School of Cosmic Physics, Dublin Institute for Advanced Studies, Dublin, 2, Ireland, Moscow M.V. Lomonosov State University, Sternberg Astronomical Institute, Russia, Abastumani Observatory, Mt. 
Kanobili, 0301 Abastumani, Georgia, Landessternwarte, Zentrum für Astronomie der Universität Heidelberg, Königstuhl 12, 69117 Heidelberg, Germany, Aalto University Metsähovi Radio Observatory Metsähovintie 114 FIN-02540 Kylmälä Finland, Aalto University Department of Radio Science, Engineering, P.O.Box 13000, FI-00076 Aalto, Finland, University College Dublin, Belfield, Dublin 4, Ireland, Isaac Newton Institute of Chile, St. Petersburg Branch, St. Petersburg, Russia, Pulkovo Observatory, 196140 St. Petersburg, Russia, Astronomical Institute, St. Petersburg State University, St. Petersburg, Russia, Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, 53121 Bonn, Germany, Agenzia Spaziale Italiana, Italy, Instituto Nacional de Astrofísica, Óptica y Electrónica, Tonantzintla, Puebla 72840, Mexico, INAF Istituto di Radioastronomia, Sezione di Noto, Contrada Renna Bassa, 96017 Noto Department of Physics, University of Trento, I38050, Povo, Trento, Italy, Cahill Center for Astronomy, Astrophysics, California Institute of Technology, 1200 E California Blvd, Pasadena, CA 91125, Astro Space Center of the Lebedev Physical Institute, 117997 Moscow, Russia, Center for Research, Exploration in Space Science, Technology, NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA, Indiana University, Department of Astronomy, Swain Hall West 319, Bloomington, IN 47405-7105, USA, National Radio Astronomy Observatory, PO Box 0, Socorro, NM 87801, Department of Physics, Astronomy, Brigham Young University, Provo, Utah 84602, USA, INAF Istituto di Radioastronomia, Stazione Radioastronomica di Medicina, I-40059 Medicina, Italy, Department of Physics, Tokyo Institute of Technology, Meguro City, Tokyo 152-8551, Japan, ASI-Science Data Center, Via del Politecnico, I-00133 Rome, Italy, Department of Physics, Purdue University, 525 Northwestern Ave, West Lafayette, IN 47907, Department of Physics, University of Colorado, Denver, CO 80220, USA, Department of Physics, Mathematics, College of Science, 952 Engineering, Aoyama Gakuin University, 5-10-1 Fuchinobe, Chuoku, Sagamihara-shi Kanagawa 252-5258, Japan, Department of Physics, Astronomy, Pomona College, Claremont CA 91711-6312, USA, INAF Istituto di Radioastronomia, 40129 Bologna, Italy) Feb. 10, 2015 astro-ph.HE We performed a 4.5-month multi-instrument campaign (from radio to VHE gamma rays) on Mrk421 between January 2009 and June 2009, which included VLBA, F-GAMMA, GASP-WEBT, Swift, RXTE, Fermi-LAT, MAGIC, and Whipple, among other instruments and collaborations. Mrk421 was found in its typical (non-flaring) activity state, with a VHE flux of about half that of the Crab Nebula, yet the light curves show significant variability at all wavelengths, the highest variability being in the X-rays. We determined the power spectral densities (PSD) at most wavelengths and found that all PSDs can be described by power-laws without a break, and with indices consistent with pink/red-noise behavior. We observed a harder-when-brighter behavior in the X-ray spectra and measured a positive correlation between VHE and X-ray fluxes with zero time lag. Such characteristics have been reported many times during flaring activity, but here they are reported for the first time in the non-flaring state. We also observed an overall anti-correlation between optical/UV and X-rays extending over the duration of the campaign. 
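A PSD power-law index of the kind quoted above can be estimated from an evenly sampled light curve with a simple periodogram fit. The sketch below is a minimal illustration under that even-sampling assumption; real campaign light curves are unevenly sampled and need more careful estimators, and all names and numbers here are illustrative.

import numpy as np

def psd_index(flux, dt=1.0):
    # Fit a power law P(f) ~ f^(-index) to the periodogram of a light curve.
    flux = np.asarray(flux, dtype=float)
    flux = flux - flux.mean()
    power = np.abs(np.fft.rfft(flux)) ** 2
    freq = np.fft.rfftfreq(flux.size, d=dt)
    mask = freq > 0
    slope, _ = np.polyfit(np.log10(freq[mask]), np.log10(power[mask]), 1)
    return -slope  # index near 1 for pink noise, near 2 for red noise

# Sanity check on synthetic red noise (a random walk has an index close to 2):
rng = np.random.default_rng(0)
print(psd_index(np.cumsum(rng.normal(size=4096))))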
The harder-when-brighter behavior in the X-ray spectra and the measured positive X-ray/VHE correlation during the 2009 multi-wavelength campaign suggests that the physical processes dominating the emission during non-flaring states have similarities with those occurring during flaring activity. In particular, this observation supports leptonic scenarios as being responsible for the emission of Mrk421 during non-flaring activity. Such a temporally extended X-ray/VHE correlation is not driven by any single flaring event, and hence is difficult to explain within the standard hadronic scenarios. The highest variability is observed in the X-ray band, which, within the one-zone synchrotron self-Compton scenario, indicates that the electron energy distribution is most variable at the highest energies. Multiwavelength observations of Mrk 501 in 2008 (1410.6391) MAGIC Collaboration: J. Aleksić, L. A. Antonelli, J. A. Barrio, E. Bernardini, S. Bonnefoy, E. Carmona, J. L. Contreras, F. Dazzi, C. Delgado Mendez, D. Dorner, D. Elsaesser, K. Frantzen, D. Garrido Terrats, A. González Muñoz, D. Hadasch, W. Idec, J. Kushida, E. Lindfors, A. López-Oramas, K. Mallot, L. Maraschi, U. Menzel, A. Moralejo, K. Nilsson, M. Palatiello, X. Paredes-Fortuny, P. G. Prada Moroni, R. Reinthal, J. Rodriguez Garcia, M. Salvati, C. Schultz, J. Sitarek, V. Stamatescu, S. Sun, T. Terzić, O. Tibolla, P. Vogler, VERITAS Collaboration: B. Behera, K. Berger, X. Chen, C. Duke, J. P. Finley, G. H. Gillanders, G. Gyuk, M. Kertzman, A. S Madhavan, P. Moriarty, A. O'Faoláin de Bhróithe, A. Popkow, G. Ratliff, E. Roache, A. W. Smith, J. Tyler, T. C. Weekes, External Collaborators: M. Villata, M. Perri, N. V. Efimova, W. P. Chen, O. M. Kurtanidze, P. Leto, A. Lahtenmaki, V. Kadenius, Yu. A. Kovalev IFAE, Edifici Cn., Campus UAB, E-08193 Bellaterra, Spain, Università di Udine, INFN Trieste, I-33100 Udine, Italy, INAF National Institute for Astrophysics, I-00136 Rome, Italy, Croatian MAGIC Consortium, Rudjer Boskovic Institute, University of Rijeka, University of Split, HR-10000 Zagreb, Croatia, Max-Planck-Institut für Physik, D-80805 München, Germany, Universidad Complutense, E-28040 Madrid, Spain, Inst. de Astrofísica de Canarias, E-38200 La Laguna, Tenerife, Spain, Deutsches Elektronen-Synchrotron ETH Zurich, CH-8093 Zurich, Switzerland, Universität Würzburg, D-97074 Würzburg, Germany, Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas, E-28040 Madrid, Spain, Technische Universität Dortmund, D-44221 Dortmund, Germany, Inst. de Astrofísica de Andalucía Università di Padova, INFN, I-35131 Padova, Italy, Università dell'Insubria, Como, I-22100 Como, Italy, Unitat de Física de les Radiacions, Departament de Física, CERES-IEEC, Universitat Autònoma de Barcelona, E-08193 Bellaterra, Spain, Institut de Ciències de l'Espai Finnish MAGIC Consortium, Tuorla Observatory, University of Turku, Department of Physics, University of Oulu, Finland, Japanese MAGIC Consortium, Division of Physics, Astronomy, Kyoto University, Japan, Inst. for Nucl. Research, Nucl. 
Energy, BG-1784 Sofia, Bulgaria, Universitat de Barcelona Università di Pisa, , INFN Pisa, I-56126 Pisa, Italy, now at Ecole polytechnique fédérale de Lausanne now at Department of Physics & Astronomy, UC Riverside, CA 92521, USA, now at Finnish Centre for Astronomy with ESO also at INAF-Trieste, also at Instituto de Fisica Teorica, UAM/CSIC, E-28049 Madrid, Spain, Now at Stockholms universitet, Oskar Klein Centre for Cosmoparticle Physics, now at GRAPPA Institute, University of Amsterdam, 1098XH, Department of Physics, Washington University, St. Louis, MO 63130, USA, Fred Lawrence Whipple Observatory, Harvard-Smithsonian Center for Astrophysics, Amado, AZ 85645, USA, Department of Physics, Astronomy, the Bartol Research Institute, University of Delaware, Newark, DE 19716, USA, School of Physics, University College Dublin, Belfield, Dublin 4, Ireland, Santa Cruz Institute for Particle Physics, Department of Physics, University of California, Santa Cruz, CA 95064, USA, Institute of Physics, Astronomy, University of Potsdam, 14476 Potsdam-Golm, Germany, Astronomy Department, Adler Planetarium, Astronomy Museum, Chicago, IL 60605, USA, Department of Physics, Purdue University, West Lafayette, IN 47907, USA, Department of Physics, Grinnell College, Grinnell, IA 50112-1690, USA, School of Physics, Astronomy, University of Minnesota, Minneapolis, MN 55455, USA, Department of Astronomy, Astrophysics, 525 Davey Lab, Pennsylvania State University, University Park, PA 16802, USA, School of Physics, National University of Ireland Galway, University Road, Galway, Ireland, Physics Department, McGill University, Montreal, QC H3A 2T8, Canada, Department of Physics, Astronomy, University of Iowa, Van Allen Hall, Iowa City, IA 52242, USA, Department of Physics, Astronomy, DePauw University, Greencastle, IN 46135-0037, USA, Department of Physics, Astronomy, University of Utah, Salt Lake City, UT 84112, USA, Department of Physics, Astronomy, Iowa State University, Ames, IA 50011, USA, Department of Physics, Astronomy, University of California, Los Angeles, CA 90095, USA, School of Physics, Center for Relativistic Astrophysics, Georgia Institute of Technology, 837 State Street NW, Atlanta, GA 30332-0430, Department of Life, Physical Sciences, Galway-Mayo Institute of Technology, Dublin Road, Galway, Ireland, Department of Physics, Astronomy, Barnard College, Columbia University, NY 10027, USA, Physics Department, Columbia University, New York, NY 10027, USA, Instituto de Astronomia y Fisica del Espacio, Casilla de Correo 67 - Sucursal 28, Ciudad Autónoma de Buenos Aires, Argentina, Physics Department, California Polytechnic State University, San Luis Obispo, CA 94307, USA, Department of Applied Physics, Instrumentation, Cork Institute of Technology, Bishopstown, Cork, Ireland, Enrico Fermi Institute, University of Chicago, Chicago, IL 60637, USA, Argonne National Laboratory, 9700 S. Cass Avenue, Argonne, IL 60439, USA, INAF, Osservatorio Astronomico di Torino, I-10025 Pino Torinese Space Sciences Laboratory, 7 Gauss Way, University of California, Berkeley, CA 94720-7450, USA, ASI-Science Data Center, Via del Politecnico, I-00133 Rome, Italy, Department of Astronomy, University of Michigan, Ann Arbor, MI 48109-1042, USA, Astron. Inst., St.-Petersburg State Univ., Russia, Pulkovo Observatory, St.-Petersburg, Russia, Graduate Institute of Astronomy, National Central University, 300 Jhongda Rd., Jhongli 32001, Taiwan, Moscow M.V. 
Lomonosov State University, Sternberg Astronomical Institute, Russia, Abastumani Observatory, Mt. Kanobili, 0301 Abastumani, Georgia, Landessternwarte, Zentrum für Astronomie der Universität, School of Cosmic Physics, Dublin Institute For Advanced Studies, Ireland, INAF - Osservatorio Astrofisico di Catania, Italy, Aalto University Metsähovi Radio Observatory Metsähovintie 114 FIN-02540 Kylmälä Finland, Finnish Centre for Astronomy with ESO University of Turku Väisäläntie 20 FIN-21500 Piikkiö Finland, INAF Istituto di Radioastronomia, 40129 Bologna, Italy, University of Trento, Department of Physics, I38050 Povo, Trento, Italy, Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, 53121 Bonn, Germany, Astro Space Center of the Lebedev Physical Institute, 117997, Engelhardt Astronomical Observatory, Kazan Federal University, Tatarstan, Russia) Oct. 23, 2014 astro-ph.HE Mrk 501 is one of the brightest blazars at TeV energies and has been extensively studied since its first VHE detection in 1996. Our goal is to characterize in detail the source gamma-ray emission, together with the radio-to-X-ray emission, during the non-flaring (low) activity, which is less often studied than the occasional flaring (high) activity. We organized a multiwavelength (MW) campaign on Mrk 501 between March and May 2008. This multi-instrument effort included the most sensitive VHE gamma-ray instruments in the northern hemisphere, namely the imaging atmospheric Cherenkov telescopes MAGIC and VERITAS, as well as Swift, RXTE, the F-GAMMA, GASP-WEBT, and other collaborations and instruments. Mrk 501 was found to be in a low state of activity during the campaign, with a VHE flux in the range of 10%-20% of the Crab nebula flux. Nevertheless, significant flux variations were detected with various instruments, with a trend of increasing variability with energy. The broadband spectral energy distribution during the two different emission states of the campaign can be adequately described within the homogeneous one-zone synchrotron self-Compton model, with the (slightly) higher state described by an increase in the electron number density. This agrees with previous studies of the broadband emission of this source during flaring and non-flaring states. We report for the first time a tentative X-ray-to-VHE correlation during a low VHE activity. Although marginally significant, this positive correlation between X-ray and VHE, which has been reported many times during flaring activity, suggests that the mechanisms that dominate the X-ray/VHE emission during non-flaring-activity are not substantially different from those that are responsible for the emission during flaring activity. MAGIC reveals a complex morphology within the unidentified gamma-ray source HESS J1857+026 (1401.7154) MAGIC Collaboration: J. Aleksić, L. A. Antonelli, J. A. Barrio, E. Bernardini, G. Bonnoli, P. Colin, S. Covino, B. De Lotto, D. Dominis Prester, D. Eisenacher, D. Fidalgo Carreto, R. J. García López, M. Gaug, D. Hadasch, D. Hildebrand, H. Kellermann, J. Krause, N. Lewandowska, R. López-Coto, M. Makariev, K. Mannheim, M. Martínez, J. M. Miranda, D. Nakajima, N. Nowak, S. Paiano, J. M. Paredes, F. Prada, I. Puljak, J. Rodriguez Garcia, K. Saito, C. Schultz, J. Sitarek, V. Stamatescu, S. Sun, P. Temnikov, J. Thaele, M. Uellenbeck, R. 
Zanin (Università di Udine, INFN Trieste, I-33100 Udine, Italy, INAF National Institute for Astrophysics, I-00136 Rome, Italy, Università di Siena, INFN Pisa, I-53100 Siena, Italy, Croatian MAGIC Consortium, Rudjer Boskovic Institute, University of Rijeka, University of Split, HR-10000 Zagreb, Croatia, Max-Planck-Institut für Physik, D-80805 München, Germany, Inst. de Astrofísica de Canarias, E-38200 La Laguna, Tenerife, Spain, University of Łódź, PL-90236 Lodz, Poland, Deutsches Elektronen-Synchrotron, D-15738 Zeuthen, Germany, ETH Zurich, CH-8093 Zurich, Switzerland, Universität Würzburg, D-97074 Würzburg, Germany, Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas, E-28040 Madrid, Spain, Technische Universität Dortmund, D-44221 Dortmund, Germany, Inst. de Astrofísica de Andalucía, E-18080 Granada, Spain, Università di Padova, INFN, I-35131 Padova, Italy, Università dell'Insubria, Como, I-22100 Como, Italy, Unitat de Física de les Radiacions, Departament de Física, CERES-IEEC, Universitat Autònoma de Barcelona, E-08193 Bellaterra, Spain, Institut de Ciències de l'Espai, E-08193 Bellaterra, Spain, Japanese MAGIC Consortium, Division of Physics, Astronomy, Kyoto University, Japan, Finnish MAGIC Consortium, Tuorla Observatory, University of Turku, Department of Physics, University of Oulu, Finland, Inst. for Nucl. Research, Nucl. Energy, BG-1784 Sofia, Bulgaria, Universitat de Barcelona, ICC, IEEC-UB, E-08028 Barcelona, Spain, Università di Pisa, INFN Pisa, I-56126 Pisa, Italy, now at: NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA, Department of Physics, Department of Astronomy, University of Maryland, College Park, MD 20742, USA, now at: Ecole polytechnique fédérale de Lausanne, Lausanne, Switzerland, now at: Department of Physics, Astronomy, UC Riverside, CA 92521, USA, now at: Finnish Centre for Astronomy with ESO, Turku, Finland, also at Instituto de Fisica Teorica, UAM/CSIC, E-28049 Madrid, Spain, now at: School of Chemistry, Physics, University of Adelaide, Adelaide 5005, Australia, now at: Stockholm University, Oskar Klein Centre for Cosmoparticle Physics, SE-106 91 Stockholm, Sweden, now at: GRAPPA Institute, University of Amsterdam, 1098XH Amsterdam, Netherlands)
Aug. 20, 2014 astro-ph.HE
HESS J1857+026 is an extended TeV gamma-ray source that was discovered by H.E.S.S. as part of its Galactic plane survey. Given its broadband spectral energy distribution and its spatial coincidence with the young energetic pulsar PSR J1856+0245, the source has been put forward as a pulsar wind nebula (PWN) candidate. MAGIC has performed follow-up observations aimed at mapping the source down to energies approaching 100 GeV in order to better understand its complex morphology. HESS J1857+026 was observed by MAGIC in 2010, yielding 29 hours of good quality stereoscopic data that allowed us to map the source region in two separate ranges of energy. We detected very-high-energy gamma-ray emission from HESS J1857+026 with a significance of $12 \sigma$ above $150$ GeV. The differential energy spectrum between $100$ GeV and $13$ TeV is well described by a power law function $dN/dE = N_0 (E/1\,\mathrm{TeV})^{-\Gamma}$ with $N_0 = (5.37 \pm 0.44_{\mathrm{stat}} \pm 1.5_{\mathrm{sys}}) \times 10^{-12}\,\mathrm{TeV}^{-1}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$ and $\Gamma = 2.16 \pm 0.07_{\mathrm{stat}} \pm 0.15_{\mathrm{sys}}$, which bridges the gap between the GeV emission measured by Fermi-LAT and the multi-TeV emission measured by H.E.S.S. In addition, we present a detailed analysis of the energy-dependent morphology of this region.
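For a pure power law of this form, the integral flux above a threshold follows analytically: $F(>E) = N_0 E_0 (E/E_0)^{1-\Gamma}/(\Gamma-1)$ for $\Gamma > 1$. A minimal sketch using only the central fit values; uncertainties and any spectral cutoff are ignored.

def integral_flux_above(E_thr, N0=5.37e-12, gamma=2.16, E0=1.0):
    # dN/dE = N0 * (E/E0)^(-gamma), energies in TeV, N0 in TeV^-1 cm^-2 s^-1;
    # integrating from E_thr to infinity gives a flux in cm^-2 s^-1.
    return N0 * E0 / (gamma - 1.0) * (E_thr / E0) ** (1.0 - gamma)

print(integral_flux_above(0.15))  # integral flux above 150 GeV, roughly 4e-11 cm^-2 s^-1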
We couple these results with archival multi-wavelength data and outline evidence in favor of a two-source scenario, whereby one source is associated with a PWN, while the other could be linked with a molecular cloud complex containing an HII region and a possible gas cavity. Discovery of TeV gamma-ray emission from the pulsar wind nebula 3C 58 by MAGIC (1405.6074) MAGIC Collaboration: J. Aleksić, L. A. Antonelli, J. Becerra González, B. Biasuzzi, F. Borracci, E. Colombo, P. Da Vela, B. De Lotto, D. Dominis Prester, D. Elsaesser, C. Fruck, D. Garrido Terrats, A. González Muñoz, M. Hayashida, W. Idec, J. Krause, N. Lewandowska, R. López-Coto, M. Makariev, K. Mannheim, M. Martínez, R. Mirzoyan, A. Niedzwiecki, A. Overkemping, R. Paoletti, P. G. Prada Moroni, W. Rhode, S. Rügamer, V. Scalzotto, A. Sillanpää, F. Spanier, J. Storz, P. Temnikov, J. Thaele, M. Uellenbeck, M.A. Pérez-Torres IFAE, Campus UAB, E-08193 Bellaterra, Spain, INAF National Institute for Astrophysics, I-00136 Rome, Italy, Università di Siena, INFN Pisa, I-53100 Siena, Italy, Croatian MAGIC Consortium, Rudjer Boskovic Institute, University of Rijeka, University of Split, HR-10000 Zagreb, Croatia, Max-Planck-Institut für Physik, D-80805 München, Germany, Inst. de Astrofísica de Canarias, E-38200 La Laguna, Tenerife, Spain, University of Łódź, PL-90236 Lodz, Poland, Deutsches Elektronen-Synchrotron ETH Zurich, CH-8093 Zurich, Switzerland, Universität Würzburg, D-97074 Würzburg, Germany, Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas, E-28040 Madrid, Spain, Institute of Space Sciences, E-08193 Barcelona, Spain, Università di Padova, INFN, I-35131 Padova, Italy, Technische Universität Dortmund, D-44221 Dortmund, Germany, Unitat de Física de les Radiacions, Departament de Física, CERES-IEEC, Universitat Autònoma de Barcelona, E-08193 Bellaterra, Spain, Universitat de Barcelona, ICC, IEEC-UB, E-08028 Barcelona, Spain, Japanese MAGIC Consortium, Division of Physics, Astronomy, Kyoto University, Japan, Finnish MAGIC Consortium, Tuorla Observatory, University of Turku, Department of Physics, University of Oulu, Finland, Inst. for Nucl. Research, Nucl. Energy, BG-1784 Sofia, Bulgaria, ICREA, Institute of Space Sciences, E-08193 Barcelona, Spain, Università dell'Insubria, INFN Milano Bicocca, Como, I-22100 Como, Italy, now at: NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA, Department of Physics, Department of Astronomy, University of Maryland, College Park, MD 20742, USA, now at Ecole polytechnique fédérale de Lausanne Now at Institut für Astro- und Teilchenphysik, Leopold-Franzens- Universität Innsbruck, A-6020 Innsbruck, Austria, now at Finnish Centre for Astronomy with ESO now at Astrophysics Science Division, Bhabha Atomic Research Centre, Mumbai 400085, India, now at School of Chemistry & Physics, University of Adelaide, Adelaide 5005, Australia, , E-18080 Granada, Spain, also at Depto. de Física Teórica, Facultad de Ciencias de la Universidad de Zaragoza. now at Centro de Física del Cosmos de Aragón, Teruel) July 9, 2014 astro-ph.HE The pulsar wind nebula (PWN) 3C 58 is one of the historical very-high-energy (VHE; E>100 GeV) gamma-ray source candidates. It is energized by one of the highest spin-down power pulsars known (5% of Crab pulsar) and it has been compared to the Crab Nebula due to their morphological similarities. 
This object was previously observed by imaging atmospheric Cherenkov telescopes (Whipple, VERITAS and MAGIC), although not detected, with an upper limit of 2.4% of the Crab Unit (C.U.) at VHE. It was detected by Fermi-LAT with a spectrum extending beyond 100 GeV. We analyzed 81 hours of 3C 58 data taken with the MAGIC telescopes and we detected VHE gamma-ray emission with a significance of 5.7 sigma and an integral flux of 0.65% C.U. above 1 TeV. The differential energy spectrum between 400 GeV and 10 TeV is well described by a power-law function $d\phi/dE = f_0 (E/1\,\mathrm{TeV})^{-\Gamma}$ with $f_0 = (2.0 \pm 0.4_{\mathrm{stat}} \pm 0.6_{\mathrm{sys}}) \times 10^{-13}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}\,\mathrm{TeV}^{-1}$ and $\Gamma = 2.4 \pm 0.2_{\mathrm{stat}} \pm 0.2_{\mathrm{sys}}$. The skymap is compatible with an unresolved source. We report the first significant detection of PWN 3C 58 at TeV energies. According to our results, 3C 58 is the least luminous gamma-ray PWN ever detected at VHE and the one with the lowest VHE flux to date. We compare our results with the expectations of time-dependent models in which electrons up-scatter photon fields. The best representation favors a distance to the PWN of 2 kpc and a far-infrared (FIR) photon field comparable to the CMB. If we consider an unexpectedly high FIR density, the data can also be reproduced by models assuming a 3.2 kpc distance. A low magnetic field, far from equipartition, is required to explain the VHE data. A hadronic contribution from the hosting supernova remnant (SNR) requires an unrealistic energy budget given the density of the medium, disfavoring cosmic-ray acceleration in the SNR as the origin of the VHE gamma-ray emission.
MAGIC gamma-ray and multifrequency observations of flat spectrum radio quasar PKS 1510-089 in early 2012 (1401.5646)
MAGIC Collaboration: J. Aleksić, L. A. Antonelli, J. A. Barrio, E. Bernardini, G. Bonnoli, D. Carreto-Fidalgo, J. Cortina, G. De Caneva, A. Domínguez, S. Einecke, D. Ferenc, R. J. García López, M. Gaug, D. Hadasch, D. Hildebrand, H. Kellermann, H. Kubo, E. Lindfors, A. López-Oramas, K. Mallot, B. Marcote, U. Menzel, P. Munar-Adrover, K. Nishijima, A. Overkemping, J. M. Paredes, F. Prada, I. Puljak, J. Rodriguez Garcia, K. Saito, C. Schultz, J. Sitarek, A. Stamerra, S. Sun, P. Temnikov, J. Thaele, M. Uellenbeck, R. Zanin, F. Verrecchia, F. D'Ammando (for the Fermi-LAT Collaboration), C. Mundell, B. Zarpudin, A. Lähteenmäki, A. C. S. Readhead, S. Jorstad, D. A. Blinov, L. V. Larionova, A. A. Mokrushina, N. Panwar, O. M. Kurtanidze, R. A. Chigladze, A. Manilla-Robles, M. F. Aller, I. Nestoras, D. Riquelme (1 IFAE, Campus UAB, E-08193 Bellaterra, Spain, 2 Universita di Udine, INFN Trieste, I-33100 Udine, Italy, 3 INAF National Institute for Astrophysics, I-00136 Rome, Italy, 4 Universita di Siena, INFN Pisa, I-53100 Siena, Italy, 5 Croatian MAGIC Consortium, Rudjer Boskovic Institute, University of Rijeka, University of Split, HR-10000 Zagreb, Croatia, 6 Max-Planck-Institut fur Physik, D-80805 Munchen, Germany, 7 Universidad Complutense, E-28040 Madrid, Spain, 8 Inst. de Astrofisica de Canarias, E-38200 La Laguna, Tenerife, Spain, 9 University of Lodz, PL-90236 Lodz, Poland, 10 Deutsches Elektronen-Synchrotron, D-15738 Zeuthen, Germany, 11 ETH Zurich, CH-8093 Zurich, Switzerland, 12 Universitat Wurzburg, D-97074 Wurzburg, Germany, 13 Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas, E-28040 Madrid, Spain, 14 Technische Universitat Dortmund, D-44221 Dortmund, Germany, 15 Inst.
de Astrofisica de Andalucia, E-18080 Granada, Spain, 16 Universita di Padova, INFN, I-35131 Padova, Italy, 17 Universita dell Insubria, Como, I-22100 Como, Italy, 18 Unitat de Fisica de les Radiacions, Departament de Fısica, CERES-IEEC, Universitat Autonoma de Barcelona, E-08193 Bellaterra, Spain, 19 Institut de Ciencies del Espai, E-08193 Bellaterra, Spain, 20 Japanese MAGIC Consortium, Division of Physics, Astronomy, Kyoto University, Japan, 21 Finnish MAGIC Consortium, Tuorla Observatory, University of Turku, Department of Physics, University of Oulu, Finland, 22 Inst. for Nucl. Research, Nucl. Energy, BG-1784 Sofia, Bulgaria, 23 Universitat de Barcelona, ICC, IEEC-UB, E-08028 Barcelona, Spain, 24 Universita di Pisa, INFN Pisa, I-56126 Pisa, Italy, 25 now at: NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA, Department of Physics, Department of Astronomy, University of Maryland, College Park, MD 20742, USA, 26 now at: Ecole polytechnique federale de Lausanne, Lausanne, Switzerland, 27 now at: Department of Physics & Astronomy, UC Riverside, CA 92521, USA 28 now at: Finnish Centre for Astronomy with ESO, Turku, Finland, 29 also at INAF-Trieste, 30 also at Instituto de Fisica Teorica, UAM/CSIC, E-28049 Madrid, Spain, 31 now at: Stockholm University, Oskar Klein Centre for Cosmoparticle Physics, SE-106 91 Stockholm, Sweden, 32 now at GRAPPA Institute, University of Amsterdam, 1098XH Amsterdam, Netherlands, 33 Dipartimento di Fisica, Universit degli Studi di Perugia, Via A. Pascoli, I-06123 Perugia, Italy, 34 ASI Science Data Centre, via del Politecnico snc, 00133, Roma, INAF-OAR, Via Frascati 33, I00040 Monte Porzio Catone, Italy, 35 INAF-IASF Palermo, via Ugo La Malfa 153, 90146 Palermo, Italy, 36 Astrophysics Research Institute, Liverpool John Moores University, Twelve Quays House, Egerton Wharf, Birkenhead, CH41 1LD, UK, 37 Tuorla Observatory, Department of Physics, Astronomy, University of Turku, Finland, 38 INAF-Osservatorio Astrofisico di Torino, Italy, 39 Aalto University Metsahovi Radio Observatory, Metsahovintie 114, 02540, Kylmala, Finland, 40 Aalto University Department of Radio Science, Engineering, Espoo, Finland, 41 Cahill Center for Astronomy & Astrophysics, Caltech, 1200 E. California Blvd, Pasadena, CA, 91125, U.S.A., 42 Department of Physics, Purdue University, 525 Northwestern Ave, West Lafayette, IN 47907, USA, 43 Institute for Astrophysical Research, Boston University, U.S.A, 44 INAF-IRA Bologna, Italy, 45 Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138 USA, 46 Astron. Inst., St.-Petersburg State Univ., Russia, 47 Pulkovo Observatory, St.-Petersburg, Russia, 48 Isaac Newton Institute of Chile, St.-Petersburg Branch, 49 University of Crete, Heraklion, Greece, 50 Graduate Institute of Astronomy, National Central University, 300 Jhongda Rd., Jhongli 32001, Taiwan, 51 Joint Institute for VLBI in Europe, Postbus 2, NL-7990 AA Dwingeloo, the Netherlands, 52 Instituto de Astrofisica de Andalucia, CSIC, Apartado 3004, 18080, Granada, Spain 53 Abastumani Observatory, Mt. 
Kanobili, 0301 Abastumani, Georgia, 54 Landessternwarte, Zentrum fur Astronomie der Universitat Heidelberg, Konigstuhl 12, 69117 Heidelberg, Germany, 55 Engelhardt Astronomical Observatory, Kazan Federal University, Tatarstan, Russia, 56 Instituto de Astrofisica de Canarias, La Laguna, Tenerife, Spain, 57 Departamento de Astrofisica, Universidad de La Laguna, La Laguna, Tenerife, Spain, 58 Sofia University, Bulgaria, 59 Department of Astronomy, University of Michigan, 817 Dennison Bldg., Ann Arbor, MI 48109-1042, USA, 60 Max-Planck-Institut fur Radioastronomie, Auf dem Huegel 69, 53121 Bonn, Germany, 61 Institut de Radio Astronomie Millimetrique, Avenida Divina Pastora 7, Local 20, 18012 Granada, Spain, 62 Institute of Space, Astronautical Science, JAXA, 3-1-1 Yoshinodai, Chuo-ku, Sagamihara, Kanagawa 252-5210, Japan, 63 Astronomical Observatory, Jagiellonian University, 30-244 Krakow, Poland, correspondence to: E. Lindfors, F. Tavecchio, J. Sitarek)
Among more than fifty blazars detected in very high energy (VHE, E > 100 GeV) gamma-rays, only three belong to the subclass of Flat Spectrum Radio Quasars (FSRQs). MAGIC observed the FSRQ PKS 1510-089 in February-April 2012 during a high activity state in the high energy (HE, E > 100 MeV) gamma-ray band observed by AGILE and Fermi. The MAGIC observations resulted in the detection of the source with a significance of 6.0 sigma. In agreement with the previous VHE observations of the source, we find no statistically significant variability during the MAGIC observations on daily, weekly or monthly time scales. The other two known VHE FSRQs have shown daily-scale to sub-hour variability. We study the multifrequency behaviour of the source at the epoch of the MAGIC observation, collecting quasi-simultaneous data at radio and optical (GASP-WEBT and F-Gamma collaborations, REM, Steward, Perkins, Liverpool, OVRO and VLBA telescopes), X-ray (Swift satellite) and HE gamma-ray frequencies. The gamma-ray SED combining AGILE, Fermi and MAGIC data joins smoothly and shows no hint of a break. The multifrequency light curves suggest a common origin for the millimeter radio and HE gamma-ray emission, and the HE gamma-ray flaring starts when the new component is ejected from the 43 GHz VLBA core. The quasi-simultaneous multifrequency SED is modelled with a one-zone inverse Compton model. We study two different origins of the seed photons for the inverse Compton scattering, namely the infra-red torus and a slow sheath surrounding the jet around the VLBA core. Both models fit the data well. However, the fast HE gamma-ray variability requires that more compact emission regions exist within the modelled large emitting region. We suggest that these observed signatures would be most naturally explained by a turbulent plasma flowing at a relativistic speed down the jet and crossing a standing conical shock.
Contemporaneous observations of the radio galaxy NGC 1275 from radio to very high energy gamma-rays (1310.8500)
MAGIC Collaboration: J. Aleksić, L. A. Antonelli, J. A. Barrio, E. Bernardini, S. Bonnefoy, E. Carmona, J. L. Contreras, F. Dazzi, C. Delgado Mendez, D. Dorner, D. Elsaesser, K. Frantzen, D. Garrido Terrats, A. González Muñoz, D. Hadasch, W. Idec, J. Kushida, E. Lindfors, A. López-Oramas, K. Mallot, B. Marcote, U. Menzel, P. Munar-Adrover, N. Nowak, M. Palatiello, X. Paredes-Fortuny, P. G. Prada Moroni, R. Reinthal, J. Rodriguez Garcia, M. Salvati, T. Schweizer, I. Snidaric, A. Stamerra, L. Takalo, D. Tescaro, D. F. Torres, R. M. Wagner, J.
Kataoka IFAE, Edifici Cn., Campus UAB, Bellaterra, Spain, Università di Udine, INFN Trieste, Udine, Italy, INAF National Institute for Astrophysics, I-00136 Rome, Italy, Croatian MAGIC Consortium, Rudjer Boskovic Institute, University of Rijeka, University of Split, HR-10000 Zagreb, Croatia, Max-Planck-Institut für Physik, München, Germany, Universidad Complutense, E-28040 Madrid, Spain, Inst. de Astrofísica de Canarias, E-38200 La Laguna, Tenerife, Spain, Deutsches Elektronen-Synchrotron ETH Zurich, CH-8093 Zurich, Switzerland, Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas, E-28040 Madrid, Spain, Technische Universität Dortmund, Dortmund, Germany, Università di Padova, INFN, Padova, Italy, Università dell'Insubria, Como, Como, Italy, Unitat de Física de les Radiacions, Departament de Física, CERES-IEEC, Universitat Autònoma de Barcelona, Bellaterra, Spain, , Bellaterra, Spain, Finnish MAGIC Consortium, Tuorla Observatory, University of Turku, Department of Physics, University of Oulu, Finland, Japanese MAGIC Consortium, Division of Physics, Astronomy, Kyoto University, Japan, Inst. for Nucl. Research, Nucl. Energy, Sofia, Bulgaria, Università di Pisa, INFN Pisa, Pisa, Italy, now at Ecole polytechnique fédérale de Lausanne now at Department of Physics & Astronomy, UC Riverside, USA, now at Finnish Centre for Astronomy with ESO also at INAF-Trieste, also at Instituto de Fisica Teorica, UAM/CSIC, E-28049 Madrid, Spain, Now at Stockholms universitet, Oskar Klein Centre for Cosmoparticle Physics, now at GRAPPA Institute, University of Amsterdam, 1098XH Amsterdam, Netherlands, INAF Obs. Astrofisoco di Torino, Italy March 5, 2014 astro-ph.HE The radio galaxy NGC 1275, recently identified as a very high energy (VHE, >100 GeV) gamma-ray emitter by MAGIC, is one of the few non-blazar AGN detected in the VHE regime. In order to better understand the origin of the gamma-ray emission and locate it within the galaxy, we studied contemporaneous multi-frequency observations of NGC 1275 and modeled the overall spectral energy distribution (SED). We analyzed unpublished MAGIC observations carried out between Oct. 2009 and Feb. 2010, and the previously published ones taken between Aug. 2010 and Feb. 2011. We studied the multi-band variability and correlations by analyzing data of Fermi-LAT (0.1-100 GeV), as well as Chandra (X-ray), KVA (optical) and MOJAVE (radio) data taken during the same period. Using customized Monte Carlo simulations corresponding to early MAGIC stereo data, we detect NGC 1275 also in the earlier campaign. The flux level and energy spectra are similar to the results of the second campaign. The monthly light curve >100 GeV shows a hint of variability at the 3.6 sigma level. In the Fermi-LAT band, both flux and spectral shape variabilities are reported. The optical light curve is variable and shows a clear correlation with the gamma-ray flux >100 MeV. In radio, 3 compact components are resolved in the innermost part of the jet. One of them shows a similar trend as the LAT and KVA light curves. The 0.1-650 GeV spectra measured simultaneously with MAGIC and Fermi-LAT can be well fitted either by a log-parabola or by a power-law with a sub-exponential cutoff for both campaigns. A single-zone synchrotron-self-Compton model, with an electron spectrum following a power-law with an exponential cutoff, can explain the broadband SED and the multi-band behavior of the source. 
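The two fit shapes used for the combined 0.1-650 GeV spectra can be written down compactly. A minimal sketch with placeholder parameter values (not the published fit), where a sub-exponential cutoff means a stretching exponent beta < 1:

import numpy as np

def log_parabola(E, f0, a, b, E0=1.0):
    # dN/dE = f0 * (E/E0)^(-a - b*log10(E/E0)), energies in GeV
    return f0 * (E / E0) ** (-a - b * np.log10(E / E0))

def power_law_subexp_cutoff(E, f0, gamma, Ec, beta=0.5, E0=1.0):
    # dN/dE = f0 * (E/E0)^(-gamma) * exp(-(E/Ec)^beta), with beta < 1
    return f0 * (E / E0) ** (-gamma) * np.exp(-((E / Ec) ** beta))

E = np.logspace(-1, np.log10(650.0), 100)  # 0.1 to 650 GeV
shape1 = log_parabola(E, f0=1e-9, a=2.1, b=0.3)           # placeholder parameters
shape2 = power_law_subexp_cutoff(E, f0=1e-9, gamma=2.0, Ec=200.0)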
However, this model suggests an untypical low bulk-Lorentz factor or a velocity alignment closer to the line of sight than the pc-scale radio jet. Search for Very-High-Energy Gamma Rays from the z = 0.896 Quasar 4C +55.17 with the MAGIC telescopes (1402.0291) MAGIC Collaboration: J. Aleksić, L. A. Antonelli, J. A. Barrio, E. Bernardini, S. Bonnefoy, E. Carmona, J. L. Contreras, F. Dazzi, C. Delgado Mendez, D. Dorner, D. Elsaesser, L. Font, M. Garczarczyk, N. Godinović, D. Hadasch, D. Hrupec, M. L. Knoetig, J. Kushida, E. Lindfors, A. López-Oramas, K. Mallot, B. Marcote, U. Menzel, P. Munar-Adrover, K. Nishijima, S. Paiano, J. M. Paredes, F. Prada, I. Puljak, J. Rodriguez Garcia, K. Saito, V. Scapin, A. Sillanpää, V. Stamatescu, S. Sun, P. Temnikov, J. Thaele, P. Vogler IFAE, Campus UAB, E-08193 Bellaterra, Spain, Università di Udine, INFN Trieste, I-33100 Udine, Italy, INAF National Institute for Astrophysics, I-00136 Rome, Italy, Università di Siena, INFN Pisa, I-53100 Siena, Italy, Croatian MAGIC Consortium, Rudjer Boskovic Institute, University of Rijeka, University of Split, HR-10000 Zagreb, Croatia, Max-Planck-Institut für Physik, D-80805 München, Germany, Inst. de Astrofísica de Canarias, E-38200 La Laguna, Tenerife, Spain, University of Łódź, PL-90236 Łódź, Poland, Deutsches Elektronen-Synchrotron ETH Zurich, CH-8093 Zurich, Switzerland, Universität Würzburg, D-97074 Würzburg, Germany, Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas, E-28040 Madrid, Spain, Università di Padova, INFN, I-35131 Padova, Italy, Technische Universität Dortmund, D-44221 Dortmund, Germany, Inst. de Astrofísica de Andalucía Università dell'Insubria, Como, I-22100 Como, Italy, Unitat de Física de les Radiacions, Departament de Física, CERES-IEEC, Universitat Autònoma de Barcelona, E-08193 Bellaterra, Spain, Institut de Ciències de l'Espai Japanese MAGIC Consortium, Division of Physics, Astronomy, Kyoto University, Japan, Finnish MAGIC Consortium, Tuorla Observatory, University of Turku, Department of Physics, University of Oulu, Finland, Inst. for Nucl. Research, Nucl. Energy, BG-1784 Sofia, Bulgaria, Universitat de Barcelona (ICC, IEEC-UB), E-08028 Barcelona, Spain, Università di Pisa, INFN Pisa, I-56126 Pisa, Italy, now at Ecole polytechnique fédérale de Lausanne now at Department of Physics & Astronomy, UC Riverside, CA 92521, USA, now at Finnish Centre for Astronomy with ESO also at Instituto de Fisica Teorica, UAM/CSIC, E-28049 Madrid, Spain, now at: Stockholm University, Oskar Klein Centre for Cosmoparticle Physics, SE-106 91 Stockholm, Sweden, now at GRAPPA Institute, University of Amsterdam, 1098XH Amsterdam, Netherlands) Feb. 3, 2014 astro-ph.GA, astro-ph.HE The bright gamma-ray quasar 4C +55.17 is a distant source ($z = 0.896$) with a hard spectrum at GeV energies as observed by the Large Area Telescope (LAT) on board the {{\it Fermi}} satellite. This source is identified as a good source candidate for very-high-energy (VHE; $> 30$ GeV) gamma rays. In general VHE gamma rays from distant sources provide an unique opportunity to study the extragalactic background light (EBL) and underlying astrophysics. The flux intensity of this source in the VHE range is investigated. Then, constraints on the EBL are derived from the attenuation of gamma-ray photons coming from the distant blazar. We searched for a gamma-ray signal from this object using the 35-hour observations taken by the MAGIC telescopes between November 2010 and January 2011. 
No significant VHE gamma-ray signal was detected. We computed the upper limits of the integrated gamma-ray flux at $95\%$ confidence level of $9.4 \times 10^{-12}$ cm$^{-2}$ s$^{-1}$ and $2.5 \times 10^{-12}$ cm$^{-2}$ s$^{-1}$ above $100$ GeV and $200$ GeV, respectively. The differential upper limits in four energy bins in the range from $80$ GeV to $500$ GeV are also derived. The upper limits are consistent with the attenuation predicted by low-flux EBL models on the assumption of a simple power-law spectrum extrapolated from LAT data. MAGIC upper limits on the GRB 090102 afterglow (1311.3637) MAGIC Collaboration: J. Aleksić, L. A. Antonelli, J. A. Barrio, E. Bernardini, S. Bonnefoy, E. Carmona, J. L. Contreras, F. Dazzi, C. Delgado Mendez, D. Dorner, D. Elsaesser, L. Font, M. Garczarczyk, N. Godinović, D. Hadasch, D. Hrupec, M. L. Knoetig, J. Kushida, E. Lindfors, A. López-Oramas, K. Mallot, B. Marcote, U. Menzel, P. Munar-Adrover, K. Nishijima, S. Paiano, J. M. Paredes, F. Prada, I. Puljak, J. Rodriguez Garcia, K. Saito, V. Scapin, A. Sillanpää, V. Stamatescu, S. Sun, P. Temnikov, J. Thaele, P. Vogler, A. Bouvier IFAE, Edifici Cn., Campus UAB, E-08193 Bellaterra, Spain, Università di Udine, INFN Trieste, I-33100 Udine, Italy, INAF National Institute for Astrophysics, I-00136 Rome, Italy, Università di Siena, INFN Pisa, I-53100 Siena, Italy, Croatian MAGIC Consortium, Rudjer Boskovic Institute, University of Rijeka, University of Split, HR-10000 Zagreb, Croatia, Max-Planck-Institut für Physik, D-80805 München, Germany, Inst. de Astrofísica de Canarias, E-38200 La Laguna, Tenerife, Spain, University of Łódź, PL-90236 Lodz, Poland, Deutsches Elektronen-Synchrotron ETH Zurich, CH-8093 Zurich, Switzerland, Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas, E-28040 Madrid, Spain, Università di Padova, INFN, I-35131 Padova, Italy, Technische Universität Dortmund, D-44221 Dortmund, Germany, , E-18080 Granada, Spain, Unitat de Física de les Radiacions, Departament de Física, CERES-IEEC, Universitat Autònoma de Barcelona, E-08193 Bellaterra, Spain, Institut de Ciències de l'Espai Japanese MAGIC Consortium, Division of Physics, Astronomy, Kyoto University, Japan, Finnish MAGIC Consortium, Tuorla Observatory, University of Turku, Department of Physics, University of Oulu, Finland, Universitat de Barcelona Università di Pisa, INFN Pisa, I-56126 Pisa, Italy, now at Ecole polytechnique fédérale de Lausanne now at Department of Physics & Astronomy, UC Riverside, CA 92521, USA, now at Finnish Centre for Astronomy with ESO also at INAF-Trieste, also at Instituto de Fisica Teorica, UAM/CSIC, E-28049 Madrid, Spain, now at: Stockholm University, Oskar Klein Centre for Cosmoparticle Physics, SE-106 91 Stockholm, Sweden, now at GRAPPA Institute, University of Amsterdam, 1098XH Amsterdam, Netherlands, Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, CA 95064, USA, W. W. Hansen Experimental Physics Laboratory, Kavli Institute for Particle Astrophysics, Cosmology, Department of Physics, SLAC National Accelerator Laboratory, Stanford University, Stanford, CA 94305, USA, Solar-Terrestrial Environment Laboratory, Nagoya University, Nagoya 464-8601, Japan, Dipartimento di Fisica, Universita' di Trieste, INFN Trieste, I-34127 Trieste, Italy) Nov. 14, 2013 astro-ph.HE Indications of a GeV component in the emission from GRBs are known since the EGRET observations during the 1990's and they have been confirmed by the data of the Fermi satellite. 
These results have, however, shown that our understanding of GRB physics is still unsatisfactory. The new generation of Cherenkov observatories and in particular the MAGIC telescope, allow for the first time the possibility to extend the measurement of GRBs from several tens up to hundreds of GeV energy range. Both leptonic and hadronic processes have been suggested to explain the possible GeV/TeV counterpart of GRBs. Observations with ground-based telescopes of very high energy photons (E>30 GeV) from these sources are going to play a key role in discriminating among the different proposed emission mechanisms, which are barely distinguishable at lower energies. MAGIC telescope observations of the GRB 090102 (z=1.547) field and Fermi Large Area Telescope (LAT) data in the same time interval are analysed to derive upper limits of the GeV/TeV emission. We compare these results to the expected emissions evaluated for different processes in the framework of a relativistic blast wave model for the afterglow. Simultaneous upper limits with Fermi and a Cherenkov telescope have been derived for this GRB observation. The results we obtained are compatible with the expected emission although the difficulties in predicting the HE and VHE emission for the afterglow of this event makes it difficult to draw firmer conclusions. Nonetheless, MAGIC sensitivity in the energy range of overlap with space-based instruments (above about 40 GeV) is about one order of magnitude better with respect to Fermi. This makes evident the constraining power of ground-based observations and shows that the MAGIC telescope has reached the required performance to make possible GRB multiwavelength studies in the very high energy range. The Simultaneous Low State Spectral Energy Distribution of 1ES 2344+514 from Radio to Very High Energies (1211.2608) MAGIC Collaboration: J. Aleksić, P. Antoranz, J. A. Barrio, A. Biland, G. Bonnoli, A. Carosi, J. L. Contreras, F. Dazzi, C. Delgado Mendez, D. Dominis Prester, D. Ferenc, R. J. García López, G. Giavitto, S. R. Gozzini, D. Hrupec, M. L. Knoetig, A. La Barbera, E. Lindfors, A. López-Oramas, K. Mallot, L. Maraschi, J. Masbou, J. Moldón, A. Niedzwiecki, R. Orito, J. M. Paredes, F. Prada, I. Reichardt, J. Rico, M. Salvati, C. Schultz, J. Sitarek, S. Spiro, S. Sun, P. Temnikov, O. Tibolla, M. Uellenbeck, F. Zandanel, F. Lucarelli, D. Bastieri, E. Angelakis, A. Sievers, W. Baumgartner, N. Gehrels, F. Krauss, W. Max-Moerbeck, A. C. S. Readhead, K. V. Sokolovsky, M. Weidinger INAF National Institute for Astrophysics, Rome, Italy, Università di Siena, , INFN Pisa, Italy, Technische Universität Dortmund, Germany, Max-Planck-Institut für Physik, München, Germany, Inst. de Astrofísica de Canarias, Tenerife, Spain, Depto. de Astrofísica, Universidad de La Laguna, Spain, Deutsches Elektronen-Synchrotron Universität Würzburg, Germany, Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas, Spain, Università di Udine, INFN Trieste, Italy, Croatian MAGIC Consortium, Rudjer Boskovic Institute, University of Rijeka, University of Split, Croatia, Institut de Ciències de l'Espai Tuorla Observatory, University of Turku, Finland, Japanese MAGIC Consortium, Division of Physics, Astronomy, Kyoto University, Japan, Inst. for Nucl. Research, Nucl. 
Energy, Sofia, Bulgaria, Universitat de Barcelona Università di Padova, INFN Padova, Italy, Università dell'Insubria, Como, Italy, Università di Pisa, INFN Pisa, Italy, now at: Ecole polytechnique fédérale de Lausanne supported by INFN Padova, now at: Finnish Centre for Astronomy with ESO also at Instituto de Fisica Teorica, UAM/CSIC, Spain, Università di Trieste, , INFN Trieste, Italy, Science Data Center, INAF-Oar, Italy, INAF, Istituto di Astrofisica Spaziale e Fisica Cosmica, Italy, Max-Planck-Institut für Radioastronomie, Germany, Instituto de Radio Astronoma Milimétrica, Spain, Astrophysics Science Division, NASA Goddard Space Flight Center, USA, Department of Physics, Astronomy, Western Kentucky University, USA, Astro Space Center of Lebedev Physical Institute, 117997 Moscow, Russia, Dr. Remeis-Sternwarte, ECAP, Universität Erlangen-Nürnberg, Germany, Aalto University Metsähovi Radio Observatory, Finland, Department of Physics, Purdue University, USA, Cahill Center for Astronomy, Astrophysics, California Institute of Technology, USA, Pulkovo Observatory, St. Petersburg, Russia, Sternberg Astronomical Institute, Moscow State University, 119992 Moscow, Russia, Theoretische Physik IV, Ruhr-Universität Bochum, Germany) June 5, 2013 astro-ph.CO, astro-ph.HE [Abridged] Context. To construct and interpret the spectral energy distribution (SED) of BL Lacertae objects, simultaneous broad-band observations are mandatory. Aims. We present the results of a dedicated multi-wavelength study of the high-frequency peaked BL Lacertae (HBL) object and known TeV emitter 1ES 2344+514 by means of a pre-organised campaign. Methods. The observations were conducted during simultaneous visibility windows of MAGIC and AGILE in late 2008. The measurements were complemented by Mets\"ahovi, RATAN-600, KVA+Tuorla, Swift and VLBA pointings. Additional coverage was provided by the ongoing long-term F-GAMMA and MOJAVE programs, the OVRO 40-m and CrAO telescopes as well as the Fermi satellite. The obtained SEDs are modelled using a one-zone as well as a self-consistent two-zone synchrotron self-Compton model. Results. 1ES 2344+514 was found at very low flux states in both X-rays and very high energy gamma rays. Variability was detected in the low frequency radio and X-ray bands only, where for the latter a small flare was observed. The X-ray flare was possibly caused by shock acceleration characterised by similar cooling and acceleration time scales. MOJAVE VLBA monitoring reveals a static jet whose components are stable over time scales of eleven years, contrary to previous findings. There appears to be no significant correlation between the 15 GHz and R-band monitoring light curves. The observations presented here constitute the first multi-wavelength campaign on 1ES 2344+514 from radio to VHE energies and one of the few simultaneous SEDs during low activity states. The quasi-simultaneous Fermi-LAT data poses some challenges for SED modelling, but in general the SEDs are described well by both applied models. The resulting parameters are typical for TeV emitting HBLs. Consequently it remains unclear whether a so-called quiescent state was found in this campaign. Discovery of very high energy gamma-ray emission from the blazar 1ES 1727+502 with the MAGIC Telescopes (1302.6140) MAGIC Collaboration: J. Aleksić, P. Antoranz, J. A. Barrio, E. Bernardini, A. Boller, T. Bretz, P. Colin, L. Cossio, G. De Caneva, A. Domínguez, D. Eisenacher, M. V. Fonseca, M. Garczarczyk, N. Godinović, A. Hadamek, D. Hrupec, S. Klepser, J. 
Kushida, E. Lindfors, A. López-Oramas, K. Mallot, L. Maraschi, J. Masbou, J. Moldón, A. Niedzwiecki, S. Paiano, J. M. Paredes, P. G. Prada Moroni, W. Rhode, A. Saggion, V. Scalzotto, S. N. Shore, D. Sobczynska, B. Steinke, H. Takami, D. Tescaro, A. Treves, Q. Weitzel IFAE, Edifici Cn., Campus UAB, E-08193 Bellaterra, Spain, INAF National Institute for Astrophysics, I-00136 Rome, Italy, Università di Siena, INFN Pisa, I-53100 Siena, Italy, Technische Universität Dortmund, D-44221 Dortmund, Germany, Max-Planck-Institut für Physik, D-80805 München, Germany, Inst. de Astrofísica de Canarias, E-38200 La Laguna, Tenerife, Spain, University of Łódź, PL-90236 Lodz, Poland, Depto. de Astrofísica, Universidad de La Laguna, E-38206 La Laguna, Spain, , D-15738 Zeuthen, Germany, Universität Würzburg, D-97074 Würzburg, Germany, Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas, E-28040 Madrid, Spain, Università di Udine, INFN Trieste, I-33100 Udine, Italy, , E-18080 Granada, Spain, Croatian MAGIC Consortium, Rudjer Boskovic Institute, University of Rijeka, University of Split, HR-10000 Zagreb, Croatia, Unitat de Física de les Radiacions, Departament de Física, CERES-IEEC, Universitat Autònoma de Barcelona, E-08193 Bellaterra, Spain, Università dell'Insubria, Como, I-22100 Como, Italy, Institut de Ciències de l'Espai Tuorla Observatory, University of Turku, FI-21500 Piikkiö, Finland, Japanese MAGIC Consortium, Division of Physics, Astronomy, Kyoto University, Japan, Universitat de Barcelona Università di Padova, INFN, I-35131 Padova, Italy, INAF/Osservatorio Astronomico, INFN, I-34143 Trieste, Italy, Università di Pisa, INFN Pisa, I-56126 Pisa, Italy, ICREA, E-08010 Barcelona, Spain, now at: Ecole polytechnique fédérale de Lausanne now at: Department of Physics & Astronomy, UC Riverside, CA 92521, USA now at: DESY, Zeuthen, Germany, now at: Finnish Centre for Astronomy with ESO also at Instituto de Fisica Teorica, UAM/CSIC, E-28049 Madrid, Spain) Motivated by the Costamante & Ghisellini (2002) predictions we investigated if the blazar 1ES 1727+502 (z=0.055) is emitting very high energy (VHE, E>100 GeV) gamma rays. We observed the BL Lac object 1ES 1727+502 in stereoscopic mode with the two MAGIC telescopes during 14 nights between May 6th and June 10th 2011, for a total effective observing time of 12.6 hours. For the study of the multiwavelength spectral energy distribution (SED) we use simultaneous optical R-band data from the KVA telescope, archival UV/optical and X-ray observations by instruments UVOT and XRT on board of the Swift satellite and high energy (HE, 0.1 GeV - 100 GeV) gamma-ray data from the Fermi-LAT instrument. We detect, for the first time, VHE gamma-ray emission from 1ES 1727+502 at a statistical significance of 5.5 sigma. The integral flux above 150 GeV is estimated to be (2.1\pm0.4)% of the Crab Nebula flux and the de-absorbed VHE spectrum has a photon index of (2.7\pm0.5). No significant short-term variability was found in any of the wavebands presented here. We model the SED using a one-zone synchrotron self-Compton model obtaining parameters typical for this class of sources. Discovery of VHE gamma-rays from the blazar 1ES 1215+303 with the MAGIC Telescopes and simultaneous multi-wavelength observations (1203.0490) MAGIC Collaboration: J. Aleksić, L. A. Antonelli, M. Backes, J. Becerra González, A. Biland, D. Borla Tridon, A. Carosi, J. Cortina, A. De Angelis, C. Delgado Mendez, A. Domínguez, D. Eisenacher, L. Font, D. Garrido Terrats, A. González Muñoz, A. 
Herrero, F. Jankowski, S. Klepser, D. Lelas, S. Lombardi, E. Lorenz, K. Mannheim, D. Mazin, A. Moralejo, K. Nilsson, R. Paoletti, M. A. Perez-Torres, F. Prada, I. Puljak, M. Ribó, T. Y. Saito, C. Schultz, J. Sitarek, S. Spiro, N. Strah, F. Tavecchio, M. Teshima, M. Uellenbeck, F. Zandanel, E. Järvelä, J. Tammi Universidad Complutense, Madrid, Spain, Università di Siena, INFN Pisa, Italy, Technische Universität Dortmund, Dortmund, Germany, Max-Planck-Institut für Physik, München, Germany, Inst. de Astrofísica de Canarias, Tenerife, Spain, University of Łódź, Lodz, Poland, Depto. de Astrofísica, Universidad de La Laguna, La Laguna, Spain, , Zeuthen, Germany, Universität Würzburg, Würzburg, Germany, , Barcelona, Spain, Institut de Ciències de l'Espai Inst. de Astrofísica de Andalucía Croatian MAGIC Consortium, Rudjer Boskovic Institute, University of Rijeka, University of Split, Zagreb, Croatia, Universitat Autònoma de Barcelona, Bellaterra, Spain, Inst. for Nucl. Research, Nucl. Energy, Sofia, Bulgaria, Japanese MAGIC Consortium, Division of Physics, Astronomy, Kyoto University, Japan, Università dell'Insubria, Como, Como, Italy, Università di Pisa, INFN Pisa, Pisa, Italy, now at: Ecole polytechnique fédérale de Lausanne supported by INFN Padova, now at: Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas, Madrid, Spain, now at: Finnish Centre for Astronomy with ESO, University of Turku, Finland Aalto University Metsähovi Radio Observatory, Metsähovintie, Finland The Oskar Klein Centre for Cosmoparticle Physics, Stockholm, Sweden Department of Astronomy, Stockholm University, Stockholm, Sweden) Context. We present the discovery of very high energy (VHE, E > 100GeV) gamma-ray emission from the BL Lac object 1ES 1215+303 by the MAGIC telescopes and simultaneous multi-wavelength data in a broad energy range from radio to gamma-rays. Aims. We study the VHE gamma-ray emission from 1ES 1215+303 and its relation to the emissions in other wavelengths. Methods. Triggered by an optical outburst, MAGIC observed the source in January-February 2011 for 20.3 hrs. The target was monitored in the optical R-band by the KVA telescope that also performed optical polarization measurements. We triggered target of opportunity observations with the Swift satellite and obtained simultaneous and quasi-simultaneous data from the Fermi Large Area Telescope and from the Mets\"ahovi radio telescope. We also present the analysis of older MAGIC data taken in 2010. Results. The MAGIC observations of 1ES 1215+303 carried out in January-February 2011 resulted in the first detection of the source at VHE with a statistical significance of 9.4 sigma. Simultaneously, the source was observed in a high optical and X-ray state. In 2010 the source was observed in a lower state in optical, X-ray, and VHE, while the GeV gamma-ray flux and the radio flux were comparable in 2010 and 2011. The spectral energy distribution obtained with the 2011 data can be modeled with a simple one zone SSC model, but it requires extreme values for the Doppler factor or the electron energy distribution. MAGIC observations of the giant radio galaxy M87 in a low-emission state between 2005 and 2007 (1207.2147) MAGIC Collaboration: J. Aleksić, L. A. Antonelli, J. A. Barrio, A. Berdyugin, O. Blanch, D. Borla Tridon, A. Carosi, J. Cortina, G. De Caneva, C. Delgado Mendez, D. Dominis Prester, D. Ferenc, R. J. García López, N. Godinović, D. Hildebrand, B. Huber, T. Krähenbühl, E. Leonardo, A. López-Oramas, N. Mankuzhiyil, M. Martínez, R. 
Mirzoyan, P. Munar-Adrover, I. Oya, S. Partini, M. Persic, P. G. Prada Moroni, R. Reinthal, S. Rügamer, K. Satalecka, T. Schweizer, J. Sitarek, V. Stamatescu, N. Strah, P. Temnikov, O. Tibolla, H. Vankov, F. Zandanel IFAE, Edifici Cn., Campus UAB, Bellaterra, Spain, Universidad Complutense, E-28040 Madrid, Spain, Università di Siena, INFN Pisa, Siena, Italy, Technische Universität Dortmund, D-44221 Dortmund, Germany, Università di Padova, INFN, I-35131 Padova, Italy, Inst. de Astrofísica de Canarias, La Laguna, Tenerife, Spain, Depto. de Astrofísica, Universidad de La Laguna, La Laguna, Spain, Tuorla Observatory, University of Turku, Piikkiö, Finland, Deutsches Elektronen-Synchrotron ETH Zurich, Switzerland, Universität Würzburg, Würzburg, Germany, Universitat de Barcelona Università di Udine, INFN Trieste, Udine, Italy, , Bellaterra, Spain, , Granada, Spain, Croatian MAGIC Consortium, Rudjer Boskovic Institute, University of Rijeka, University of Split, Zagreb, Croatia, Universitat Autònoma de Barcelona, Bellaterra, Spain, Inst. for Nucl. Research, Nucl. Energy, Sofia, Bulgaria, INAF/Osservatorio Astronomico, INFN, Trieste, Italy, Università di Pisa, INFN Pisa, Pisa, Italy, ICREA, Barcelona, Spain, , Lausanne, Switzerland, now at: Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas, Madrid, Spain, , University of Turku, Finland, INAF/Osservatorio Astronomico di Brera, Merate, Italy) July 11, 2012 astro-ph.CO, astro-ph.HE We present the results of a long M87 monitoring campaign in very high energy $\gamma$-rays with the MAGIC-I Cherenkov telescope. We aim to model the persistent non-thermal jet emission by monitoring and characterizing the very high energy $\gamma$-ray emission of M87 during a low state. A total of 150\,h of data were taken between 2005 and 2007 with the single MAGIC-I telescope, out of which 128.6\,h survived the data quality selection. We also collected data in the X-ray and \textit{Fermi}--LAT bands from the literature (partially contemporaneous). No flaring activity was found during the campaign. The source was found to be in a persistent low-emission state, which was at a confidence level of $7\sigma$. We present the spectrum between 100\,GeV and 2\,TeV, which is consistent with a simple power law with a photon index $\Gamma=2.21\pm0.21$ and a flux normalization at 300\,GeV of $(7.7\pm1.3) \times 10^{-8}$ TeV$^{-1}$ s$^{-1}$ m$^{-2}$. The extrapolation of the MAGIC spectrum into the GeV energy range matches the previously published \textit{Fermi}--LAT spectrum well, covering a combined energy range of four orders of magnitude with the same spectral index. We model the broad band energy spectrum with a spine layer model, which can satisfactorily describe our data. Constraining Cosmic Rays and Magnetic Fields in the Perseus Galaxy Cluster with TeV observations by the MAGIC telescopes (1111.5544) MAGIC Collaboration: J. Aleksić, L. A. Antonelli, U. Barres de Almeida, W. Bednarek, A. Biland, D. Borla Tridon, E. Carmona, J. L. Contreras, A. De Angelis, C. Delgado Mendez, A. Domínguez, D. Eisenacher, C. Fruck, G. Giavitto, D. Häfner, J. Hose, S. Klepser, A. La Barbera, S. Lombardi, E. Lorenz, K. Mannheim, D. Mazin, A. Moralejo, K. Nilsson, R. Paoletti, M. A. Perez-Torres, J. Pochon, I. Puerto Gimenez, W. Rhode, K. Saito, V. Scalzotto, S. N. Shore, D. Sobczynska, A. Stamerra, T. Surić, P. Temnikov, D. F. Torres, P. Vogler, F. Zandanel, A. 
Pinzke Universidad Complutense, Madrid, Spain, INAF National Institute for Astrophysics, Rome, Italy, Università di Siena, INFN Pisa, Siena, Italy, Max-Planck-Institut für Physik, München, Germany, Università di Padova, INFN, Padova, Italy, Inst. de Astrofísica de Canarias, La Laguna, Tenerife, Spain, Depto. de Astrofísica, Universidad de La Laguna, La Laguna, Spain, Tuorla Observatory, University of Turku, Piikkiö, Finland, Deutsches Elektronen-Synchrotron ETH Zurich, Zurich, Switzerland, Universitat de Barcelona Università di Udine, INFN Trieste, Udine, Italy, Institut de Ciències de l'Espai Inst. de Astrofísica de Andalucía Croatian MAGIC Consortium, Rudjer Boskovic Institute, University of Rijeka, University of Split, Zagreb, Croatia, Inst. for Nucl. Research, Nucl. Energy, Sofia, Bulgaria, INAF/Osservatorio Astronomico, INFN, Trieste, Italy, Università dell'Insubria, Como, Como, Italy, ICREA, Barcelona, Spain, now at: Ecole polytechnique fédérale de Lausanne now at: Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas now at: Finnish Centre for Astronomy with ESO HITS, Heidelberg, Germany, May 4, 2012 astro-ph.CO, astro-ph.HE Galaxy clusters are being assembled today in the most energetic phase of hierarchical structure formation which manifests itself in powerful shocks that contribute to a substantial energy density of cosmic rays (CRs). Hence, clusters are expected to be luminous gamma-ray emitters since they also act as energy reservoirs for additional CR sources, such as active galactic nuclei and supernova-driven galactic winds. To detect the gamma-ray emission from CR interactions with the ambient cluster gas, we conducted the deepest to date observational campaign targeting a galaxy cluster at very high-energy gamma-rays and observed the Perseus cluster with the MAGIC Cherenkov telescopes for a total of ~85 hr of effective observing time. This campaign resulted in the detection of the central radio galaxy NGC 1275 at energies E > 100 GeV with a very steep energy spectrum. Here, we restrict our analysis to energies E > 630 GeV and detect no significant gamma-ray excess. This constrains the average CR-to-thermal pressure ratio to be <= 1-2%, depending on assumptions and the model for CR emission. Comparing these gamma-ray upper limits to predictions from cosmological cluster simulations that include CRs constrains the maximum CR acceleration efficiency at structure formation shocks to be < 50%. Alternatively, this may argue for non-negligible CR transport processes such as CR streaming and diffusion into the outer cluster regions. Finally, we derive lower limits on the magnetic field distribution assuming that the Perseus radio mini-halo is generated by secondary electrons/positrons that are created in hadronic CR interactions: assuming a spectrum of E^-2.2 around TeV energies as implied by cluster simulations, we limit the central magnetic field to be > 4-9 microG, depending on the rate of decline of the magnetic field strength toward larger radii. Detection of very-high energy \gamma-ray emission from NGC 1275 by the MAGIC telescopes (1112.3917) MAGIC Collaboration: J. Aleksić, L. A. Antonelli, U. Barres de Almeida, W. Bednarek, A. Biland, D. Borla Tridon, E. Carmona, J. L. Contreras, A. De Angelis, C. Delgado Mendez, A. Domínguez, D. Eisenacher, C. Fruck, G. Giavitto, D. Häfner, J. Hose, S. Klepser, A. La Barbera, S. Lombardi, E. Lorenz, K. Mannheim, D. Mazin, A. Moralejo, K. Nilsson, R. Paoletti, M. A. Perez-Torres, J. Pochon, I. Puerto Gimenez, W. 
Rhode, K. Saito, V. Scalzotto, S. N. Shore, D. Sobczynska, A. Stamerra, T. Surić, P. Temnikov, D. F. Torres, P. Vogler, F. Zandanel, A. Pinzke Universidad Complutense, Madrid, Spain, INAF National Institute for Astrophysics, Rome, Italy, Technische Universität Dortmund, Dortmund, Germany, Max-Planck-Institut für Physik, München, Germany, Università di Padova, INFN, Padova, Italy, Inst. de Astrofísica de Canarias, La Laguna, Tenerife, Spain, Depto. de Astrofísica, Universidad de La Laguna, La Laguna, Spain, Tuorla Observatory, University of Turku, Piikkiö, Finland, Deutsches Elektronen-Synchrotron ETH Zurich, Zurich, Switzerland, Universitat de Barcelona Università di Udine, INFN Trieste, Udine, Italy, Institut de Ciències de l'Espai Inst. de Astrofísica de Andalucía Croatian MAGIC Consortium, Rudjer Boskovic Institute, University of Rijeka, University of Split, Zagreb, Croatia, Inst. for Nucl. Research, Nucl. Energy, Sofia, Bulgaria, INAF/Osservatorio Astronomico, INFN, Trieste, Italy, Università dell'Insubria, Como, Como, Italy, ICREA, Barcelona, Spain, now at: Ecole polytechnique fédérale de Lausanne now at: Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas now at: Finnish Centre for Astronomy with ESO HITS, Heidelberg, Germany, We report on the detection of very-high energy (VHE, E>100 GeV) gamma-ray emission from NGC 1275, the central radio galaxy of the Perseus cluster of galaxies. The source has been detected by the MAGIC telescopes with a statistical significance of 6.6 sigma above 100 GeV in 46 hr of stereo observations carried out between August 2010 and February 2011. The measured differential energy spectrum between 70 GeV and 500 GeV can be described by a power law with a steep spectral index of \Gamma=-4.1+/-0.7stat+/-0.3syst, and the average flux above 100 GeV is F_{gamma}=(1.3+/-0.2stat+/-0.3syst) x 10^-11 cm^-2 s^-1. These results, combined with the power-law spectrum measured in the first two years of observations by the Fermi-LAT above 100 MeV, with a spectral index of Gamma ~ -2.1, strongly suggest the presence of a break or cut-off around tens of GeV in the NGC 1275 spectrum. The light curve of the source above 100 GeV does not show hints of variability on a month time scale. Finally, we report on the nondetection in the present data of the radio galaxy IC 310, previously discovered by the Fermi-LAT and MAGIC. The derived flux upper limit F^{U.L.}_{gamma} (>300 GeV)=1.2 x 10^-12 cm^-2 s^-1 is a factor ~ 3 lower than the mean flux measured by MAGIC between October 2009 and February 2010, thus confirming the year time-scale variability of the source at VHE. Phase-resolved energy spectra of the Crab Pulsar in the range of 50-400 GeV measured with the MAGIC Telescopes (1109.6124) MAGIC Collaboration: J. Aleksić, L. A. Antonelli, J. A. Barrio, A. Berdyugin, O. Blanch, D. Borla Tridon, A. Carosi, J. Cortina, G. De Caneva, C. Delgado Mendez, D. Dominis Prester, D. Elsaesser, R. J. García López, N. Godinović, D. Hildebrand, T. Jogler, J. Krause, E. Leonardo, A. López-Oramas, N. Mankuzhiyil, M. Martínez, R. Mirzoyan, A. Niedzwiecki, R. Orito, S. Partini, M. Pilia, E. Prandini, R. Reinthal, S. Rügamer, K. Satalecka, T. Schweizer, J. Sitarek, V. Stamatescu, N. Strah, P. Temnikov, O. Tibolla, H. Vankov, F. Zandanel IFAE, Edifici Cn., Campus UAB, Bellaterra, Spain, INAF National Institute for Astrophysics, Rome, Italy, Università di Siena, , INFN Pisa, Siena, Italy, Technische Universität Dortmund, Dortmund, Germany, Inst. 
de Astrofísica de Canarias, La Laguna, Tenerife, Spain, Depto. de Astrofísica, Universidad de La Laguna, La Laguna, Spain, University of Łódź, Lodz, Poland, Tuorla Observatory, University of Turku, Piikkiö, Finland, , Zeuthen, Germany, Max-Planck-Institut für Physik, München, Germany, Universität Würzburg, Würzburg, Germany, Università di Udine, INFN Trieste, Udine, Italy, Institut de Ciències de l'Espai Inst. de Astrofísica de Andalucía Croatian MAGIC Consortium, Rudjer Boskovic Institute, University of Rijeka, University of Split, Zagreb, Croatia, Universitat Autònoma de Barcelona, Bellaterra, Spain, INAF/Osservatorio Astronomico, INFN, Trieste, Italy, Università dell'Insubria, Como, Como, Italy, Università di Pisa, INFN Pisa, Pisa, Italy, now at: Ecole polytechnique fédérale de Lausanne supported by INFN Padova, now at: Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas now at: Finnish Centre for Astronomy with ESO, University of Turku, Finland We use 73 h of stereoscopic data taken with the MAGIC telescopes to investigate the very high-energy (VHE) gamma-ray emission of the Crab pulsar. Our data show a highly significant pulsed signal in the energy range from 50 to 400 GeV in both the main pulse (P1) and the interpulse (P2) phase regions. We provide the widest spectra to date of the VHE components of both peaks, and these spectra extend to the energy range of satellite-borne observatories. The good resolution and background rejection of the stereoscopic MAGIC system allows us to cross-check the correctness of each spectral point of the pulsar by comparison with the corresponding (strong and well-known) Crab nebula flux. The spectra of both P1 and P2 are compatible with power laws with photon indices of 4.0 \pm 0.8 (P1) and 3.42 \pm 0.26 (P2), respectively, and the ratio P1/P2 between the photon counts of the two pulses is 0.54 \pm 0.12. The VHE emission can be understood as an additional component produced by the inverse Compton scattering of secondary and tertiary e\pm pairs on IR-UV photons. Morphological and spectral properties of the W51 region measured with the MAGIC telescopes (1201.4074) MAGIC Collaboration: J. Aleksić, L. A. Antonelli, U. Barres de Almeida, W. Bednarek, O. Blanch, D. Borla Tridon, P. Colin, L. Cossio, G. De Caneva, C. Delgado Mendez, D. Dominis Prester, D. Elsaesser, R. J. García López, N. Godinović, D. Hadasch, J. Hose, V. Kadenius, J. Krause, N. Lewandowska, A. López-Oramas, N. Mankuzhiyil, M. Martínez, R. Mirzoyan, A. Niedzwiecki, S. Paiano, J. M. Paredes, M. Pilia, I. Puerto Gimenez, W. Rhode, K. Saito, V. Scalzotto, A. Sillanpää, F. Spanier, B. Steinke, L. Takalo, D. Tescaro, A. Treves, V. Zabalza IFAE, Edifici Cn., Campus UAB, Bellaterra, Spain, Universidad Complutense, Madrid, Spain, Università di Siena, INFN Pisa, Siena, Italy, Technische Universität Dortmund, Dortmund, Germany, Max-Planck-Institut für Physik, München, Germany, Inst. de Astrofísica de Canarias, La Laguna, Tenerife, Spain, Depto. de Astrofísica, Universidad de La Laguna, La Laguna, Spain, University of Łódź, Lodz, Poland, , Zeuthen, Germany, Universität Würzburg, Würzburg, Germany, , Barcelona, Spain, Università di Udine, INFN Trieste, Udine, Italy, Inst. de Astrofísica de Andalucía Croatian MAGIC Consortium, Rudjer Boskovic Institute, University of Rijeka, University of Split, Zagreb, Croatia, Universitat Autònoma de Barcelona, Bellaterra, Spain, Tuorla Observatory, University of Turku, Piikkiö, Finland, Inst. for Nucl. Research, Nucl. 
Energy, Sofia, Bulgaria, Japanese MAGIC Consortium, Division of Physics, Astronomy, Kyoto University, Japan, INAF/Osservatorio Astronomico, INFN, Trieste, Italy, Università di Pisa, INFN Pisa, Pisa, Italy, ICREA, Barcelona, Spain, , Lausanne, Switzerland, Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas, Madrid, Spain, , University of Turku, Finland) The W51 complex hosts the supernova remnant W51C which is known to interact with the molecular clouds in the star forming region W51B. In addition, a possible pulsar wind nebula CXO J192318.5+140305 was found likely associated with the supernova remnant. Gamma-ray emission from this region was discovered by Fermi/LAT (between 0.2 and 50 GeV) and H.E.S.S. (>1 TeV). The spatial distribution of the events could not be used to pinpoint the location of the emission among the pulsar wind nebula, the supernova remnant shell and/or the molecular cloud. However, the modeling of the spectral energy distribution presented by the Fermi/LAT collaboration suggests a hadronic emission mechanism. We performed observations of the W51 complex with the MAGIC telescopes for more than 50 hours. The good angular resolution in the medium (few hundred GeV) to high (above 1 TeV) energies allow us to perform morphological studies. We detect an extended emission of very-high-energy gamma rays, with a significance of 11 standard deviations. We extend the spectrum from the highest Fermi/LAT energies to \sim 5 TeV and find that it follows a single power law with an index of 2.58 \pm 0.07stat \pm 0.22syst . The main part of the emission coincides with the shocked cloud region, while we find a feature extending towards the pulsar wind nebula. The possible contribution of the pulsar wind nebula, assuming a point-like source, shows no dependence on energy and it is about 20% of the overall emission. The broad band spectral energy distribution can be explained with a hadronic model that implies proton acceleration above 100 TeV. This result, together with the morphology of the source, tentatively suggests that we observe ongoing acceleration of ions in the interaction zone between supernova remnant and cloud. These results shed light on the long-standing problem of the origin of galactic cosmic rays. Detection of the gamma-ray binary LS I +61 303 in a low flux state at Very High Energy gamma-rays with the MAGIC Telescopes in 2009 (1111.6572) MAGIC Collaboration: J. Aleksić, L. A. Antonelli, J. A. Barrio, A. Berdyugin, O. Blanch, D. Borla Tridon, A. Cañellas, J. L. Contreras, F. Dazzi, B. De Lotto, A. Domínguez, D. Elsaesser, C. Fruck, G. Giavitto, A. Herrero, D. Hrupec, S. Klepser, D. Lelas, A. López-Oramas, N. Mankuzhiyil, M. Martínez, R. Mirzoyan, P. Munar-Adrover, I. Oya, S. Partini, M. Persic, P. G. Prada Moroni, R. Reinthal, S. Rügamer, K. Satalecka, T. Schweizer, J. Sitarek, A. Stamerra, L. Takalo, T. Terzić, D. F. Torres, P. Vogler, R. Zanin IFAE, Edifici Cn., Campus UAB, E-08193 Bellaterra, Spain, INAF National Institute for Astrophysics, I-00136 Rome, Italy, Università di Siena, , INFN Pisa, I-53100 Siena, Italy, Technische Universität Dortmund, D-44221 Dortmund, Germany, Università di Padova, INFN, I-35131 Padova, Italy, Inst. de Astrofísica de Canarias, E-38200 La Laguna, Tenerife, Spain, Depto. 
de Astrofísica, Universidad de La Laguna, E-38206 La Laguna, Spain, University of Łódź, PL-90236 Lodz, Poland, Tuorla Observatory, University of Turku, FI-21500 Piikkiö, Finland, ETH Zurich, CH-8093 Switzerland, Max-Planck-Institut für Physik, D-80805 München, Germany, Universität Würzburg, D-97074 Würzburg, Germany, , E-08028 Barcelona, Spain, Università di Udine, INFN Trieste, I-33100 Udine, Italy, , E-08193 Bellaterra, Spain, , E-18080 Granada, Spain, Croatian MAGIC Consortium, Rudjer Boskovic Institute, University of Rijeka, University of Split, HR-10000 Zagreb, Croatia, Universitat Autònoma de Barcelona, E-08193 Bellaterra, Spain, Inst. for Nucl. Research, Nucl. Energy, BG-1784 Sofia, Bulgaria, INAF/Osservatorio Astronomico, INFN, I-34143 Trieste, Italy, Università dell'Insubria, Como, I-22100 Como, Italy, Università di Pisa, INFN Pisa, I-56126 Pisa, Italy, ICREA, E-08010 Barcelona, Spain, , Lausanne, Switzerland, now at: Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas, Madrid, Spain, , University of Turku, Finland) We present very high energy (VHE, E > 100 GeV) {\gamma}-ray observations of the {\gamma}-ray binary system LS I+61 303 obtained with the MAGIC stereo system between 2009 October and 2010 January. We detect a 6.3{\sigma} {\gamma}-ray signal above 400 GeV in the combined data set. The integral flux above an energy of 300 GeV is F(E>300 GeV)=(1.4 +- 0.3stat +- 0.4syst) * 10^{-12} cm^{-2} s^{-1}, which corresponds to about 1.3% of the Crab Nebula flux in the same energy range. The orbit-averaged flux of LS I +61 303 in the orbital phase interval 0.6--0.7, where a maximum of the TeV flux is expected, is lower by almost an order of magnitude compared to our previous measurements between 2005 September and 2008 January. This provides evidence for a new low emission state in LS I +61 303. We find that the change to the low flux state cannot be solely explained by an increase of photon-photon absorption around the compact star. Performance of the MAGIC stereo system obtained with Crab Nebula data (1108.1477) MAGIC Collaboration: J. Aleksić, L. A. Antonelli, J. A. Barrio, A. Berdyugin, O. Blanch, D. Borla Tridon, A. Carosi, J. Cortina, G. De Caneva, C. Delgado Mendez, D. Dominis Prester, D. Elsaesser, R. J. García López, N. Godinović, D. Hildebrand, T. Jogler, J. Krause, E. Leonardo, A. López-Oramas, N. Mankuzhiyil, M. Martínez, R. Mirzoyan, D. Nieto, R. Paoletti, M. A. Perez-Torres, J. Pochon, I. Puerto Gimenez, W. Rhode, K. Saito, V. Scalzotto, S. N. Shore, D. Sobczynska, A. Stamerra, L. Takalo, D. Tescaro, A. Treves, R. M. Wagner IFAE, Edifici Cn., Campus UAB, E-08193 Bellaterra, Spain, Universidad Complutense, E-28040 Madrid, Spain, INAF National Institute for Astrophysics, I-00136 Rome, Italy, Università di Siena, INFN Pisa, I-53100 Siena, Italy, Technische Universität Dortmund, D-44221 Dortmund, Germany, Università di Padova, INFN, I-35131 Padova, Italy, Inst. de Astrofísica de Canarias, E-38200 La Laguna, Tenerife, Spain, Depto. 
de Astrofísica, Universidad de La Laguna, E-38206 La Laguna, Spain, Tuorla Observatory, University of Turku, FI-21500 Piikkiö, Finland, Deutsches Elektronen-Synchrotron ETH Zurich, CH-8093 Zurich, Switzerland, Max-Planck-Institut für Physik, D-80805 München, Germany, Universität Würzburg, D-97074 Würzburg, Germany, , E-08028 Barcelona, Spain, Università di Udine, INFN Trieste, I-33100 Udine, Italy, , E-08193 Bellaterra, Spain, , E-18080 Granada, Spain, Croatian MAGIC Consortium, Rudjer Boskovic Institute, University of Rijeka, University of Split, HR-10000 Zagreb, Croatia, Universitat Autònoma de Barcelona, E-08193 Bellaterra, Spain, Inst. for Nucl. Research, Nucl. Energy, BG-1784 Sofia, Bulgaria, INAF/Osservatorio Astronomico, INFN, I-34143 Trieste, Italy, Università dell'Insubria, Como, I-22100 Como, Italy, Università di Pisa, INFN Pisa, I-56126 Pisa, Italy, ICREA, E-08010 Barcelona, Spain, , Lausanne, Switzerland, now at: Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas, Madrid, Spain, , University of Turku, Finland) Nov. 17, 2011 astro-ph.IM MAGIC is a system of two Imaging Atmospheric Cherenkov Telescopes located in the Canary island of La Palma. Since autumn 2009 both telescopes have been working together in stereoscopic mode, providing a significant improvement with respect to the previous single-telescope observations. We use observations of the Crab Nebula taken at low zenith angles to assess the performance of the MAGIC stereo system. The trigger threshold of the MAGIC telescopes is 50-60 GeV. Advanced stereo analysis techniques allow MAGIC to achieve a sensitivity as good as (0.76 +/- 0.03)% of the Crab Nebula flux in 50 h of observations above 290 GeV. The angular resolution at those energies is better than ~0.07 degree. We also perform a detailed study of possible systematic effects which may influence the analysis of the data taken with the MAGIC telescopes. MAGIC contributions to the 32nd International Cosmic Ray Conference (1111.0879) MAGIC Collaboration: J. Aleksić, L. A. Antonelli, U. Barres de Almeida, W. Bednarek, A. Biland, D. Borla Tridon, E. Carmona, J. L. Contreras, A. De Angelis, C. Delgado Mendez, A. Domínguez, D. Eisenacher, C. Fruck, G. Giavitto, D. Häfner, J. Hose, S. Klepser, A. La Barbera, S. Lombardi, E. Lorenz, K. Mannheim, M. Martínez, R. Mirzoyan, A. Niedzwiecki, S. Paiano, S. Partini, M. Pilia, E. Prandini, R. Reinthal, S. Rügamer, K. Satalecka, T. Schweizer, J. Sitarek (1, 10), I. Snidaric, V. Stamatescu, S. Sun, P. Temnikov, O. Tibolla, H. Vankov, V. Zabalza IFAE, Edifici Cn., Campus UAB, Bellaterra, Spain, INAF National Institute for Astrophysics, Rome, Italy, Università di Siena, , INFN Pisa, Siena, Italy, Technische Universität Dortmund, Dortmund, Germany, Università di Padova, INFN, Padova, Italy, Inst. de Astrofísica de Canarias, Tenerife, Spain, Depto. de Astrofísica, Universidad de La Laguna, La Laguna, Spain, Tuorla Observatory, University of Turku, Piikkiö, Finland, Deutsches Elektronen-Synchrotron ETH Zurich, Zurich, Switzerland, Universitat de Barcelona Università di Udine, INFN Trieste, Udine, Italy, Institut de Ciències de l'Espai Inst. de Astrofísica de Andalucía Croatian MAGIC Consortium, Rudjer Boskovic Institute, University of Rijeka, University of Split, Zagreb, Croatia, Inst. for Nucl. Research, Nucl. 
Energy, Sofia, Bulgaria, INAF/Osservatorio Astronomico, INFN, Trieste, Italy, Università dell'Insubria, Como, Italy, ICREA, Barcelona, Spain, , Lausanne, Switzerland, Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas, Madrid, Spain, , University of Turku, Finland) Nov. 8, 2011 astro-ph.HE Compilation of the papers contributed by the MAGIC collaboration to the 32nd International Cosmic Ray Conference (ICRC 2011), which took place between August 11 and 18, 2011 in Beijing, China. The papers are sorted in 6 categories: Overview and Highlight papers; Instrument, software and techniques; Galactic sources; Extragalactic sources; Multi-wavelength and joint campaigns; Fundamental physics, dark matter and cosmic rays. Observations of the Crab pulsar between 25 GeV and 100 GeV with the MAGIC I telescope (1108.5391) MAGIC Collaboration: J. Aleksić, E. A. Alvarez, L. A. Antonelli, P. Antoranz, M. Asensio, M. Backes, J. A. Barrio, D. Bastieri, J. Becerra González, W. Bednarek, A. Berdyugin, K. Berger, E. Bernardini, A. Biland, O. Blanch, R. K. Bock, A. Boller, G. Bonnoli, D. Borla Tridon, I. Braun, T. Bretz, A. Cañellas, E. Carmona, A. Carosi, P. Colin, E. Colombo, J. L. Contreras, J. Cortina, L. Cossio, S. Covino, F. Dazzi, A. De Angelis, G. De Caneva, E. De Cea del Pozo, B. De Lotto, C. Delgado Mendez, A. Diago Ortega, M. Doert, A. Domínguez, D. Dominis Prester, D. Dorner, M. Doro, D. Eisenacher, D. Elsaesser, D. Ferenc, M. V. Fonseca, L. Font, C. Fruck, R. J. García López, M. Garczarczyk, D. Garrido, G. Giavitto, N. Godinović, D. Hadasch, D. Häfner, A. Herrero, D. Hildebrand, D. Höhne-Mönch, J. Hose, D. Hrupec, T. Jogler, H. Kellermann, S. Klepser, T. Krähenbühl, J. Krause, J. Kushida, A. La Barbera, D. Lelas, E. Leonardo, E. Lindfors, S. Lombardi, M. López, A. López-Oramas, E. Lorenz, M. Makariev, G. Maneva, N. Mankuzhiyil, K. Mannheim, L. Maraschi, B. Marcote, M. Mariotti, M. Martínez, D. Mazin, M. Meucci, J. M. Miranda, R. Mirzoyan, J. Moldón, A. Moralejo, P. Munar-Adrover, D. Nieto, K. Nilsson, R. Orito, N. Otte, I. Oya, D. Paneque, R. Paoletti, S. Pardo, J. M. Paredes, S. Partini, M. A. Perez-Torres, M. Persic, L. Peruzzo, M. Pilia, J. Pochon, F. Prada, P. G. Prada Moroni, E. Prandini, I. Puerto Gimenez, I. Puljak, I. Reichardt, R. Reinthal, W. Rhode, M. Ribó, J. Rico, M. Rissi, S. Rügamer, A. Saggion, K. Saito, T. Y. Saito, M. Salvati, K. Satalecka, V. Scalzotto, V. Scapin, C. Schultz, T. Schweizer, M. Shayduk, S. N. Shore, A. Sillanpää, J. Sitarek, I. Snidaric, D. Sobczynska, F. Spanier, S. Spiro, V. Stamatescu, A. Stamerra, B. Steinke, J. Storz, N. Strah, T. Surić, L. Takalo, H. Takami, F. Tavecchio, P. Temnikov, T. Terzić, D. Tescaro, M. Teshima, O. Tibolla, D. F. Torres, A. Treves, M. Uellenbeck, H. Vankov, P. Vogler, R. M. Wagner, Q. Weitzel, V. Zabalza, F. Zandanel, R. Zanin, K. Hirotani We report on the observation of $\gamma$-rays above 25\,GeV from the Crab pulsar (PSR B0532+21) using the MAGIC I telescope. Two data sets from observations during the winter period 2007/2008 and 2008/2009 are used. In order to discuss the spectral shape from 100\,MeV to 100\,GeV, one year of public {\it Fermi} Large Area Telescope ({\it Fermi}-LAT) data are also analyzed to complement the MAGIC data. The extrapolation of the exponential cutoff spectrum determined with the Fermi-LAT data is inconsistent with MAGIC measurements, which requires a modification of the standard pulsar emission models. 
In the energy region between 25 and 100 GeV, the emission in the P1 phase (from -0.06 to 0.04, the location of the main pulse) and the P2 phase (from 0.32 to 0.43, the location of the interpulse) can be described by power laws with spectral indices of $-3.1 \pm 1.0_{stat} \pm 0.3_{syst}$ and $-3.5 \pm 0.5_{stat} \pm 0.3_{syst}$, respectively. Assuming an asymmetric Lorentzian for the pulse shape, the peak positions of the main pulse and the interpulse are estimated to be at phases $-0.009 \pm 0.007$ and $0.393 \pm 0.009$, while the full widths at half maximum are $0.025 \pm 0.008$ and $0.053 \pm 0.015$, respectively.
Complexity class of comparison of power towers

Consider the following decision problem: given two lists of positive integers $a_1, a_2, \dots, a_n$ and $b_1, b_2, \dots, b_m$, the task is to decide whether $a_1^{a_2^{\cdot^{\cdot^{\cdot^{a_n}}}}} < b_1^{b_2^{\cdot^{\cdot^{\cdot^{b_m}}}}}$.

Is this problem in the class $P$? If yes, what is the algorithm solving it in polynomial time? Otherwise, what is the fastest algorithm that can solve it? By polynomial I mean polynomial time with respect to the size of the input, i.e. the total number of digits in all $a_i, b_i$.

Note that exponentiation associates to the right: $p^{q^{r^s}}=p^{(q^{(r^s)})}$, not $((p^q)^r)^s$.

Tags: number-theory, algorithms, computational-complexity, exponentiation, power-towers

– Vladimir Reshetnikov (edited by Lord_Farin)

At first glance, it doesn't seem likely that the problem is in P. If you are to compute each tower, which is only too likely in a worst-case scenario, you will need at least exponential space. For example, to write $a^b$ you need $b\log(a)$ space, and $b$ is exponential w.r.t. $\log(b)$, which is the size of $b$. I think that in general you will need something like $\exp(\exp(\cdots\exp(c)\cdots))$ space, where $c$ is the topmost element of the tower ($a_n$ or $b_m$), so the height of the exponential stack is, I think, $n$. – Janoma Jan 28 '12 at 17:46

It seems like an interesting question, but I think you need limitations on $a_i$ and $b_i$ for this to be answerable. Otherwise, you can introduce an arbitrary space requirement (and thus PSPACE, EXPSPACE, etc.) just for comparing $a_1$ and $b_1$. That said, I also agree with Janoma that this seems likely to be EXPSPACE even with bounded numbers. – Desiato Mar 20 '12 at 1:25

A related question is discussed at mathematica.stackexchange.com/questions/24815/… – Liu Jin Tsai May 8 '13 at 2:26

Obviously, in many cases there is a cheaper way to get the answer than to explicitly calculate the power towers and then compare them digit by digit. I suppose you do not need to fully calculate the numbers to see that 2^2^2^2^2^2 < 3^3^3^3^3^3^3^3^3. – Vladimir Reshetnikov May 8 '13 at 4:17

@VladimirReshetnikov Thanks for your +350! I reinvested them into a new bounty. Hopefully we will see a complete and correct solution soon. – Piotr Shatalin May 15 '13 at 3:35
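For a sense of scale, here is a minimal brute-force sketch (my code, not from the thread): evaluate both towers exactly with Python big integers and compare. It is correct, but as the comments note, the intermediate values have iterated-exponential size, so this is nowhere near polynomial time in the number of input digits.

    def tower_value(xs):
        """ Exact value of xs[0]**xs[1]**...**xs[-1] as a big integer. """
        r = 1
        for x in reversed(xs):
            r = x ** r
        return r

    def tower_less_naive(a, b):
        # Correct, but hopeless beyond tiny inputs: already
        # tower_value([2]*7) has about 2^65536 bits.
        return tower_value(a) < tower_value(b)

    print tower_less_naive([2, 3], [3, 2])   # True: 2^3 = 8 < 9 = 3^2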
For readability I'll write $[a_1,a_2,\ldots,a_n]$ for the tower $a_1^{a_2^{a_3^\cdots}}$.

Let all of the $a_i,b_i$ be in the interval $[2,N]$ where $N=2^K$ (if any $a_i$ or $b_i$ is 1 we can truncate the tower at the previous level, and the inputs must be bounded to talk about complexity). Then consider two towers of the same height
$$ T=[N,N,\ldots,N,x] \quad \mathrm{and} \quad S=[2,2,\dots,2,y], $$
i.e. $T$ is the largest tower in our input range with $x$ at the top, and $S$ is the smallest with $y$ at the top. With $N, x\ge 2$ and $y>2Kx$ (all logs base 2, so $\log(N)=K$) then
$$ \begin{aligned} 2^y & > 2^{2Kx} \\ & = N^{2x} \\ & > 2\log(N)N^x &\text{ as $x > 1 \ge \frac{1+\log(\log(N))}{\log(N)}$} \\ & = 2KN^x \end{aligned} $$
Now write $x'=N^x$ and $y'=2^y>2Kx'$; then
$$ [N,N,x]=[N,x']<[2,y']=[2,2,y]. $$
Hence by induction $T<S$ when $y>2Kx$. So we only need to calculate the exponents down from the top until one exceeds the other by a factor of $2K$; then that tower is greater no matter what values fill in the lower ranks.

If the towers have different heights, wlog assume $n>m$; then first we reduce
$$ [a_1,a_2,\ldots,a_n] = [a_1,a_2,\ldots,a_{m-1},A] $$
where $A=[a_m,a_{m+1},\ldots,a_n]$. If we can determine that $A>2Kb_m$ then the $a$ tower is larger.

If the towers match on several of the highest exponents, then we can reduce the need for large computations with a shortcut. Assume $n=m$, that $a_j>b_j$ for some $j<m$ and $a_i=b_i$ for $j<i\le m$. Then
$$ [a_1,a_2,\ldots,a_m] = [a_1,a_2,\ldots,a_j,X] \\ [b_1,b_2,\ldots,b_m] = [b_1,b_2,\ldots,b_j,X] $$
and the $a$ tower is larger if $(a_j/b_j)^X>2K$. So we don't need to compute $X$ fully if we can determine that it exceeds $\log(2K)/\log(a_j/b_j)$.

These checks need to be combined with a numeric method like the one @ThomasAhle gave. They can solve the problem that method has with deep towers that match at the top, but they can't handle $[4,35,15],[20,57,13]$, which are too big to compute yet don't allow for one of these shortcuts.

– Zander (edited by Thomas Ahle)

Very nice! I think you've got the correct answer. Only, in the inductive proof, why is $2^y = (N^x)^2$? – Thomas Ahle May 22 '13 at 7:02

@ThomasAhle Oh thanks, that should be greater than, not equals: $N=2^K$, $y>2Kx$, so $2^y>2^{2Kx}=((2^K)^x)^2=(N^x)^2$. – Zander May 22 '13 at 11:12

Got it. So for $a > b \ge 2$ we just need $x > \log(2K)/\log(N/(N-1)) \ge \log(2K)/\log(a/b)$ to get $a^x > 2Kb^x$. For $N = 100$ that means $x$ just needs to be at least 258. Amazing. – Thomas Ahle May 22 '13 at 12:06

@ThomasAhle Yes, I agree something else is needed. For 4 (or any fixed number of) levels I think we can just increase our calculation precision and still keep it polynomial time. But that doesn't work for an arbitrary number of "close" levels. I think abc-conjecture type bounds (Stewart & Yu e.g.) can do it by guaranteeing the difference is large even if the ratio is close to 1. I don't have time right now but will try to write this up later. – Zander May 23 '13 at 11:08

@Zander Came across this problem and I've been interested in a solution that works for all cases. I know it's been a few years, but if you're still around do you think you could write it up? – exfret Mar 31 at 21:25
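As a concrete illustration of Zander's stopping rule, here is a small sketch (my code, not Zander's; the entry bounds, the float-overflow cutoff, and the example are my assumptions). It walks two equal-height towers from the top and stops as soon as one running segment exceeds the other by the factor $2K$:

    from math import log

    def compare_equal_height(a, b, N):
        """ Compare two equal-height towers with all entries in [2, N] by
            scanning from the top and applying the 2K-factor rule above.
            Returns 1 if the a-tower is larger, -1 if the b-tower is larger,
            0 if this simple float version must give up before deciding. """
        assert len(a) == len(b)
        K = log(N, 2)
        x, y = 1.0, 1.0                      # values of the top segments so far
        for ai, bi in reversed(zip(a, b)):
            if max(x, y) * log(max(ai, bi), 2) > 1000:
                return 0                     # next level would overflow floats;
                                             # the ratio shortcut must take over
            x, y = ai ** x, bi ** y          # extend both segments one level down
            if x > 2 * K * y:
                return 1                     # a wins whatever fills the lower ranks
            if y > 2 * K * x:
                return -1
        return cmp(x, y)                     # towers fit in floats: compare directly

    print compare_equal_height([2, 2, 5, 2], [2, 3, 2, 2], 5)   # 1: 2^(2^25) > 2^(3^4)

The early exit is the point of the rule: once the factor-$2K$ gap opens, no values in the lower ranks can change the order, so the scan never needs the full, astronomically large, values.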
See: mrob.com/pub/perl/hypercalc.html $\endgroup$ – Gottfried Helms May 21 '13 at 15:07 $\begingroup$ Hypercalc has 27^googolplex = 10^googolplex. It probably works much like my solution below $\endgroup$ – Thomas Ahle May 21 '13 at 17:04 My approach is similarly numeric to Leonid's, but more precise and perhaps easier to analyse. It supports real exponents > 0. The idea is to represent power towers as a single floating point number with $n$ exponentiations: $(x\mid n) := exp^n(x)$. Normalizing $x\in[0,1)$, this format allows easy comparison between numbers. What remains is a way to calculate $a^{(x\mid n)}$ for any real, positive $a$. My Python code below is an attempt to do so, while being as numerically stable as possibly, e.g. by using the log-sum trick. My code runs in time proportional to the height of the tower (number of apow calls) and the iterated-log of it's value (number of recursive calls). I haven't been able to find two towers with values close enough to case my method to fail. At least for integer exponents. With fractional exponents it is possible to create very towers too close for my representation to handle. E.g. $$ 2^{2^{2^{2^0}}} \\ < \\ 2^{2^{2^{2^{(1/2)^{2^{2^{2^2}}}}}}} $$ I would be interested in suggestions to other types of counter examples, especially integer ones. It seems to me that for the problem to be in P, we need to non-numerical methods. It doesn't seem unlikely at all, that certain analytical cases are harder than P. from math import log, exp def normalize((x,n)): """ Adjusts n to put x in the range [0,1) """ if x >= 1: return normalize((log(x),n+1)) if x < 0: return normalize((exp(x),n-1)) return (x,n) normdec = lambda f: lambda *a: normalize(f(*a)) @normdec def apow(a,(x,n)): """ Calculates a^(x|n) """ if a == 1: return (1,0) if a < 1: return (rpow(x,n)*log(a), 1) if n == 0: return (x*log(a), 1) if n >= 1: y, k = cpow(log(log(a)), x, n-1) return (y, k+2) def cpow(c,x,n): """ Calculates (x|n) + c """ if c == 0: return (x, n) if n == 0: return (x + c, 0) z = rpow(x,n-1) if z <= log(abs(c)): return (exp(z)+c, 0) if c < 0: y, k = cpow(log(1-exp(log(-c)-z)), x, n-1) if c > 0: y, k = cpow(log(1+exp(log(c)-z)), x, n-1) def rpow(x,n): """ Calculates (x|n) as a float Returs Infinty if the value is out of range""" for _ in range(n): x = exp(x) except OverflowError: # We get into this case in two situations # 1) We are calculating log((x|n) + c) and c contributes very little # 2) We are calculating a^(x|n) for a < 1 and (x|n) is so small it doesn't # fit into a float return float('inf') def powtow(bs): """ Calculates b[0]**b[1]**b[2]**...**b[m-1] in the (x|n) form. Equivalent to `foldr apow (1,0) bs' e.g. 
    apow(b[0], apow(b[1], apow(b[2], (1,0)))) """
    if not bs: return (1,0)
    return apow(bs[0], powtow(bs[1:]))

print powtow([1,2,3,4,5])
print powtow([2,3,4,5])
print powtow([2,2,5,2,7,4,9,3,7,6,9,9,9,9,3,2])
print powtow([2,3,2,3,5,8])
print powtow([2,2,2,2,2,2,2,2,2,2,2,2,2,2,4,2,2,2])
print powtow([2,2,2,2,2,2,2,2,2,2,2,2,2,2,4,16])
print powtow([1.54090919967, 1.46228204461, 1.78555495826, 1.75545819035, 2.21730941808, 1.0797564499, 7.90630125423, 0.881978093585, 1.75085618709, 2.23911325176, 1.39697337886, 1.16053659586, 1.5939192079, 6.11401961748, 0.844860266481, 1.92758094038, 4.64573316954, 0.870819420274, 1.49026447511, 1.77839910981, 1.46208378213, 2.29956158055, 1.00884903003, 1.77521724246, 2])
print powtow([2.32185478602, 1.88198918762, 2.27614315145, 1.77518235487, 1.4841479727, 0.563158971798, 0.732132856919, 0.669957968262, 2.16345101714, 2.23185963501, 0.824885385628, 0.873101580546, 1.45714899023, 2.3973000247, 0.507154709525, 1.94022843601, 1.29982267606, 0.578058713016, 1.58207655843, 1.79417433851, 1.18630377782, 1.37314328673, 0.655551609076, 1.57569812897, 1])
print powtow([2, 2, 2, 1])
print powtow([2, 2, 2, 2, .5, 2, 2, 2, 2])

flip = lambda (a,b): (b,a)
snd = lambda (a,b): b
f = lambda xs: [1./x for x in xs]
# Print all power towers of permutations [2..6], sorted
print list(reversed(sorted((flip(powtow(p)),p) for p in itertools.permutations(range(2,7)))))
# Print all power towers of permutations [1/2..1/6], sorted
print list(reversed(sorted((flip(powtow(f(p))),p) for p in itertools.permutations(range(2,7)))))

powtow([2,2,2,2,2,2,2,2,2,2,2,2,2,2,4,2,2,2]) = (0.1184590219613409, 18)
powtow([9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9]) = (0.10111176550354063, 18)
Counter examples:
powtow([2,2,2,2,2,2,2]) = (0.8639310719129168, 6)
– Thomas Ahle
I read petr-mitrichev.blogspot.co.uk/2012/05/… to get some more ideas. It seems like Petr assumes that once the head of the tower is large enough for the LI representation to be equal, we can just look at the remaining tower, from top to bottom, and choose the one with the largest value the first time they differ. – Thomas Ahle May 21 '13 at 21:32
I assume the counterexample in your answer is the case where your algorithm fails to give the correct answer. It is exactly the case on which both WolframAlpha and HyperCalc (try 3^2^2^2^2^2^2 - 2^2^2^2^2^2^2) also fail. Do you have any ideas about how this defect could be fixed? – Vladimir Reshetnikov May 22 '13 at 0:27
Yes. I believe if we bound the exponents so $1 < a < 100$ (or maybe a much larger upper bound) we can use my method for comparing the height 3 or 4 head of the tower. It should be strong enough to tell us if they are equal or one is larger. Plan B is to choose the highest tower. Plan C is the interesting one: At this point the values of the rest of the tower only matter if the heads are equal, so we can walk down the towers in parallel, stopping as soon as we see a differing value. – Thomas Ahle May 22 '13 at 6:48
Hence the main thing to be proven is that once the head of a tower exceeds a certain point, and the rest of the exponents are bounded (and equally numerous), we can simply look at the top differing value. It's a bit counterintuitive, but it seems very likely from the simple inequalities you get.
– Thomas Ahle May 22 '13 at 6:51
From Zander's post it seems like the critical point is just less than $N(C+\log(\log(N)))$ where $N$ is the upper bound for exponents. (And $C = \log(2/\log(2)) < 1.06$.) Since a tower of height 4 (with no ones) is at least 65536, we don't have to calculate more than 4 levels for $N$ up to around 19,000. – Thomas Ahle May 22 '13 at 12:39
I believe this is going to be much much harder than P, even with suitable bounds on the $a_i$ and $b_i$ introduced. Without them, it is trivially beyond any bounded complexity, as one needs to at the very least compare $a_1$ and $b_1$. This is just a really rough idea and might not ultimately work out, but consider representing arbitrary games through power towers, with a certain number of levels representing the game decision tree. So if that tree is just exponential in size for a game, we can fit all move outcomes into a fixed number of levels and can then add several sets of these levels on top to determine alternating moves for both players and have the one result larger than the other if that player has a winning strategy. There are many games that are at the very least PSPACE-complete where the size of the decision tree for possible moves is (multi-)exponentially limited, and I believe some are much harder (Go, for instance, is, with suitable generalisation, EXPTIME-complete, and from intuition I would expect the decision tree to be exponentially limited with regard to the board size). A vague answer, I realize, but maybe someone more familiar with complexity theory can say if that thought makes sense. – Desiato
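To make the bound discussed in the comments above concrete, here is a small sketch of my own (separate from the answers' code) that reproduces their arithmetic: for exponents bounded by $N = 2^K$, having $a_j > b_j$ forces $a_j^x > 2K\,b_j^x$ as soon as $x > \log(2K)/\log(N/(N-1))$.

from math import log

def shortcut_threshold(N):
    # For exponents bounded by N (so N = 2^K), a_j > b_j implies
    # a_j^x > 2K * b_j^x once x > log(2K)/log(N/(N-1)).
    K = log(N, 2)
    return log(2 * K) / log(N / (N - 1.0))

print(shortcut_threshold(100))   # ~257.4, i.e. x >= 258, as computed in the comments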
Impacts of the zero mark-up drug policy on hospitalization expenses of COPD inpatients in Sichuan province, western China: an interrupted time series analysis
Junman Wang, Peiyi Li & Jin Wen
BMC Health Services Research volume 20, Article number: 519 (2020)
Since 1950, hospitals in China had been permitted to take a 15% mark-up on the drug purchase price to remedy the losses of public hospitals and doctors' salaries under a tight government budget. This policy resulted in increasing over-prescription, which eventually increased the burden on patients. The soaring medical expenditures prompted the Chinese government to launch the zero mark-up drug policy (ZMDP) in 2009, which aims to eliminate physicians' financial incentives and lighten patients' economic burden by cancelling the 15% mark-up. The purpose of this study is to assess the impacts of the ZMDP on hospitalization expenses for inpatients with chronic obstructive pulmonary disease (COPD) in western China.
An interrupted time series was used to assess the impact of the ZMDP in 25 tertiary hospitals of Sichuan province, in which the policy was implemented in 2017. Monthly average total hospitalization expenses, including drug expenses, medical service expenses and diagnosis expenses of COPD inpatients, were analyzed with a segmented regression model developed from January 2015 to June 2018.
After the intervention of the ZMDP, the total hospitalization expenses of COPD patients decreased significantly and immediately by 1022.06 CNY (P = .011). The post-policy long-term trend was decreasing by 125.32 CNY (P < .001) per month compared to the pre-policy period. The drug expenses kept a downward trend both before and after the policy implementation: they had decreased by 46.42 CNY (P < .001) per month on average before the policy implementation and then dropped 1073.58 CNY (P < .001) immediately after the policy was implemented. Meanwhile, the medical service expenses had an increasing baseline trend of 14.93 CNY (P < .001) per month before the policy intervention, and they increased 197.75 CNY immediately after the policy was implemented (P = .011). The pre-policy long-term trend of diagnosis expenses had increased by 25.78 CNY (P < .001) per month; they decreased immediately by 310.78 CNY (P = .010), and the post-policy trend was decreasing by 35.60 CNY (P = .001) per month compared to the pre-policy period.
Our study suggested that the ZMDP has been an effective intervention to curb the increase of hospitalization expenses for inpatients with COPD, especially the drug expenses, in the western region of China.
Public hospitals, as non-profit medical institutions, are the main providers of healthcare services in China. In the era of the planned economy, every Chinese citizen could enjoy free medical services, and the majority of the income of hospitals and medical staff was supported by financial subsidies from the government [1]. However, the Chinese government could not afford such large healthcare inputs over a long period, and the subsidies fell from 60% to 11% [2]. To remedy the loss in incomes of health service providers, a 15% mark-up on the price of pharmaceutical products (except Chinese herbal medicine) had been allowed since 1950 [3]. The nationwide implementation of the policy enabled the medical institutions to maintain normal operation through drug revenue even with insufficient government financial resources.
In 1978, the Third Plenary Session of the Eleventh Central Committee inaugurated a new era of reform and opening-up, in which the government decided to loosen its control over medical institutions and give part of their profits back to the hospitals themselves, so that the hospitals could organize their medical activities according to market demand [4]. Hospitals therefore gradually became market-oriented and, with the power of self-management, started to encourage their physicians to prescribe superfluous drugs, since physicians' personal income was linked to drug profits. Thus, the phenomenon of over-prescription and a high economic burden for patients gradually appeared [5]. Extremely high drug expenses have become a serious social problem in China and have seriously affected the relationship between physicians and patients [6]. In view of this, in 2009 the Chinese government issued the zero mark-up drug policy (ZMDP), which proposed to eradicate the 15% profit margin on drug sales. The implementation of this policy was expected to mitigate the medical misconduct of over-prescription driven by financial incentives and finally alleviate the economic burden on the public [7]. The ZMDP was executed gradually in different regions of China, with the local Ministry of Health increasing subsidies to hospitals to compensate for the loss of income. All public hospitals in Sichuan province removed the drug mark-up on January 1, 2017. The income lost due to the elimination of the drug mark-up was compensated through various adjustments: 70% of it was compensated by increasing the prices of medical services that reflect the value of labor and the technical difficulty and risk of treatment, another 20% was offset through local financial input, and 10% was compensated by the hospitals themselves through cost reduction and other measures [8]. Moreover, the adjusted medical services were to be reimbursed from medical insurance, ensuring that the burden of medical expenses on patients would be reduced [9]. However, it should be noted that these adjustments vary across provinces, which may result in varying impacts of the ZMDP. A study that investigated 60 primary medical institutions in Shandong and Hubei provinces reported that the average medical expenses per outpatient and per inpatient decreased by 25.93 and 25.22% respectively in 2011 compared with 2010 [10]. Studies conducted by Yang et al. and Zhou et al. reached similar conclusions in Shaanxi province [11, 12]. Yi et al. found that there were large reductions in drug revenue after the implementation of this policy [13]. On the other hand, an increasing trend of 1.31% in hospitalization expenses per capita was found in Shanwei city by Chen and Zhuang [14]. In addition, no study had discussed the impact of the ZMDP on the hospitalization expenditures of a specific disease. Chronic obstructive pulmonary disease (COPD) is one of the main contributors to the global burden of disease, especially in China, where it ranks as the 4th leading cause of disability-adjusted life years (DALYs) [15]. In Sichuan, the number of deaths from COPD was 140,858 in 1990 and 13,478 in 2013, while the number of cases of COPD was 3,087,656 in 1990 and 3,469,142 in 2013 [16]. Worse still, COPD puts enormous financial pressure on governments. The annual cost of COPD is 38.6 billion euros (303 billion CNY) in Europe [17] and 49.5 billion dollars (350 billion CNY) in the United States [18].
In Singapore, hospitalization accounts for 73% of the total COPD burden, or an average of 7.2 million dollars (50.8 million CNY) per year [19]. In 2013–2016, the hospitalization cost of COPD patients in Lanzhou was 8134.11 CNY, with a median value of 6794.52 CNY [20]. The patients' economic burden of disease was excessive. Evidence from an investigation of six large cities in China shows that the total expenses of one COPD patient account for 40% of the total income of their family. Additionally, drug expenses accounted for the major share of the total medical expenses of COPD [21]. Therefore, in this paper, an interrupted time series (ITS) design was used to explore whether the ZMDP can reduce the expenses of inpatients continuously and effectively in Sichuan province, providing strong evidence for the effectiveness of the policy.
Sichuan is an agricultural and economically developing province located in southwestern China with complex and diverse topography. The total area of Sichuan is 486,000 km2 and the resident population was 83.41 million in 2018 [22]. The Health Commission of Sichuan Province selected 25 tertiary hospitals in Sichuan province, which were considered to be representative of medical institutions in western China. This study analyzed the average hospitalization expenses of COPD inpatients in these hospitals from January 2015 to June 2018, a total of 42 months. This study was approved by the Ethics Committee of West China Hospital of Sichuan University.
Outcome variables
The hospitalization expenses mainly include drug expenses, medical service expenses and diagnosis expenses. The data used in this study came from electronic medical records at the 25 hospitals. Inpatients who were hospitalized for less than 2 days or more than 60 days and those who did not receive diagnosis and treatment were excluded. Stata (Version 15; Stata Corporation, College Station, TX, USA) was employed in this research to conduct the ITS statistical analysis; P < .05 was considered statistically significant. All the expenses were adjusted by the inflation rate to minimize errors.
The ITS analysis is considered the strongest quasi-experimental research design. It is best understood as a simple but powerful tool used for evaluating the impact of a policy change or quality improvement program. In the simplest case, the time series is divided into 2 segments. The first segment comprises rates of the event before the intervention or policy, and the second segment comprises the rates after the intervention. "Segmented regression" is used to measure statistically the changes in level and slope in the post-intervention period compared to the pre-intervention period. In other words, segmented regression is used to measure immediate changes in the rate of the outcome as well as changes in the trend [23]. Segmented regression models fit a least squares regression line to each segment of the independent variable, time, and thus assume a linear relationship between time and the outcome within each segment [24].
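To make the segmented-regression setup concrete, the following is a minimal sketch (my own illustration; the authors' analysis used Stata 15) of how such a model could be fit in Python with statsmodels. The expense values are hypothetical placeholders, not the study data.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

months = np.arange(1, 43)                     # 42 monthly observations
policy_month = 25                             # ZMDP took effect at month 25 of the series
df = pd.DataFrame({"time": months})
df["intervention"] = (months >= policy_month).astype(int)
# 0 before the policy, then 1, 2, 3, ... counting months since implementation
df["time_after"] = np.where(months >= policy_month, months - policy_month + 1, 0)
df["expense"] = np.random.default_rng(0).normal(16000, 500, 42)  # placeholder data

fit = smf.ols("expense ~ time + intervention + time_after", data=df).fit()
print(fit.params)     # beta0, beta1, beta2, beta3 of the segmented regression
print(fit.summary())  # the summary also reports the Durbin-Watson statistic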
The following multivariable regression model was specified to estimate the level and trend in various costs for patients with COPD: $$ Y_t = \beta_0 + \beta_1 \times time + \beta_2 \times intervention + \beta_3 \times time\_after\_intervention + \varepsilon $$ Yt is the expenses in month t; the variable time is a continuous variable indicating time in months at time t from the start of the observation period; intervention is an indicator variable for time t occurring before (0) or after (1) the policy, which was implemented at month 25 in the series; time after intervention is a continuous variable coded 0 before the policy is implemented, and then sequentially numbers the time periods after implementation [24]. β0 is the average cost level at the start time; β1 is the trend of the monthly average cost before the implementation of the policy; β2 estimates the level change in the cost immediately after the intervention from the end of the preceding segment; and β3 estimates the change in the long-term trend in the outcome after the implementation of the policy, compared with the trend before the policy. ε is the error term at time t representing the random variability not explained by the model. We used the Durbin-Watson test to assess the existence of auto-correlations.
In our study, 64,546 inpatients with COPD across 42 months were enrolled. The detailed data of expenses from January 2015 to June 2018 are given in Additional file 1. We found that the average total hospitalization expense before the policy intervention was 16,926.87 CNY, higher than the average total hospitalization expense of 14,405.53 CNY after the policy intervention.
Model 1: Total hospitalization expenses as the dependent variable
In the model for total hospitalization expenses of COPD patients, the trend before the policy was almost flat and not statistically significant. The intercept was 17,110.64 at time zero and this was statistically significant (P < .001). The β1 was −14.70, showing no statistical significance (P = .414). This indicates that there was no significant change in the expenses per month before the policy. The β2 was −1022.06 and this was statistically significant (P = .011). This indicates that after the policy intervention, the total hospitalization expenses of COPD patients decreased immediately in level by 1022.06 CNY. The β3 was −125.32 and this was statistically significant (P < .001). This indicates that the post-policy long-term trend turned negative, decreasing by 125.32 CNY per month compared to the pre-policy period (Table 1, Fig. 1). R-squared = .845. The adjusted R-square was .832. The Durbin-Watson statistic was 1.757.
Table 1 Regression coefficients, standard errors, and P-values from the segmented regression analysis of the interrupted time series for model 1 (total hospitalization expenses as the dependent variable)
Trend in the monthly average total hospitalization expense (CNY) for 25 tertiary hospitals in Sichuan, January 2015–June 2018. The vertical line shows the time when the ZMDP was launched
Model 2: Medical service expenses of COPD patients as the dependent variable
In the model for medical service expenses of COPD patients, the constant was 2811.98 and this was statistically significant (P < .001). The β1 was −14.93 and this was statistically significant (P < .001).
This indicates that there was a statistically significant decreasing baseline trend in the medical service expenses of 14.93 CNY per month before the policy intervention. The β2 was 197.75 and that was significant (P = .011). This indicates that after the policy intervention, the medical service expenses of COPD patients increased by 197.75 CNY immediately, and that this was statistically significant. The β3 was −4.25 and that was not significant (P = .507). This indicates that the change in trend in the expenses after the policy, compared with the trend before the policy, was not significant (Table 2, Fig. 2). The R-squared = .568. The adjusted R-square was .533. The Durbin-Watson statistic was 1.890.
Table 2 Regression coefficients, standard errors, and P-values from the segmented regression analysis of the interrupted time series for model 2 (medical service expenses as the dependent variable)
Trend in the monthly average medical service expense (CNY) for 25 tertiary hospitals in Sichuan, January 2015–June 2018. The vertical line shows the time when the ZMDP was launched
Model 3: Diagnosis expenses of COPD patients as the dependent variable
In the model for diagnosis expenses of COPD patients, the constant was 4152.46 at time zero, and this was statistically significant (P < .001). The β1 was 25.78 and this was statistically significant (P < .001). This indicates that there was a statistically significant increasing baseline trend in the expenses of 25.78 CNY per month before the policy intervention. After the policy intervention, the expenses decreased immediately by 310.78 CNY (β2 = −310.78, P = .010). The β3 was −35.60 and that was significant (P = .001). This indicates that, compared with the trend of diagnosis expenses before the policy intervention, the trend of diagnosis expenses after the policy intervention dropped by 35.60 CNY per month. That is to say, after the policy intervention, the trend of diagnosis expenses of COPD patients turned negative, dropping by 9.82 CNY (β1 + β3) per month (Table 3, Fig. 3). The R-squared = .425. The adjusted R-square was .380. The Durbin-Watson statistic was 1.587.
Table 3 Regression coefficients, standard errors, and P-values from the segmented regression analysis of the interrupted time series for model 3 (diagnosis expenses as the dependent variable)
Trend in the monthly average diagnosis expense (CNY) for 25 tertiary hospitals in Sichuan, January 2015–June 2018. The vertical line shows the time when the ZMDP was launched
Model 4: Drug expenses of COPD patients as the dependent variable
In the model for drug expenses of COPD patients, the constant was 7905.49 at time zero and this was statistically significant (P < .001). The β1 was −46.42 and that was statistically significant (P < .001). This indicates that there was a significant change in the drug expenses before the policy was implemented: the drug expenses dropped by 46.42 CNY per month on average. The β2 was −1073.58 and this was statistically significant (P < .001). This indicates a significant level decrease in the drug expenses after the policy: after the policy intervention, the drug expenses dropped 1073.58 CNY immediately. The β3 was −19.88 but this was not significant (P = .248). This indicates that the change in trend in the drug expenses after the policy was implemented, compared with the trend before the policy was implemented, was not significant (Table 4, Fig. 4). The R-squared = .938. The adjusted R-square was .933.
The Durbin-Watson statistic was 1.571.
Table 4 Regression coefficients, standard errors, and P-values from the segmented regression analysis of the interrupted time series for model 4 (drug expenses as the dependent variable)
Trend in the monthly average drug expense (CNY) for 25 tertiary hospitals in Sichuan, January 2015–June 2018. The vertical line shows the time when the ZMDP was launched
This is the first study to assess the impact of the ZMDP in Sichuan province by ITS. We chose the COPD population because COPD is one of the major chronic diseases affecting human health [15]. Even without a control group, the ITS model can, through the analysis of data at multiple observation time points, control for and eliminate the influence of the pre-intervention long-term trend of the time series on the results, so as to correctly evaluate the real effect of the intervention [25]. After the ZMDP intervention, the total hospitalization expenses dropped significantly, and subsequently began to show a downward trend. It is worth noticing that the drug expenses, which are the main part of the total hospitalization expenses, dropped significantly and immediately after the policy intervention. The diagnosis expenses showed a rising trend before the intervention, but there was a decline immediately after the policy, and the post-policy trend turned negative. However, the medical service expenses increased immediately after the policy intervention. In line with our findings, previous studies demonstrated that the ZMDP can alleviate the economic burden of patients. One study by Fu and Yang showed a decrease in drug and diagnosis expenses but an increase in medical service expenses after the policy intervention, which was consistent with our study [26]. Another study showed that, without the mark-up on drugs, the expense of cesarean section in primary hospitals had decreased and the phenomenon of over-prescription had been curbed [27]. However, these studies did not use ITS to assess the impact of the ZMDP on medical expenses, so they might not have been able to measure the immediate changes in the expenses or the changes in trend in the post-policy period. China's health care reform has been expected to alleviate the economic burden of patients and provide better medical services [28,29,30,31]. It is clear that the ZMDP had a significant impact on the hospitalization expenses of COPD patients, especially the drug expenses. Meanwhile, the increase in medical service expenses after the policy intervention may be due to the reform of the medical and health care system [32, 33]. In 2017, the Chinese government called upon the Ministry of Health to accelerate the establishment of a timely and flexible mechanism for dynamic price adjustment and to optimize the prices of medical services, especially prices that reflect the value of medical personnel's technical labor, by standardizing medical treatment behavior and reducing the prices of large medical equipment examination, treatment and inspection [34]. This did not bring any additional economic burden to inpatients, since the total hospitalization expenses kept decreasing; on the contrary, it encouraged the medical staff to provide better medical services [26]. The diagnosis expenses after the intervention showed a downward trend, which was beyond our expectation.
One possible reason is that some hospitals may develop adaptive strategies to address the reduction in revenue by providing financial incentives for doctors to perform excessive diagnosis [13]. The government should take appropriate measures to prevent this. This study suffers from several limitations. Firstly, data on other possible explanatory variables such as age, smoking status, treatments and drug use could not be obtained. Secondly, we only included tertiary hospitals in the survey, which limited the possibility of investigating the impacts of the ZMDP on secondary hospitals and primary healthcare centers. Finally, we did not consider the mode of payment of inpatients [35]. For example, if the health insurance of patients is different, the hospitalization expenses after reimbursement may be different. Therefore, wider population-based economic evaluation studies should be conducted on the effectiveness of COPD drug treatment in the future [36]. In addition, more similar studies for other diseases and other regions are warranted. In conclusion, the ZMDP in Sichuan province had a successful impact on hospitalization expenses for inpatients with COPD; in particular, the drug expenses decreased dramatically. To some extent, the ZMDP appears to have been an effective public health intervention to alleviate patients' economic burden. The results of this study on policy interventions may be helpful for other under-developed countries that intend to reduce their citizens' medical expenses.
The datasets generated or analysed during the current study are not publicly available due to confidentiality policies but are available from the corresponding author on reasonable request.
ZMDP: Zero mark-up drug policy; COPD: Chronic obstructive pulmonary disease; ITS: Interrupted time series
Cheng W, Fang YY, Fan DH, Sun J, Shi XF, Li J. The effect of implementing "medicines zero markup policy" in Beijing community health facilities. Southern Med Rev. 2012;5(1):53–6.
Chinese Ministry of Health. Financial report of Medical institutions. Beijing; 2010.
Zheng GL, Zheng L, Yang A, Yang YS, Chen LJ. Analysis on the origins of canceling drugs addition policy (in Chinese). Chinese Health Eco. 2015;34(2):37–40.
Fu MW, Xue XL. The evolution of the relationship between government and market in China's medical industry since the reform and opening up (in Chinese). Res Chinese Econ History. 2018;05:67–76.
Jiang Q, Yu BN, Ying GY, Liao J, Gan H, Blanchard J, Zhang J. Outpatient prescription practices in rural township health centers in Sichuan Province, China. BMC Health Serv Res. 2012;12(324). https://doi.org/10.1186/1472-6963-12-324.
Cai WY. Research on the root causes and solutions of expensive medical cost in China based on the theory of public product pricing (in Chinese): Nanchang University; 2018.
Mao WH, Chen W. CHINA: the zero mark-up policy for essential medicines at primary level facilities; 2015.
Health Commission of Sichuan Province. Interpretation of the policy of removing drug mark-up in urban public hospitals of Sichuan province (in Chinese). 2016. http://wsjkw.sc.gov.cn/zt/ygzl/ygzc/201612/t20161220_12893.html. Accessed 1 Jan 2019.
National Development and Reform Commission. Notice on the Promotion of Medical Price Reform in County-level Public Hospitals (in Chinese). 2012.
Wang HT, Tang YQ, Liu YY, Yang LP, Zhang XP. Effect evaluation of National Essential Medicine System in China: based on survey data from Shandong, Hubei and Sichuan (in Chinese). Chinese J Health Policy. 2012;05(4):30–4.
Yang C, Shen Q, Cai W, Zhu W, Li Z, Wu L, Fang Y.
Impact of the zero-markup drug policy on hospitalisation expenditure in western rural China: an interrupted time series analysis. Tropical Med Int Health. 2017;22(2):180–6.
Zhou Z, Su Y, Campbell B, Zhou Z, Gao J, Yu Q, Chen J, Pan Y. The financial impact of the 'Zero-markup policy for essential Drugs' on patients in county hospitals in Western rural China. PLoS One. 2015;10(3):e0121630.
Yi H, Miller G, Zhang L, Li S, Rozelle S. Intended and unintended consequences of China's zero markup drug policy. Health Aff (Millwood). 2015;34(8):1391–8.
Chen HY, Zhuang YH. The effect and analysis of the zero mark-up policy in public hospitals in less developed areas (in Chinese). China Chief Financial Officer. 2018;4:124–5.
Zhu BF, Wang YF, Ming J, Chen W, Zhang LY. Disease burden of COPD in China: a systematic review. Int J Chron Obstruct Pulmon Dis. 2018;13:1353–64.
Yin P, Wang HD, Vos T, Li YC, Liu SW, Liu YN, Liu JM, Wang LJ, Naghavi M, Murray CJ, et al. A subnational analysis of mortality and prevalence of COPD in China from 1990 to 2013: findings from the global burden of disease study 2013. Chest. 2016;150(6):1269–80.
Loddenkemper R. European Lung White Book. In: The first comprehensive survey on respiratory health in Europe: Newcastle University; 2003. p. 107.
Guarascio AJ, Ray SM, Finch CK, Self TH. The clinical and economic burden of chronic obstructive pulmonary disease in the USA. Clinicoecon Outcomes Res. 2013;5:235–45.
Teo WS, Tan WS, Chong WF, Abisheganaden J, Lew YJ, Lim TK, Heng BH. Economic burden of chronic obstructive pulmonary disease. Respirology. 2012;17(1):120–6.
Yu J. Study on hospital expenditure and the influence factors of COPD patients in Lanzhou City Hospital (in Chinese). Master. Lanzhou: Lanzhou University; 2018.
Fang XC, Wang XD, Bai CX. COPD in China: the burden and importance of proper management. Chest. 2011;139(4):920–9.
Statistical Bureau of Sichuan. NBS survey office in Sichuan: Sichuan statistical yearbook. Beijing: China Statistics Press; 2018.
Penfold RB, Zhang F. Use of interrupted time series analysis in evaluating health care quality improvements. Acad Pediatr. 2013;13(6 Suppl):S38–44.
Nistal-Nuno B. Segmented regression analysis of interrupted time series data to assess outcomes of a South American road traffic alcohol policy change. Public Health. 2017;150:51–9.
Zhu XY, Lin TF, Mi Y, Hu M. Interrupted time series model and its application in effect evaluation of health policy intervention (in Chinese). Chinese Pharmaceutical Affairs. 2018;32(11):1531–40.
Fu CL, Yang JM. Influence of carrying out zero price addition policy of drugs on public hospital expenses in Shenzhen (in Chinese). Chinese Hospital Manage. 2013;33(2):4–6.
Lin HX, Li YQ, Huang LM, Gong ZS. Influence of drug price zero addition on expense of cesarean section in primary hospital (in Chinese). China Pharmaceuticals. 2015;24(16):110–1.
State Council of PRC. Current Major Project On Health Care System Reform (2009–2011) (in Chinese). 2009. http://www.gov.cn/zwgk/2009-04/07/content_1279256.htm. Accessed 5 Jan 2019.
State Council of PRC. Plan of Furthering the Reform of Health-Care System During the 12th Five-Year-Plan Period (in Chinese). 2014. http://www.gov.cn/zwgk/2012-03/21/content_2096671.htm. Accessed 5 Jan 2019.
General Office of State Council of PRC. Opinions on Pilot Comprehensive Reforms of County Public Hospitals (in Chinese). 2012.
State Council of PRC. 13th Five-Year Plan for Deepening Healthcare System Reform (in Chinese). 2017.
http://www.gov.cn/zhengce/content/2017-01/09/content_5158053.htm. Accessed 5 Jan 2019.
Xiang MF, Liu G, Liu XX, Hou F, Luo AX. Forecast analysis on the impact of drug zero bonus policy in a tumor hospital (in Chinese). Chinese Med Record. 2017;18(4).
Pan YH, Cui MD, Li XH. Study on effect of zero-profit medicine policy on medical expenses and compensation mechanisms (in Chinese). J Shanghai Jiaotong Univ (Med Sci). 2015;35(11):1696–701.
General Office of State Council of PRC. Notice on key tasks for deepening the reform of the medical and health care system in the second half of 2018. 2018.
Kiil A, Houlberg K. How does copayment for health care services affect demand, health and redistribution? A systematic review of the empirical evidence from 1990 to 2011. Eur J Health Econ. 2014;15(8):813–28.
Li J, Feng RH, Cui XY, Liu SM, Zeng ZYL, Wang XW. Analysis on the affordability and economic risk for using medicine to treat patients with chronic obstructive pulmonary disease in tier 3 hospitals in China (in Chinese). Chinese Health Econ. 2015;34(09):66–8.
Junman Wang and Peiyi Li, as co-first authors of our study, both contributed to the principal research design and critically revised the manuscript. The authors sincerely thank the Health Commission of Sichuan Province for the contribution of data sources.
This study was supported by the National Natural Science Foundation of China (Grant No. 71874115) and the Science & Technology Department of Sichuan Province, China (Grant No. 2018KZ0046 and 2017FZ0104). The National Natural Science Foundation of China and the Science & Technology Department of Sichuan Province did not play a role in the design of the study, the collection, analysis, and interpretation of data, or in writing the manuscript.
Junman Wang and Peiyi Li contributed equally to this work and should be considered co-first authors.
Institute of Hospital Management, West China Hospital, Sichuan University, Guo Xue Xiang 37, Chengdu, 610041, People's Republic of China
Junman Wang, Peiyi Li & Jin Wen
JW made substantial contributions to the conception of this study. JMW analyzed the data and produced all tables and figures. JMW carried out the literature search and drafted the manuscript. PYL and JW helped to draft the manuscript and revised it critically. All authors have read and approved the manuscript.
Correspondence to Jin Wen.
This study was approved by the Ethics Committee of West China Hospital of Sichuan University. The authors obtained all necessary administrative permission from the Health Commission of Sichuan Province to access and analyze the data in this study. The full dataset did not include any identifiable patient data.
All authors have no conflict of interest to declare.
Additional file 1: Changes in total hospitalization expenses, medical service expenses, diagnosis expenses, and drug expenses of COPD patients in 25 tertiary hospitals of Sichuan province from January 2015 to June 2018.
Wang, J., Li, P. & Wen, J. Impacts of the zero mark-up drug policy on hospitalization expenses of COPD inpatients in Sichuan province, western China: an interrupted time series analysis. BMC Health Serv Res 20, 519 (2020). https://doi.org/10.1186/s12913-020-05378-0
Keywords: Hospitalization expenses; Drug expenses; Health policy, reform, governance and law
Technical Article
Integrator Limitations: The Op-Amp's Gain Bandwidth Product
August 28, 2019 by Dr. Sergio Franco
In this first part of a series of articles, we investigate the role of the op-amp's gain-bandwidth product (GBP).
The op-amp integrator lends itself to a variety of applications, ranging from integrating-type digital-to-analog converters, to voltage-to-frequency converters, to dual-integrator-loop filters, such as the biquad and state-variable types. These systems are usually analyzed by assuming ideal integrator behavior, when in fact there are limitations, stemming primarily from op-amp nonidealities, that the user needs to be aware of for an effective application of the integrator.
Integrator Using an Ideal Op-Amp
Let us briefly review the ideal integrator, so we have a standard against which to compare a real-life integrator. For a thorough comparison, it pays to investigate the integrator both in the time domain and in the frequency domain. For time-domain analysis, refer to Figure 1(a).
Figure 1. (a) Integrator and (b) example of time-domain integrator behavior.
Exploiting the fact that an ideal op-amp keeps the inverting-input node at virtual ground, or 0 V, we apply KCL to write \[\frac {v_I - 0}{R} = C \frac {d(0- v_O)}{dt}\] or \[dv_O = \frac {-1}{RC} v_I dt\] Integrating both sides from 0 to t, \[\int_{0}^{t}dv_O = \frac {-1}{RC} \int_{0}^{t} v_I dt\] and considering that the left-hand term reduces to \(v_O (t) – v_O (0)\), we finally get \[v_O(t) = \frac {-1}{RC} \int_{0}^{t} v_I (t)dt + v_O(0)\] In words, \(v_O (t)\) is proportional to the integral of \(v_I(t)\), with the constant of proportionality being –1/RC. Moreover, \(v_O(0)\) is the initial value, which depends on the charge Q(0) initially stored in the capacitor as \(v_O(0) = Q(0)/C \). A good way to evaluate an integrator in the time domain is via the quality of the ramps it generates in response to constant input voltages, such as those exemplified in Figure 1(b).
For frequency-domain analysis, refer to the circuit of Figure 2(a).
Figure 2. (a) Integrator and (b) its ideal gain, or transfer function \(H_{ideal}(jf)\), in the frequency domain.
Applying the gain formula of the ideal inverting amplifier, we write the ideal transfer function as \[H_{ideal}(s) = \frac {V_o}{V_i} = - \frac {1/(sC)}{R} = \frac {-1}{sRC}\] where s is the complex frequency, and \(V_i\) and \(V_o\) are the Laplace transforms of the input and output voltages. Plotted in the s-plane, \(H_{ideal}(s)\) looks like a tent with a single pole pitched right at the origin. We are primarily interested in the frequency-domain (or ac) response, so we limit our considerations to the tent's profile along the imaginary axis, which we obtain by letting \(s\rightarrow j\omega\), where ω is the angular frequency. (Actually, in the lab and in the course of SPICE simulations we work with the frequency \(f = \omega/2\pi\), in Hertz.) So, letting \(s\rightarrow j\omega = j 2\pi f\), we put Equation (2) in the more insightful form \[H_{ideal}(jf) =\frac {V_o}{V_i}= \frac {-1}{jf/f_0 }\] where \[f_0 = \frac {1}{2 \pi RC}\] The magnitude plot of \(H_{ideal}\) vs. f is a hyperbola, but when plotted on logarithmic scales it becomes a straight line with a slope of –20 dB/dec, as shown in Figure 2(b). For obvious reasons, \(f_0\) is called the integrator's 0-dB gain frequency, or also the unity-gain frequency.
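As a quick numeric illustration of Equations (3) and (4) (a sketch of my own, with illustrative component values not taken from the article), the following evaluates \(|H_{ideal}|\) over several decades and confirms the –20 dB/dec slope:

import numpy as np

R, C = 15.9e3, 1e-9                  # illustrative values giving f0 of about 10 kHz
f0 = 1 / (2 * np.pi * R * C)         # Eq. (4): unity-gain frequency
f = np.logspace(2, 6, 5)             # 100 Hz ... 1 MHz, one point per decade
H = -1 / (1j * f / f0)               # Eq. (3): ideal integrator transfer function
gain_db = 20 * np.log10(np.abs(H))
for fi, g in zip(f, gain_db):
    print("f = %9.0f Hz   |H| = %7.2f dB" % (fi, g))
# Each decade increase in f lowers |H| by exactly 20 dB; |H| crosses 0 dB at f = f0.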
Integrator Using a Constant GBP Op-Amp
Real-life integrators are usually implemented with constant gain-bandwidth product (constant GBP) op-amps. With reference to the integrator of Figure 3(a), where the op-amp's gain is denoted as a, we seek a closed-loop transfer function of the type $$H(jf) = \frac {V_o}{V_i} = H_{ideal}(jf) \frac {1}{1+1/T (jf)}$$ where \(H_{ideal}\) is as in Equation (3) and T = aβ is called the loop gain. Moreover, β is called the feedback factor. We find it by setting the input to zero, breaking the loop at the op-amp's output, and applying a test voltage \(V_t\) as shown in Figure 3(b).
Figure 3. (a) Integrator using an op-amp with open-loop gain a ≠ ∞. (b) Finding the feedback factor β.
Finally, we use the voltage divider formula to get $$\beta = \frac {V_n}{V_t} = \frac {R}{R+1/(j2\pi fC)} = \frac {j2 \pi fRC}{1+ j2\pi fRC} = \frac {jf/ f_0}{1+jf /f_0}$$ with \(f_0\) as in Equation (4). To construct the Bode plot of |H(jf)|, it is more convenient to work with the reciprocal of β, also known as the noise gain, $$\frac {1}{\beta} = \frac {1}{jf/f_0}+1$$ for then we can visualize the decibel plot of |T| = |aβ| = |a|/|1/β| as the difference between the decibel plot of |a| and that of |1/β|, \[\left |T \right |_{dB} = \left | a \right |_{dB} - \left | 1/\beta \right |_{dB}\] The plots of |a|, |1/β|, and |H| are shown in Figure 4.
Figure 4. The transfer function |H| of an integrator implemented with a constant GBP op-amp. Here, a is the op-amp's open-loop gain, \(a_0\) its dc value, \(f_b\) its –3-dB bandwidth, and \(f_t\) its transition frequency.
The noise gain 1/β of Equation (7) has a high-frequency asymptote of 1, or 0 dB, and a low-frequency asymptote of \(1/(jf/f_0)\), whose magnitude coincides with that of \(H_{ideal}\) of Equation (3). From this, we make the following considerations:
T is largest for \(f_b < f < f_0\), so this is the frequency range over which H is closest to \(H_{ideal}\), by Equation (5).
As we lower f below \(f_b\), |T| gradually decreases until it reaches 0 dB at \(f_p\). Below \(f_p\), C acts as an open circuit, thus letting the op-amp amplify at its full dc gain \(a_0\). To find \(f_p\), we exploit the constancy of the GBP of |H| from \(f_p\) to \(f_0\) and write \(a_0 \times f_p = 1\times f_0 \). This gives \(f_p = f_0/a_0\).
As we raise f above \(f_0\), |T| gradually decreases until it reaches 0 dB at \(f_t\). For \(f >> f_t\), |T| becomes so small (and 1/|T| so large) that we can approximate Equation (5) as \[H \cong H_{ideal} \frac {1}{1/T} = H_{ideal}T = H_{ideal}\: a \beta = H_{ideal}\: a \times 1 \cong \frac {-1}{jf/f_0} \times \frac {1}{jf /f_t}\] indicating that above \(f_t\), |H| rolls off with frequency at the rate of –40 dB/dec.
It is apparent that H has two pole frequencies, so we can also express it in the alternative form \[H(jf) = \frac {-a_0}{(1+jf /f_p)(1+jf/f_t)}\] where \(f_p = f_0/a_0\). This is a far cry from \(H_{ideal}\) of Equation (3). Only in the limits \(f >> f_p\) and \(f << f_t\) will H approach \(H_{ideal}\).
Verification via PSpice
Let us verify our findings via PSpice. The circuit of Figure 5(a) uses a Laplace block to simulate an op-amp with \(a_0 = 10^5\) V/V and \(f_b\) = 10 Hz, so \(f_t\) = GBP = \(a_0 \times f_b\) = 1 MHz. By Equation (4), the integrator's unity-gain frequency is \(f_0\) = 10 kHz. Moreover, \(f_p = f_0/a_0\) = 0.1 Hz. The plots of Figure 5(b) confirm our analysis.
Figure 5. (a) PSpice integrator using a Laplace block to simulate an op-amp with GBP = 1 MHz.
(b) Plots of the open-loop gain |a|, the ideal integrator transfer function |\(H_{ideal}\)|, and the actual function |H|.
Next, let us investigate the integrator of Figure 5(a) in a popular real-life application, namely, the actively-compensated biquad filter shown in Figure 6.
Figure 6. PSpice circuit schematic of an actively compensated biquad filter designed for a band-pass response centered at \(f_0\) = 10 kHz with a quality factor Q = 25.
This filter provides both the band-pass (BP) and the low-pass (LP) responses. The plots of Figure 7(a) show the two responses for the case of ideal op-amps, simulated by changing the dc gains of the Laplace blocks from 1E5 to 1E7 and dropping entirely the denominator polynomial in s. The BP response is symmetric about 10 kHz, with a low-frequency slope of +20 dB/dec and a high-frequency slope of –20 dB/dec. The LP response has a high-frequency slope of –40 dB/dec. Turning to the real-life plots of Figure 7(b), we observe that the effect of GBP = 1 MHz is to initiate, at about 1 MHz, steeper roll-off rates for both responses, so the ultimate rate for the BP response changes from –20 dB/dec to –40 dB/dec, and that for the LP response from –40 dB/dec to –60 dB/dec.
Figure 7. Band-pass and low-pass responses of the biquad filter of Figure 6 for the case of (a) ideal op-amps, and (b) op-amps with GBP = 1 MHz.
Depending on the application, these steeper roll-off rates may actually be welcome. However, there are also some subtle changes in \(f_0\) and Q that will be addressed in a subsequent article.
Stability Considerations
Looking at Figure 4, we note that above \(f_0\) we have β → 1. This is so because for \(f >> f_0\), C acts as a short, forcing the op-amp to operate in the unity-gain voltage-follower mode. So, if the op-amp is of the fully compensated type, it will be stable also as an integrator. Now, considering that for a given \(f_0\), a well-designed integrator should use an op-amp with \(f_t >> f_0\), we wonder whether an integrator can be implemented with a decompensated op-amp, which enjoys a higher \(f_t\) than its fully compensated version.
The integrator of Figure 8(a) uses a decompensated op-amp with a dc gain of 100 dB and a dominant pole of 100 Hz (for a GBP of 10 MHz), plus a second pole at 1 MHz. Its transition frequency is now \(f_t\) = 3.08 MHz, where its phase angle is –162°. The |1/β| curve will still intercept the |a| curve at \(f_t\), so this integrator's phase margin is \(\phi_m\) = 180° – 162° = 18°, not much of a margin. In fact, if this op-amp were used as a plain unity-gain voltage follower, its ac gain would exhibit peaking of over 10 dB, and its transient response an overshoot of over 60%. However, the ac response of Figure 8(b) shows a much reduced amount of peaking (about 2 dB), indicating that the integrator, being a low-pass filter, attempts to reduce its own peaking. Just like the integrator reduces peaking in the frequency domain, it also reduces the amount of ringing in the time domain.
Figure 8. (a) Integrator using a decompensated op-amp having the phase margin \(\phi_m\) = 18°. (b) The low-pass filtering action by the integrator reduces the amount of peaking appreciably.
In conclusion, the use of decompensated op-amps is generally discouraged, even though in integrator operation both peaking and ringing are reduced considerably. New Semiconductor Products for the Receive Chain in Your RF Designs Make an LED Light Strip AHRS with Arduino and MPU-6050 Evaluating the Robustness of 1200 V SiC MOSFETs Under Short Circuit Conditions Applications of the Op-Amp Voltage Follower Circuit frequency domain operational amplifier gain bandwidth Diodes Incorporated Releases Buck Converter Family Designed to Combat EMI Ampleon's 500W Power Amplifier Transistor Achieves "Best in Class" Efficiency at 75% Qorvo's PMIC for In-vehicle Charging Integrates Analog and Power Functions Characterizing the Moog Filter by Sam Gallagher Toshiba Provides Alternative to NOR Memory with New Family of SLC NAND Devices Analog Ground August 29, 2019 Excellent article. I wanted to add one thing to the assertion "A good way to evaluate an integrator in the time domain is via the quality of the ramps it generates in response to constant input voltages". An overlooked error in an op amp integrator is the "voltage coefficient of capacitance" of the feedback capacitor. As the output voltage changes, the capacitance can change, sometimes quite dramatically. The ramp "wanders" from the ideal straight line as the output voltage increases and decreases. Different types of capacitors have different voltage coefficients. Like. Reply
Antipodal map homotopic to identity for $S^{2n-1}$
We want to show that the antipodal map on $S^n$ is homotopic to the identity for $n$ odd. My attempt was to try strong induction. For $n=1$ define $$H((x,y),t)=(\cos(\tan^{-1}(y/x)+\pi t),\sin(\tan^{-1}(y/x)+\pi t))$$ This is clearly a homotopy. Now we want to try the inductive step. Suppose the proposition is true for $S^n$ with $n=2m-1$. Therefore it is also true for $S^1$ and $S^{2m-2}$ by strong induction. By scaling the spheres, it is also true for $S^1/2$ and $S^{2m-2}/2$. Division here simply means scaling by one half. Therefore there are homotopies $H_1:S^1/2\times I\to S^1/2$ and $H_2:S^{2m-2}/2\times I\to S^{2m-2}/2$ between the identity and antipodal maps for these two spaces. Define $$F:S^n\times I\to S^n$$ as $$F((x_1,x_2,...,x_n),t)=(H_1(x_1,x_2,t),H_2(x_3,x_4,...,x_n,t))=(x_1^\prime,...,x_n^\prime)$$ Our first step is to show that this is in $S^n$. But $$[(x_1^\prime)^2+(x_2^\prime)^2]+[(x_3^\prime)^2+...+(x_n^\prime)^2]=1/2+1/2=1$$ So it is in $S^n$. And $$F( \textbf{x},0)=(H_1( x_1,x_2,0),H_2(x_3,x_4,...,x_n,0))=((x_1,x_2),(x_3,x_4,....,x_n))=\textbf{x}$$ $$F( \textbf{x},1)=(H_1( x_1,x_2,1),H_2(x_3,x_4,...,x_n,1))=(-(x_1,x_2),-(x_3,x_4,....,x_n))=-\textbf{x}$$ Therefore $F$ is a homotopy between the identity and the antipodal maps for $S^n$. Is this a legitimate way of constructing the new homotopy $F$?
algebraic-topology homotopy-theory spheres
– Elie Bergman
math.stackexchange.com/questions/2304554/… – Angina Seng Jun 3 '17 at 11:06
Hint: Here's a more direct method: Since the sphere is odd-dimensional, we may regard it as the (real) unit sphere in $\Bbb C^n$. Now, consider the family $\phi_t : \Bbb C^n \to \Bbb C^n$ of rotations ${\bf z} \mapsto e^{i t} {\bf z}.$
– Travis Willse
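As a quick numerical sanity check of the hint (my own sketch, not part of the answer), one can verify that $H(\mathbf z, t) = e^{i\pi t}\mathbf z$ stays on the unit sphere of $\Bbb C^n$ and connects the identity ($t=0$) to the antipodal map ($t=1$):

import numpy as np

rng = np.random.default_rng(1)
z = rng.normal(size=4) + 1j * rng.normal(size=4)   # random point of C^4 ...
z /= np.linalg.norm(z)                             # ... normalized onto S^7

H = lambda z, t: np.exp(1j * np.pi * t) * z        # rotation homotopy from the hint

assert np.allclose(H(z, 0), z)                     # t = 0 gives the identity
assert np.allclose(H(z, 1), -z)                    # t = 1 gives the antipodal map
for t in np.linspace(0, 1, 11):
    assert np.isclose(np.linalg.norm(H(z, t)), 1)  # H(., t) stays on the sphere
print("homotopy from identity to antipodal map verified on S^7")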
Bacillus species (BT42) isolated from Coffea arabica L. rhizosphere antagonizes Colletotrichum gloeosporioides and Fusarium oxysporum and also exhibits multiple plant growth promoting activity
Tekalign Kejela, Vasudev R. Thakkar & Parth Thakor
BMC Microbiology volume 16, Article number: 277 (2016)
Colletotrichum and Fusarium species are among the pathogenic fungi widely affecting Coffea arabica L., resulting in major yield loss. In the present study, we aimed to isolate bacteria from the root rhizosphere of the same plant that are capable of antagonizing Colletotrichum gloeosporioides and Fusarium oxysporum as well as promoting plant growth.
A total of 42 Bacillus species were isolated; one of the isolates, named BT42, showed maximum radial mycelial growth inhibition against Colletotrichum gloeosporioides (78%) and Fusarium oxysporum (86%). BT42 increased germination of Coffea arabica L. seeds by 38.89%, decreased disease incidence due to infection of Colletotrichum gloeosporioides to 2.77% and due to infection of Fusarium oxysporum to 0 (p < 0.001). The isolate BT42 showed multiple growth-promoting traits. The isolate showed maximum similarity with Bacillus amyloliquefaciens.
Bacillus species (BT42), isolated in the present work, was found to be capable of antagonizing the pathogenic effects of Colletotrichum gloeosporioides and Fusarium oxysporum. The mechanism of inhibition of the pathogenic fungi was found to be the synergistic effect of secondary metabolites, lytic enzymes, and siderophores. The major inhibitory secondary metabolite was identified as harmine (a β-carboline alkaloid).
The word coffee comes from the name of a place in Ethiopia called "Kaffa". "Kaffa" means the plants of God [1]. Coffee is classified under the family Rubiaceae in the genus Coffea. There are many species of coffee, but the two most widely cultivated are C. arabica L. and C. canephora (robusta). Southwestern and southeastern Ethiopia are considered the origin of C. arabica L. (Arabica coffee) [2]. Of the total world production of coffee, C. arabica L. takes the lion's share, at 66%, and C. canephora only 34%. Although coffee is produced in only a few countries, it is the most traded agricultural product around the globe after oil. According to a 2014 report by the International Coffee Organization, the top six coffee producing countries in the world are Brazil, Vietnam, Colombia, Indonesia, Ethiopia, and India. In Ethiopia, it is the most exported cash crop, accounting for 69% of all agricultural export commodities, and it was estimated that at least 15 million Ethiopians depend directly or indirectly on coffee production [1]. Similarly, there are around 250,000 coffee growers in India; 98% of them are small-scale growers. One of the challenges in the coffee production industry is the impact of coffee pathogens, especially pathogenic fungi, which result in reduced production and low quality of coffee seeds [3]. Yield losses due to fungal pathogens, especially Colletotrichum and Fusarium species, have been repeatedly reported from coffee growing areas [4–7]. Coffee berry disease caused by Colletotrichum kahawae is causing major yield loss in the coffee growing areas of Ethiopia [8, 9]. C. kahawae and C. gloeosporioides are the pathogens most abundantly found in diseased coffee seeds [9]. C. gloeosporioides is also listed as one of the important coffee pathogens by the Coffee Board of India. Apart from the coffee plant, C.
gloeosporioides Penz. possesses a broad host range (470 genera of plants) and is ranked as the most devastating plant pathogen in the genus Colletotrichum [10, 11]. Fusarium species also have a serious impact on the coffee production industry. Coffee wilt disease caused by Gibberella xylarioides (anamorph: Fusarium xylarioides) causes approximately 3360 t of coffee yield losses each year in Ethiopia [12]. This production loss causes great economic loss around the world; for example, Ethiopia loses an estimated 3.7 million American dollars every year. Chemical pesticides currently used to control coffee pathogens need to be sprayed 7–8 times annually, which is laborious and expensive. Furthermore, the extensive use of chemical pesticides also contributes to the emergence of pesticide-resistant pathogens. The use of chemical pesticides and fertilizers also has a negative impact on the indigenous microbial community by disturbing the natural distribution of microbial niches. Coffee cultivated with no or less application of chemical pesticides has more consumer acceptance. In addition, the use of environmentally friendly and sustainable disease control systems has gained major attention in recent years. In this view, the rhizosphere is an ideal place to search for potential rhizobacteria that are capable of promoting plant growth and suppressing phytopathogens. Extensive studies of the use of plant growth promoting rhizobacteria (PGPR) for disease control and plant growth promotion in the coffee plant have not been reported. It is necessary and useful to evaluate and document indigenous beneficial microbes isolated from coffee and test them against coffee pathogens. In the current study, several bacteria from the rhizosphere of C. arabica L. were isolated, and a potent bacterium antagonistic to C. gloeosporioides and F. oxysporum, which also showed multiple plant growth promoting activity, was chosen. The potent isolate showed maximum similarity with Bacillus amyloliquefaciens by 16S rRNA gene sequencing and by blasting this sequence against reference sequences found in the international nucleotide database using the program called BLASTn.
Isolation of Bacillus species
From the rhizosphere of Coffea arabica L., 42 pure Bacillus species were isolated. These isolates were gram-positive, catalase-positive, spore-forming, rod-shaped and able to survive at 80 °C (Fig. 1). They form central/subterminal/ellipsoidal endospores. The bacterium grew in the temperature range of 15–50 °C, with an optimum temperature of 30–42 °C and an optimum pH of 7 under aerobic conditions.
Gram staining of cells of Bacillus sp. BT42 isolated from Coffea arabica L. rhizosphere
The antagonistic effect of Bacillus species isolates against C. gloeosporioides and F. oxysporum
Bacterial isolates were studied against the two fungal pathogens C. gloeosporioides and F. oxysporum for radial mycelial growth inhibition. Sixteen Bacillus isolates showed greater than 40% mycelial growth inhibition. Among these isolates, BT42 showed maximum radial mycelial growth inhibition against C. gloeosporioides (78%) and F. oxysporum (86%) (Fig. 2). Therefore, BT42 was selected for in vitro and in vivo studies.
In vitro mycelial growth inhibition of C. gloeosporioides and F. oxysporum by Bacillus species isolated from Coffea arabica L. rhizosphere. Error bars represent ± Standard Deviation (SD). Values are means of three replicates
Identification of isolate
The isolate BT42 was selected for identification because it showed the highest mycelial growth inhibition of C.
gloeosporioides and F. oxysporum when compared to other isolates. The 16S rRNA gene amplified from the genomic DNA of BT42 (1.5 kb) was sequenced and analyzed by nucleotide BLAST (BLASTn). BT42 (NCBI accession number KT220617) showed maximum similarity with Bacillus amyloliquefaciens (Fig. 3). When compared against the EzTaxon database, it showed 86.55% similarity with Bacillus amyloliquefaciens subsp. plantarum FZB42.
Neighbor-joining tree of the isolated Bacillus sp. BT42 and closely related species. Bootstrap values are based on 1,000 replications
Plant growth promoting characteristics of Bacillus species BT42
BT42 produced 14.56 ± 0.862 μg/ml IAA in the medium supplemented with L-tryptophan, produced ammonia, solubilized 6.36 ± 0.48 μg/ml tri-calcium phosphate, formed a 37.5 ± 0.56 mm halo zone by solubilizing insoluble zinc oxides (Fig. 4), produced 75.90 ± 1.24% siderophore units in the iron-free succinate medium, grew on media supplemented with 1-aminocyclopropane-1-carboxylic acid (ACC) as a sole nitrogen source, and formed robust pellicles at the liquid-air interface when grown in LB broth.
Zinc solubilization (a), tricalcium phosphate solubilization (b) and siderophore production (c) by Bacillus species (BT42)
Effect of BT42 on seed germination and disease incidence
To study the effect of BT42 on germination of C. arabica L. seeds, a suspension of overnight-grown rhizobacteria (0.5 McFarland standard) was coated on the seeds of C. arabica L. and the effects on germination were observed and recorded. Percentage germination of C. arabica L. improved from 50% for untreated seeds to higher values in the presence of the rhizobacteria (Table 1). None of the C. arabica L. seeds infected with C. gloeosporioides could germinate (Fig. 5a), but in the presence of BT42 there was an improvement in germination percentage and a decrease in disease incidence. The disease incidence was as high as 91.67% when the seeds were infected by the C. gloeosporioides spore suspension alone. Disease incidence was reduced by 88.9% when BT42 was surface-coated simultaneously with C. gloeosporioides (Table 1).
Table 1 Effects of selected rhizobacteria isolates on germination of C. arabica L. seeds and on disease incidence caused by C. gloeosporioides and F. oxysporum
Effect of inoculation of BT42 on coffee seed germination. C. gloeosporioides infected (a), BT42 treated (b), BT42 + C. gloeosporioides (c) and untreated (d)
Mechanisms of inhibition
Lytic enzyme production
BT42 formed a clear zone around its colony on media supplemented with colloidal chitin, laminarin, Tween 80 and skim milk, indicating extracellular production of chitinase, β-1,3-glucanase, lipase, and protease, respectively. The correlations of the chitinase halo zone diameter with mycelial growth inhibition of C. gloeosporioides (r = 0.905, P < 0.05) and F. oxysporum (r = 0.780, P < 0.05) were positive. Similarly, the correlations of the β-1,3-glucanase halo zone diameter with mycelial growth inhibition of C. gloeosporioides (r = 0.604, P < 0.01) and F. oxysporum (r = 0.802, P < 0.01) were also positive (Fig. 6).
Percent inhibition of mycelial growth of C. gloeosporioides and F. oxysporum (bar graph) and production of lytic enzymes (line graph) by selected Bacillus species isolates
Production of antifungal compounds
To investigate the production of antifungal metabolite(s), 5 ml of growing culture of BT42 (in three replicates) was taken at different time intervals and its supernatant was tested for antifungal activity against C. gloeosporioides and F.
oxysporum by the agar well diffusion method. The highest antifungal activity was found in the 48-h-old culture grown at 30 °C with shaking at 150 rpm (Fig. 7).
The radius of inhibition of the pathogenic fungi by metabolite extracted from BT42 at different periods of growth. Error bars represent ± SD
Stability of antifungal compounds
The ethyl acetate extract of the 48-hour-old cell-free culture was checked for its antifungal activity after storage at different temperatures, and it was found to be stable for more than 3 months at 4 °C, for 1 h at 100 °C and for 3 months at room temperature.
Separation and purification of antifungal compound
The ethyl acetate extract was separated on a silica thin layer chromatography (TLC) plate using isopropanol:ammonia:water (10:1.5:1) as the solvent system (Fig. 8a). Five bands with Rf values of 0.33, 0.50, 0.61, 0.72 and 0.87 were detected. The compounds corresponding to the metabolite in each band, named from bottom to top as C1, C2, C3, C4, and C5, were assayed for antifungal activity against C. gloeosporioides and F. oxysporum. Among the five bands, the metabolite named C1 showed significant mycelial growth inhibition of C. gloeosporioides (Fig. 8b) and F. oxysporum (Fig. 8c) and was selected for further characterization and identification. The purity of the band showing maximum antifungal activity (that is, C1, Rf 0.33) was checked again using different solvent systems on TLC, which showed a single band.
TLC showing separation of the extract and the purified band of C1 (a), inhibition of growth of C. gloeosporioides by C1 (b), and inhibition of growth of F. oxysporum by C1 (c)
Characterization and identification of potent compound
The purified compound C1 was subjected to spectral scan analysis from 100–1100 nm to identify its absorbance maximum, which was found to be 205 nm. The compound C1 was found to be soluble in methanol and sparingly soluble in water. Liquid chromatography-mass spectrometry (LC-MS) data of C1 showed a retention time of 4.9 min on the photodiode array (PDA) detector. The same fraction was subjected to mass spectrometry (MS) analysis. MS analysis clearly indicated that the purified C1 compound has a molecular weight of 212.10 (Fig. 9a). To determine the numbers of carbon, hydrogen, oxygen and nitrogen atoms, the absolute intensity of the M + 1 peak and the natural abundance of isotopes were considered. Based on this calculation, we found 13 carbon, 12 hydrogen, 2 nitrogen and 1 oxygen atoms. From the mzCloud database, the mass value (i.e., 212.09) (Fig. 9b) exactly matches the compound harmine (CAS Registry Number 442-51-3), and its MS spectrum was matched to the NIST database. Fourier transform infrared spectroscopy (FTIR) data of the compound C1 showed the presence of the following functional groups: the band at 3405 cm−1 indicates –NH indole stretching, while aromatic C–H stretching at 3044 cm−1, alkane –CH stretching at 2958 cm−1, C=N stretching at 1665 cm−1, –CH3 bending at 1357 cm−1, asymmetrical C–O–C stretching at 1077 and 1231 cm−1, indole C–N at 1100 cm−1, aromatic C=C stretching at 1590 and 1448 cm−1, and aromatic ring –CH bending at 864 cm−1 were observed (Fig. 10). Based on the LC-MS data and FTIR assignments, the suggested formula for the compound is C13H12N2O, and its structural formula is shown in Fig. 11.
a Chromatogram based on the retention time in the LC-MS.
b Mass spectrum of peak 4 with a retention time of 4.9 min
FTIR data showing the functional groups present in the potent compound C1; the frequency values appearing in the figure represent the respective functional groups
Structural formula and name of the potent compound C1
Further confirmation of the compound was carried out by qualitative tests using the general reagent for alkaloids (Dragendorff's reagent), which was found to be positive, and specifically for the presence of indole derivatives using the glyoxylic-sulphuric acid test. Accordingly, the solution containing C1 tested positive for the presence of an alkaloid and positive for the presence of an indole derivative, forming a purple to violet ring at the junction of two distinct phases when glyoxylic acid and the test solution were mixed in the presence of conc. H2SO4. The secondary metabolite harmine could thus be the principal reason for the inhibition of growth of C. gloeosporioides and F. oxysporum.
In the present study, we have isolated a potent Bacillus species (BT42) that is able to inhibit C. gloeosporioides (78%) and F. oxysporum (86%). In a review of similar studies, Bacillus sp. strain RMB7, which has broad-range antifungal activity, showed 71% and 78% mycelial growth inhibition of F. oxysporum and C. gloeosporioides, respectively [13]. BT42 produces known extracellular lytic enzymes (chitinase and β-1,3-glucanase), which are significantly correlated with fungal mycelial growth inhibition. In a similar study conducted elsewhere, significant mycelial growth inhibition of G. xylarioides (anamorph: Fusarium xylarioides) by the chitinase-producing Bacillus isolate named JU5444 was also reported [12]. Apart from chitinase and β-1,3-glucanase, BT42 produced protease and lipase, which might be involved in the inhibition of the mycelial growth of C. gloeosporioides and F. oxysporum. Earlier reports indicated that protease [14, 15] and lipase production [12] by bacteria are associated with inhibition of mycelial growth of different fungi. Since the statistical correlations between growth inhibition of the pathogenic fungi and lytic enzyme production were positive but the r values were mostly less than 0.9, more than one mechanism of inhibition was suspected. The production of siderophores by BT42 might also be involved in the inhibition of the fungi. Bacteria produce siderophores during iron starvation. A previous report indicates that siderophores produced by Bacillus subtilis have the potential to inhibit the growth of F. oxysporum [16]. Bacillus amyloliquefaciens FZB42, which has maximum similarity with BT42, harbors genes responsible for the synthesis of siderophores [17]. Apart from the production of lytic enzymes and siderophores, we also isolated and identified an antifungal compound from the culture filtrate of BT42, which was confirmed as harmine [18]. Harmine is a β-carboline alkaloid that was first isolated from Peganum harmala L. [19, 20]. Harmine is rarely reported from bacteria and is mostly extracted from higher plants [19, 21]. Harmine production in bacteria has been limited to a few species including Enterococcus faecium, Myxobacter and Pseudomonas species [22–25]. However, the recent report by Saad and Zakaria demonstrated that Bacillus flexus isolated from fresh water produced harmine as a mechanism of inhibition of toxic cyanobacteria, and this was considered the first report from a Bacillus species [26]. In our present study, we identified harmine from the culture extract of Bacillus species (BT42) isolated from the root rhizosphere of C.
arabica L., which is the first report of its type. This is the first report of inhibition of the C. arabica L. pathogens C. gloeosporioides and F. oxysporum by a plant growth promoting rhizobacterium through the alkaloid harmine. The inhibitory activity of β-carboline alkaloids against different fungi, including C. gloeosporioides and F. oxysporum, was recently described, although in that case the β-carboline alkaloids were extracted from a plant [21]. To sum up, the high inhibition of C. gloeosporioides and F. oxysporum mycelial growth by the Bacillus species (BT42) is due to the synergistic effect of multiple mechanisms, namely the production of lytic enzymes and of harmine as an extracellular secondary metabolite.
Apart from its biocontrol activity, Bacillus species (BT42) also exhibited plant growth promoting characteristics (IAA production, ammonia production, phosphate solubilization and zinc solubilization) that are directly involved in plant growth promotion and indirectly in pathogen suppression. The production of IAA by rhizobacteria is important in supporting plant growth and development, interacting with the endogenous IAA produced by the plant and serving in defense responses [27–29]. Studies showed that Bacillus amyloliquefaciens is known to produce a substantial amount of IAA [17, 30]. Another important trait displayed by Bacillus species (BT42) is the ability to produce the enzyme ACC deaminase. The production of ACC deaminase by BT42 is also vital in plant-microbe interactions; it enables the host plant to withstand stress due to drought and flooding by decreasing endogenous ethylene levels. Studies showed that Bacillus amyloliquefaciens has versatile potential for both plant growth promotion and biocontrol of different phytopathogenic fungi, which is further supported by our present study [17, 31–33]. The high disease incidence caused by C. gloeosporioides and F. oxysporum was significantly reduced in the presence of BT42 owing to the potent biocontrol activity of this bacterium. In BT42-treated C. arabica L. seeds, germination was much better than in untreated seeds, which indicates that the isolate facilitated germination of the C. arabica L. seeds through the extracellular production of PGPR traits that supported germination.
In conclusion, we have isolated a Bacillus species (BT42) that shows a versatile set of direct and indirect plant growth promoting traits. BT42 showed maximum similarity with Bacillus amyloliquefaciens in the NCBI BLASTn results and with the same bacterium in the EzTaxon database. This bacterium could effectively inhibit pathogens of C. arabica L. The mechanisms of inhibition of C. gloeosporioides and F. oxysporum were found to be the production of lytic enzymes and siderophores as well as antifungal compounds. The major antifungal compound was identified as harmine (a member of the β-carboline alkaloids), which has not previously been reported as a mechanism of action of PGPR. To this end, the BT42 isolate is a potential candidate for use as a biocontrol agent against C. gloeosporioides and F. oxysporum and also as a biofertilizer.
Sampling site and sample collection
Rhizosphere soil samples of Coffea arabica L. were collected from two different fields at Khusalnagar, Karnataka, South India. These sites are located at 12°28′05.7″N and 75°57′51.7″E. Soil samples were collected after consent was obtained from the coffee farm owners.
Isolation and identification of bacterial isolate
Isolation of rhizobacteria from the root rhizosphere of C. arabica L. was performed as previously described [12]. The pure isolates were further confirmed by standard microbiological techniques [34]. Pure isolates were stored at −20 °C in 50% glycerol for further study. Primers fD1 (forward, 5′-AGAGTTTGATCCTGGCTCAG-3′) and rP2 (reverse, 5′-ACGGCTACCTTGTTACGACTT-3′) were obtained from Eurofins (India) and used for amplification of the 16S rRNA gene [35]. A PCR mixture (20 μl) was prepared consisting of 2.5 μl of 10X Mg2+ buffer containing 15 mM Mg2+, 1 μl of Taq polymerase enzyme (1 U/μl), 3 μl of dNTP mixture (3 mM), 1 μl of each primer (10 pM), 2 μl of template DNA (50–100 ng/μl) and 9.5 μl of Milli-Q water. A DNA thermocycler was used for amplification: 94 °C for 3 min, followed by 32 cycles of 30 s at 94 °C, 15 s at 54 °C and 1 min at 72 °C, with a final extension at 72 °C for 5 min. One μl of the PCR product, along with a standard DNA ladder (1.5 kb), was loaded on a 0.8% agarose gel containing 3 μl of ethidium bromide in 1X TAE buffer and electrophoresed at 100 V for 35 min. The PCR product was checked and visualized using a UV transilluminator, then purified and sequenced at Eurofins Genomics India Pvt Ltd, Bangalore, India. Using the NCBI website, the Basic Local Alignment Search Tool (BLASTn) was used to compare the 16S rRNA gene sequence with sequences of reference strains. MEGA 6 software was used for construction of the phylogenetic tree after the sequences were aligned. To carry out the phylogenetic analysis, 16S rRNA sequences of thirteen reference strains important for comparison were downloaded from the NCBI database (http://www.ncbi.nlm.nih.gov).
In vitro antagonistic study of bacteria against pathogens
The in vitro antagonistic study of the bacteria was carried out against the two fungal pathogens Colletotrichum gloeosporioides sp. coffee (ITCC 7131) and Fusarium oxysporum (ITCC 3595). Both cultures were obtained from the Indian Type Culture Collection (ITCC), Division of Plant Pathology, Indian Agricultural Research Institute, New Delhi 110012, India. Potato dextrose agar was used for the dual cultures during the antagonistic studies of bacteria against the fungal pathogens. Percent fungal radial growth inhibition was calculated as stated below [12]:
$$ \text{Fungal mycelial growth inhibition}\ (\%)=\left[\left(\mathrm{C}-\mathrm{T}\right)/\mathrm{C}\right]\times 100 $$
where T = fungal radial mycelial growth during dual culture (bacteria + fungus) and C = fungal radial mycelial growth without antagonistic bacteria.
Experimental design and bioassay
The experiment involved only one factor (the rhizobacterial antagonist). Bacillus sp. BT42 was used, as it showed the highest reduction of radial mycelial growth of both C. gloeosporioides and F. oxysporum. Healthy Coffea arabica L. seeds were collected from selected healthy coffee plants. First, the exocarp of the coffee fruits was removed and the remaining fruit parts (mesocarp and endocarp) were kept in water at 30 °C for 24 h; the mesocarp was then removed by washing, and the seeds were left on a tray to dry [36]. Surface sterilization of coffee seeds was carried out as reported elsewhere [37].
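As a worked illustration of the inhibition formula above, here is a minimal Python sketch; the radial growth values are hypothetical, not measurements from this study:

```python
def percent_inhibition(control_mm: float, dual_culture_mm: float) -> float:
    """Percent radial mycelial growth inhibition: [(C - T) / C] x 100."""
    return (control_mm - dual_culture_mm) / control_mm * 100.0

# Hypothetical radial growth (mm): control colony C = 45 mm,
# growth in dual culture with the antagonist T = 10 mm.
print(round(percent_inhibition(45.0, 10.0), 1))  # 77.8 -> ~78% inhibition
```

The percent siderophore units formula in the next section has the same (A − B)/A form, so the same pattern of calculation applies there.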
Two hundred sixteen surface-sterilized coffee seeds were randomly distributed into four groups for the experiment: untreated (36 seeds), rhizobacteria treated (36 seeds), rhizobacteria + fungi (72 seeds) and fungi (72 seeds). The bacterium grown in nutrient broth with shaking at 150 rpm for 18–20 h at 30 °C (1 × 10^8 cells/ml) was used for coating seeds. The surface-sterilized seeds were inoculated in the liquid bacterial culture and dried under aseptic conditions in a laminar flow hood for 30 min. In the same manner, spore suspensions of C. gloeosporioides (1 × 10^4 spores/ml) were inoculated and dried in the laminar flow hood for 30 min. Similarly, bacteria and fungi were surface-coated on the seeds of C. arabica L. in the same manner, one after the other. Surface-sterilized seeds without any treatment were taken as controls. Seeds in each group (bacteria treated, fungi infected, fungi + bacteria treated and untreated) were placed on pre-sterilized Whatman filter paper in separate Petri plates (three replicates). Plates with the coffee seeds were incubated at 30 ± 1 °C; dampness of the seeds was maintained by spraying 1–1.5 ml of distilled water on the filter paper as necessary, and any physiological changes in each group of seeds were inspected and recorded daily. The experiment was carried out in a completely randomized design under controlled conditions and was performed in three replicates.
Studies of plant growth promoting traits
Qualitative and quantitative test for siderophore production
The rhizobacterium was grown in iron-free medium (K2HPO4, 6.0 g L−1; KH2PO4, 3.0 g L−1; MgSO4·7H2O, 0.2 g L−1; (NH4)2SO4, 1.0 g L−1; and succinic acid, 4.0 g L−1; pH 7.0) and incubated for 48 h at 30 °C with constant shaking at 150 rpm [38]. After 48 h of incubation, the fermented broth was centrifuged at 10,000 rpm for 15 min, and the supernatant was collected and checked for the presence of siderophores. For estimation of siderophores, 0.5 ml of supernatant was mixed with CAS reagent and the absorbance was measured at 630 nm [38]. Percent siderophore units were calculated using the formula:
$$ \%\ \text{siderophore units}=\left[\left(\mathrm{A_r}-\mathrm{A_s}\right)/\mathrm{A_r}\right]\times 100 $$
where Ar = absorbance of the reference (CAS reagent) at 630 nm and As = absorbance of the sample at 630 nm.
Test for phytohormone production
For the production of IAA, the purified bacterial isolate was grown overnight in Luria-Bertani (LB) broth with shaking (150 rpm) at 30 °C. The overnight-grown culture was centrifuged at 10,000 rpm for 15 min and the supernatant was collected. To the supernatant (approx. 2 ml), two drops of o-phosphoric acid were added; the appearance of a pink color indicates IAA production by the rhizobacterial isolates. Quantitative estimation of IAA was done colorimetrically [39]. For the production of gibberellic acid, the bacterial isolate was grown in 100 ml of nutrient broth at 30 °C for 48 h. The growth of the bacterium was monitored by measuring turbidity at 600 nm. For extraction of gibberellic acid, 50 ml of the culture was taken and centrifuged at 7500 rpm for 10 min. The supernatant was collected and the pH was adjusted to 2.5 using 37% HCl. The supernatant was extracted using ethyl acetate at a 1:1 volume ratio [40]. Gibberellic acid was quantitatively estimated using a UV spectrophotometer at 254 nm [41].
Test for ammonia production
The rhizobacterial isolate was grown in peptone water for 4 days at 30 °C in 50-ml test tubes. To each test tube containing the bacterial isolate, 1 ml of Nessler's reagent was added.
The appearance of a faint yellow color was evidence of a weak reaction, and a deep yellow to brownish color was confirmation of a strong reaction [42].
Test for zinc and phosphate solubilization
The phosphate-solubilizing ability of the bacterial isolate was determined by inoculating overnight-grown bacterial isolates on a pre-solidified medium specific for the phosphate solubilization test [43]. The rhizobacterium was incubated for 96 h at 30 °C, and any clear zone around the colonies indicated phosphate solubilization. Quantitative determination of phosphate-solubilizing activity was performed colorimetrically [44]. Zinc solubilization was assessed by plate assay using modified Pikovskaya agar [45]. The rhizobacterial isolates were inoculated into a medium consisting of ammonium sulfate (1 g L−1), dipotassium hydrogen phosphate (0.2 g L−1), glucose (10.0 g L−1), magnesium sulfate (0.1 g L−1), potassium chloride (0.2 g L−1), yeast extract (0.2 g L−1) and distilled water (1000 ml), at pH 7.0, with 0.1% insoluble zinc compounds (ZnO, ZnCO3 and ZnS). The rhizobacterial isolate was grown in this medium for 48 h at 28 °C. A clear zone around the colony indicated solubilization of the insoluble zinc compounds.
Qualitative test for 1-aminocyclopropane-1-carboxylate (ACC) deaminase
ACC deaminase production was checked using Dworkin and Foster (DF) minimal salts medium [46]. A 3 mM ACC solution was sprayed onto the pre-solidified DF minimal salts medium and allowed to dry under aseptic conditions for 10 min; the bacterial isolates were then inoculated. After 48 h of incubation at 30 °C, any growth of bacteria on the medium was taken as evidence of ACC deaminase production [47].
Studies of mechanisms of inhibition of pathogenic fungi
Lytic enzyme production
Test for production of β-1,3-glucanase and chitinase
The test for the production of β-1,3-glucanase was performed using laminarin as the only carbon source for bacterial growth. Accordingly, the isolates were inoculated on media containing Na2HPO4 (6 g L−1), KH2PO4 (3 g L−1), NH4Cl (0.5 g L−1), yeast extract (0.05 g L−1), agar (15 g L−1) and 0.05% laminarin (Sigma), and incubated at 30 °C for 48 h. After 48 h of incubation, clear zones were obtained. For clearer visualization, the plates were flooded with a mixture of 0.666% KI and 0.333% iodine, and isolates showing a yellow clear zone around the bacterial colony were confirmed as producing β-1,3-glucanase. To check chitinase production, colloidal chitin was prepared from chitin powder (Hi-Media) for preparation of the solid medium. The composition of the solid medium was colloidal chitin 1% (w/v), Na2HPO4 (6 g L−1), NaCl (0.5 g L−1), KH2PO4 (3 g L−1), NH4Cl (1 g L−1), yeast extract (0.05 g L−1) and agar (15 g L−1). The bacterial isolates were checked for chitinase production by observing clear zones around the colonies after five days of incubation at 30 ± 1 °C [48].
Test for production of protease and lipase
The test for protease production was done using a protease-specific medium as described earlier [49]. Similarly, the rhizobacterial isolates were checked for lipase using lipase medium [2]. This medium contains calcium chloride 0.1 g, peptone 10 g, sodium chloride 5 g, agar 15 g, distilled water 1 liter, and 10 ml of sterile Tween 20. The bacterial isolates were streaked on this medium and incubated at 27 °C for 48 h; a clear zone around the bacterial colonies indicated lipase activity.
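The halo-zone/inhibition correlations reported in the Results (e.g., r = 0.905 for the chitinase halo vs. C. gloeosporioides inhibition) were computed with SPSS; the sketch below reproduces the same Pearson calculation in Python with scipy, using made-up halo diameters and inhibition values purely for illustration:

```python
from scipy.stats import pearsonr

# Hypothetical paired measurements across isolates (not the paper's raw data):
# chitinase halo-zone diameters (mm) and % mycelial growth inhibition.
halo_mm = [8.0, 12.5, 15.0, 18.5, 22.0, 25.5]
inhibition_pct = [41.0, 50.0, 55.0, 62.0, 70.0, 78.0]

r, p_value = pearsonr(halo_mm, inhibition_pct)
print(f"Pearson r = {r:.3f}, p = {p_value:.4f}")  # a strong positive correlation
```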
Production of antifungal compound from isolate BT42
The test for antifungal activity of the culture filtrate was done as described elsewhere, with modifications [50, 51]. Briefly, to check the antifungal activity of the culture filtrate from strain BT42, a single colony of the isolate was inoculated into Luria-Bertani (LB) broth and incubated for 120 h with shaking at 150 rpm at 30 °C. During the incubation period, 5-ml samples were taken from the flasks at different time intervals and centrifuged at 12,500 rpm for 10 min at 4 °C. The cell pellet was removed, and the supernatant was filtered through a membrane filter (25 mm) to remove any suspended cells. The collected sample was tested for antifungal activity by the agar well diffusion method using 20 μl of the culture filtrate, with sterile broth as a control, on potato dextrose agar spread with a spore suspension of the pathogenic fungi (1 × 10^4 spores/ml). The plates were incubated at 28 °C for 3 days. Extracellular metabolites were extracted from the 48-h culture filtrate using ethyl acetate. Accordingly, an equal volume of ethyl acetate was added to the culture filtrate, and both phases were collected and concentrated to dryness. Active compounds obtained from both phases were subjected to the antifungal activity bioassay after being dissolved in methanol.
Purification, identification and characterization of antifungal compound
The crude extract dissolved in methanol was separated by TLC on silica gel plates (20 × 20 cm, 0.5 mm thick, G), developed in isopropanol:ammonia:water (10:1.5:1, v/v) as the mobile phase. The TLC plate was visualized under a UV transilluminator. Each band corresponding to a specific metabolite was eluted by carefully scraping it from the TLC plate, suspended in methanol and checked for antifungal activity. The antifungal compound was once again subjected to TLC using the same as well as different solvent systems (chloroform:methanol 90:10; benzene:acetic acid 95:05; ammonia:methanol:chloroform 0.5:1:8.5) as stated above to check its purity. The purified compound was characterized and identified using FTIR and LC-MS analysis. The alkaloid nature of the purified antifungal compound was also confirmed by a qualitative test using Dragendorff's reagent and by the glyoxylic-sulphuric acid test specific for indole derivatives, as described elsewhere [52, 53].
IBM SPSS Statistics software version 19 was used to analyze the data related to the correlations of fungal radial mycelial growth inhibition and lytic enzyme production. Similarly, one-way ANOVA was used to compare the mean differences between treatments, and the level of significance was set at P < 0.05.
ACC: 1-aminocyclopropane-1-carboxylate
BLAST: Basic Local Alignment Search Tool
C. arabica: Coffea arabica
C. gloeosporioides: Colletotrichum gloeosporioides
CAS: chrome azurol S
DF: Dworkin and Foster
F. oxysporum: Fusarium oxysporum
FTIR: Fourier transform infrared spectroscopy
IAA: indole-3-acetic acid
ITCC: Indian Type Culture Collection
LC-MS: liquid chromatography-mass spectrometry
MS: mass spectrometry
PDA: photodiode array
PGPR: plant growth promoting rhizobacteria
TLC: thin layer chromatography
Amamo AA. Coffee production and marketing in Ethiopia. Eur J Bus Manag. 2014;6(37):109–22.
Muleta D. Microbial Inputs in Coffee (Coffea arabica L.) Production Systems, Southwestern Ethiopia. Implications for Promotion of Biofertilizer and Biocontrol Agents. Doctoral thesis, Swedish University of Agricultural Sciences, Uppsala; 2007.
Admasu W, Sahile S, Kibret M. Assessment of potential antagonists for anthracnose (Colletotrichum gloeosporioides) disease of mango (Mangifera indica L.) in North Western Ethiopia (Pawe). Arch Phytopathol Plant Prot.
2014;47(18):2176–86. doi:10.1080/03235408.2013.870110. Hindorf H, Omondi CO. A review of three major fungal diseases of Coffea arabica L. in the rainforests of Ethiopia and progress in breeding for resistance in Kenya. J Adv Res. 2011;2(2):109–20. doi:10.1016/j.jare.2010.08.006. Kilambo DL, Mabagala RB, Varzea VMP, Haddad F, Loureiro A, Teri JM. Characterization of Colletotrichum kahawae strains in Tanzania. Int J Microbiol Res. 2013;5(2):382–9. Muleta D, Assefa F, Börjesson E, Granhall U. Phosphate-solubilising rhizobacteria associated with Coffea arabica L. in natural coffee forests of southwestern Ethiopia. J Saudi Soc Agric Sci. 2013. doi:10.1016/j.jssas.2012.07.002. Nguyen THP, Säll T, Bryngelsson T, Liljeroth E. Variation among Colletotrichum gloeosporioides isolates from infected coffee berries at different locations in Vietnam. Plant Pathol. 2009;58(5):898–909. doi:10.1111/j.1365-3059.2009.02085.x. Derso E, Waller JM. Variation among Colletotrichum isolates from diseased coffee berries in Ethiopia. Crop Prot. 2003;22:561–5. doi:10.1016/S0261-2194(02)00191-6. Rutherford MA, Phiri N. Pests and Diseases of Coffee in Eastern Africa: A Technical and Advisory Manual. Wallingford, UK: CAB International; 2006. Sharma M, Kulshrestha S. Colletotrichum gloeosporioides: an anthracnose causing pathogen of fruits and vegetables. Biosci, Biotech Res Asia. 2015;12:1233–46. Martínez EP, Hío JC, Osorio LA, Torres MF, Erika P. Identification of Colletotrichum species causing anthracnose on Tahiti lime, tree tomato, and mango. Agron Colomb. 2009;27(2):211–8. Tiru M, Muleta D, Berecha G, Adugna G. Antagonistic Effects of Rhizobacteria Against Coffee Wilt Disease Caused by Gibberella xylarioides. Asian J Plant Pathol. 2013;7(3):109–22. doi:10.3923/ajppaj.2013.109.122. Ali S, Hameed S, Imran A, Iqbal M, Lazarovits G. Genetic, physiological and biochemical characterization of Bacillus sp.strain RMB7 exhibiting plant growth promoting and broad spectrum antifungal activities. Microb Cell Fact. 2014;13:144. Chaiharn M, Chunhaleuchanon S, Kozo A, Lumyong S. Screening of rhizobacteria for their plant growth promoting activities. KMITL Sci Technol J. 2008;8(1):18–23. Liao CY, Chen MY, Chen YK, et al. Characterization of three Colletotrichum acutatum isolates from Capsicum sp. Eur J Plant Pathol. 2012;133(3):599–608. doi:10.1007/s10658-011-9935-7. Patil S, Bheemaraddi CM, Shivannavar TC, Gaddad MS. Biocontrol activity of siderophore producing Bacillus subtilis CTS-G24 against wilt and dry root rot causing fungi in chickpea. IOSR J Agric Vet Sci. 2014;7(9):63–8. Chen XH, Koumoutsi A, Scholz R, et al. Comparative analysis of the complete genome sequence of the plant growth–promoting bacterium Bacillus amyloliquefaciens FZB42. Nat Biotechnol. 2007;25(9):1007–14. doi:10.1038/nbt1325. Silverstein RM, Webster FX, Kiemle DJ. Spectrometric Identification of Organic Compounds. New York: John Wiley & Sons; 2005. p. 1–550. Berrougui H, Isabelle M, Cloutier M, Hmamouchi M, Khalil A. Protective effects of Peganum harmala L. extract, harmine and harmaline against human low-density lipoprotein oxidation. J Pharm Pharmacol. 2006;58(7):967–74. doi:10.1211/jpp.58.7.0012. de Meester C. Genotoxic potential of beta-carbolines: a review. Mutat Res. 1995;339(3):139–53. Li Z, Chen S, Zhu S, Luo J, Zhang Y, Weng Q. Synthesis and fungicidal activity of β-Carboline alkaloids and their derivatives. Molecules. 2015;20(8):13941–57. doi:10.3390/molecules200813941. Aassila H, Bourguet-Kondracki ML, Rifai S, Fassouane A, Guyot M. 
Identification of harman as the antibiotic compound produced by a tunicate-associated bacterium. Mar Biotechnol (NY). 2003;5(2):163–6. doi:10.1007/s10126-002-0060-7. Bohlendorf B, Forche E, Bedorf N, et al. Antibiotics from Gliding Bacteria, LXXIII Indole and Quinoline Derivatives as Metabolites of Tryptophan in Myxobacteria. European J Org Chem. 1996. doi:10.1002/jlac.199619960108. Kodani S, Imoto A, Mitsutani A, Murakami M. Isolation and identification of the antialgal compound, harmane (1-methyl-β-carboline), produced by the algicidal bacterium, Pseudomonas sp.K44-1. J Appl Phycol. 2002;14(2):109–14. Zheng L, Chen H, Han X, Lin W, Yan X. Antimicrobial screening and active compound isolation from marine bacterium NJ6-3-1 associated with the sponge Hymeniacidon perleve. World J Microbiol Biotechnol. 2005;21(2):201–6. doi:10.1007/s11274-004-3318-6. Alamri SA, Mohamed ZA. Selective inhibition of toxic cyanobacteria by B-carboline-containing bacterium Bacillus flexus isolated from Saudi freshwaters. Saudi J Biol Sci. 2013;20(4):357–63. doi:10.1016/j.sjbs.2013.04.002. Mohite B. Isolation and characterization of indole acetic acid (IAA) producing bacteria from rhizospheric soil and its effect on plant growth. J Sci Plant Nutr. 2013. doi:10.4067/S0718-95162013005000051. Weselowski B, Nathoo N, Eastman AW, Macdonald J, Yuan Z-C. Isolation, identification and characterization of Paenibacillus polymyxa CR1 with potentials for biopesticide, biofertilization, biomass degradation and biofuel production. BMC Microbiol. 2016. doi:10.1186/s12866-016-0860-y. Shao J, Li S, Zhang N, et al. Analysis and cloning of the synthetic pathway of the phytohormone indole-3-acetic acid in the plant-beneficial Bacillus amyloliquefaciens SQR9. Microb Cell Fact. 2015;14(1):130. doi:10.1186/s12934-015-0323-4. Ait A, Noreddine K, Chaouche K, Ongena M, Thonart P. Biocontrol and plant growth promotion characterization of Bacillus Species isolated from Calendula officinalis Rhizosphere. Indian J Microbiol. 2013;53(4):447–52. doi:10.1007/s12088-013-0395-y. Yu GY, Sinclair JB, Hartman GL, Bertagnolli BL. Production of iturin A by Bacillus amyloliquefaciens suppressing Rhizoctonia solani. Soil Biol Biochem. 2002;34(7):955–63. doi:10.1016/S0038-0717(02)00027-5. Jiang C-H, Wu F, Yu Z-Y, et al. Study on screening and antagonistic mechanisms of Bacillus amyloliquefaciens 54 against bacterial fruit blotch (BFB) caused by Acidovorax avenae subsp. citrulli. Microbiol Res. 2015;170:95–104. doi:10.1016/j.micres.2014.08.009. Shahzad R, Waqas M, Khan AL, et al. Seed-borne endophytic Bacillus amyloliquefaciens RWL-1 produces gibberellins and regulates endogenous phytohormones of Oryza sativa. Plant Physiol Biochem. 2016;106(September):236–43. doi:10.1016/j.plaphy.2016.05.006. Ahmad F, Ahmad I, Khan MS. Screening of free-living rhizospheric bacteria for their multiple plant growth promoting activities. Microbiol Res. 2008;163(2):173–81. http://dx.doi.org/10.1016/j.micres.2006.04.001. Weisburg WG, Barns SM, Pellettier DA, Lane DJ. 16S ribosomal DNA amplification for phylogenetic study. J Bacteriol. 1991;173(2):697–703. Rosa SDVF da, Mcdonald MB, Veiga AD, Vilela FdeL, Ferreira IA. Staging coffee seedling growth : a rationale for shortenning the coffee seed germination test. Seed Sci Technol. 2010;38:421–431. Shanmugam V, Kanoujia N. Biological management of vascular wilt of tomato caused by Fusarium oxysporum f.sp. lycospersici by plant growth-promoting rhizobacteria mixture. Biol Control. 2011;57(2):85–93. 
doi:10.1016/j.biocontrol.2011.02.001. Sayyed RZ, Badgujar MD, Sonawane HM, Mhaske MM, Chincholkar SB. Production of microbial iron chelators (siderophores) by fluorescent Pseudomonads. Indian J Biotechnol. 2005;4(4):484–90. Tsavkelova EA, Cherdyntseva TA, Klimova SY, Shestakov AI, Botina SG, Netrusov AI. Orchid-associated bacteria produce indole-3-acetic acid, promote seed germination, and increase their microbial yield in response to exogenous auxin. Arch Microbiol. 2007;188(6):655–64. doi:10.1007/s00203-007-0286-x. Cho KY, Sakurai A, Kamiya Y, Takahashi N, Tamura S. Effects of the new plant growth retardants of quaternary ammonium iodides on gibberellin biosynthesis in Gibberella fujikuroi. Plant Cell Physiol. 1979;20(1):75–80. Kumar PKR, Lonsane BK. Immobilized growing cells of Gibberella fujikuroi P-3 for production of gibberellic acid and pigment in batch and semi-continuous cultures. Appl Microbiol Biotechnol. 1986;28:537–42. Cappuccino JC, Sherman N. Microbiology; a laboratory manual. New York: Benjamin/Cumming Pub. Co; 1992. p. 125–79. Pikovskaya RI. Mobilization of phosphorus in soil in connection with the vital activity of some microbial species. Mikrobiologiya. 1948;17:362–70. King EJ. The colorimetric determination of phosphorus. Biochem J. 1932;26:292. Saravanan VS, Subramoniam SR, Raj SA. Assessing in vitro solubilization potential of different zinc solubilizing bacterial (ZSB) isolates. Brazilian J Microbiol. 2004;35(1-2):121–5. doi:10.1590/S1517-83822004000100020. Ramamoorthy V, Viswanathan R, Raguchander T, Prakasam V, Samiyappan R. Induction of systemic resistance by plant growth promoting rhizobacteria in crop plants against pests and diseases. Crop Prot. 2001;20:1–11. Penrose DM, Glick B. Methods for isolating and characterizing ACC deaminase containing plant growth promoting rhizobacteria. Physiol Plant. 2002;118:10–5. Renwick A, Campbell R, Coe S. Assessment of in vivo screening systems for potential biocontrol agents of Gaeumannomyces graminis. Plant Pathol. 1991;40:524–32. Smibert RM, Krieg NR. Phenotypic characterization. In: Gerhardt P, Murray RGE, Wood WA, Krieg NR, editors. Methods for General and Molecular Bacteriology. Washington DC: American Society of Microbiology; 1994. p. 607–54. Kumar A, Saini S, Wray V, Nimtz M, Prakash A, Johri BN. Characterization of an antifungal compound produced by Bacillus sp. strain A(5) F that inhibits Sclerotinia sclerotiorum. J Basic Microbiol. 2012;52(6):670–8. doi:10.1002/jobm.201100463. Vilarinho BR, Silva JP, Pomella AWV, Marcellino LH. Antimicrobial and plant growth-promoting properties of the cacao endophyte Bacillus subtilis ALB629. J Appl Microbiol. 2014;116:1584–92. doi:10.1111/jam.12485. Paech K, Tracey MV. Modern Methods of Plant Analysis/Moderne Methoden der Pflanzenanalyse. 1st edition, Biemann K, Boardman NK, Breyer B, et al., editors. Berlin, Heidelberg: Springer; 1962.p.1–509. doi:10.1007/978-3-642-45993-1. Crotti AEM, Paul J, Gates JLCL, NPL. Based characterization of β -carbolines- mutagenic constituents of thermally processed meat. Mol Nutr Food Res. 2010;54:433–9. doi:10.1002/mnfr.200900064. We would like to thank BRD School of Biosciences, Sardar Patel University for lab facilities and the Indian Council for Cultural Relations (ICCR), as a sponsor of the scholarship for the first author. Mr. Parth Thakor is thankful to DST Inspire program for the fellowship. Authors are also thankful to Mr. Sampark Thakkar and Prof. 
Arabinda Ray from PDPIAS, Charusat, for the critical evaluation of the LC-MS and FTIR portions of the manuscript. We thank DST, New Delhi for the assistance in general and for the PURSE central facility LC-MS sponsored under the PURSE program grant vide sanction letter Do.No.SR/59/Z-23/2010/43 dated 16th March 2011. No specific funding was obtained for this study; however, the study was supported by BRD School of Biosciences, Sardar Patel University, Vallabh Vidyanagar, Gujarat, India. The dataset generated for phylogenetic tree construction is available in the TreeBASE repository, http://purl.org/phylo/treebase/phylows/study/TB2:S20152.
TK was involved in the design of the study, experimentation, data analysis and interpretation, and write-up of the manuscript. VRT designed the study, supervised and guided the experimental process, and prepared the manuscript for publication. PT was involved in LC-MS and FTIR data analysis and interpretation and in manuscript writing. All the authors read and approved the final manuscript.
Tekalign Kejela: Department of Biology, Faculty of Natural and Computational Sciences, Mettu University, Mettu, Ethiopia; present address: BRD School of Biosciences, Sardar Patel University, Vallabh Vidyanagar, 388120, India.
Vasudev R. Thakkar & Parth Thakor: BRD School of Biosciences, Sardar Patel University, Vadtal Road, Satellite Campus, Post Box No. 39, Vallabh Vidyanagar, 388120, Gujarat, India.
Correspondence to Tekalign Kejela.
Kejela, T., Thakkar, V.R. & Thakor, P. Bacillus species (BT42) isolated from Coffea arabica L. rhizosphere antagonizes Colletotrichum gloeosporioides and Fusarium oxysporum and also exhibits multiple plant growth promoting activity. BMC Microbiol 16, 277 (2016). https://doi.org/10.1186/s12866-016-0897-y
Received: 28 January 2016
Keywords: Coffea arabica L
Preparation of Graphene Oxide-Based Hydrogels as Efficient Dye Adsorbents for Wastewater Treatment
Haiying Guo1,2, Tifeng Jiao1,2,3, Qingrui Zhang2, Wenfeng Guo2, Qiuming Peng1 and Xuehai Yan3
© Guo et al. 2015
Received: 23 February 2015 Accepted: 11 May 2015
Graphene oxide (GO) sheets exhibit superior adsorption capacity for removing organic dye pollutants from an aqueous environment. In this paper, the facile preparation of GO/polyethylenimine (PEI) hydrogels as efficient dye adsorbents is reported. The GO/PEI hydrogels were achieved through both hydrogen bonding and electrostatic interactions between amine-rich PEI and GO sheets. For both methylene blue (MB) and rhodamine B (RhB), the as-prepared hydrogels reach high removal rates within about 4 h, in accordance with the pseudo-second-order model. The dye adsorption capacity of the hydrogel is mainly attributed to the GO sheets, whereas the PEI was incorporated to facilitate the gelation process of the GO sheets. More importantly, the dye-adsorbed hydrogels can be conveniently separated from an aqueous environment, suggesting potential large-scale applications of the GO-based hydrogels for organic dye removal and wastewater treatment.
Graphene oxide
Nanostructures
Dye removal
Nowadays, harmful chemical compounds have become the main cause of water pollution. Water pollution exerts negative effects not only on species living in the water but also on the broader biological community. For instance, organic dyes are often discharged with wastewater into the local environment without adequate treatment. Rapid and convenient removal of organic dyes from wastewater has been a challenging issue faced by scientists [1–6]. For example, Kim's group has carried out excellent systematic work in the related field of water remediation by various nanocomposites [1, 2]. In particular, large-scale application requires potential dye adsorbents to exhibit a high dye removal rate within a relatively short period of time and to be environmentally friendly. For the latter, the adsorbents must be able to be properly separated from the aqueous environment after adsorbing waste dyes. In the past years, graphene oxide (GO) sheets have attracted broad attention as potential dye adsorbents because of their unique conjugated, two-dimensional (2D) structure, which exhibits superior adsorption capacity for various dye molecules through π-π stacking interactions [7–12]. In addition, the negative charges in the GO sheets due to various oxygen-rich functional groups (i.e., carboxyl, carbonyl, and hydroxyl groups) allow additional strong electrostatic interactions with cationic dye molecules [13–18]. However, GO sheets exhibit a high dispersibility in water, which prevents the efficient separation of dye-adsorbed GO sheets from an aqueous environment. Therefore, various GO-based adsorbent materials have been developed to facilitate the separation of dye-adsorbed GO sheets from aqueous solutions [19–21]. For example, Akhavan et al. reported the preparation and magnetic separation application of superparamagnetic ZnFe2O4/reduced graphene oxide (rGO) composites prepared by a hydrothermal method [22]. Their group has also investigated bacterial bioactivity and interactions with the environment using aggregated graphene nanosheets as an encapsulating material and effective photothermal agent [23].
In addition, depositing magnetic Fe3O4 nanoparticles on GO sheets can allow facile separation of dye-adsorbed composites by applying an external magnetic field [19]. GO-based porous materials have also been used to adsorb organic waste dyes [21]. Alternatively, GO-based hydrogels provide an effective solution for the easy separation of dye-adsorbed materials from water [24]. In this work, the facile preparation of GO/polyethylenimine (PEI) hydrogels as efficient dye adsorbents for wastewater treatment is reported. The GO/PEI hydrogels were obtained through both hydrogen bonding and electrostatic interactions between amine-rich PEI and GO sheets. PEI was incorporated to facilitate the gelation process of the GO sheets, and the dye adsorption capacity of the hydrogel is mainly attributed to the GO sheets. For both methylene blue (MB) and rhodamine B (RhB), the as-prepared hydrogels reach high removal rates within 4 h, in accordance with the pseudo-second-order model. More importantly, the dye-adsorbed hydrogels can be conveniently separated from an aqueous environment, suggesting potential large-scale applications of the GO-based hydrogels for organic dye removal and wastewater treatment.
The starting materials, PEI (Mw = 600 g·mol−1, Aladdin Reagent, Shanghai, China), RhB (Tianjin Kaitong Chemical Reagent Co., Ltd, Tianjin, China), and MB (Tianjin Kaitong Chemical Reagent Co., Ltd., Tianjin, China), were used as received. GO sheets were prepared according to the method described by Hummers [25] with some modification [26]. Deionized (DI) water was used in all cases. PEI was first dissolved in DI water to make aqueous stock solutions with different concentrations (1.2, 1.8, 2.4, and 3.0 mg·mL−1). GO powder (24 mg) was dispersed in 10 mL of DI water to give a stock solution (2.4 mg·mL−1). GO/PEI hydrogels were prepared by combining the GO and PEI stock solutions, with sonication for a few seconds or without sonication for a few minutes, to form gels. The detailed formulations are listed in Fig. 1e. The prepared samples of the solution and hydrogels were designated as #1, #2, #3, and #4.
Schematic depiction of the formation of GO/PEI gels. a GO and b amine-rich PEI were combined to give c GO/PEI hydrogels. d, e Gelation pictures
The xerogels used here were obtained at −50 °C via a lyophilizer (FD-1C-50, Beijing Boyikang Experimental Instrument Co., Ltd., China) to completely remove water over 2–3 days. The morphology of GO and the lyophilized GO/PEI hydrogels was characterized using both field-emission scanning electron microscopy (FE-SEM, S-4800II, Hitachi, Japan) with an accelerating voltage of 5–15 kV and transmission electron microscopy (TEM, HT7700, Hitachi High-Technologies Corporation) with commercial 300-mesh copper grids. Before the SEM investigations, the prepared samples were attached to copper foil fixed with conductive adhesive tape and coated with gold nanoparticles to make them more conductive. The X-ray diffraction study was carried out using an X-ray diffractometer (SmartLab, Rigaku, Japan) equipped with a conventional Cu Kα X-ray radiation (λ = 1.54 Å) source and a Bragg diffraction setup. Transmission Fourier transform infrared (FT-IR) spectra were obtained using a Nicolet iS10 FT-IR spectrophotometer from Thermo Fisher Scientific Inc. (Waltham, MA, USA), with an average of 16 scans at a resolution of 4 cm−1, by the conventional KBr disk method.
Thermogravimetry-differential scanning calorimetry (TG-DSC) analyses of the samples were conducted in air using a Netzsch STA 409 PC Luxx simultaneous thermal analyzer (Netzsch Instruments Manufacturing Co., Ltd., Germany). Raman spectroscopy was performed using a Horiba Jobin Yvon Xplora PLUS confocal Raman microscope equipped with a motorized sample stage. The wavelength of the excitation laser was 532 nm, and the laser power was kept below 1 mW to avoid noticeable sample heating. The intensity of a Raman peak was extracted from the maximum value after baseline subtraction over the corresponding spectral range. X-ray photoelectron spectroscopy (XPS) was performed on a Thermo Scientific ESCALAB 250Xi using 200-W monochromated Al Kα radiation. A 500-μm X-ray spot was used for the XPS analysis. The base pressure in the analysis chamber was about 3 × 10−10 mbar. Typically, the hydrocarbon C(1s) line at 284.8 eV from adventitious carbon was used for energy referencing. Both survey scans and individual high-resolution scans were recorded.
The adsorption experiments were designed and modified according to previous reports [27, 28]. In the adsorption experiments, about 1 mL of GO/PEI hydrogel (without lyophilizing) was added to 100 mL of either MB (10 mg·L−1) or RhB (4 mg·L−1) solution. The dye solutions containing the gel adsorbents were stirred slowly and continuously at room temperature in the dark. The gel samples were then separated by centrifugation at different time intervals, and the supernatant liquid was collected for subsequent analysis using a UV-vis spectrometer (752, Sunny Hengping, Shanghai, China). The absorbance at 662 nm (MB) and 554 nm (RhB) was used to determine the concentration of residual dyes in the supernatant liquid.
Figure 1 depicts the complete preparation process of GO/PEI hydrogels by combining the GO suspension and the PEI aqueous solution using the formulations listed in Fig. 1e. The prepared samples were designated as #1, #2, #3, and #4. GO sheets are rich in hydrophilic functional groups (e.g., carboxyl, hydroxyl, and epoxide) (Fig. 1a). Functional groups like –OH and –COOH can form hydrogen bonds with amines or amine-containing molecules under appropriate conditions [16, 29, 30]. Therefore, amine-rich PEI (Fig. 1b) was chosen to facilitate the gelation of GO sheets in aqueous solution. In addition, PEI also exhibits good adsorption and adhesion properties [31]. In previously reported PEI-based porous materials, the dye adsorption capacity was found to increase with the PEI content, presumably due to the strong electrostatic attractions between the amine-rich PEI chains and the dye molecules; the adsorption capacity of those porous materials is therefore essentially attributed to PEI instead of GO. On the other hand, as a functional molecule with multiple amine groups, PEI is widely used in composite materials as a strong chelating agent and organic intermediate. In the present as-formed GO-based composite gels, the composites showed very stable self-assembly behavior through strong hydrogen bonding and electrostatic interactions, so release of PEI causing secondary waste is unlikely when the gels are used as dye adsorbents for wastewater treatment. In this study, PEI was used only at a concentration slightly above the critical gelation concentration (i.e., the minimum concentration of the gelator needed for gel formation).
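The residual-dye analysis described above reduces to the removal-rate formula K = (A0 − AT)/A0 × 100% applied later in the Results; below is a minimal Python sketch with hypothetical absorbance readings (not values measured in this work):

```python
def removal_rate(a0: float, a_t: float) -> float:
    """Dye removal rate K = (A0 - AT) / A0 * 100, from supernatant absorbance."""
    return (a0 - a_t) / a0 * 100.0

# Hypothetical MB absorbance readings at 662 nm for the stock solution and
# supernatants sampled over time (illustrative numbers only).
a0 = 1.80
for t_min, a_t in [(30, 1.10), (60, 0.62), (120, 0.21), (240, 0.02)]:
    print(f"t = {t_min:3d} min: K = {removal_rate(a0, a_t):5.1f}%")
```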
The formulation of the GO/PEI gels is shown in Fig. 1e, and photos of the GO/PEI gels are shown in Fig. 1d. The samples possessing good gelation properties are designated #2, #3, and #4 and were used in the dye adsorption experiments discussed later in this paper. In addition, the morphology of GO and the lyophilized GO/PEI hydrogels was characterized using both FE-SEM and TEM. Figure 2a, a' shows the typical 2D flake-like morphology of GO sheets. Figure 2b, b' reveals the porous microstructures of the lyophilized GO/PEI gels, suggesting that the GO sheets were cross-linked in the porous PEI networks.
Morphology of lyophilized GO sheets (a, a') and GO/PEI gel (b, b'). SEM images (a, b) and TEM images (a', b')
In addition, the strong affinity between PEI and GO was also evidenced by the X-ray diffraction studies. Figure 3a shows the diffraction patterns for GO and the lyophilized GO/PEI hydrogels. The 2θ values were observed at 11.3° (GO), 8.9° (gel #2), 8.6° (gel #3), and 7.8° (gel #4), corresponding to d-spacing values of 0.77, 0.98, 1.02, and 1.13 nm, respectively. The X-ray pattern of GO displays a strong peak at 11.3° corresponding to the (001) reflection with a layer distance of 0.77 nm [32]. Thus, the regular stacking of GO sheets was significantly altered by the PEI chains attached on the surface of the GO sheets, even though the structural features of GO remained largely unchanged. Moreover, Raman spectroscopy provides a useful tool to characterize carbon-based materials [33], as shown in Fig. 3b. Three characteristic bands of graphene sheets appeared in the Raman spectra: the G band (1601 cm−1), originating from the first-order scattering of the E2g phonons of the sp2-hybridized carbon atoms; the D band (1351 cm−1), caused by a breathing mode of κ-point phonons of A1g symmetry associated with defects involving sp3-hybridized carbon bonds such as hydroxyl and/or epoxide bonds [34]; and the 2D band (2692 cm−1), which is very sensitive to the stacking of graphene sheets [35]. It is established that the G and 2D bands of single-layer graphene sheets are usually located at 1585 and 2679 cm−1, while for multi-layer graphene sheets (2–6 layers) the positions of the G and 2D bands shift to lower and higher wavenumbers, respectively [36, 37]. Furthermore, the 2D/G ratios of single-, double-, triple-, and multi-layer (>4) graphene sheets are typically >1.6, 0.8, 0.30, and 0.07, respectively [38]. For example, Akhavan reported 2D/G ratios of single- and bilayer GO sheets in the ranges of 1.53–1.68 and 0.82–0.89, respectively [39]. In our present work, the 2D/G ratios of the GO sheets and the three different composite gels showed values in the range of 0.12–0.14 (Fig. 3d), suggesting the multi-layer nature of the presently prepared graphene sheets. In addition, given the origins of the G and D bands, the D/G peak intensity ratio is known as a measure of the sp2 domain size of graphene sheets containing sp3 and sp2 bonds. In our present work, it was found that on forming the composite gels, the D/G ratios (Fig. 3c) shifted from 0.97 to 1.06, 1.19, and 1.20 with increasing PEI concentration, respectively. This change can be attributed to the successful cross-linking of GO in the hydrogel networks and to the new C–N bonds formed on the surface of the GO sheets.
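The d-spacings quoted above follow directly from Bragg's law, d = λ/(2 sin θ), with the Cu Kα wavelength λ = 1.54 Å given in the Methods; a quick check in Python:

```python
import math

WAVELENGTH_NM = 0.154  # Cu K-alpha, 1.54 Angstrom

def d_spacing_nm(two_theta_deg: float) -> float:
    """Bragg's law (n = 1): d = lambda / (2 sin(theta))."""
    theta = math.radians(two_theta_deg / 2.0)
    return WAVELENGTH_NM / (2.0 * math.sin(theta))

for label, two_theta in [("GO", 11.3), ("gel #2", 8.9), ("gel #3", 8.6), ("gel #4", 7.8)]:
    print(f"{label}: 2-theta = {two_theta} deg -> d = {d_spacing_nm(two_theta):.2f} nm")
# Agrees with the reported 0.77, 0.98, 1.02, 1.13 nm to within ~0.01 nm
# (the small offsets come from rounding of 2-theta and lambda).
```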
X-ray diffraction patterns (a) and Raman spectra (b) of GO and lyophilized GO/PEI hydrogels (#2, #3, and #4). c, d D/G and 2D/G ratios of the Raman spectra shown in b, respectively
The FT-IR spectra of GO and the GO/PEI hydrogels are shown in Fig. 4a. In the spectrum of GO, the peak at 3432 cm−1 can be assigned to the –OH stretching vibration. It also shows bands due to carboxyl C=O (1724 cm−1), epoxy C–O (1226 cm−1), and alkoxy C–O (1050 cm−1) groups situated at the edges of the GO nanosheets [16, 29, 30]. In the spectra of the GO/PEI gels, an obvious peak can be observed at 1645 cm−1, corresponding to the –NH stretching of polyethylenimine. In addition, bands at 2920 and 2846 cm−1 were also observed, which can be assigned to C–H (methylene) stretching in the PEI molecules. The obtained FT-IR results clearly indicate that GO-based composite hydrogels have been successfully prepared. In addition, Fig. 4b illustrates the thermograms of GO and the GO-based composite hydrogels. The mass of GO declines steadily from 30 to 150 °C, with a loss of about 15%, which is mainly due to evaporation of moisture from the sample. With further increase in temperature, the mass of GO declines sharply, especially at 207 °C; this can be attributed to the pyrolysis of the unstable oxygen-containing functional groups in GO. When the temperature is higher than 450 °C, GO undergoes further mass losses. Moreover, according to the TG results, the GO/PEI hydrogels showed a higher thermal stability compared with the GO sheets, which can be attributed to the higher degree of cross-linking within the network structures of the gels. Addition of even a low content of PEI can greatly increase the thermal stability of the hydrogels, suggesting that a strong interaction exists between the GO sheets and PEI even in such a water-rich hydrogel. It has been reported that some GO composites show different weight retention values at high temperature, probably due to structural changes arising from the carbon network assembly structure in the composites [40–42]. In the present case, the as-formed composite hydrogels enhance the thermal stability of the materials to a certain extent.
IR spectra (a) and TG curves (b) of GO and lyophilized GO/PEI hydrogels (#2, #3, and #4)
To further investigate the obtained GO/PEI hydrogel, the survey XPS spectra of the lyophilized GO/PEI hydrogel (#2) in Fig. 5a showed the characteristic peaks C(1s), N(1s), and O(1s). In addition, we obtained the relative elemental compositions and calculated the O/C ratios of the lyophilized samples (GO sheet, 37.26%; GO/PEI gel, 36.08%), which suggested a decrease in oxygen content from GO to the nanocomposite. Moreover, the deconvolution of the C(1s), N(1s), and O(1s) XPS peaks for the GO/PEI hydrogel (#2) nanocomposite was carried out. Figure 5b shows the XPS peak deconvolution of the C(1s) core level of the gel (#2) nanocomposite. The peak centered at 284.9 eV was attributed to C–C, C=C, and C–H bonds. The other deconvoluted peak, located at a binding energy of 286.7 eV, was assigned to the C–OH oxygen-containing bonds [43]. The high-resolution N(1s) spectrum in Fig. 5c reveals the presence of amine (399.4 eV), C–N bonds (400.6 eV), and N+ species (401.4 eV), suggesting the presence of PEI polymers in the composite, either in their original amine forms or grafted through covalent bonding and weak interactions to the GO sheets [44].
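Peak deconvolution of this kind is typically done by least-squares fitting a sum of Gaussians to the background-subtracted spectrum; below is a minimal sketch with scipy, using synthetic data and omitting the true Shirley background correction described in the text:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, center, fwhm):
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return amp * np.exp(-((x - center) ** 2) / (2.0 * sigma ** 2))

def three_gaussians(x, a1, c1, a2, c2, a3, c3, fwhm):
    # Identical FWHM for all components, as described for the O(1s) fit.
    return gaussian(x, a1, c1, fwhm) + gaussian(x, a2, c2, fwhm) + gaussian(x, a3, c3, fwhm)

# Synthetic N(1s)-like spectrum (binding energies in eV); not real XPS data.
x = np.linspace(397.0, 404.0, 300)
true = three_gaussians(x, 1.0, 399.4, 0.8, 400.6, 0.4, 401.4, 1.2)
y = true + np.random.default_rng(0).normal(0.0, 0.01, x.size)

p0 = [1.0, 399.5, 1.0, 400.5, 0.5, 401.5, 1.0]  # initial guesses near expected peaks
popt, _ = curve_fit(three_gaussians, x, y, p0=p0)
print("fitted centers (eV):", popt[1], popt[3], popt[5])
```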
In addition, the O(1s) photoelectron peak of the gel (#2) nanocomposite is shown in Fig. 5d. This peak can be deconvoluted into three Gaussian components with identical FWHM after a Shirley background subtraction. The second component, at 532.0 eV, corresponds to oxygen of the surface-bound OH− in the nanocomposite [45]. The third deconvoluted O(1s) peak, at 532.9 eV, was attributed to the oxygen of water molecules existing in the nanostructure or adsorbed on the GO surface. This means that the surface of the gel (#2) nanocomposite was still porous, which can be an advantage for the surface adsorption process.
Survey XPS spectra (a) of samples: a, GO sheet; b, GO/PEI hydrogel (#2). Deconvolution of XPS peaks of the GO/PEI hydrogel (#2) nanocomposite: b C(1s), c N(1s), and d O(1s)
The dye adsorption capacity was evaluated by placing the as-prepared GO/PEI hydrogels in MB and RhB aqueous solutions. It should be noted that the adsorption behavior of freeze-dried samples is the more typical and more easily investigated case; in the present work, the in situ adsorption behavior of the hydrogels was chosen instead, which better reflects the real adsorption process of GO-based gels toward different dyes in wastewater. In addition, graphene oxide can potentially act as a visible-light photocatalyst for the degradation of dyes during adsorption, so the present adsorption experiments were performed and repeated in the dark. The absorbance at 662 nm (Fig. 6a, for MB) and 554 nm (Fig. 6b, for RhB) was used to determine the concentration of residual dyes in samples collected at different time intervals. The dye removal rates were calculated according to the equation $K = (A_0 - A_T)/A_0 \times 100\%$, where K is the dye removal rate, $A_0$ is the absorbance of the dye stock solution, and $A_T$ is the absorbance of the supernatant liquid collected at different time intervals. Figure 6c shows the calculated dye removal rate versus time plots for both MB and RhB. The dye removal rates reach nearly 100 % for both MB and RhB within approximately 4 h, suggesting that the as-prepared GO/PEI hydrogels are efficient dye adsorbents.
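As a minimal illustration of this formula, the sketch below computes K from a stock absorbance and a series of supernatant absorbances; the numbers are invented for illustration and are not data read from Fig. 6.

# Hypothetical removal-rate calculation, K = (A0 - AT) / A0 * 100%;
# the absorbance values below are illustrative, not data from Fig. 6.
def removal_rate(a0, a_t):
    """Dye removal rate (%) from stock absorbance a0 and supernatant absorbance a_t."""
    return (a0 - a_t) / a0 * 100.0

a0_mb = 1.80  # assumed stock-solution absorbance of MB at 662 nm
for minutes, a_t in [(30, 1.10), (60, 0.62), (120, 0.21), (240, 0.02)]:
    print(f"t = {minutes:3d} min: K = {removal_rate(a0_mb, a_t):.1f} %")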
In addition, the thermodynamic behaviors of other reduced graphene oxide-based hydrogels for dye adsorption from aqueous solutions have been reported and investigated in detail [46]. Preliminary adsorption kinetic experiments of the as-prepared GO/PEI hydrogel (#2) on MB and RhB were then performed, and the results are shown in Fig. 7. The hydrogel exhibits a continuous adsorption process, with equilibrium times of approximately 200 min for both MB and RhB. A 200-min equilibrium time is acceptable for efficient photocatalytic applications. Such kinetic behavior can also be associated with the unique nanocomposite structure, i.e., the large three-dimensional network-like nanostructure cross-linked with polyethylenimine by electrostatic attraction and hydrogen bonding, with highly dispersed GO nanosheets acting as the adsorption sites. In addition, classical kinetic models were employed to describe the adsorption data, as follows.
The absorption spectra for MB (a) and RhB (b) acquired for the supernatant liquids collected at different time intervals during dye adsorption experiments, and the calculated dye removal rate versus time plots for both MB and RhB (c)
Adsorption kinetics curves of the as-prepared GO/PEI hydrogel (#2) on MB and RhB at 298 K. a Pseudo-first-order kinetics. b Pseudo-second-order kinetics
The pseudo-first-order model:
$$ \log \left( q_{\mathrm{e}} - q_{\mathrm{t}} \right) = \log q_{\mathrm{e}} - \frac{k_1}{2.303}\, t $$
The pseudo-second-order model:
$$ \frac{t}{q_{\mathrm{t}}} = \frac{1}{k_2\, q_{\mathrm{e}}^2} + \frac{t}{q_{\mathrm{e}}} $$
where $q_{\mathrm{e}}$ and $q_{\mathrm{t}}$ represent the amount of dye adsorbed (mg/g) at equilibrium and at time t, respectively, and $k_1$ and $k_2$ are the kinetic rate constants. The kinetic data (Table 1) can be accurately described by the pseudo-second-order model with a high correlation coefficient ($R^2$ > 0.994). It should be noted that only a PEI concentration slightly above the gelation threshold was chosen and used; thus, the present composite materials cannot be reused many times and show poor recyclability. The design of stabilized GO-based hydrogel materials and their related applications remain a challenge for future work.
Table 1 Kinetic parameters of GO/PEI hydrogel (#2) for MB and RhB adsorptions at 298 K (experimental data from Fig. 7). Columns: pseudo-first-order model, $q_{\mathrm{e}}$ (mg/g) and $K_1$ (min−1); pseudo-second-order model, $K_2$ (g/min·h)
Considering the experimental results described above, some important points should be discussed. Firstly, in our recent works on organogel systems based on organic compounds [47–51], functionalized imide derivatives with different substituent groups (such as cholesteryl, azobenzene, or luminol), molecular skeletons, or spacers can have a profound effect on the gelation abilities and the as-formed nanostructures. In another organogel system based on cationic amphiphile-GO nanocomposites, the headgroups of the amphiphiles play a crucial role in the gelation behavior in various organic solvents [52]. For the present GO/PEI hydrogels, the self-assembly and regular stacking of GO sheets were significantly altered by the PEI chains attached to the surface of the GO sheets. In addition, the present GO/PEI hydrogels are more environmentally friendly than organogels formed in organic solvents. The drug release behavior and the preparation of nanoparticle-containing hybrid hydrogels generated from the present supramolecular gels are now under investigation to reveal the relationship between the as-formed nanostructures and their applications.
In summary, the facile preparation and dye adsorption capacity of GO/PEI hydrogels have been investigated. PEI was chosen for its abundant amine groups that can form hydrogen bonds with GO. Both the SEM and XRD studies clearly show that the GO sheets were successfully cross-linked in the PEI network. Meanwhile, the Raman spectra suggest that the structural features of the GO sheets remain largely unchanged pre- and post-gelation. The as-prepared GO/PEI hydrogels exhibited good removal rates for both MB and RhB, with kinetics following the pseudo-second-order model. The current work provides further insight into the applications of GO-based polymer-containing hydrogels as dye adsorbents for wastewater treatment.
This work was financially supported by the National Natural Science Foundation of China (grant nos. 21473153 and 21207112), the Natural Science Foundation of Hebei Province (grant no. B2013203108), the Science Foundation for the Excellent Youth Scholars from Universities and Colleges of Hebei Province (grant nos.
Y2011113 and YQ2013026), the Support Program for the Top Young Talents of Hebei Province, and the Open Foundation of National Key Laboratory of Biochemical Engineering (Institute of Process Engineering, Chinese Academy of Sciences).
TJ, WG, and QZ participated in the analysis and testing of the nanostructures. HG, XY and QP carried out the synthesis of compounds and characterization of nanostructures. TJ and QZ supervised this work, helped in the analysis and interpretation of data, and, together with WG, worked on the drafting and revisions of the manuscript. TJ, XY and QZ conceived of the study and participated in its design and characterization. QP participated in the design of the study and provided analysis instruments. All authors read and approved the final manuscript.
HG is an MD student. TJ, QP, and XY are professors. QZ and WG are associate professors.
State Key Laboratory of Metastable Materials Science and Technology, Yanshan University, Qinhuangdao, 066004, People's Republic of China
Hebei Key Laboratory of Applied Chemistry, School of Environmental and Chemical Engineering, Yanshan University, Qinhuangdao, 066004, People's Republic of China
National Key Laboratory of Biochemical Engineering, Institute of Process Engineering, Chinese Academy of Sciences, Beijing, 100190, People's Republic of China
1. Georgakilas V, Otyepka M, Bourlinos AB, Chandra V, Kim N, Kemp KC, et al. Functionalization of graphene: covalent and non-covalent approaches, derivatives and applications. Chem Rev. 2012;112:6156–214.
2. Kemp KC, Seema H, Saleh M, Le NH, Mahesh K, Chandra V, et al. Environmental applications using graphene composites: water remediation and gas adsorption. Nanoscale. 2013;5:3149–71.
3. Dadfarnia S, Haji Shabani AM, Moradi SE, Emami S. Methyl red removal from water by iron based metal-organic frameworks loaded onto iron oxide nanoparticle adsorbent. Appl Surf Sci. 2015;330:85–93.
4. Zhang YR, Shen SL, Wang SQ, Huang J, Su P, Wang QR, et al. A dual function magnetic nanomaterial modified with lysine for removal of organic dyes from water solution. Chem Eng J. 2014;239:250–6.
5. Wang Y, Li Z, He Y, Li F, Liu XQ, Yang JB. Low-temperature solvothermal synthesis of graphene-TiO2 nanocomposite and its photocatalytic activity for dye degradation. Mater Lett. 2014;134:115–8.
6. Namasivayam C, Sangeetha D. Recycling of agricultural solid waste, coir pith: removal of anions, heavy metals, organics and dyes from water by adsorption onto ZnCl2 activated coir pith carbon. J Hazard Mater. 2006;135:449–52.
7. Geim AK, Novoselov KS. The rise of graphene. Nature Mater. 2007;6:183–91.
8. Li D, Kaner RB. Graphene-based materials. Science. 2008;320:1170–1.
9. Geim AK. Graphene: status and prospects. Science. 2009;324:1530–4.
10. Huang X, Qi X, Boey F, Zhang H. Graphene-based composites. Chem Soc Rev. 2012;41:666–86.
11. Xu YX, Zhao L, Bai H, Hong WJ, Li C, Shi GQ. Chemically converted graphene induced molecular flattening of 5,10,15,20-tetrakis(1-methyl-4-pyridinio) porphyrin and its application for optical detection of cadmium(II) ions. J Am Chem Soc. 2009;131:13490–7.
12. Peigney A, Laurent C, Flahaut E, Bacsa RR, Rousset A. Specific surface area of carbon nanotubes and bundles of carbon nanotubes. Carbon. 2001;39:507–14.
13. Sharma P, Das M. Removal of a cationic dye from aqueous solution using graphene oxide nanosheets: investigation of adsorption parameters. J Chem Eng Data. 2013;58:151–8.
14. Huang X, Yin ZY, Wu SX, Qi XY, He QY, Zhang QC, et al. Graphene-based materials: synthesis, characterization, properties, and applications. Small. 2011;7:1876–902.
15. Huang H, Lu S, Zhang X, Shao Z. Glucono-δ-lactone controlled assembly of graphene oxide hydrogels with selectively reversible gel-sol transition. Soft Matter. 2012;8:4609–15.
16. Sui Z, Zhang X, Lei Y, Luo Y. Easy and green synthesis of reduced graphite oxide-based hydrogels. Carbon. 2011;49:4314–21.
17. Rao CNR, Sood AK, Subrahmanyam KS, Govindaraj A. Graphene: the new two-dimensional nanomaterial. Angew Chem Int Ed. 2009;48:7752–77.
18. Wu S, He Q, Tan C, Wang Y, Zhang H. Graphene-based electrochemical sensors. Small. 2013;9:1160–72.
19. Geng Z, Lin Y, Yu X, Shen Q, Ma L, Li Z, et al. Highly efficient dye adsorption and removal: a functional hybrid of reduced graphene oxide-Fe3O4 nanoparticles as an easily regenerative adsorbent. J Mater Chem. 2012;22:3527–35.
20. Wang MY, Zhu W, Zhang DE, Li SA, Ma WX, Tong ZW, et al. CeO2 hollow nanospheres decorated reduced graphene oxide composite for efficient photocatalytic dye-degradation. Mater Lett. 2014;137:229–32.
21. Liu F, Chung S, Oh G, Seo TS. Three-dimensional graphene oxide nanostructure for fast and efficient water-soluble dye removal. ACS Appl Mater Interfaces. 2012;4:922–7.
22. Meidanchi A, Akhavan O. Superparamagnetic zinc ferrite spinel-graphene nanostructures for fast wastewater purification. Carbon. 2014;69:230–8.
23. Akhavan O, Ghaderi E, Esfandiar A. Wrapping bacteria by graphene nanosheets for isolation from environment, reactivation by sonication, and inactivation by near-infrared irradiation. J Phys Chem B. 2011;115:6279–88.
24. Huang YW, Zeng M, Ren J, Wang J, Fan L, Xu Q. Preparation and swelling properties of graphene oxide/poly(acrylic acid-co-acrylamide) super-absorbent hydrogel nanocomposites. Colloids Surf A. 2012;401:97–106.
25. Hummers WS, Offeman RE. Preparation of graphitic oxide. J Am Chem Soc. 1958;80:1339.
26. Li D, Muller MB, Gilje S, Kaner RB, Wallace GG. Processable aqueous dispersions of graphene nanosheets. Nature Nanotechnol. 2008;3:101–5.
27. Chen B, Liu M, Zhang L, Huang J, Yao J, Zhang Z. Polyethylenimine-functionalized graphene oxide as an efficient gene delivery vector. J Mater Chem. 2011;21:7736–41.
28. Chandra V, Kim KS. Highly selective adsorption of Hg2+ by a polypyrrole-reduced graphene oxide composite. Chem Commun. 2011;47:3942–4.
29. Hou CY, Zhang QH, Li YG, Wang HZ. Graphene-polymer hydrogels with stimulus-sensitive volume changes. Carbon. 2012;50:1959–65.
30. Adhikari B, Biswas A, Banerjee A. Graphene oxide-based hydrogels to make metal nanoparticle-containing reduced graphene oxide-based functional hybrid hydrogels. ACS Appl Mater Interfaces. 2012;4:5472–82.
31. Sui ZY, Cui Y, Zhu JH, Han BH. Preparation of three-dimensional graphene oxide-polyethylenimine porous materials as dye and gas adsorbents. ACS Appl Mater Interfaces. 2013;5:9172–9.
32. Bose S, Kuila T, Uddin ME, Kim NH, Lau AKT, Lee JH. In-situ synthesis and characterization of electrically conductive polypyrrole/graphene nanocomposites. Polymer. 2010;51:5921–8.
33. Du X, Guo P, Song HH, Chen XH. Graphene nanosheets as electrode material for electric double-layer capacitors. Electrochim Acta. 2010;55:4812–9.
34. Perera SD, Mariano RG, Vu K, Nour N, Seitz O, Chabal Y, et al. Hydrothermal synthesis of graphene-TiO2 nanotube composites with enhanced photocatalytic activity. ACS Catal. 2012;2:949–56.
35. Balashov T, Takács AF, Wulfhekel W, Kirschner J. Magnon excitation with spin-polarized scanning tunneling microscopy. Phys Rev Lett. 2006;97:187201.
36. Calizo I, Balandin AA, Bao W, Miao F, Lau CN. Temperature dependence of the Raman spectra of graphene and graphene multilayers. Nano Lett. 2007;7:2645–9.
37. Kudin KN, Ozbas B, Schniepp HC, Prudhomme RK, Aksay IA, Car R. Raman spectra of graphite oxide and functionalized graphene sheets. Nano Lett. 2008;8:36–41.
38. Kim KS, Zhao Y, Jang H, Lee SY, Kim JM, Kim KS, et al. Large-scale pattern growth of graphene films for stretchable transparent electrodes. Nature. 2009;457:706–10.
39. Akhavan O. Bacteriorhodopsin as a superior substitute for hydrazine in chemical reduction of single-layer graphene oxide sheets. Carbon. 2015;81:158–66.
40. Xue P, Lu R, Chen G, Zhang Y, Nomoto H, Takafuji M, et al. Functional organogel based on a salicylideneaniline derivative with enhanced fluorescence emission and photochromism. Chem Eur J. 2007;13:8231–9.
41. Konwer S, Boruah R, Dolui SK. Studies on conducting polypyrrole/graphene oxide composites as supercapacitor electrode. J Electron Mater. 2011;40:2248–55.
42. Sharma A, Kumar S, Tripathi B, Singh M, Vijay YK. Aligned CNT/polymer nanocomposite membranes for hydrogen separation. Int J Hydrogen Energy. 2009;34:3977–82.
43. Akhavan O, Ghaderi E. Self-accumulated Ag nanoparticles on mesoporous TiO2 thin film with high bactericidal activities. Surf Coat Tech. 2010;204:3676–83.
44. Moulder JF, Stickle WF, Sobol PE, Bomben KD. Handbook of X-ray photoelectron spectroscopy. Eden Prairie, MN: Perkin-Elmer Corporation, Physical Electronics Division; 1992.
45. Akhavan O, Azimirad R, Moshfegh AZ. Low temperature self-agglomeration of metallic Ag nanoparticles on silica sol-gel thin films. J Phys D Appl Phys. 2008;41:195305.
46. Tiwari JN, Mahesh K, Le NH, Kemp KC, Timilsina R, Tiwari RN, et al. Reduced graphene oxide-based hydrogels for the efficient capture of dye pollutants from aqueous solutions. Carbon. 2013;56:173–82.
47. Jiao TF, Wang YJ, Zhang QR, Zhou JX, Gao FM. Regulation of substituent groups on morphologies and self-assembly of organogels based on some azobenzene imide derivatives. Nanoscale Res Lett. 2013;8:160.
48. Jiao TF, Huang QQ, Zhang QR, Xiao DB, Zhou JX, Gao FM. Self-assembly of organogels via new luminol imide derivatives: diverse nanostructures and substituent chain effect. Nanoscale Res Lett. 2013;8:278.
49. Jiao TF, Wang YJ, Gao FQ, Zhou JX, Gao FM. Photoresponsive organogel and organized nanostructures of cholesterol imide derivatives with azobenzene substituent groups. Prog Nat Sci. 2012;22:64–70.
50. Jiao TF, Gao FQ, Wang YJ, Zhou JX, Gao FM, Luo XZ. Supramolecular gel and nanostructures of bolaform and trigonal cholesteryl derivatives with different aromatic spacers. Curr Nanosci. 2012;8:111–6.
51. Jiao TF, Gao FQ, Zhang QR, Zhou JX, Gao FM. Spacer effect on nanostructures and self-assembly in organogels via some bolaform cholesteryl imide derivatives with different spacers. Nanoscale Res Lett. 2013;8:406.
52. Jiao TF, Wang YJ, Zhang QR, Yan XH, Zhao XQ, Zhou JX, et al. Self-assembly and headgroup effect in nanostructured organogels via cationic amphiphile-graphene oxide composites. PLoS One. 2014;9:e101620.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.
CommonCrawl
Advent of Code: Day 3 Part Two
Sat November 10, 2018 by Ben Yarmis in Posts tagged Python, Advent of Code, AoC 2017
The value 1 is stored in square 1. Then, in the same allocation order as above, the sum of the values in all adjacent squares is stored, including diagonals. So, the first few squares' values are chosen as follows:
Square 1 starts with the value 1.
Square 2 has only one adjacent filled square (with value 1), so it also stores 1.
Square 3 has both of the above squares as neighbors and stores the sum of their values, 2.
Square 4 has all three of the aforementioned squares as neighbors and stores the sum of their values, 4.
Square 5 only has the first and fourth squares as neighbors, so it gets the value 5.
Once a square is written, its value does not change. Therefore, the first few squares would receive the following values:
$$ \begin{matrix} 147 & 142 & 133 & 122 & 59 \\ 304 & 5 & 4 & 2 & 57 \\ 330 & 10 & 1 & 1 & 54 \\ 351 & 11 & 23 & 25 & 26 \\ 362 & 747 & 806 & \rightarrow & \dots \end{matrix} $$
What is the first value written that is larger than your puzzle input?
Interpretation, Planning, and Discussion
Wow, man, that's unfortunate. Normally, the second part of the problems adds on to what was made before. Here, it seems like the only thing that's reused is, at best, the building out part of things, but even then, Part 2 is built out very differently compared to Part 1. Our previous data structure is no longer really helpful at all. Previously, we simplified the matrix by explicitly not caring about what was on either side of the cell, just which direction it should go. Now, we need to know what's to a cell's left / right, top / bottom, and its diagonals. Fortunately, there's no walking back through that we need to worry about, just build until the value we insert is above a certain number.
We could possibly use a Pandas DataFrame to represent a matrix, but there's a lot of overhead there that we don't need. Regardless of performance, I want to solve all of the Advent of Code problems using just the Python standard library. Also, I think what I came up with may be more performant! In this case, a matrix can be represented by a dictionary where the keys are a tuple with the X and Y values of a point. They have to be measured against something that doesn't move-- the origin in this case is the first cell. One benefit of a dictionary is that keys can be added and accessed in constant time so we don't have to worry about the speed of lookups affecting the run time. The grid is built out in a spiral and we'll have to keep track of the maximum value since that's our stopping point.
Let's start this solution with a class to represent a point. This will make doing math later a bit easier since we'll be able to add a point to a point and get a point back.

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __add__(self, other):
        return Point(self.x + other.x, self.y + other.y)

    def __repr__(self):
        return 'Point({x}, {y})'.format(x=self.x, y=self.y)

    def __eq__(self, other):
        return self.x == other.x and self.y == other.y

    def __hash__(self):
        return hash((self.x, self.y))

Its initializer (__init__) just takes two arguments, an X and a Y value, the point's offset from the origin. The __add__ method is called when you add two objects together. In this case, you can only add a Point to another Point which makes sense. The __repr__ method is just what the class looks like when you print it. Instead of just seeing Point and its address in memory, we can see the values contained in the class.
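For example, here's what a quick interpreter check of those two methods should look like, given the class above:

>>> Point(2, 3) + Point(-1, 1)
Point(1, 4)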
The __eq__ and __hash__ methods allow us to use the class as keys in a dictionary. In Python 2, you can have a __hash__ method without also defining an __eq__ method, which is a really bad idea. There can be hash collisions where two different objects have the same hash values but are not the same. This is why both __hash__ and __eq__ are required in Python 3 (and strongly suggested in Python 2).
For this part of the problem, I decided to implement the solution as a class instead of a function, so let's define it and its initialization method now!

from collections import defaultdict
from itertools import product

class part_2:
    def __init__(self):
        self.DIRECTIONS = tuple(Point(p[0], p[1]) for p in product([-1, 0, 1], repeat=2) if p != (0, 0))
        self.points = defaultdict(int)
        self.points[Point(0, 0)] = 1
        self.point_stack = [1]

DIRECTIONS is a tuple representing the 8 directions around a cell, excluding the cell itself. product is an itertools function that returns all the different possible combinations of the first argument that it's passed, repeat number of times. In this case, it's \((-1, -1)\), \((-1, 0)\), \((-1, 1)\), \((0, -1)\), and so on. There are only 8 different directions, but this is easier to type, less likely to result in mistakes, and easier to increase the number if we become interested in more directions at some point. points is the dictionary that represents the grid that we're going to build out. It's initialized with the origin and its starting value. A defaultdict is effectively the same as a dictionary with one useful addition-- if you try to access a key that doesn't exist, instead of raising a KeyError, defaultdict returns a value. Since we passed in int when we initialized it, it returns the default value, 0 (the same as just calling int()). The point_stack is used to keep track of the highest value since we're going to build out the spiral a bit more than we strictly need to in order to keep the code a bit more readable.
We'll also need a function to add up everything around a point:

    def neighbor_sum(self, p):
        return sum(self.points[p + d] for d in self.DIRECTIONS)

This function is fairly straightforward-- it takes a point, goes through all the directions we've calculated, and adds up everything. The benefit of using a defaultdict is that we don't have to catch any errors and can just sum up everything as though they all exist, even though most of them might not. The benefit of defining our own Point class earlier is also shown here since we can add a point to a point and use that as the key in our dictionary. We don't have to worry about doing tuple math down here since the logic that takes care of that is in the Point class, where it should be.
Now, it's time for the real meaty function, which I'll call run.

    def run(self, goal):
        r = 1
        while self.point_stack[-1] <= goal:
            self.point_stack = []
            # Add right
            for y in range(-r + 1, r + 1):
                ns = self.neighbor_sum(Point(r, y))
                self.points[Point(r, y)] = ns
                self.point_stack.append(ns)

run is called with one argument, the goal that we want to build up to and past. r is the effective radius of the spiral. We're going to build it out one layer at a time and r will track that layer. The stopping condition for the loop is that we've added a value that's greater than the input and point_stack is used to keep track of that maximum value. Each time we go through the loop, we can clear out the stack since we don't care about previous layers, just the one that's most recently built. The spiral is first built by going up the right side (starting at 3 o'clock, if the grid were a clock).
r is the X value for all the points going up the right side and the only thing that varies is their Y values, going from one row off the bottom of the grid up to the top right corner. For the first iteration, r is 1 so this will go from 0 up to and including 1. ns is the sum of a point's neighbors and that's calculated using the function we defined earlier. That's added to our dictionary as well as pushed onto the stack. Now we just need to repeat for the top, left, and bottom!

            # Add top, right to left
            for x in range(r, -r - 1, -1):
                ns = self.neighbor_sum(Point(x, r))
                self.points[Point(x, r)] = ns
                self.point_stack.append(ns)

            # Add left, top to bottom
            for y in range(r, -r - 1, -1):
                ns = self.neighbor_sum(Point(-r, y))
                self.points[Point(-r, y)] = ns
                self.point_stack.append(ns)

            # Add bottom
            for x in range(-r, r + 1):
                ns = self.neighbor_sum(Point(x, -r))
                self.points[Point(x, -r)] = ns
                self.point_stack.append(ns)

            r += 1

Since the neighbor_sum function doesn't care if a point already has a value (since DIRECTIONS purposefully doesn't include \((0,0)\)), we don't have to worry about double-counting the corners. The order that we add a point definitely does matter though, so we have to build the top and left opposite from how we're building the right and bottom, hence the -1 argument to the range function indicating that instead of adding 1 each time, it should subtract 1. We've now finished adding a whole layer to the spiral, so it's time to increment r and start our loop again. The while loop checks if it should continue at the start of each iteration, so the stack will be full with the layer that we just built. Lastly, now that the spiral is built, we have to get what the problem is actually looking for:

        while self.point_stack[-1] > goal:
            # Pop from the stack until we go less than the value we're looking for
            ret_val = self.point_stack.pop()
        return ret_val

While our stack has too many values, we should remove them and save the thing we removed to ret_val. This gets overwritten until the loop finds the crossover point, where the goal is between two values-- the larger one is returned.
Algorithmic Runtime
This one's also a bit tricky since it's hard to estimate how many cells the grid would have based off of the input value. Going function-by-function though, we should be able to get a good estimate, or at least one that's good enough to compare this algorithm to others. The neighbor_sum function is constant time since no matter the size of the input, it looks at 8 and only 8 different values. Each one of those is a constant time lookup because we used a dictionary. Despite the four for loops, each one only hits a cell once so building out the spiral is \(O(N)\) where \(N\) is the number of cells in the spiral. The various stack manipulations during the construction phase are also \(O(N)\) since each cell is pushed onto the stack once. Popping from the stack at the end is, worst-case, \(O(\sqrt{N})\) since there are, at most, \(\sqrt{N}\) items on the stack at the end. There are \(N\) items in the spiral, but the stack only has the last layer from the spiral. If the length of a spiral's side is \(S\), then there are \(S^2\) cells in the matrix and \(4S\) items in the stack (four sides in a layer). Since \(N\) is the number of cells in the matrix, \(N = S^2\), so there are \(O(\sqrt{N})\) items in the stack (the constant factor \(4\) drops away). Similarly, the memory used is \(O(N)\) since each cell is stored in the matrix once. There is some memory used by the stack, but again, it's dwarfed by the matrix.
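Before wrapping up, here's a minimal usage sketch tying it all together, assuming the class as reconstructed above (747 comes from the example grid earlier, so the expected answer is 806):

solver = part_2()
print(solver.run(747))  # first value written that's larger than 747 -> 806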
This isn't my first solution to this problem. I originally started with a list of lists representing the matrix, but the code for that was terrible, with large chunks copy-pasted and slightly modified to go along each side, and special cases had to be made for corner pieces as well as pieces touching corner pieces. The runtime complexity of it was also terrible since lists were prepended to multiple times, an \(O(N)\) operation in and of itself since the whole list has to be shifted around to add an item to the beginning of it.
My solutions for the Advent of Code problems are available on GitLab and GitHub.
CommonCrawl
High Energy Physics - Experiment
Title: Active to sterile neutrino mixing limits from neutral-current interactions in MINOS
Authors: MINOS Collaboration: P. Adamson, D. J. Auty, D. S. Ayres, C. Backhouse, G. Barr, M. Bishai, A. Blake, G. J. Bock, D. J. Boehnlein, D. Bogert, S. Cavanaugh, D. Cherdack, S. Childress, J. A. B. Coelho, S. J. Coleman, L. Corwin, D. Cronin-Hennessy, I. Z. Danko, J. K. de Jong, N. E. Devenish, M. V. Diwan, M. Dorman, C. O. Escobar, J. J. Evans, E. Falk, G. J. Feldman, M. V. Frohne, H. R. Gallagher, R. A. Gomes, M. C. Goodman, P. Gouffon, N. Graf, R. Gran, N. Grant, K. Grzelak, A. Habig, D. Harris, J. Hartnell, R. Hatcher, A. Himmel, A. Holin, X. Huang, J. Hylen, J. Ilic, G. M. Irwin, Z. Isvan, D. E. Jaffe, C. James, D. Jensen, T. Kafka, S. M. S. Kasahara, G. Koizumi, S. Kopp, M. Kordosky, A. Kreymer, K. Lang, G. Lefeuvre, J. Ling, P. J. Litchfield, L. Loiacono, P. Lucas, W. A. Mann, M. L. Marshak, N. Mayer, A. M. McGowan, R. Mehdiyev, J. R. Meier, M. D. Messier, W. H. Miller, S. R. Mishra, J. Mitchell, C. D. Moore, J. Morfín, L. Mualem, S. Mufson, J. Musser, D. Naples, J. K. Nelson, H. B. Newman, R. J. Nichol, T. C. Nicholls, J. A. Nowak, W. P. Oliver, M. Orchanian, J. Paley, R. B. Patterson, G. Pawloski, G. F. Pearce, D. A. Petyt, S. Phan-Budd, R. Pittam, R. K. Plunkett, X. Qiu, J. Ratchford, T. M. Raufer, B. Rebel, P. A. Rodrigues, C. Rosenfeld, H. A. Rubin, M. C. Sanchez, J. Schneps, P. Schreiner, R. Sharma, P. Shanahan, A. Sousa, P. Stamoulis, M. Strait, N. Tagg, R. L. Talaga, E. Tetteh-Lartey, J. Thomas, M. A. Thomson, G. Tinti, R. Toner, D. Torretta, G. Tzanakos, J. Urheim, P. Vahle, B. Viren, J. J. Walding, A. Weber, R. C. Webb, C. White, L. Whitehead, S. G. Wojcicki, R. Zwaska et al. (26 additional authors not shown)
(Submitted on 20 Apr 2011 (v1), last revised 1 Jul 2011 (this version, v4))
Abstract: Results are reported from a search for active to sterile neutrino oscillations in the MINOS long-baseline experiment, based on the observation of neutral-current neutrino interactions, from an exposure to the NuMI neutrino beam of $7.07\times10^{20}$ protons on target. A total of 802 neutral-current event candidates is observed in the Far Detector, compared to an expected number of $754\pm28\,\mathrm{(stat.)}\pm37\,\mathrm{(syst.)}$ for oscillations among three active flavors. The fraction $f_s$ of disappearing $\nu_\mu$ that may transition to $\nu_s$ is found to be less than 22% at the 90% C.L.
Comments: 5 pages, 3 tables, 2 figures. Published in Physical Review Letters
Subjects: High Energy Physics - Experiment (hep-ex)
Journal reference: Phys.Rev.Lett.107:011802,2011
Report number: Fermilab-Pub-11-183-E, BNL-95065-2011-JA
Cite as: arXiv:1104.3922 [hep-ex] (or arXiv:1104.3922v4 [hep-ex] for this version)
From: Alexandre Sousa
[v1] Wed, 20 Apr 2011 02:14:05 UTC (21 KB)
[v2] Thu, 21 Apr 2011 22:20:22 UTC (21 KB)
[v3] Thu, 2 Jun 2011 22:57:34 UTC (21 KB)
[v4] Fri, 1 Jul 2011 23:42:26 UTC (21 KB)
CommonCrawl
Fine Tuning the Function Field Sieve Algorithm for the Medium Prime Case
Palash Sarkar and Shashank Singh
Abstract: This work builds on the variant of the function field sieve (FFS) algorithm for the medium prime case introduced by Joux and Lercier in 2006. We make several contributions. The first contribution uses a divisibility and smoothness technique and goes on to develop a sieving method based on the technique. This leads to significant practical efficiency improvements in the descent phase and also provides improvement to Joux's pinpointing technique. The second contribution is a detailed analysis of the degree of freedom and the use of a walk technique in the descent phase of the algorithm. Such analysis shows that it is possible to compute discrete logarithms over certain fields which are excluded by the earlier analyses performed by Joux and Lercier (2006) and Joux (2013). In concrete terms, we present computations of discrete logs for fields with 16-bit and 19-bit prime characteristic. We also provide concrete analysis of the effectiveness of the FFS algorithm for certain fields of characteristic ranging from 16-bit to 32-bit primes. The final contribution is to perform a complete asymptotic analysis of the FFS algorithm for fields $\mathbb{F}_Q$ with $p=L_Q(1/3,c)$. This closes gaps and corrects errors in the analysis earlier performed by Joux-Lercier and Joux and also provides new insights into the asymptotic behaviour of the algorithm.
Category / Keywords: public-key cryptography / discrete logarithm problem
Original Publication (with minor differences): IEEE Transactions on Information Theory
DOI: 10.1109/TIT.2016.2528996
Date: received 29 Jan 2014, last revised 4 Mar 2020
Contact author: sha2nk singh at gmail com
Note: A rounding error in the published version in IEEE.IT has been corrected.
CommonCrawl
Earthquake Research in China 2019, Vol. 33, Issue (4): 617-631. DOI: 10.19743/j.cnki.0891-4176.201904011
WANG Peng, WANG Baoshan. ETAS Model Analysis on the Chang Island Earthquake Swarm in Shandong Province, China[J]. Earthquake Research in China, 2019, 33(4): 617-631.
ETAS Model Analysis on the Chang Island Earthquake Swarm in Shandong Province, China
WANG Peng 1,2, WANG Baoshan 1,3
1. Institute of Geophysics, China Earthquake Administration, Beijing 100081, China;
2. Shandong Earthquake Agency, Jinan 250102, China;
3. School of Earth and Space Sciences, University of Science and Technology of China, Hefei 230026, China
Received on April 29, 2019; revised on June 4, 2019
This project is sponsored by the National Key R&D Program of China (2016YFE0109300), the Seismological Science and Technology Spark Program (XH18026Y), the Natural Science Foundation of Shandong Province (ZR2017QD014) and the Key R&D Program of Shandong Province (2016GSF120011).
About the Author: WANG Peng, born in 1983, is an associate researcher at Shandong Earthquake Agency. His major research is seismic activity and microseismic identification. E-mail: [email protected]
Abstract: Influenced by the layout of the seismic network and the locations of earthquakes, earthquake catalogs are often incomplete; such incompleteness directly affects the analysis of sequence activity characteristics. In this paper, a GPU-acceleration-based template matching method is used to scan the continuous waveforms of the Chang Island earthquake swarm in Shandong Province from February 9 to August 20, 2017. In total, 15,286 earthquake events were detected, more than six times the number in the network catalogue, reducing the magnitude of completeness from 1.0 to 0.5. Based on the integrated earthquake catalogue, the characteristics of the Chang Island earthquake swarm were then analyzed using the Epidemic Type Aftershock Sequence (ETAS) model. The stochastic component in the ETAS model is used as a proxy for possible earthquakes triggered by external forces (fluids). The results show that the proportion of externally (fluid-) triggered earthquakes in the Chang Island swarm increases gradually (from 31.9% to 63.5%) and then decreases; the later stage of swarm development is mainly governed by earthquake self-excitation. This suggests that fluids play an important role in the development of the Chang Island swarm, although the intensity of fluid triggering of microseismicity differs between periods, which may be related to the fluid permeation process.
Key words: ETAS model; Chang Island swarm; Template matching; Magnitude of completeness; b-value; Fluid triggering
Since February 14, 2017, seismic activity has occurred continually in the Chang Island area, Shandong Province. By August 2017, more than 2,300 earthquakes had been recorded in the local catalog, including 43 ML≥3.0 events; the largest was the ML4.5 earthquake of March 3, 2017 (Fig. 1). The frequency and intensity of the Chang Island earthquake swarm exceed those observed in the same area before the 1976 Tangshan MS7.8 earthquake, which is unusual in the historical record. The Chang Island area has long been considered an information window for seismicity investigation in North China (Su Luansheng, 1993). Therefore, a detailed analysis of the characteristics of the Chang Island earthquake swarm is conducive to understanding the seismogenic mechanism of the earthquakes and further assists in estimating the seismic risk in this area.
Fig. 1 The location of the Chang Island swarm (red dots) and local seismic stations (triangles). Blue triangles are the stations used for earthquake detection. The red box in the inset indicates the study area of the main map.
The Chang Island swarm is located at the eastern end of the NWW-oriented Zhangjiakou-Penglai fault zone. The faults in this area are complex and mainly consist of the NWW-oriented Penglai-Weihai fault and several NE-trending small faults (Fig. 1). Due to the difficulty of tectonic stress measurement as well as the complexity of the seismic triggering process, it is difficult to establish a physical model for an accurate description. However, essential rules can be extracted from complex phenomena using statistical methods. Since Ogata (Ogata Y., 1988) proposed the Epidemic Type Aftershock Sequence (ETAS) model based on the "Omori-Utsu" law (Omori F., 1894; Utsu T., 1961), the model has received extensive attention and undergone considerable development. With the support of the ETAS model, the parameters of a seismic sequence can be accurately estimated and the variation of seismic activity can be effectively identified (Ogata Y., 1992, 2001; Jiang Haikun et al., 2007; Jiang Changsheng et al., 2013(a), 2013(c), 2015, 2017), the relative quiescence of seismic activity can be monitored (Zhuang Jiancang, 2000; Jiang Changsheng et al., 2013(b)), slight stress changes can be detected (Helmstetter A. et al., 2003; Lei Xinglin et al., 2008(a), 2017; Jia Ke et al., 2014, 2018), and induced earthquakes related to fluid action can be identified (Hainzl S. et al., 2005; Lei Xinglin et al., 2008(b); Long Feng et al., 2010; Jiang Haikun et al., 2011, 2012; Peng Yajun et al., 2012).
Statistical seismic modelling depends largely on the earthquake catalogue. The characteristics of seismic sequence activity and the aftershock probability forecasts of large earthquakes are usually affected by the magnitude of completeness (Mc) (Jiang Changsheng et al., 2013(c); Zhuang Jiancang et al., 2017). Also, the artificial setting of the magnitude of completeness may affect the estimation of the ETAS model parameters (Jiang Haikun et al., 2012; Li Zhichao et al., 2014); the magnitude of completeness mainly depends on the monitoring ability of the regional seismic network. In areas like Chang Island, the signal-to-noise ratios of seismic waveforms are usually low, and seismic stations are sparse or unfavorably distributed. Thus, the clustered occurrence of earthquakes within a short time usually results in events being omitted from the catalog. However, the reliability of the analysis can be improved by using a more complete earthquake catalogue (Schaff D. P., 2008).
The template matching filtering technique based on waveform cross-correlation is suitable for microseismic detection (Gibbons S. J. et al., 2006); it has developed rapidly and has been widely used recently. It has been applied for detecting tremors (Shelly D. R. et al., 2007), studying aftershock sequence changes (Peng Zhigang et al., 2009; Hou Jinxin et al., 2017; Wang Peng et al., 2017), determining seismogenic structures (Yang Hongfeng et al., 2009; Tan Yipei et al., 2016), analyzing stress changes in the source region (Meng Xiaofeng et al., 2012), identifying repeating earthquakes (Ma Tengfei, 2015), and monitoring nuclear explosions (Zhang Miao et al., 2015). In this paper, a GPU-based template matching method is used to detect the missing earthquakes of the Chang Island earthquake swarm.
The statistical parameters of the ETAS model of the earthquake swarm are analyzed based on the completed earthquake catalogue, and the possible seismogenic mechanism of the Chang Island earthquake swarm is then discussed.
1 GEOLOGICAL BACKGROUND AND SEISMIC ACTIVITY IN THE STUDY AREA
The Chang Island earthquake swarm is located on the Penglai-Weihai fault zone, which is the southeast segment of the NWW-trending Zhangjiakou-Bohai fault zone (Xu Jie et al., 1998). It is composed of a series of NW-trending faults developed between the Penglai and Weihai sea areas in the northern Shandong Peninsula. The major part is located between Chang Island and Dazhu Island; most faults are normal and strike-slip faults, some of which are thrust faults (Zheng Jianchang et al., 2018). The Penglai-Weihai fault zone has hosted many historical earthquakes. Among them, the Chang Island MS7.0 earthquake in 1948 occurred in the west of the fault zone and the Weihai MS6.0 earthquake in 1948 occurred in the east of the fault zone (Wang Zhicai et al., 2006).
From February 14 to August 30, 2017, an earthquake swarm with 2,377 earthquakes (from the Shandong Seismic Network) occurred in Chang Island, including 1,459 ML1.0-1.9 earthquakes, 271 ML2.0-2.9 earthquakes, 39 ML3.0-3.9 earthquakes and 4 ML≥4.0 earthquakes. Among them, the largest ML4.5 earthquake occurred on March 3. There are 16 seismic stations within 150km of the swarm, all three-component broadband seismographs with a sampling rate of 100Hz. They are mainly distributed on land to the south of the swarm. In the following, we select Chang Island Station (CHD), Beihuangcheng Station (BHC), Longkou Station (LOK), and Yantai Station (YTA), which are close to and well distributed around the earthquake swarm, to detect missing earthquakes using their three-component continuous waveforms.
2 METHODS
2.1 Template Matching Method
The procedure of the template matching method used in this paper follows the work of Peng and Zhao (Peng Zhigang and Zhao Peng, 2009), with processing steps similar to those of Hou and Wang (Hou Jinxin and Wang Baoshan, 2017). The procedure is shown in Fig. 2; it is roughly divided into four steps, briefly described as follows:
Fig. 2 Template matching method calculation process
(1) Data preprocessing and template selection. Using the three-component records of the four stations mentioned above (blue triangles in Fig. 1), the mean values of the continuous waveform data from February 9 to August 30, 2017 are removed. Then, the data are detrended and band-pass filtered to 2-10Hz. In total, 302 earthquakes from the Chang Island swarm with ML≥2.0 and SNR>5 are selected as templates, and the P- and S-wave arrival times of the template earthquakes are obtained from the local catalog.
(2) The acquisition of cross-correlation coefficients. The template waveforms are cross-correlated with the corresponding continuous waveforms. The correlation time window is 1s before and 5s after the P- and S-wave arrival times of the template. The cross-correlation coefficients of the four stations are then shifted and averaged.
(3) The detection of missing earthquakes. We calculate the Median Absolute Deviation (MAD) of the averaged cross-correlation coefficients, and select 12 times the MAD as the threshold. When the cross-correlation coefficient is higher than the threshold, it is marked as a tentative event; the maximum value of the average correlation coefficient within 5s is taken as a detected event.
(4) The determination of earthquake time and magnitude. Based on the assumption that the missing event has travel times similar to the corresponding template event, the origin time of the missing event is obtained. The ratio of the maximum amplitude of the horizontal component of the missing event to that of the corresponding template event is used to estimate the magnitude of the missing earthquake.
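As a rough single-channel illustration of steps (2) and (3), the Python sketch below slides one template over a continuous trace, computes the normalized cross-correlation, and applies a 12×MAD threshold. The shifting and stacking over three components at four stations, the grouping of detections within 5s into single events, and the GPU acceleration of the actual workflow are all omitted here; the function name and implementation are illustrative, not the authors' code.

import numpy as np

# Simplified single-channel sketch of steps (2)-(3); the actual workflow
# shifts and stacks CC traces over 3 components x 4 stations on a GPU.
def detect_candidates(template, trace, mad_factor=12.0):
    """Return sample indices where the normalized CC exceeds mad_factor * MAD."""
    n = len(template)
    t = (template - template.mean()) / (template.std() * n)
    cc = np.empty(len(trace) - n + 1)
    for i in range(len(cc)):  # O(N*n); FFT-based correlation would be faster
        w = trace[i:i + n]
        s = w.std()
        cc[i] = np.dot(t, w - w.mean()) / s if s > 0 else 0.0
    mad = np.median(np.abs(cc - np.median(cc)))
    return np.where(cc >= mad_factor * mad)[0], cc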
2.2 ETAS Model
The conditional intensity function λ(t) of the ETAS model is (Ogata Y., 1988; Jiang Haikun et al., 2012):
$$ \lambda(t) = \mu(t) + K \sum_{t_i < t} \frac{e^{\alpha (M_i - M_z)}}{(t - t_i + c)^p}, \tag{1} $$
The first term on the right, μ(t), is the theoretical number of earthquakes per unit time after eliminating the clustering of earthquakes. It also indicates the intensity of microseismic activity triggered by external forces (fluids), which has no correlation with the self-excitation process between earthquakes (Hainzl S. et al., 2005). The second term describes the contribution of the ith earthquake to the occurrence of subsequent earthquakes, where i runs over all earthquakes in the sequence. Mz is a reference magnitude no less than the magnitude of completeness Mc. p represents the attenuation factor of the seismic sequence: a large p means fast attenuation, while a small p means slow attenuation. In a mainshock-aftershock type earthquake sequence, when Mz is set to the magnitude of the mainshock, a smaller α value corresponds to a higher ability to excite secondary aftershocks (Song Jin et al., 2009); when Mz is set to the magnitude of completeness Mc, α indicates the ability to trigger secondary aftershocks, with a greater α indicating a stronger ability (Jiang Haikun et al., 2012). Physically, c is a very small positive number ensuring that the denominator of the second term of equation (1) is not zero; its exact value is determined by the completeness of the data in the short time after the mainshock (Jiang Haikun et al., 2007) and by the stress state of the source region (Narteau C. et al., 2009). K represents the level of seismic activity associated with the magnitude of the mainshock and the b, p, α, c, and μ parameters, where b is the scale factor in the G-R law (Song Jin et al., 2009).
Mathematically, one observed seismic sequence can be fitted by multiple statistical models, yielding multiple sets of acceptable parameters. Ogata (Ogata Y., 1988) uses the maximum likelihood method to estimate the parameters of the ETAS model and selects the optimal model parameters by minimizing the Akaike information criterion (AIC) (Akaike H., 1974). The AIC not only measures how well the model fits the observed data, but also penalizes the unrestricted addition of model parameters to improve the fit. The log likelihood of the ETAS model is:
$$ \lg L = \sum_{i = 1}^{N_{\rm AIC}} \lg \lambda(t_i) - \int_{t_{\rm b}}^{t_{\rm e}} \lambda(t)\,{\rm d}t, \tag{2} $$
$$ {\rm AIC} = -\lg L + 2k, \tag{3} $$
In equation (2), $t_{\rm b}$ and $t_{\rm e}$ are the beginning and end times used for the AIC calculation, and $N_{\rm AIC}$ is the number of earthquakes in the calculation period. It is necessary to ensure that the earthquakes in the calculation period are essentially complete. In formula (3), k is the number of parameters to be fitted; in this paper, k=5, and the model with the minimum AIC is taken as the final model.
It can be seen from equation (1) that the seismic rate represented by the ETAS model consists of two parts: the first term, independent of aftershock excitation, represents earthquakes triggered by external forces, while the second term represents the seismic clustering caused by "aftershocks exciting aftershocks". Therefore, by separating the two parts of the ETAS model, the proportion of externally triggered earthquakes Rb can be expressed by formula (4), and the proportion of self-excited earthquakes is Rt = 1 - Rb. The relationship between external triggering (fluid action) and seismic activity can then be investigated according to their respective proportions (Jiang Haikun et al., 2011).
$$ R_b = \frac{\int_{t_{\rm b}}^{t_{\rm e}} \mu(t)\,{\rm d}t}{\int_{t_{\rm b}}^{t_{\rm e}} \lambda(t)\,{\rm d}t}, \tag{4} $$
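To make equation (1) concrete, the sketch below evaluates the conditional intensity for a catalogue under the simplifying assumption of a constant background rate μ(t) = μ; the parameter values in the example call loosely echo the stage Ⅰ entries of Table 1 but are illustrative only, as are the event times and magnitudes.

import numpy as np

# Sketch of the ETAS conditional intensity of equation (1), assuming a
# constant background rate mu(t) = mu; times are in days (an assumption).
def etas_intensity(t, times, mags, mu, K, alpha, c, p, Mz):
    """lambda(t) = mu + K * sum_{ti < t} exp(alpha * (Mi - Mz)) / (t - ti + c)**p"""
    past = times < t
    trig = np.exp(alpha * (mags[past] - Mz)) / (t - times[past] + c) ** p
    return mu + K * trig.sum()

# Illustrative call with made-up event times and magnitudes:
times = np.array([0.0, 0.4, 1.3])
mags = np.array([2.1, 1.2, 4.5])
rate = etas_intensity(2.0, times, mags, mu=5.5, K=0.00005, alpha=0.6, c=0.006, p=2.9, Mz=0.5)

With such a function, the two integrals in formula (4) can be approximated numerically to estimate Rb.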
3 RESULTS
3.1 Missing Earthquake Detection Results
Using the template matching technique, we detected 15,286 earthquakes from February 9 to August 20, 2017, including 302 self-detected templates and 14,984 missing events. The seismic catalogue obtained by the template matching method is hereinafter referred to as the detection catalogue. The number of events in the detection catalogue is more than six times that in the Shandong Seismological Network catalogue. Fig. 3 shows an example of the detection of a missing ML2.6 earthquake. The seismic rates of the detection and network catalogues are shown in Fig. 4, which shows that there are few earthquakes with magnitude ML > 2.5. The difference between the detection and network catalogues is related to the monitoring ability of the network in this area.
The monitoring ability of a network can be described by the magnitude of completeness. To evaluate the magnitude of completeness, we used the maximum curvature method, in which the magnitude with the maximum slope in the cumulative frequency-magnitude curve is usually selected as the magnitude of completeness (Mc) (Wiemer S. et al., 2000). In practical applications, this magnitude usually corresponds to the magnitude of the non-cumulative frequency-magnitude distribution with the largest number of earthquakes (Huang Yilei et al., 2016). The estimated completeness magnitudes of the detection catalogue and the network catalogue are 0.5 and 1.0, respectively. The cumulative frequency-magnitude curves are then fitted using the maximum likelihood method, and the best-fit b values for the detection catalogue and the network catalogue are 1.04±0.01 and 0.79±0.02, respectively.
Fig. 3 An example of the template matching method detecting missing earthquakes (taking the ML2.6 earthquake as an example). (a) Waveform of the cross-correlation coefficient after shifting and averaging; the blue dotted line is the threshold (0.75), and the red dot is one of the two missing events. (b) Waveforms of the detected missing earthquake; red marks the template P-wave and blue the S-wave, and the red dashed line shows the earthquake origin time. The detected event is ML2.6, 07:35:57, 2017-04-16.
Fig. 4 Comparison of the frequency and b-value of the detection catalogue and the network catalogue. The triangles indicate the detection catalogue and the gray circles the network catalogue. The two dashed lines mark the vertices of the non-cumulative frequency curves, which represent the magnitude of completeness (Mc): 0.5 for the detection catalogue and 1.0 for the network catalogue. The b values fitted by the maximum likelihood method are 1.04 and 0.79, respectively.
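For reference, a small sketch of the two estimates used here — the maximum-curvature Mc and the maximum-likelihood b-value — is given below; the Aki/Utsu formula with a 0.1-unit magnitude binning correction is a standard choice assumed here rather than a detail quoted from the paper.

import numpy as np

# Sketch of the maximum curvature Mc estimate and the maximum likelihood
# b-value (Aki/Utsu formula); the 0.1-unit magnitude binning is an assumption.
def mc_max_curvature(mags, dm=0.1):
    """Mc = center of the magnitude bin with the largest non-cumulative count."""
    edges = np.arange(mags.min() - dm / 2, mags.max() + dm, dm)
    counts, _ = np.histogram(mags, bins=edges)
    return edges[np.argmax(counts)] + dm / 2

def b_value_mle(mags, mc, dm=0.1):
    """Maximum-likelihood b-value for events with M >= Mc."""
    m = mags[mags >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2))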
The b values fitted by the maximum likelihood method are 1.04 and 0.79, respectively 3.2 Characteristics of the Earthquake Swarm Based on the detection catalogue, we analyze the temporal evolution of the seismicity of Chang Island earthquake swarm. Judging from the variation of cumulative frequency and magnitude with time (Fig. 5), we divide the Chang Island earthquake swarm into four periods: in the first stage, the seismic activity with low magnitudes started to increase from February 13, 2017, the activity continued until the occurrence of the largest ML4.5 earthquake (March 3, 2017). During this time, the network catalog revealed a relative quiet scenario (black point in Fig. 5); the second stage is mainly concentrated between March and April, 2017. After the ML4.5 earthquake on March 3, the seismic activity increased obviously with higher intensity and frequency. However, the seismic activity gradually weakened after reaching the highest value in April, 2017; the third stage occurred in May, 2017, and a new round of enhancement occurred in the earthquake activity from May 2 to 4, and then gradually weakened. The fourth stage is from June to August, 2017. Similar to the third phase, the seismic activity appeared again in the process of enhancement-attenuation, which slowly attenuated after a short time increase in early June. The detection catalogue and the network catalogue have the same seismic activity period, although the number of earthquakes in the low-magnitude section is different. According to the development process of the earthquake swarm, each stage can be further divided into several sub-stages (Fig. 5). In the following, the ETAS model will be used to model these different stages separately; we will obtain the variations of each model parameter and discuss the influence of external force (fluid) action and self-excitation on the development of the earthquake swarm. Fig. 5 The magnitude-time relation of the network catalogue and the detection catalogue and the cumulative frequency curve of the detection catalogue The left ordinate is the magnitude, the right ordinate is the cumulative frequency, the gray dot indicates the detection catalogue, and the black dot indicates the network catalogue. The long red dotted line divides the earthquake sequence into four stages according to the cumulative frequency. The short red dotted line divides each stage into several sub-stages The cut-off magnitude has great influence in ETAS modelling (Jiang Haikun et al., 2012; Jiang Changsheng et al., 2013c) and is usually set greater than the magnitude of completeness. To model the seismic activities of all stages, we need to estimate the Mc of the corresponding stage. We use the maximum curvaturemethod and 'goodness of fit' test method to estimate the change of complete magnitude (Mc) with time (Fig. 6). From the results of the optimal solution (the black solid line in Fig. 6), it can be seen that the magnitude of completeness (Mc) is basically constant in the process of swarm development, all are no greater than 0.5. Therefore, in the subsequent parameter inversion of ETAS model, we can set the cut-off magnitude as 0.5. Fig. 
Fig. 6 The magnitude-sequence map of the magnitude of completeness (Mc). The horizontal coordinate represents the earthquake sequence number; gray circles represent earthquakes of the detection catalogue. The red line, blue line and cyan dotted line represent the magnitude of completeness (Mc) obtained by the maximum curvature method and by the goodness-of-fit method at 90% and 95% confidence, respectively. The black solid line represents the optimal solution of the three methods. A window length of 500 earthquake events is used for all methods.
3.3 Inversion of ETAS Model Parameters
According to the stages defined in Fig. 5, the ML≥0.5 earthquakes of each stage are taken, and the respective ETAS model parameters are estimated (Table 1). We select the first and second stages as examples for detailed analysis (Fig. 7).
Table 1 ETAS model parameters at different stages of the Chang Island swarm
Stage   μ         Rb      K0         α      p     c
Ⅰ      5.4668    31.9%   0.000048   0.608  2.94  0.0061
Ⅱ1     17.9568   46.9%   0.000095   0.628  3.00  0.0114
Ⅱ5     7.4546    34.4%   0.014931   0.803  1.36  0.0012
Ⅲ1     33.4324   16.9%   0.030899   0.609  1.31  0.0018
Ⅲ2     2.3071    18.5%   0.049375   0.155  1.21  0.0014
Ⅳ1     54.8119   24.3%   0.000197   0.794  3.00  0.0147
Ⅳ2     8.2332    25.8%   0.018342   0.928  1.33  0.0025
Fig. 7 Results of the ETAS model at different stages. Fig. 7(e) corresponds to the ETAS model parameters of the first stage (Ⅰ) in Fig. 5 and the five sub-stages (Ⅱ1-Ⅱ5) of the second stage, where the horizontal ordinate is the transformed time.
For the first stage, the ratio of the stochastic component is 31.9%, indicating that 31.9% of the seismic activity is triggered by external forces and 68.1% of the earthquakes are generated by aftershock self-excitation. The α-value is 0.608, which is consistent with the activity characteristics of earthquake clusters (0.35-0.85) (Ogata Y., 1992) and lower than that of general tectonic earthquakes (2.0). A small α-value indicates that the aftershock excitation ability is not strong; it also means a low dependence on earthquake magnitude, so the stress change caused by each earthquake is only slight (Lei Xinglin et al., 2008a). The p-value is larger than that of typical tectonic earthquakes (around 1.0), indicating that the sequence decays rapidly. The fit between the theoretical and observed values of the model is poor: the observed earthquake occurrence rate (blue solid line in Fig. 7(a)) is significantly higher than the theoretical curve (gray dotted line in Fig. 7(a)). This indicates that the self-triggering of the Omori law has only a slight promoting effect, and the proportion of seismic activity caused by external forces may be underestimated (Lei Xinglin et al., 2008a).
The second stage corresponds to the period in which the seismic activity rate gradually increases and then decreases. According to the activity rate, we divided stage Ⅱ into five sub-stages (Ⅱ1-Ⅱ5, Fig. 5), of which Ⅱ2 has the highest seismic activity rate. The observations and theoretical values of these five sub-stages fit well, and the proportion of external-force (fluid) triggering becomes significantly larger and gradually increases, from 46.9% in Ⅱ1 to 63.5% in Ⅱ3, indicating the presence of an active external force (fluid). The number of earthquakes triggered by fluids increases, and even more than half of the earthquakes are caused by external forces rather than by the "aftershocks triggering aftershocks" process of the Omori law.
At the same time, the corresponding α-values are higher than that of the first stage, suggesting that the excitation ability of aftershocks is enhanced. The p-value remains high from stage Ⅰ to Ⅱ3, but decreases significantly in stages Ⅱ4 and Ⅱ5, indicating that the decay rate of the sequence decreases. We calculate the b-value of each stage using the maximum likelihood method and analyze the evolution of the b-value of the Chang Island earthquake swarm, as well as of Rb, α, p and μ from the ETAS model, over the whole process of swarm development (Fig. 8). The results show that, except for individual activity stages, Rb, μ and α are stable and positively correlated: all the values increase at first and then decrease (Fig. 8(a)-(c)). The trends of the b-value and p-value are not obviously synchronized, but their values are relatively high in the early stages and low in the later stages (Fig. 8(d)-(e)). The μ-value is the background seismicity rate; a larger μ-value represents a stronger external force. Rb is defined by the ratio of "triggered" earthquakes to "natural" earthquakes unrelated to the external force, which also indicates the triggering intensity (Formula (4)). It can be seen from the Rb values of the different stages that the fluid plays a role in the development of the earthquake swarm to some extent; the effect of the fluid increases at first and then weakens, indicating that the external triggering intensity varies at different times.

Fig. 8 Temporal evolution diagrams of the ETAS model parameters and b-values, in which the horizontal coordinate represents the 11 stages (including sub-stages) shown in Fig. 5 and Table 1. The vertical coordinates represent the changes of the parameters.

We use the ratio of earthquakes induced by external force (fluid) to self-excited earthquakes, Rb (Formula (4)), to investigate the influence of external force on the swarm activity. At first (stage Ⅰ), the proportion of earthquakes triggered by external force is not high (31.9%); the triggering ratio then increases gradually until stage Ⅱ3, reaching its maximum (63.5%). Subsequently, the triggering effect of fluids weakens obviously. The b-value usually reflects the stress level of the study area: a low b-value reflects a high stress state, while a high b-value corresponds to a low stress state. The high b-value in the early stage of the earthquake swarm indicates that the stress at the beginning of the swarm was not high (Fig. 8(d)), yet the largest ML4.5 earthquake of the swarm occurred then, which could be attributed to fluid triggering. Rb indicates the external force (fluid) effect and is generally thought to be related to the stress state of the source region (Peng Yajun et al., 2012; Jia Ke et al., 2014, 2018). In the initial stage of the earthquake swarm, the b-value is high while the Rb value is relatively low, and the actual earthquake occurrence rate at this stage may be significantly higher than the fitted theoretical curve (Fig. 7(a)), so the external force may be underestimated (Lei Xinglin et al., 2008a). The b-value in the other stages is also consistent with the process of increasing at first and then decreasing, and the change of the external force is consistent with it as well. When the proportion of the fluid effect is large, the b-value stays at a high level, indicating that the fluid reduces the effective normal stress on the fault. Fluids greatly reduce the fault strength, hence even relatively low local stress can cause earthquakes.
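To make the roles of μ, K0, α, p and c in Table 1 concrete, a minimal sketch of the standard ETAS conditional intensity is given below, together with one simple reading of the externally driven share of events; the paper's exact Formula (4) is not reproduced here, the time unit of μ is assumed to be days, and the event list is hypothetical.

```python
import numpy as np

def etas_intensity(t, times, mags, mu, k0, alpha, p, c, mc=0.5):
    """Standard ETAS conditional intensity:
    lambda(t) = mu + sum over t_i < t of k0*exp(alpha*(M_i - mc))*(t - t_i + c)**(-p)."""
    past = times < t
    dt = t - times[past]
    return mu + np.sum(k0 * np.exp(alpha * (mags[past] - mc)) * (dt + c) ** (-p))

def background_share(times, mu, t_span):
    """Expected share of background (externally driven) events, mu*T/N --
    one simple reading of the Rb ratio discussed in the text."""
    return mu * t_span / times.size

# Stage-I parameters from Table 1 (mu assumed to be in events per day):
mu, k0, alpha, p, c = 5.4668, 0.000048, 0.608, 2.94, 0.0061
times = np.array([0.10, 0.40, 0.45, 1.20])   # hypothetical event times, days
mags = np.array([1.2, 4.5, 2.0, 1.0])        # hypothetical magnitudes
print(etas_intensity(2.0, times, mags, mu, k0, alpha, p, c))
```

A large α makes the triggering term sensitive to the magnitudes of past events, a large p makes it decay quickly, and a large μ raises the magnitude-independent background rate, which is the reading used in the discussion of Fig. 8.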
The process of increase and decrease of the proportion of earthquakes triggered by the external force (fluid) may be related to the fluid infiltration process. With the penetration of the fluid, the pore pressure gradually increases and the strength of the fault (fracture) decreases, leading to a larger possibility of earthquake occurrence. The fluid effect at the beginning of the second stage is obviously stronger than that in the first stage; however, after a certain period of time, the fluid tends to saturate the pores and the change of pore pressure decreases gradually, which may weaken the triggering effect of the fluid, and the ratio coefficient Rb decreases from 63.5% to less than 30%. Seismic activities in the third and fourth stages are mainly caused by the self-excitation of earthquakes. In each stage of swarm development, Rb is positively correlated with the μ-value, which indicates the strength of the external force; a larger value corresponds to a stronger fluid effect. This is also consistent with the trend of earthquake self-excitation (α). The parameters vary between stages; however, the fluid interaction generally seems to have a greater impact on the early stage of the swarm, while the self-excitation of earthquakes plays a more important role in the whole process of swarm development. In the initial stage of the swarm, the p-value stays at a high level, indicating that the swarm attenuates rapidly, which may be related to the larger proportion of microearthquakes in the swarm. Alternatively, the p-value is positively correlated with the focal depth and the temperature at the focal depth (Creamer F. H. et al., 1993; Nyffenegger P. et al., 2000; Song Jin et al., 2009): seismic sequences with larger source depths attenuate faster, stress relaxation in regions with higher crustal temperature is faster, and aftershock activity attenuates relatively faster. The focal depth of the Chang Island swarm is about 11 km, which is larger than that of earthquakes induced by reservoirs and water injection (Jiang Haikun et al., 2012; Lei Xinglin et al., 2008b); meanwhile, whether the source of fluids is related to high-pressure crustal melting materials in the study area (Liu Lihua et al., 2015) remains questionable. Both of these factors may lead to the accelerated attenuation of the earthquake swarm. More detailed changes of the background earthquake occurrence rate and the p-value could be tracked, and the discussion completed, using the iterative, continuously sliding window approach proposed by Peng Yajun et al. (2012).

In this paper, the GPU-acceleration-based template matching method is used to detect micro-seismic events from the continuous seismic waveforms of the Chang Island earthquake swarm from February 9 to August 20, 2017. A total of 15,286 earthquake events are identified, of which 302 are self-detections of template events and 14,984 are missing earthquake events. The number of events in the detection catalogue is more than 6 times that of the network catalogue, and the magnitude of completeness of the earthquake swarm decreases from 1.0 to 0.5. Based on the detected earthquake catalogue, the ETAS model is used to analyze the characteristics of the Chang Island earthquake swarm. It is preliminarily inferred that fluid triggering plays a role in the development of the Chang Island swarm to some extent, with a greater impact in the early stage of the swarm, while the self-excitation of the swarm also plays an important role in its whole development.
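The detection step summarized above rests on a matched filter: each template is slid along the continuous record and a normalized cross-correlation (CC) trace is thresholded. The single-channel sketch below shows the idea only; the GPU implementation by Peng Zhigang is not reproduced, and the threshold of nine median absolute deviations is an assumption modeled on common practice in such studies, not necessarily the value used here.

```python
import numpy as np

def normalized_cc(data, template):
    """Normalized cross-correlation of one template against continuous data."""
    nt = template.size
    tpl = (template - template.mean()) / (template.std() * nt)
    cc = np.empty(data.size - nt + 1)
    for i in range(cc.size):
        win = data[i:i + nt]
        cc[i] = np.sum(tpl * (win - win.mean())) / (win.std() + 1e-12)
    return cc

def detect(cc, k=9.0):
    """Flag samples exceeding the median by k median absolute deviations."""
    mad = np.median(np.abs(cc - np.median(cc)))
    return np.where(cc > np.median(cc) + k * mad)[0]

# Hypothetical usage: plant a weak repeat of a known waveform in noise.
rng = np.random.default_rng(1)
data = rng.normal(size=20000)
template = data[5000:5200].copy()
data[12000:12200] += 0.8 * template
print(detect(normalized_cc(data, template))[:10])
```

In a multi-station implementation the CC traces are shifted by the template move-out and stacked across channels before thresholding, which is what makes events well below the network detection level recoverable.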
In this paper, the seismogenic mechanism of the Chang Island swarm is explored only from the statistical point of view. As for the detailed mechanism, further analysis of fluid triggering, and the change of driving forces in the physical process of swarm development, more methods are needed.

Thanks to the two reviewers for their valuable comments. The template matching algorithm used in this paper was developed by Professor Peng Zhigang, and the ETAS model was calculated with the GeoTaos software developed by Professor Lei Xinglin.

References

Akaike H. A new look at the statistical model identification[J]. IEEE Transactions on Automatic Control, 1974, 19(6): 716-723. DOI:10.1109/TAC.1974.1100705
Creamer F.H., Kisslinger C. The relation between temperature and the Omori decay parameter for aftershock sequences near Japan[J]. EOS, 1993, 74(S43): 417.
Gibbons S.J., Ringdal F. The detection of low magnitude seismic events using array-based waveform correlation[J]. Geophysical Journal International, 2006, 165(1): 149-166. DOI:10.1111/j.1365-246X.2006.02865.x
Hainzl S., Ogata Y. Detecting fluid signals in seismicity data through statistical earthquake modeling[J]. Journal of Geophysical Research: Solid Earth, 2005, 110(B5): B05S07. DOI:10.1029/2004JB003247
Helmstetter A., Sornette D., Grasso J.R. Mainshocks are aftershocks of conditional foreshocks: How do foreshock statistical properties emerge from aftershock laws[J]. Journal of Geophysical Research: Solid Earth, 2003, 108(B1): 2046. DOI:10.1029/2002JB001991
Hou Jinxin, Wang Baoshan. Temporal evolution of seismicity before and after the 2014 Ludian Ms6.5 earthquake[J]. Chinese Journal of Geophysics, 2017, 60(4): 1446-1456 (in Chinese with English abstract).
Huang Yilei, Zhou Shiyong, Zhuang Jiancang. Numerical tests on catalog-based methods to estimate magnitude of completeness[J]. Chinese Journal of Geophysics, 2016, 59(4): 1350-1358 (in Chinese with English abstract).
Jia Ke, Zhou Shiyong, Zhuang Jiancang, Jiang Changsheng, Guo Yicun, Gao Zhaohui, Gao Shesheng. Did the 2008 MW7.9 Wenchuan earthquake trigger the occurrence of the 2017 MW6.5 Jiuzhaigou earthquake in Sichuan, China?[J]. Journal of Geophysical Research: Solid Earth, 2018, 123(4): 2965-2983. DOI:10.1002/2017JB015165
Jia Ke, Zhou Shiyong, Zhuang Jiancang, Jiang Changsheng. Possibility of the Independence between the 2013 Lushan Earthquake and the 2008 Wenchuan Earthquake on Longmen Shan Fault, Sichuan, China[J]. Seismological Research Letters, 2014, 85(1): 60-67. DOI:10.1785/0220130115
Jiang Changsheng, Zhuang Jiancang, Long Feng, Han Libo, Guo Lujie. Statistical analysis of ETAS parameters in the early stage of the 2013 Lushan MS7.0 earthquake sequence[J]. Acta Seismologica Sinica, 2013a, 35(5): 661-669 (in Chinese with English abstract).
Jiang Changsheng, Wu Zhongliang, Zhuang Jiancang. ETAS model applied to the Earthquake-Sequence Association (ESA) problem: the Tangshan sequence[J]. Chinese Journal of Geophysics, 2013b, 56(9): 2971-2981 (in Chinese with English abstract).
Jiang Changsheng, Wu Zhongliang, Han Libo, Guo Lujie. Effect of cutoff magnitude Mc of earthquake catalogues on the early estimation of earthquake sequence parameters with implication for the probabilistic forecast of aftershocks: the 2013 Minxian-Zhangxian, Gansu, MS6.6 earthquake sequence[J]. Chinese Journal of Geophysics, 2013c, 56(12): 4048-4057 (in Chinese with English abstract).
Jiang Changsheng, Wu Zhongliang, Yin Fengling, Guo Lujie, Bi Jinmeng, Wang Yawen. Stability of early-estimation sequence parameters for continuous forecast of the aftershock rate: a case study of the 2014 Ludian, Yunnan MS6.5 earthquake[J]. Chinese Journal of Geophysics, 2015, 58(11): 4163-4173 (in Chinese with English abstract).
Jiang Changsheng, Zhuang Jiancang, Wu Zhongliang, Bi Jinmeng. Application and comparison of two short-term probabilistic forecasting models for the 2017 Jiuzhaigou, Sichuan, MS7.0 earthquake[J]. Chinese Journal of Geophysics, 2017, 60(10): 4132-4144 (in Chinese with English abstract).
Jiang Haikun, Zheng Jianchang, Wu Qiong, Qu Yanjun, Li Yongli. Earlier statistical features of ETAS model parameters and their seismological meanings[J]. Chinese Journal of Geophysics, 2007, 50(6): 1778-1786 (in Chinese with English abstract).
Jiang Haikun, Yang Maling, Sun Xuejun, Lü Jian, Yan Chunheng, Wu Qiong, Song Jin, Zhao Yong, Huang Guohua, Zhang Hua, Yao Hong, Mu Jianying, Li Jin, Qu Junhao. A typical example of locally triggered seismicity in the boundary area of Lingyun and Fengshan following the large rainfall event of June 2010[J]. Chinese Journal of Geophysics, 2011, 54(10): 2606-2619 (in Chinese with English abstract).
Jiang Haikun, Song Jin, Wu Qiong, Li Jin, Qu Junhao. Quantitative investigation of fluid triggering on seismicity in the Three Gorge Reservoir area based on ETAS model[J]. Chinese Journal of Geophysics, 2012, 55(7): 2341-2352 (in Chinese with English abstract).
Lei Xinglin, Ma Shengli, Wen Xueze, Su Jinrong, Du Fang. Integrated analysis of stress and regional seismicity by surface loading: a case study of Zipingpu reservoir[J]. Seismology and Geology, 2008a, 30(4): 1046-1064 (in Chinese with English abstract).
Lei Xinglin, Yu Guozheng, Ma Shengli, Wen Xueze, Wang Qiang. Earthquakes induced by water injection at ~3km depth within the Rongchang gas field, Chongqing, China[J]. Journal of Geophysical Research: Solid Earth, 2008b, 113(B10): B10310. DOI:10.1029/2008JB005604
Lei Xinglin, Huang Dongjian, Su Jinrong, Jiang Guomao, Wang Xiaolong, Wang Hui, Guo Xin, Fu Hong. Fault reactivation and earthquakes with magnitudes of up to Mw4.7 induced by shale-gas hydraulic fracturing in Sichuan Basin, China[J]. Scientific Reports, 2017, 7(1): 7971. DOI:10.1038/s41598-017-08557-y
Li Zhichao, Huang Qinghua. Assessment of detectability of the Capital-circle Seismic Network by using the probability-based magnitude of completeness (PMC) method[J]. Chinese Journal of Geophysics, 2014, 57(8): 2584-2593 (in Chinese with English abstract).
Liu Lihua, Hao Tianyao, Lü Chuanchuan, You Qingyu, Pan Jun, Wang Fuyun, Xu Ya, Zhao Chunlei, Zhang Jianshi. Crustal structure of Bohai Sea and adjacent area (North China) from two onshore-offshore wide-angle seismic survey lines[J]. Journal of Asian Earth Sciences, 2015, 98: 457-469. DOI:10.1016/j.jseaes.2014.11.034
Long Feng, Du Fang, Ruan Xiang, Deng Yuanqing, Zhang Tiebao. Water injection triggered earthquakes in the Zigong mineral wells in ETAS model[J]. Earthquake Research in China, 2010, 26(2): 164-171 (in Chinese with English abstract).
Ma Tengfei. "Repeating Earthquakes" Extracted from Continuous Waveforms: Analysis of the Wenchuan Sequence in Connection to the WIFSD Drilling Site[D]. Beijing: Institute of Geophysics, China Earthquake Administration, 2015 (in Chinese).
Meng Xiaofeng, Yu Xiao, Peng Zhigang, Hong Bo. Detecting earthquakes around Salton Sea following the 2010 MW7.2 El Mayor-Cucapah earthquake using GPU parallel computing[J]. Procedia Computer Science, 2012, 9: 937-946. DOI:10.1016/j.procs.2012.04.100
Narteau C., Byrdina S., Shebalin P., Schorlemmer D. Common dependence on stress for the two fundamental laws of statistical seismology[J]. Nature, 2009, 462(7273): 642-646. DOI:10.1038/nature08553
Nyffenegger P., Frohlich C. Aftershock occurrence rate decay properties for intermediate and deep earthquake sequences[J]. Geophysical Research Letters, 2000, 27(8): 1215-1218. DOI:10.1029/1999GL010371
Ogata Y. Statistical models for earthquake occurrences and residual analysis for point processes[J]. Journal of the American Statistical Association, 1988, 83(401): 9-27. DOI:10.1080/01621459.1988.10478560
Ogata Y. Detection of precursory relative quiescence before great earthquakes through a statistical model[J]. Journal of Geophysical Research: Solid Earth, 1992, 97(B13): 19845-19871. DOI:10.1029/92JB00708
Ogata Y. Increased probability of large earthquakes near aftershock regions with relative quiescence[J]. Journal of Geophysical Research: Solid Earth, 2001, 106(B5): 8729-8744. DOI:10.1029/2000JB900400
Omori F. On the aftershocks of earthquake[J]. Journal of the College of Science, Imperial University of Tokyo, 1894, 7: 11-200.
Peng Yajun, Zhou Shiyong, Zhuang Jiancang, Shi Jia. An approach to detect the abnormal seismicity increase in Southwestern China triggered co-seismically by 2004 Sumatra MW9.2 earthquake[J]. Geophysical Journal International, 2012, 189(3): 1734-1740. DOI:10.1111/j.1365-246X.2012.05456.x
Peng Zhigang, Zhao Peng. Migration of early aftershocks following the 2004 Parkfield earthquake[J]. Nature Geoscience, 2009, 2(12): 877-881. DOI:10.1038/ngeo697
Schaff D.P. Semiempirical statistics of correlation-detector performance[J]. Bulletin of the Seismological Society of America, 2008, 98(3): 1495-1507. DOI:10.1785/0120060263
Shelly D.R., Beroza G.C., Ide S. Non-volcanic tremor and low-frequency earthquake swarms[J]. Nature, 2007, 446(7133): 305-307. DOI:10.1038/nature05666
Song Jin, Jiang Haikun. A review on decay and generation of aftershock activity[J]. Seismology and Geology, 2009, 31(3): 559-571 (in Chinese with English abstract).
Su Luansheng. Relationship between the fractal structure variation in Changdao area and the areal seismicity and short-term and impending precursor feature[J]. Journal of Seismological Research, 1993, 16(1): 33-40 (in Chinese with English abstract).
Tan Yipei, Deng Li, Cao Jingquan, Shan Lianjun. Seismological mechanism analysis of 2015 Luanxian swarm, Hebei province[J]. Chinese Journal of Geophysics, 2016, 59(11): 4113-4125 (in Chinese with English abstract).
Utsu T. A statistical study on the occurrence of aftershocks[J]. Geophysical Magazine, 1961, 30: 526-605.
Wang Peng, Hou Jinxin, Wu Peng. Temporal evolution of the seismicity of the 2017 Jiuzhaigou MS7.0 earthquake sequence[J]. Earthquake Research in China, 2017, 33(4): 453-462 (in Chinese with English abstract).
Wang Zhicai, Deng Qidong, Chao Hongtai, Du Xiansong, Shi Ronghui, Sun Zhaomin, Xiao Lanxi, Min Wei, Ling Hong. Shallow-depth sonic reflection profiling studies on the active Penglai-Weihai fault zone offshore of the northern Shandong peninsula[J]. Chinese Journal of Geophysics, 2006, 49(4): 1092-1101 (in Chinese with English abstract).
Wiemer S., Wyss M. Minimum magnitude of completeness in earthquake catalogs: Examples from Alaska, the Western United States, and Japan[J]. Bulletin of the Seismological Society of America, 2000, 90(4): 859-869. DOI:10.1785/0119990114
Xu Jie, Song Changqing, Chu Quanzhi. Preliminary study on the seismotectonic characters of the Zhangjiakou-Penglai fault zone[J]. Seismology and Geology, 1998, 20(2): 146-154 (in Chinese with English abstract).
Yang Hongfeng, Zhu Lupei, Chu Risheng. Fault-plane determination of the 18 April 2008 Mount Carmel, Illinois, earthquake by detecting and relocating aftershocks[J]. Bulletin of the Seismological Society of America, 2009, 99(6): 3413-3420. DOI:10.1785/0120090038
Zhang Miao, Wen Lianxing. Seismological evidence for a low-yield nuclear test on 12 May 2010 in North Korea[J]. Seismological Research Letters, 2015, 86(1): 138-145. DOI:10.1785/02201401170
Zheng Jianchang, Xu Changping, Li Dongmei, Zhou Cuiying. Recent change of stress field in North China derived from station composite focal mechanisms[J]. Recent Developments in World Seismology, 2018(4): 48-57 (in Chinese with English abstract).
Zhuang Jiancang. Statistical modeling of seismicity patterns before and after the 1990 Oct 5 Cape Palliser earthquake, New Zealand[J]. New Zealand Journal of Geology and Geophysics, 2000, 43(3): 447-460. DOI:10.1080/00288306.2000.9514901
Zhuang Jiancang, Ogata Y., Wang Ting. Data completeness of the Kumamoto earthquake sequence in the JMA catalog and its influence on the estimation of the ETAS parameters[J]. Earth, Planets and Space, 2017, 69(1): 36.
[Received] 2019-04-29; [Revised] 2019-06-04
Oscillations of charge carrier domains in photorefractive bipolar semiconductors
Andrzej Ziółkowski* and Ewa Weinert-Raczka
Faculty of Electrical Engineering, West Pomeranian University of Technology, al. Piastów 17, 70-310 Szczecin, Poland
*Corresponding author: [email protected]
Opt. Express 28(21), 30810-30823 (2020), https://doi.org/10.1364/OE.404302
Original Manuscript: July 31, 2020; Revised Manuscript: September 9, 2020; Manuscript Accepted: September 9, 2020

The article presents an analysis of the formation of charge carrier domains generated by a localized optical beam and of the phenomenon of their oscillations. The research was carried out for bipolar photorefractive semiconductors characterized by nonlinear transport of electrons. The analysis allowed us to determine a set of basic quantitative parameters that have an impact on the appearance of carrier oscillations and their character.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Although it is not always noticeable, nonlinear phenomena are strongly inscribed in the nature of the physical systems of the world around us. As a result, they play an important role in almost all branches of modern science. Among the measurable results of research on this class of phenomena are nonlinear effects occurring in semiconductor media. In this article, we would like to draw attention to the phenomena occurring in semiconductors exhibiting photorefractive nonlinearity. In particular, we will focus our attention on materials such as photorefractive GaAs [1–3] or InP [4–6] crystals and on structures of reduced dimensionality, such as semi-insulating multiple quantum wells based on the GaAs/AlGaAs material system [7,8]. The materials selected for this study are well suited for research on nonlinear phenomena and at the same time provide a good platform for their applications. The practical aspect includes the possibility of innovative solutions in integrated optics systems, with particular emphasis on applications in optical telecommunications. Considering the discussed class of media from a fundamental point of view, it is worth noting that they combine two types of nonlinearity in a unique way: optical nonlinearity [9] and nonlinearity of electronic transport [10,11]. Research on photorefractive semiconductors is usually carried out in the context of multiwave mixing [12–14] or self-trapping of light [15,16].
Nonlinear transport of electrons, incorporated into the photorefractive mechanism, is considered in this case only as a factor having an impact on the photorefractive response, e.g., during the build-up of a photorefractive grating [17–19] or photorefractive amplification in nonlinear mixing of waves [20,21]. However, it is common knowledge that nonlinear electron transport is widely used in electronics, being the cause of the formation of high electric field domains (the Gunn domains) [22] and a basis for the construction of high-frequency electronic oscillators. Considering the scope of this paper, it is worth mentioning that as early as the initial phase of research on the Gunn effect, it was discovered that electric field domains could be triggered by means of light [23,24]. However, the concept of combining the generation of high electric field domains with the photorefractive mechanism was not proposed until two decades later. First, the excitation of low-frequency current oscillations, distinctly different from high-frequency Gunn oscillations, was observed. The oscillations were triggered both by a single laser beam [25] and as a response to an interference pattern [26]. Next, the possibility of excitation of high-field Gunn domains was investigated in the context of amplification of a moving photorefractive grating [27]. Synchronised with the moving interference pattern, electric field domains are able to amplify optically induced changes of the refractive index. A numerical analysis of the phenomenon, referred to as the photorefractive Gunn effect, shows that it can lead to both periodic and chaotic solutions [28]. Within the area of research on the photorefractive Gunn effect, noteworthy are also studies showing the possibility of external control of high-frequency oscillations by varying the parameters of a transient grating [29,30]. It is worth noting that research indicates several different sources of domain instability and current oscillations. While high-frequency Gunn oscillations are caused by an intervalley scattering mechanism [10,11], low-frequency oscillations result from the mechanism of electric-field-enhanced trapping of carriers at defect levels [31]. It is commonly accepted that both mechanisms lead to the phenomenon of negative differential resistance. A different mechanism of instability, not related to negative differential resistance, is presented by Korneev et al. [32]. In this case the instability originates from a special type of photoconductivity temporal response related to the dynamics of filling shallow-level traps. In this paper, the authors describe a phenomenon that is, to the best of their knowledge, new and has its source in the mechanism of intervalley scattering of electrons. The described effect consists in the formation of charge carrier domains induced by a localized optical beam. We are not dealing here with a separate, traveling Gunn domain; rather, we observe local oscillations of electron and hole domains, which are coupled with oscillations of the electric field. The numerical analysis allowed us to isolate the basic parameters that affect the appearance of oscillations and their nature.

2. Theoretical background

Semiconductors such as GaAs or InP have an interesting band structure. In the conduction band of both materials, apart from the central Γ valley, which is characterised by high electron mobility, there are several side valleys at higher energies.
The impact of the side valleys on material properties, described in the framework of the so-called intervalley scattering model [10,11], relies on the assumption that electrons 'heated' by the electric field pass from the central Γ valley to the side L valleys. The L valleys (lying 0.31 eV higher for GaAs and 0.53 eV higher for InP) have significantly smaller curvature than the central valley and, consequently, higher values of the effective electron mass. As a consequence, after the critical value of the electric field strength is exceeded, the previously increasing drift velocity of the electrons begins to decrease. This phenomenon is referred to as negative differential resistance (NDR). Devices which use this effect are known as transferred-electron devices (TED) [33]. From a formal point of view, the modeling of nonlinear electron transport can be reduced to establishing an appropriate relationship between the electron drift velocity and the electric field. Among the various possible approaches (whose discussion goes beyond the scope of this paper), good results can be obtained based on a simple interpolation formula [11]:

(1) $$v_n(E) = v_s\left[1 + \frac{E/E_s - 1}{1 + A(E/E_s)^{\beta}}\right],$$

where $v_s$ is the drift saturation velocity, $E_s$ the saturation field, while A and β are constants determined on the basis of the value of $\mu_n$ denoting the electron mobility in the central valley. For bulk GaAs crystal, the values are, respectively, $v_s = 8.5 \times 10^4\,\mathrm{m/s}$, $E_s = 1.7 \times 10^5\,\mathrm{V/m}$, $A = 0.04$, and $\beta = 4$. For a quantitative description of nonlinear transport of electrons, an additional parameter, $\mu_{dn} = dv_n/dE$, describing the so-called differential electron mobility, is introduced. Figure 1 shows the relations described by Eq. (1), i.e., the dependence of the electron drift velocity and the differential electron mobility on the electric field intensity for bulk GaAs crystal. It can be seen that a decrease in electron velocity and negative values of differential mobility appear for electric fields higher than 4 kV/cm. For an InP crystal, the relationships $v_n(E)$ and $\mu_{dn}(E)$ are shaped in a similar way, save for the fact that the threshold value of the electric field is higher (the side L valley is located higher) and equals ca. 12 kV/cm [34].

Fig. 1. Dependence of electron drift velocity and differential mobility on the intensity of the electric field, for bulk GaAs crystal.

The area of negative differential electron mobility is also available in GaAs/AlGaAs heterostructures. The threshold value of the electric field in this case is assumed at ca. 4 kV/cm, similar to that for bulk GaAs crystal [18,21,35]. An important factor affecting the considered oscillations is the bipolar nature of charge carrier transport. It is assumed therefore that the media under analysis are characterised by nonlinear transport of electrons and linear transport of holes. The outcomes of most of the numerical calculations shown herein have been obtained for comparable concentrations of electrons and holes (with only a slight advantage of one type of carrier). A theoretical description of the photorefractive mechanism occurring in materials with such characteristics can be based on a standard system of nonlinear differential equations, including equations of generation, transport and trapping of carriers, the continuity equation, and Gauss's law.
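Before turning to the full transport model, a direct numerical check of Eq. (1) is instructive; the short sketch below evaluates $v_n(E)$ with the bulk GaAs constants quoted above and estimates $\mu_{dn} = dv_n/dE$ by a central difference.

```python
import numpy as np

VS, ES, A, BETA = 8.5e4, 1.7e5, 0.04, 4.0    # m/s, V/m, and the two constants

def v_n(E):
    """Electron drift velocity of Eq. (1) for bulk GaAs."""
    r = E / ES
    return VS * (1.0 + (r - 1.0) / (1.0 + A * r**BETA))

def mu_dn(E, dE=1.0):
    """Differential electron mobility dv_n/dE via a central difference."""
    return (v_n(E + dE) - v_n(E - dE)) / (2.0 * dE)

for kV_per_cm in (4.0, 15.0, 34.0):
    E = kV_per_cm * 1e5                       # 1 kV/cm = 1e5 V/m
    print(kV_per_cm, mu_dn(E) * 1e4, "cm^2/(V*s)")
```

With these constants the estimate returns roughly −52 cm²/(V·s) at 15 kV/cm and −2.2 cm²/(V·s) at 34 kV/cm, matching the values quoted in Section 3.3, and confirms that the mobility turns negative around the 4 kV/cm threshold of Fig. 1.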
In the case under analysis, deep traps are donors with a concentration $N_D$, which compensate the impact of shallow acceptors with a concentration $N_A$. Such a system of dopants ensures that the dark conductivity is low and the concentration of ionised donors is different from zero. The model under analysis is limited to physical processes occurring in one dimension. However, to make the description clearer, it can be assumed that the described processes are induced by an optical beam of intensity I, propagating along the z axis and polarised along the x axis. Below is the system of equations which meets the assumptions referred to above:

(2a) $$\partial n/\partial t - (1/q)\,\partial J_n/\partial x = [S_n(I + I_B) + \beta_n](N_D - N_D^+) - \gamma_n n N_D^+,$$
(2b) $$\partial p/\partial t + (1/q)\,\partial J_p/\partial x = [S_p(I + I_B) + \beta_p]N_D^+ - \gamma_p p (N_D - N_D^+),$$
(2c) $$J_n = q n v_n(E) + q\,\partial[D_n(E)\,n]/\partial x,$$
(2d) $$J_p = q p \mu_p E - q D_p\,\partial p/\partial x,$$
(2e) $$\partial(n + N_A^- - p - N_D^+)/\partial t = (1/q)\,\partial(J_n + J_p)/\partial x,$$
(2f) $$\partial E/\partial x = (q/\varepsilon\varepsilon_0)(N_D^+ + p - n - N_A^-).$$

The independent variables in the equations above are the spatial variable x, along which the currents flow, and the time t. The dependent variables are as follows: the distribution $I(x,t)$ of the light intensity falling on the medium, the concentration of electrons $n(x,t)$, the concentration of holes $p(x,t)$, the concentration of ionised donors $N_D^+(x,t)$, the current density $J(x,t)$, the electric field intensity $E(x,t)$, the carrier drift velocity $v(E)$, and the diffusion coefficient $D(E)$, which for electrons is dependent on the electric field. The diffusion coefficient for holes is smaller; its value can be determined using Einstein's relation $D = \mu_p k_B T/q$, where $\mu_p$ is the hole mobility, $k_B$ the Boltzmann constant, T the temperature, and q the elementary charge. The other quantities include the coefficient β of thermal generation of carriers from deep traps, the coefficient γ of recombination of carriers with traps, the photoionization cross section S divided by the photon energy, and the intensity $I_B$ of homogeneous background illumination. The indices n and p refer to quantities describing electrons and holes, respectively. The problem described by Eqs. (1) and (2) can be solved numerically by the finite-difference method, with an algorithm similar to that discussed in [36]. In order to adapt the numerical procedure described therein to the issue under analysis, it has been complemented with nonlinear electron transport and extended with hole transport. In the simulations carried out as part of this study, the semiconductor was illuminated by an optical beam with a Gaussian distribution and a step switching characteristic:

(3) $$I(x,t) = I_0\,\mathrm{exp}[-(x - L/2)^2/w^2]\,\theta(t),$$

where L is the width of the tested sample, w the beam radius, and $\theta(t)$ the Heaviside step function. The problem of domain instability analyzed with numerical methods is inextricably linked with the issue of boundary conditions. Contrary to the case when traveling electric domains cause current oscillations in the external circuit, the phenomena analyzed here can be seen as local, i.e.
the oscillations of electric charge domains occur in the vicinity of a narrow optical beam illuminating a much wider sample. For this reason, the boundary conditions used here are similar to those used in soliton propagation analysis, i.e., constant electric potential values were imposed at the edges of the sample. It is also worth adding that a disadvantage of our numerical approach is the long computational time. Since the dynamics of the phenomena under analysis depends on the light intensity, in order to shorten the calculation time most of the simulations were performed for a light intensity relatively high for photorefractive materials (more than 500 mW/cm2). The studied sample had a width of L = 2 mm, and the optical beam was positioned in the centre. The photorefractive mechanism is based on an electro-optic phenomenon, and for semiconductors several types of this effect are available. The most obvious is to use the linear electro-optic effect that occurs, for example, in bulk GaAs crystals. The effect is available in a wide spectral range, in which it is easy to find a point with a small absorption coefficient (which allows propagation of light). However, it should be remembered that the value of the linear electro-optic coefficient for GaAs is relatively low, which may require the use of high electric fields. Another option is to use the quadratic electro-optic effect which occurs in this type of media. This effect results from the Franz-Keldysh phenomenon present near the absorption edge of semiconductors [2,37]. In this case the effect is stronger; however, it is associated with higher absorption. The third option is to use the phenomenon of near-resonant excitonic electro-absorption occurring near the absorption edge of quantum well structures. It is worth noting here that these structures can work in transmission geometry [7,12,14] (with the light propagating across the quantum wells) or constitute the guiding layer of a waveguide [15,38]. As a point of reference in this study, we chose an analysis carried out as part of research into self-trapping of light, where the linear electro-optic effect in bulk GaAs crystal is used [16]. In spite of certain differences in the model, the phenomena presented herein can be treated as an extension of the work [16] in the field of analyzing the impact of nonlinear electron transport on the photorefractive response of gallium arsenide. It is worth adding, however, that the phenomena under analysis do not depend on the type of electro-optic effect and will occur regardless of which of the above-mentioned options we choose. All the results presented in this paper have been generated using material parameters typical of semi-insulating GaAs crystals. These parameters are listed in Table 1.

Table 1. Material parameters used in calculations.

For non-diffractive propagation of a Gaussian beam of radius w = 14.5 µm (FWHM ≈ 24 µm), discussed in [16], an electric field of about 34 kV/cm is required. These values determine the conditions for which the initial calculations have been made.
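For orientation, a deliberately simplified explicit finite-difference step for the system (2a)-(2f) with the beam (3) is sketched below. It is a toy version in the spirit of the scheme referenced above [36], not the authors' solver: the rate constants are placeholders (the numerical values of Table 1 are not reproduced here), the diffusion coefficients are frozen at their low-field Einstein values, and the fixed-potential boundary condition is enforced by shifting the field obtained from the integrated Gauss law so that its mean equals the applied field.

```python
import numpy as np

q, eps = 1.602e-19, 12.9 * 8.85e-12          # elementary charge; GaAs permittivity
L, NX = 2e-3, 1000                           # 2 mm sample, grid points
x = np.linspace(0.0, L, NX); dx = x[1] - x[0]

VS, ES, A, BETA = 8.5e4, 1.7e5, 0.04, 4.0    # Eq. (1) constants, bulk GaAs
def v_n(E):                                  # nonlinear electron drift velocity
    r = np.abs(E) / ES
    return np.sign(E) * VS * (1.0 + (r - 1.0) / (1.0 + A * r**BETA))

# --- placeholder material constants (assumed, not the paper's Table 1) ---
ND, NA = 5.0e22, 0.6e22                      # donor / acceptor density, m^-3
Sn, Sp = 2e-5, 4e-6                          # photoionization / photon energy
bn, bp = 1e-3, 1e-3                          # thermal generation rates, s^-1
gn, gp = 1e-13, 1e-13                        # recombination coefficients, m^3/s
mu_n, mu_p, kTq = 0.85, 0.04, 0.0259         # low-field mobilities; thermal voltage
Dn, Dp = mu_n * kTq, mu_p * kTq              # frozen (field-independent) diffusion

I0 = IB = 1e4                                # 1 W/cm^2 expressed in W/m^2
w, E0 = 14.5e-6, 3.4e6                       # beam radius; applied field 34 kV/cm
I = I0 * np.exp(-((x - L / 2) / w) ** 2)     # Eq. (3) after switch-on

def step(n, p, Ndp, dt):
    # Gauss law (2f): integrate the charge density, then shift E so that the
    # potential drop across the sample stays E0*L (fixed-potential contacts).
    E = np.cumsum(q * (Ndp + p - n - NA) / eps) * dx
    E += E0 - E.mean()
    Jn = q * n * v_n(E) + q * Dn * np.gradient(n, dx)      # Eq. (2c)
    Jp = q * p * mu_p * E - q * Dp * np.gradient(p, dx)    # Eq. (2d)
    Gn = (Sn * (I + IB) + bn) * (ND - Ndp) - gn * n * Ndp  # rhs of (2a)
    Gp = (Sp * (I + IB) + bp) * Ndp - gp * p * (ND - Ndp)  # rhs of (2b)
    n2 = n + dt * (np.gradient(Jn, dx) / q + Gn)
    p2 = p + dt * (-np.gradient(Jp, dx) / q + Gp)
    Ndp2 = Ndp + dt * (Gn - Gp)              # keeps the continuity law (2e)
    return n2, p2, Ndp2, E

# start from the homogeneous-illumination equilibrium with N_D^+ ~ N_A
Ndp = np.full(NX, NA)
n = np.full(NX, (Sn * IB + bn) * (ND - NA) / (gn * NA))
p = np.full(NX, (Sp * IB + bp) * NA / (gp * (ND - NA)))
for _ in range(1000):                        # a short burst of very small steps
    n, p, Ndp, E = step(n, p, Ndp, dt=1e-13)
```

An explicit update of this kind is stable only for time steps far below the dielectric-relaxation and diffusion limits, which is one reason computations of this type are time-consuming.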
A preliminary analysis allowed us to extract a set of basic physical quantities which have an impact on the occurrence and nature of the local oscillations of charge carrier domains. The parameters whose values are controllable include: a) the degree of nonlinearity of electron transport (described, in quantitative terms, by the electric-field-dependent value of the negative differential mobility), b) the ratio of electron to hole concentration, which depends mainly on the concentration of dopants (or traps), and c) the optical beam intensity. For a better understanding of the study results, let us begin with a situation in which the medium does not produce oscillations, i.e., the transport of charge carriers is dominated by holes. As an illustration of this situation, the case corresponding to the trap concentrations $N_D = 2.11 \times 10^{16}\,\mathrm{cm^{-3}}$, $N_A = 0.6 \times 10^{16}\,\mathrm{cm^{-3}}$ was chosen. For homogeneous illumination with IB = 1 W/cm2 and the parameters referred to above, the ratio of electron to hole concentration is n0/p0 ≈ 0.24. Figure 2 shows the time evolutions of the charge carrier concentrations and the electric field intensity (Visualization 1, Visualization 2, and Visualization 3). The time evolution of the electron and hole concentrations can be divided into two phases. The first, initial phase begins when the illumination is switched on and finishes once quasi-stationary distributions are obtained (Visualization 1). The duration of this phase can be roughly estimated based on the time constants $\tau_n = 1/(\gamma_n N_A)$, $\tau_p = 1/[\gamma_p(N_D - N_A)]$. In the case under analysis, quasi-stationary distributions settle after about 2 ns.

Fig. 2. Photorefractive response of GaAs, induced by a Gaussian beam of a full width at half maximum of 24 µm and the maximum light intensity of I0 = IB = 1 W/cm2, for the ratio of electron to hole concentration n0/p0 ≈ 0.24 and an external electric field of 34 kV/cm. A higher electric potential was applied to the left electrode (i.e. at x = 0). The simulation results cover: a) the initial phase of the evolution of electron and hole distributions (Visualization 1), b) the second phase of the evolution of electron and hole distributions (Visualization 2), and c) the time evolution of the electric field distribution (Visualization 3).

The second phase of the evolution of charge carriers lasts longer (Visualization 2). In the case under analysis, the time window in which this phase takes place extends to 1 ms. Changes in the distributions of charge carriers are correlated with the establishment of a stationary distribution of the electric field (Visualization 3). A characteristic feature of the photorefractive response shown in Fig. 2 is its nonlocality. The induced electron and hole concentrations, as well as the space charge field, are characterized by asymmetric distributions. The asymmetry of the electric field causes asymmetry of the refractive index distribution and, in consequence, the bending of the trajectory of the self-trapped optical beam; the effect is discussed in [16]. For better clarity, all the distributions shown in the movies have been rescaled by dividing the obtained values by the boundary values. As the boundary values of the carrier concentrations, the values generated under homogeneous illumination were used: $n_0 = 2.89 \times 10^{14}\,\mathrm{cm^{-3}}$, $p_0 = 1.18 \times 10^{15}\,\mathrm{cm^{-3}}$.
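The boundary values quoted above follow from the zero-transport steady state of Eqs. (2a)-(2b); a back-of-the-envelope sketch of that estimate, together with the dielectric relaxation time used as a time scale below, is given here. The photoionization and recombination constants are placeholders, tuned only so that the printed ratio lands near the 0.24 quoted in the text (Table 1 is not reproduced), so the absolute concentrations and the relaxation time should not be read as the paper's values.

```python
q, eps = 1.602e-19, 12.9 * 8.85e-12          # elementary charge; GaAs permittivity
mu_n, mu_p = 0.85, 0.04                      # low-field mobilities, m^2/(V*s)
ND, NA = 2.11e22, 0.6e22                     # trap densities from the text, m^-3
Sn, Sp = 2e-6, 5.3e-5                        # placeholder photoionization terms
gn, gp = 1e-13, 1e-13                        # placeholder recombination rates
IB = 1e4                                     # 1 W/cm^2 homogeneous illumination

# steady state of (2a)-(2b) with transport switched off and N_D^+ ~ N_A:
n0 = Sn * IB * (ND - NA) / (gn * NA)
p0 = Sp * IB * NA / (gp * (ND - NA))
tau_d = eps / (q * (mu_n * n0 + mu_p * p0))  # dielectric relaxation time
print(f"n0/p0 = {n0 / p0:.2f}, tau_d = {tau_d:.2e} s")
```

The structure of the expressions is the point here: the n0/p0 ratio is controlled by the doping through the factor $[(N_D - N_A)/N_A]^2$ and by the ratio of the electron and hole generation-recombination constants, which is why the ratio can be tuned by the trap concentrations alone, as done in Section 3.2.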
Let us see how the photorefractive response changes in a situation when electron transport has a stronger impact on it than in the case discussed above. An indicator of such a situation may be a sufficiently high ratio of electron to hole concentration. As an example, let us analyse the response obtained for the trap concentrations $N_D = 5 \times 10^{16}\,\mathrm{cm^{-3}}$, $N_A = 0.6 \times 10^{16}\,\mathrm{cm^{-3}}$. For such parameters, the ratio of electron to hole concentration (for homogeneous illumination IB = 1 W/cm2) is n0/p0 ≈ 2.07. The results are shown in Fig. 3, Visualization 4, Visualization 5 and Visualization 6. A presentation of the first, initial phase of the evolution of charge carrier distributions has been skipped here, as its course is similar to that shown in Visualization 1.

Fig. 3. Photorefractive response of GaAs, induced by a Gaussian beam of a full width at half maximum of 24 µm and the maximum intensity of I0 = IB = 1 W/cm2, for the ratio of electron to hole concentration n0/p0 ≈ 2.07 and an external electric field of 34 kV/cm. A higher electric potential was applied to the left electrode. The following boundary values were adopted: $n_0 = 8.43 \times 10^{14}\,\mathrm{cm^{-3}}$, $p_0 = 4.06 \times 10^{14}\,\mathrm{cm^{-3}}$, $N_D^+ = 6 \times 10^{15}\,\mathrm{cm^{-3}}$. The simulation results cover: a) the evolution of electron and hole distributions (Visualization 4), b) the time evolution of the donor concentration (Visualization 5), and c) the time evolution of the electric field distribution (Visualization 6).

As illustrated by the movie Visualization 4, during the initial phase of the evolution of electrons and holes, a charge carrier domain separates and travels (electrons and holes in the same direction) towards the electrode of higher potential. The detachment of the domain occurs at the border of the area illuminated by the optical beam. Having travelled a certain distance (300 µm in the case under analysis, i.e. about 12.5 beam widths), the domain stops and begins to oscillate. In order to quantify the dynamics of the phenomenon under consideration, it is worth referring the characteristic times to the dielectric relaxation constant. For the assumed material parameters and an illumination of 1 W/cm2, the dielectric relaxation time has a value of about τd = 1.6 µs. The formation time of the domain is about 70 times longer, while the time it takes for the domain to travel (until it stops and oscillates) is about 1100 τd. The estimated domain speed is around 15 cm/s. The oscillations of the carrier domains are coupled with oscillations of the space charge and an oscillating distribution of the electric field, as shown in Visualization 5 and Visualization 6. Careful observation of the oscillating distributions of carrier concentrations shows that once the domains of electrons and holes reach the point x = 680 µm, they begin to disappear, entailing a change in the space charge and the electric field distribution in this area. At the same time, several dozen micrometers behind (at the point x = 730 µm), new local maxima are formed. The whole process repeats itself many times with a period of about 0.55 ms. It is worth noting here that the space charge generated in the described process takes the form of a dipole domain, one part of which breaks off, moves a certain distance and begins to oscillate. Although in this article we focus on the oscillations of the free-charge domains, it should be remembered that they are coupled to oscillations of a much larger space charge. This fact may turn out to be interesting in the context of optical phenomena, the analysis of which, however, is beyond the scope of this article. Since the films presented herein cover only several periods of oscillation,
Fig. 4 shows the results of a simulation carried out in a time window of 8 ms. The results shown in this figure lead to the finding that the oscillation amplitude does not change within the analyzed time window (the oscillations are undamped).

Fig. 4. Time evolution of the concentration distributions of: a) electrons and b) holes, determined for a time window of 8 ms. The calculations have been performed with the parameters of the simulation shown in Fig. 3.

An important parameter characterising the oscillations of carrier domains is their frequency. In the Gunn effect, domain-induced oscillations of the external current usually have frequencies in the gigahertz range. The phenomenon described in this article has lower frequencies: in the numerical experiments carried out as part of the study, the estimated frequencies of the oscillating domains were below ten kilohertz. Importantly, both the frequency and the amplitude of the oscillations are influenced by the parameters referred to above, namely the value of the external electric field, which determines the degree of nonlinearity of electron transport, the ratio of electron to hole concentration, and the intensity of the optical beam. The impact of these quantities on the observed oscillations is briefly discussed below.

3.1 Impact of light intensity

The value of the light intensity is a parameter on which the concentration of optically excited carriers depends. Since the concentration of carriers is an important parameter in the phenomenon under analysis, we conducted a series of numerical experiments consisting in observing the oscillation frequency and amplitude for various light intensities. All the experiments were carried out at the same values of the external electric field and the ratio of electron to hole concentration. Figure 5 shows how the frequency f and the amplitude A of the oscillations depend on the maximum optical beam intensity I0, whose value varies between 0.5 W/cm2 and 3 W/cm2. The measurement points marked in the chart are determined for an external electric field of 34 kV/cm and n0/p0 ≈ 2.07. The background illumination in each experiment had an intensity equal to the maximum beam intensity, IB = I0.

Fig. 5. Dependence of the frequency a) and amplitude b) of the charge carrier domain oscillations on the light intensity. The simulations have been performed for E0 = 34 kV/cm, n0/p0 ≈ 2.07, and IB = I0.

As can be seen, in the analyzed range of light intensity the frequency of the observed oscillations increases linearly. The amplitude curve, however, has a different shape: it remains almost unchanged above 1 W/cm2, while below this threshold the amplitude increases as the light intensity decreases. As mentioned earlier, in the initial phase of the evolution of the carrier distributions, the formed domain travels towards the electrode of higher potential. The observations show that the velocity of the detached domain decreases along with the intensity of the optical beam. As a consequence, the time after which oscillations are observed increases. For example, for a light intensity of 3 W/cm2, oscillations are observed about 640 µs after the domain detaches, while for a light intensity of 500 mW/cm2, oscillations occur after 4 ms. In both cases, the domains oscillate after traveling a distance of about 300 µm from the optical beam. Detachment of the charge carrier domain is not observed when the light intensity is too low. As an example,
Fig. 6 shows the time evolution of the electron concentration distribution determined for an optical beam intensity of 10 mW/cm2 and 100 mW/cm2. During the first simulation, which lasted 13.5 ms and was carried out for an optical beam of 10 mW/cm2, the creation of a domain was not observed. During the simulation for an optical beam of 100 mW/cm2, a domain was formed; however, the time window was too short to observe oscillations in it. Both simulations were carried out for an electric field of E0 = 34 kV/cm and the same ratio of electron to hole concentration, n0/p0 ≈ 2.07. A characteristic feature that differentiates the two cases is the value of the electron concentration, which in the first case is lower than in the second by as much as an order of magnitude.

Fig. 6. Time evolution of the electron concentration distribution for a light intensity of a) I0 = 10 mW/cm2 and b) I0 = 100 mW/cm2, within a time window of 13.5 ms.

3.2 Impact of the ratio of electron to hole concentration

The ratio of electron to hole concentration indicates how strongly the nonlinear transport of electrons contributes to the photorefractive response. Technically, the dominance of electron transport over hole transport can be obtained by means of proper doping. For a high value of the electric field, such as that used in the previous simulations (34 kV/cm), the nonlinearity of electron transport is manifested in the photorefractive response when the concentration of electrons exceeds that of holes at least about twofold (like n0/p0 ≈ 2.07 in Figs. 3–5). Studies carried out for smaller concentrations of donors show that a slight decline in the value of the n0/p0 ratio causes the oscillations to cease. The threshold at which the nonlinear transport of electrons is manifested in the photorefractive response is n0/p0 = 1.89. Below this threshold, the influence of hole transport on the photorefractive response is strong enough to prevent the formation of charge domains; the photorefractive response for n0/p0 values smaller than 1.89 is similar to that shown in Fig. 2. As an example, Fig. 7(a) shows the time evolution of the electron concentration distribution for n0/p0 = 1.63. For comparison, Fig. 7(b) presents the electron distribution for the threshold value of n0/p0. In this case, the domain forms, then drifts towards the anode and stops after traveling a distance of about 200 µm (approximately 8 beam widths). The oscillations begin, but have a small amplitude and quickly disappear.

Fig. 7. Time evolution of the electron concentration distributions for two values of the electron to hole concentration ratio: a) n0/p0 = 1.63, b) n0/p0 = 1.89. The other simulation parameters are as follows: I0 = 1 W/cm2, E0 = 34 kV/cm, IB = I0. In both cases, the electron concentration distributions have been rescaled by the boundary values: a) $n_0 = 7.47 \times 10^{14}\,\mathrm{cm^{-3}}$, b) $n_0 = 8.05 \times 10^{14}\,\mathrm{cm^{-3}}$.

If the concentration of donors is increased, the n0/p0 ratio goes up and the influence of the electron transport nonlinearity increases. Along with the stronger influence of electron transport, the phenomenon of domain oscillation appears. It is worth mentioning that two types of oscillation can be observed here. One is similar in character to those discussed before (Fig. 3 and Fig. 4): a domain is formed and travels towards the electrode of higher potential, then, after passing a certain distance, starts to oscillate at a typical frequency of several kilohertz.
An example of this type of oscillation is shown in Fig. 8(a), where the time evolution of the electron concentration distribution has been determined for the ratio n0/p0 = 2.17. When the n0/p0 ratio goes beyond the threshold of 2.3, the oscillations change. Figure 8(b) shows oscillations of the other type, determined for n0/p0 = 2.34. As can be seen, in this case the initial phase of the domain drift, preceding the oscillation phase, does not take place: the oscillating movement starts at the point where the optical beam is located. The amplitude has a significantly higher value (comparable to the length of the initial drift of the domain in the cases discussed above), and the frequency clearly falls. A further increase of the n0/p0 ratio drives up the amplitude and reduces the oscillation frequency; in consequence, the boundary phenomena occurring in the vicinity of the electrodes grow in importance.

3.3 Impact of electric field

A change in the intensity of the external electric field directly influences the strength of the nonlinearity of electron transport. Therefore, the electric field intensity is of fundamental importance to the phenomenon under analysis. The results discussed so far have been obtained for relatively high values of the electric field, which induce changes in the refractive index strong enough to affect the propagation of the optical beam. However, the domain oscillation effect can also be obtained at lower electric fields. A necessary condition for the oscillations to take place is an electric field within the range of negative differential mobility of electrons, together with an appropriate value of the electron-hole ratio. For an electric field of 34 kV/cm, the differential electron mobility attains a value of $\mu_{dn} \approx -2.2\,\mathrm{cm^2/(V \cdot s)}$. At such a small value, phenomena originating in nonlinear electron transport manifest themselves only if the n0/p0 ratio ensures a greater concentration of electrons than of holes. When the electric field is reduced to, e.g., 15 kV/cm, the strength of the nonlinearity of electron transport increases by an order of magnitude ($\mu_{dn} \approx -52\,\mathrm{cm^2/(V \cdot s)}$). In this case, domain oscillations are observed even if the transport is not clearly dominated by electrons. Figure 9 shows how the frequency and amplitude of the oscillations depend on the external electric field, whose value varies between 15 kV/cm and 18.5 kV/cm. The charts have been drawn for the maximum intensity of the optical beam I0 = 1 W/cm2 and n0/p0 ≈ 1. The background illumination in each experiment has an intensity equal to the maximum beam intensity, IB = I0.

Fig. 9. Dependence of the frequency a) and amplitude b) of the charge carrier domain oscillations on the external electric field. The simulations have been performed for I0 = 1 W/cm2, n0/p0 ≈ 1, IB = I0.

In a situation where carrier transport is not dominated by either electrons or holes (n0/p0 ≈ 1), the impact of the electron nonlinearity is clearly visible for electric fields of intensity up to 19 kV/cm. For an electric field intensity close to this value, the induced oscillations of the charge carrier domains have a fading character, as shown in Fig. 10(a). Above the 19 kV/cm threshold, the nonlinearity of electron transport is too weak to manifest itself in the formation of charge carrier domains; in this case, the formation of stationary, asymmetric distributions of electrons and holes, similar to those shown in Fig. 2, is observed.
A characteristic feature of the photorefractive response obtained for electric fields below the 19 kV/cm threshold is an increase in the length of the initial drift of the domains, which rises as the electric field decreases. In the analyzed range, it reaches about 300 µm (approximately 12.5 beam widths) for an electric field of 19 kV/cm, and about 600 µm (or 25 beam widths) for an electric field of 15.5 kV/cm (see Fig. 10(b)). At field intensities below 15 kV/cm, the nonlinearity of electron transport is so strong that the domains reach the electrode without performing any oscillations.

Fig. 10. Time evolution of the electron concentration distribution for an external electric field of a) E = 19 kV/cm and b) E = 15.5 kV/cm. The other simulation parameters are as follows: I0 = 1 W/cm2, n0/p0 ≈ 1, IB = I0. In both cases, the electron concentration distributions have been rescaled by the boundary value: $n_0 = 5.94 \times 10^{14}\,\mathrm{cm^{-3}}$.

4. Conclusions

The paper discusses the photorefractive response of a semiconductor material characterised by bipolar conductivity and nonlinear transport of electrons. The main issue to which the analysis has been limited is the phenomenon of the formation of charge carrier domains induced by a localized optical beam, and the effect of their oscillations. It follows from the analyses that the bipolar nature of carrier transport has a significant impact on the occurrence of these phenomena. In highly doped n-type semiconductors, oscillations of the charge carrier domains were not observed, and the nonlinear character of electron transport leads directly to the formation of high-intensity Gunn domains. Therefore, the phenomena described in this article should appear in materials in which we deal with electron-hole competition. It has been found that the light intensity, the electron to hole concentration ratio, and the value of the external electric field are among the basic quantitative parameters which have an impact on the character of the induced oscillations. The oscillations observed during the study had frequencies ranging from several hundred hertz to several kilohertz. While the main aim of this article was to draw attention to the phenomenon of optically induced oscillations of charge carrier domains, the influence of the presented process on the propagating optical beam is also interesting. Since in photorefractive materials nonlinear beam propagation is closely related to the induced electric field distribution, we can expect the described effect to be related to the induction of a wide waveguide channel on one side of the beam (depending on the direction of the electric field). It seems likely that after the formation of the waveguide, the beam would shift and widen significantly. Since this is only a prediction, the issue merits further investigation. Although the phenomena considered here have been analysed using the material parameters of bulk gallium arsenide, similar effects can be expected in materials with limited dimensionality, such as GaAs multiple quantum well structures.

References

1. G. Albanese, J. Kumar, and W. H. Steier, "Investigation of the photorefractive behavior of chrome-doped GaAs by using two-beam coupling," Opt. Lett. 11(10), 650 (1986).
2. A. Partovi and E. M. Garmire, "Band-edge photorefractivity in semiconductors: Theory and experiment," J. Appl. Phys. 69(10), 6885–6898 (1991).
Visualization 1: Simulation results of the initial phase of the evolution of electron and hole distributions in photorefractive GaAs.
Visualization 2: Simulation results of the second phase of the evolution of electron and hole distributions in photorefractive GaAs.
Visualization 3: Simulation results of the time evolution of the electric field distribution in photorefractive GaAs.
Visualization 4: Simulation results of the charge oscillations in photorefractive GaAs.
Visualization 5: Simulation results of the electric field oscillations in photorefractive GaAs.

Equations:

\[v_n(E) = v_s \left[ 1 + \frac{E/E_s - 1}{1 + A (E/E_s)^\beta} \right] \tag{1}\]

\[\frac{\partial n}{\partial t} - \frac{1}{q}\frac{\partial J_n}{\partial x} = [S_n(I + I_B) + \beta_n](N_D - N_D^+) - \gamma_n n N_D^+ \tag{2a}\]

\[\frac{\partial p}{\partial t} + \frac{1}{q}\frac{\partial J_p}{\partial x} = [S_p(I + I_B) + \beta_p] N_D^+ - \gamma_p p (N_D - N_D^+) \tag{2b}\]

\[J_n = q n v_n(E) + q \frac{\partial [D_n(E)\, n]}{\partial x} \tag{2c}\]

\[J_p = q p \mu_p E - q D_p \frac{\partial p}{\partial x} \tag{2d}\]

\[\frac{\partial}{\partial t}\left(n + N_A^- - p - N_D^+\right) = \frac{1}{q}\frac{\partial (J_n + J_p)}{\partial x} \tag{2e}\]

\[\frac{\partial E}{\partial x} = \frac{q}{\varepsilon \varepsilon_0}\left(N_D^+ + p - n - N_A^-\right) \tag{2f}\]

\[I(x, t) = I_0 \exp\left[-(x - L/2)^2 / w^2\right] \theta(t) \tag{3}\]

Material parameters used in calculations:
Cross section for photogeneration from traps: Sn = 1·10⁻¹⁷ cm², Sp = 1·10⁻¹⁶ cm²
Cross section for recombination to traps: γn = 4.7·10⁻⁷ cm³/s, γp = 1.8·10⁻⁷ cm³/s
Linear carrier mobility: μn = 5000 cm²/Vs, μp = 300 cm²/Vs
Carrier thermal emission rate: βn ≈ 0.5 s⁻¹, βp ≈ 1 s⁻¹
Wavelength: λ = 1.06 µm
Dielectric constant: ε = 12.58
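The shape of the velocity-field characteristic in Eq. (1), and the negative differential mobility it produces, can be sketched numerically. In the short R sketch below, the parameters v_s, E_s, A, and β are not given in this excerpt, so the values used are placeholder guesses chosen only so that the low-field mobility is close to the tabulated μn and a negative-differential-mobility region appears; they are illustrative, not the values used in the paper.

```r
# Velocity-field characteristic of Eq. (1); E in V/cm, v in cm/s.
# v_s, E_s, A, and beta are placeholder assumptions, not the paper's values.
v_n <- function(E, v_s = 1e7, E_s = 2e3, A = 0.05, beta = 4) {
  x <- E / E_s
  v_s * (1 + (x - 1) / (1 + A * x^beta))
}

# Numerical differential mobility dv/dE, in cm^2/(V s)
mu_dn <- function(E, h = 1) (v_n(E + h) - v_n(E)) / h

mu_dn(c(15e3, 34e3))   # negative over the 15-34 kV/cm range discussed above
```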
A sound wave has a wavelength of 0.66 meters. If the frequency of the wave is 512 Hz (high C), what is the wave velocity?

Wave Velocity:

Waves transmit information throughout their oscillatory journey. Waves can transmit signals of information (as with Wi-Fi) or energy (think about a whip and why it would hurt to be hit with one). Waves transmit these quantities with a speed that depends on the wavelength and frequency of the wave. This is the wave velocity, and it is written as:

$v = \lambda f$

where $\lambda$ is the wavelength and $f$ is the frequency.

Answer and Explanation:

Here, $\lambda = 0.66\ \mathrm{m}$ is the wavelength and $f = 512\ \mathrm{Hz}$ is the frequency. In order to determine the wave velocity, we multiply the wavelength by the frequency:

$v = \lambda f = (0.66\ \mathrm{m})(512\ \mathrm{Hz}) = 337.92\ \mathrm{m/s}$
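For readers who want to double-check the arithmetic, here is a minimal sketch in R that simply reproduces the multiplication above.

```r
# Wave velocity v = lambda * f for the values given in the question
wavelength <- 0.66       # wavelength in meters
frequency  <- 512        # frequency in hertz
wavelength * frequency   # wave velocity: 337.92 m/s
```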
Related to this Question

If a wave has a frequency of 10 Hz and a wavelength of 100 m, what is this wave's velocity?
What is the speed of a sound wave if its frequency is 160 Hz and its wavelength is 2.2 m?
A sound wave of wavelength 0.808 m and velocity 333 m/s is produced for 0.616 s. What is the frequency of the wave?
A wave of wavelength 52 cm travels with frequency 400 Hz. What is the speed of the wave?
What is the speed of a wave whose frequency is 9.00 Hz and wavelength is 0.654 m?
If the velocity of a wave is 0.15 m/s and the frequency is 20 Hz, what is its wavelength?
A sound wave of wavelength 0.90 m and a velocity of 330 m/s is produced for 2.0 seconds. What is the frequency of the wave?
An ocean wave has a wavelength of 10 m and a frequency of 4.0 Hz. What is the velocity of the wave?
A transverse wave has a wavelength of 0.43 m and a frequency of 14 Hz. What is the wave speed?
If the wave speed is 6 m/s and the wavelength is 1 m, what is the wave's frequency?
If a wave had a speed of 100 m/s and a wavelength of 200 m, what is the frequency of the wave?
What is the wavelength of a sound wave with a frequency of 440 Hz if the speed of sound in air is 340 meters per second?
The velocity of sound in a certain medium is 1000 m/s. If a sound wave has a frequency of 500 Hz, what would the wavelength be?
What is the wavelength of a sound wave with a frequency of 440 Hz if the speed of sound in air is 340 m/s?
A sound with a frequency of 261.6 Hz travels through water at a speed of 1435 m/s. What is the wavelength of the wave?
If the velocity of a wave is 0.15 m/s, and the frequency is 20 Hz, what is the wavelength?
The speed at which a wave propagates down a string is 300 m/s. If the frequency of this wave is 150 Hertz, what is the wavelength of this wave?
A water wave has frequency 0.40 Hz and wavelength 2.0 m. What is its speed?
If a wave is coming in with a velocity of 10 m/s and it has a wavelength of 5 m, what is its frequency?
What is the wavelength in meters of a sound wave in air having a frequency of 440 Hz?
When the speed of sound is 340 m/s, what is the wavelength (in m) of a 750 Hz sound?
What is the speed of a periodic wave disturbance that has a frequency of 2.88 Hz and a wavelength of 0.606 m?
What is the speed of a periodic wave disturbance that has a frequency of 2.58 Hz and a wavelength of 0.33 m?
A periodic wave with wavelength λ = 1 m has frequency f = 3 Hz. What is the wave's speed?
What is the frequency of a wave having wavelength 20 cm and velocity 0.5 m/s?
How fast does a 200-Hz sound wave with a wavelength of 1.7 m travel?
If the speed of a longitudinal wave is 340 meters/sec, and the frequency is 500 Hz, what is the wavelength of the wave?
A certain AM radio wave has a frequency of 550 kHz (kilohertz) and travels with a speed of 3.0 × 10^8 m/s. What is the wavelength of such a wave?
A sound wave has a frequency of 1995 Hz. What is the distance between crests or compressions of the wave? (Take the speed of sound to be 344 m/s.)
A sound wave has a frequency of 2003 Hz. What is the distance between crests or compressions of the wave? (Take the speed of sound to be 344 m/s.)
A sound wave has a frequency of 1996 Hz. What is the distance between crests or compressions of the wave? (Take the speed of sound to be 344 m/s.) (in m)
If the wavelength of the sound wave is 1.3 meters, what is the frequency?
What is the speed of a wave that has a wavelength of 3 cm and a frequency of 200 Hz? How far would that wave travel in 60 seconds?
A certain ocean wave has a wavelength of 0.3 m and a frequency of 5 Hertz. What is the speed of this wave?
A sound wave moving through water has a frequency of 256 Hz and a wavelength of 5.77 m. What is the speed of sound in water?
Electromagnetic waves and sound waves can have the same frequency. (a) What is the wavelength of a 1.00-kHz electromagnetic wave? (b) What is the wavelength of a 1.00-kHz sound wave? (The speed of s...
Wave amplitude is related to what? a. wave velocity b. frequency c. energy d. wavelength
A sound wave in a solid has a frequency of 15.0 kHz and a wavelength of 0.333 m. What would be the wave speed, and how much faster is this speed than the speed of sound in air?
A sound wave in a solid has a frequency of 15.0 kHz and a wavelength of 0.333 m. (a) What would be the wave speed, and how much faster is this speed than the speed of sound in air?
What is the speed of a water wave of frequency 2 Hz and wavelength 1.5 m?
Waves of frequency 200 Hz and wavelength 36 cm: the velocity of the wave is _____ m/s.
A sound wave with a frequency of 440 Hz has a wavelength of 0.77 meters. Find the speed of sound in m/s.
A wave has an amplitude of 0.25 m, a wavelength of 0.68 m, and a frequency of 3.2 Hz. Find its velocity.
A sound wave has a frequency of 1640 Hz. The air temperature is 46 °F. a. What is the speed of the sound wave? b. What is the wavelength of the sound wave?
What is the wavelength of each of these sound waves? For sound waves with a speed of 344 m/s and frequencies of (a) 20 Hz and (b) 20 kHz.
A wave has a speed of 240 m/s and a wavelength of 3.2 m. What are a) the frequency and b) the period of the wave?
Suppose that a sound has a frequency of 110 waves/s and a wavelength of 3.1 meters. Calculate the wave speed.
A wave with a frequency of 90.0 Hz travels through steel with a wavelength of 3.46 m. What is the speed of this wave?
If a wave has a frequency of 24 Hz and a wavelength of 15 m, how do you calculate the speed of the wave?
A water wave has a frequency of 9.0 Hz and a wavelength of 5.0 m. a) What is the period (in seconds) of these waves? b) What is the wave velocity in m/s?
What is the frequency of a sound wave in air at 20 °C that has a wavelength of 0.4 m?
What frequency sound has a 0.10-m wavelength when the speed of sound is 340 m/s?
What is the wavelength of a wave that has a frequency of 1 Hz and travels at 300,000 km/s?
Given the speed of sound is 331 m/s in air, and a sound wavelength of 7.11 m, what is the frequency of the sound in Hz?
If the wavelength of a sound wave in an unknown material is 20 centimeters and the frequency is 1700 Hz, what is the speed of the sound wave?
A 60-vibration-per-second wave travels 30 meters in one second. What are its frequency and wave speed?
A 501 Hz sound wave travels through a substance. The wavelength of the sound wave is measured to be 2.1 meters. What is the speed of sound in the substance?
A wave, having a wavelength of 4.0 meters and an amplitude of 25 meters, travels a distance of 24 meters in 8.0 seconds. What are the frequency and the period of the wave?
A sound wave of frequency 500 hertz covers a distance of 1000 m in 5 s. What is the wavelength?
A harmonic traveling wave is described by the following equation: a) What is the wavelength of the wave? b) What is the frequency in Hz? c) What is the wave speed? d) Which way is the wave travelling?
A water wave oscillates up and down three times each second. The distance between wave crests is 2 m. a. What is its frequency? b. What is its wavelength? c. What is its wave speed?
The speed of sound in a solid medium is 15 times greater than that in air. If the frequency of a wave in the solid is 20 kHz, then what is the wavelength?
1. Calculate the speed of waves in water that are 0.4 m apart and have a frequency of 2 Hz. 2. Sound waves travel at approximately 340 m/s. a. What is the wavelength of a sound with a frequency of 20 Hz?
A periodic wave with frequency f = 4 Hz has speed VF = 12 m/s. What is the wave's wavelength?
If a radio wave has speed 3.00 × 10^8 m/s and frequency 99.7 MHz, what is its wavelength?
1. An electromagnetic wave has a wavelength of 6.2 × 10^14 m. What is the frequency of the wave? A. 2.0 × 10^6 Hz B. 4.8 × 10^-7 Hz C. 1.8 × 10^24 Hz D. 5.3 × 10^-2...
Suppose the water waves have a velocity of 1.2 m/s and wavelength of 2.4 m. What is the frequency of the waves?
a) What is the amplitude of this wave? b) What is the frequency of this wave? c) What is the wavelength of this wave?
The wavelength of an electromagnetic wave is measured to be 600 m. What is the frequency of the wave?
The wavelength of an electromagnetic wave is measured to be 1.80 × 10^-7 m. (a) What is the frequency of the wave?
How do you calculate the wave speed of waves that are 0.15 m apart and have a frequency of 2 Hz?
Wavelength is given, frequency is given. frequency × wavelength = wave speed. Thanks a whole lot
How long is the sound wave if the frequency is 320 Hz? (Note: Speed of sound = 340 m/s.)
What is the speed of sound in a medium where a 100-kHz frequency produces a 5.96-cm wavelength?
1. A sound wave with a frequency of 440 Hz has a wavelength of 0.77 meters. Find the speed of sound in m/s. 2. Ocean waves are measured to be traveling at 1.5 m/s. If their wavelength is 2.0 meters, ...
Given that the speed of light is 2.998 × 10^8 m/s, what is the frequency of a wave that has a wavelength of 3.55 × 10^-8 meters?
What is the wavelength of a 40 Hz wave traveling at 500 m/s?
A sound wave in the air has a frequency of 240 Hz and travels with a speed of 334.7 m/s. How far apart are the wave crests (compressions)?
A sound wave in the air has a frequency of 276 Hz and travels with a speed of 341 m/s. How far apart are the wave crests (compressions)?
A sound wave in air has a frequency of 262 Hz and travels with a speed of 349 m/s. How far apart are the wave crests (compressions)?
What is the speed of an electromagnetic wave that has a frequency of 7.8 × 10^6 Hz?
A sound wave of frequency 500 Hz travels from air into water. The speed of sound in air is 330 m/s and in water 1490 m/s. What is the wavelength of the wave in (a) air and (b) water?
The wavelength of an electromagnetic wave is measured to be 0.059 m. (a) What is the frequency of the wave in kHz? (b) What type of EM wave is it?
A 10 kHz sound wave travels in air at T = 100 degrees F. What are its velocity and wavelength?
A sound wave coming from a tuba has a wavelength of 1.50 m and travels to your ears at a speed of 345 m/s. What is the frequency of the sound you hear?
A sound wave has a frequency of 631 Hz and a wavelength of 0.54 m in air. What is the temperature of the air? Assume the velocity of sound at 0 °C is 334 m/s.
A 524-Hz longitudinal wave in air has a speed of 345 m/s. What is the wavelength?
A musical tone, sounded on a piano, has a frequency of 820 Hz and a wavelength in air of 0.400 m. What is the wave speed?
What is the frequency of a wave of wavelength 5.5 cm traveling at a speed of 1.6 m/s?
What is the wavelength of a 40 Hz wave traveling at 500 m/s? a. 0.08 m b. 12.5 m c. 80 m d. 20,000 km
A periodic wave with wavelength λ = 0.5 m has speed v = 2 m/s. What is the wave's frequency?
What is the wavelength of an electromagnetic wave that has a frequency of 3000 Hertz? A. 100 m B. 1,000 m C. 100,000 m D. 10,000,000 m
a) At 20 °C, what is the frequency of a sound wave in air with a wavelength of 17.0 cm? b) What is the frequency of an electromagnetic wave with a wavelength of 17.0 cm? c) What would be the wavelength of a sound wave in water with the same frequency?
A source traveling at 100 m/s emits sound waves of frequency 900 Hz. What is the velocity of the sound?
The wavelength of an electromagnetic wave is measured to be 300 m. (a) What is the frequency of the wave? (b) What type of EM wave is it?
A 488 Hz sound wave travels through a substance. The wavelength of the sound wave is measured to be 2.9 meters. What is the speed of sound in the substance?
A water wave has frequency 0.40 Hz and wavelength 2.0 m. Its speed is ...
Basic theory and concepts

Richard P. Shefferson, Shun Kurokawa, and Johan Ehrlén

INTRODUCTION TO PACKAGE LEFKO3

R package lefko3 is a complete environment for the construction and analysis of matrix projection models (MPMs) and integral projection models (IPMs) (Shefferson, Kurokawa, and Ehrlén 2021). It was originally developed to estimate and analyze historical size-classified matrix projection models (hMPMs), which are matrices designed to include not only current state but also past history in terms of previous states, where stages are defined in terms of size. Such matrices are large, typically having dimensions higher than those of their standard, ahistorical counterparts (the latter will be hereafter referred to as ahistorical MPMs, or ahMPMs, while the acronym MPM will be used to refer to all matrix projection models, whether historical or not). As the package has developed, we have broadened its scope to include age-classified, age-by-stage, and stage-classified MPMs and IPMs, with function-based MPMs and IPMs capable of handling a large range of possible response distributions. We have also added the ability to project matrices and IPMs in customized, complex ways. Through it all, we have prioritized the development of core algorithms and methods to construct these models and projections quickly, efficiently, and at least relatively painlessly. Finally, we have incorporated quality control routines in the core functions of all steps of the workflow, giving users the opportunity to assess where weaknesses and errors lie.

Package lefko3 introduces a complete suite of functions covering the MPM/IPM workflow, from dataset management to model construction to the analysis of population dynamics. Dataset management functions standardize demographic datasets from the dominant formats into a format that facilitates matrix parameterization while accounting for individual identity and other parameters. Demographic vital rate models may be estimated using demographic datasets with this standardized format, and these models take the form of generalized linear or mixed linear models using a variety of response distributions. Matrix estimation functions produce all necessary matrices from a single dataset, including all times, patches, and populations in a single output object, and do so quickly through core binaries engineered for speed and accuracy. Deterministic and stochastic analysis functions allow sensitivity analysis, elasticity analysis, LTRE analysis, etc. Projection functions allow density-dependent and density-independent projections, with or without enforced substochasticity, as well as stochastic, cyclical, and replicated projections for even quite complex situations.

This vignette was written as a basic introduction to the concepts and methods underlying lefko3. The target audience is everyone from beginners with little knowledge of population ecology and even less of R, to experienced ecologists with advanced working knowledge of R and the analysis of population dynamics. In addition to this vignette, which covers the concepts, background, and workflow underlying the package, we have created an online book entitled lefko3: a gentle introduction, and a series of vignettes focused on specific practical examples using the package to run suites of comprehensive analyses.
The book covers all major uses of the package and contains lots of basic information and examples, and the vignettes show the development of raw and function-based MPMs, IPMs, and age-by-stage MPMs using two datasets included in lefko3: a dataset from a Swedish population of the perennial herb Lathyrus vernus, and a dataset from an American population of the terrestrial orchid Cypripedium candidum. In addition, we have created a number of long-format vignettes and video tutorials. All of these materials are available on the projects page of the Shefferson lab website.

Life history models and projection matrices

Matrix projection models (MPMs) are representations of the dynamics of a population across all life history stages deemed relevant, across the most relevant time interval (typically one year, but sometimes different, and assumed to be consistent within each analysis). They require a complete model of the organism's life history prior to construction, and this model must explicitly describe all life stages and all life history transitions to be modeled. Stages are mutually exclusive, meaning that an individual can only be in a single stage at a given time. Each stage is represented in the matrix by a single column and a single row. Each transition takes exactly one full time step, which needs to be defined consistently and should be equivalent to the approximate amount of time between consecutive monitoring occasions. Matrix elements (\(a_{kj}\)) show either the probability of transition for an individual in stage \(j\) at occasion t (along the columns) to stage \(k\) at occasion t+1 (along the rows), or the mean rate of production of new recruits into stage \(k\) at occasion t+1 (along the rows) by individuals in reproductive stage \(j\) in occasion t (along the columns). Conceptually, each individual is in a particular stage at the instant of monitoring or observation, and then either dies or completes a transition in the interval between that occasion's observation and the next occasion's observation. Death is not an explicit life stage and so is not modeled as such, instead becoming a potential endpoint of each transition.

Stages are generally defined under the assumption that they are unique moments in an organism's life. This uniqueness is assumed to extend to the demography of the organism during its time within that stage. In some cases, stages are defined as developmental stages, as happens with insect instars. In other cases, stages are defined almost purely on the basis of size, as occurs with perennial herbs that may exhibit a different number of stems, leaves, and flowers in each growing season. In still other cases, stages may be defined via other characteristics. For example, the vegetative dormancy stage of some perennial herbs is defined by a lack of aboveground size, and so is characteristically not observable (Shefferson et al. 2001). Age may also fall into a stage's definition, as may maturity status, observability, etc.

The timing of monitoring relative to the reproductive season impacts the structure of life history models. Life history models are typically categorized as either pre-breeding or post-breeding. Here, "breeding" refers to the production of offspring, and so a pre-breeding model assumes that monitoring is conducted immediately before the new recruits are born, while a post-breeding model assumes that the monitoring is conducted just after new recruits are born (in both cases, breeding is assumed to be seasonal).
In a pre-breeding model, fecundity equals the production of newborns multiplied by the survival of newborns to age 1, since fecundity must take place over a full time step and the timing of the census misses both the birth event itself and the time in which the organism is a newborn (Kendall et al. 2019). In a post-breeding model, fecundity equals the survival of the parent from the stage/age preceding reproduction to reproduction itself, multiplied by the number of newborns produced (Kendall et al. 2019). In monitoring studies of plant populations, the typical life history model is a pre-breeding model, in which fecundity is estimated as the production of seeds in a given year multiplied by their over-winter survival probability as seeds and their germination probability in the following year. If the life history includes seed dormancy, then it is modeled by multiplying seed production by seed survival (which is given as the probability of maintaining seed viability from one year to the next) in the first instance, and then as seed survival for seeds already produced and dormant (seed survival may be modeled to decrease with increasing age of dormant seed). Added complexity can arise if there are multiple fecundity pathways, for example when clonal reproduction is also possible, or if multiple propagule stages exist, or if recruitment can be to seeds that immediately germinate as well as to seeds that are dormant for one or more years. We urge users to be careful with this step, as properly defining the life history model has important implications for all analyses of population dynamics (Kendall et al. 2019).

Figure 1 is a simple example of a stage-based life history model (technically a life cycle graph) and its associated Lefkovitch matrix for a terrestrial orchid species, Cypripedium candidum (Shefferson et al. 2001). Here, we show each stage as a node, and each transition as a uni-directional arrow (a). The rates and probabilities are shown as mathematical symbols in (b), with \(S_{kj}\) denoting survival-transition probability from stage \(j\) in occasion t to stage \(k\) in occasion t+1, and \(F_{kj}\) denoting the fecundity of reproductive stage \(j\) into recruit stage \(k\) in this life history.

Figure 1.1. Simple life history model (a) and ahistorical MPM (b) for Cypripedium candidum, a North American herbaceous plant species. Here, S is the dormant seed stage, J is the seedling stage, D is adult vegetative dormancy, V is adult vegetative sprouting, and F is flowering.

Ahistorical vs. historical matrix models

Figure 1b is an example of an ahistorical MPM (ahMPM), which is a matrix projection model in which the future stage of an individual is dependent only on its current stage, and not on previous stages. In other words, individual history is not incorporated into ahMPMs. It may seem odd that individual history is not incorporated into the matrices given that demographic datasets are composed of records of individual histories spanning several or even many observation events. However, construction of a typical ahMPM breaks up these individual histories into pairs of consecutive stages across time, with each pair treated as independent of every other pair of consecutive stages. For example, if an individual is in stage A in occasion 1, stage B in occasion 2, stage E in occasion 3, and dead in occasion 4, then its individual history is broken up into A-B, B-E, and E-Dead, with each pair assumed to be independent. The resulting projection matrix would be the same if these transitions originated from different individuals or the same one - their order and relationships do not have any impacts in ahMPM construction.
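The decomposition into stage pairs is easy to see in code. Here is a minimal sketch in base R, using the hypothetical individual just described:

```r
# One individual's history, decomposed into the independent consecutive-stage
# pairs used to parameterize an ahMPM
history <- c("A", "B", "E", "Dead")
data.frame(occasion_t  = head(history, -1),
           occasion_t1 = tail(history, -1))
#   occasion_t occasion_t1
# 1          A           B
# 2          B           E
# 3          E        Dead
```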
The matrix shown in figure 1.1 can be used to project population dynamics by predicting the future numbers of individuals in each stage. For this example, we can project forward by multiplying the projection matrix by a column vector of the numbers of individuals in each stage, given in the same order as the order of stages corresponding to the rows and columns of the matrix, in a particular time, as in

\[\begin{equation} \tag{1} \left[\begin{array} {rrrrr} n _{2,S} \\ n _{2,J} \\ n _{2,V} \\ n _{2,D} \\ n _{2,F} \end{array}\right] = \left[\begin{array} {rrrrr} S _{SS} & 0 & 0 & 0 & F _{SF} \\ S _{JS} & S _{JJ} & 0 & 0 & F _{JF} \\ 0 & S _{VJ} & S _{VV} & S _{VD} & S _{VF} \\ 0 & S _{DJ} & S _{DV} & S _{DD} & S _{DF} \\ 0 & 0 & S _{FV} & S _{FD} & S _{FF} \end{array}\right] \left[\begin{array} {rrrrr} n _{1,S} \\ n _{1,J} \\ n _{1,V} \\ n _{1,D} \\ n _{1,F} \end{array}\right] \end{equation}\]

where \(n _{1,S}\) is the number of individuals in stage \(S\) in occasion 1. The total population size in any given occasion is the sum of the numbers of individuals in all stages in that occasion. The actual discrete population growth rate is calculated as the ratio of total population size in occasion 2 to the total population size in occasion 1, and the asymptotic growth rate \(\lambda\) can be estimated as the dominant eigenvalue of this matrix.
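To make this concrete, here is a minimal sketch in base R of the projection in equation (1) and of \(\lambda\) as the dominant eigenvalue. The matrix entries and starting vector below are invented for illustration and are not estimates from the Cypripedium candidum dataset.

```r
# Invented ahistorical MPM with stages ordered as (S, J, V, D, F), following
# the element layout of Figure 1.1; fecundities occupy the top right.
A <- matrix(c(0.20, 0,    0,    0,    0.45,
              0.08, 0.30, 0,    0,    0.60,
              0,    0.25, 0.40, 0.50, 0.35,
              0,    0.20, 0.30, 0.25, 0.20,
              0,    0,    0.20, 0.15, 0.30),
            nrow = 5, byrow = TRUE,
            dimnames = list(c("S", "J", "V", "D", "F"),
                            c("S", "J", "V", "D", "F")))
n1 <- c(S = 100, J = 40, V = 60, D = 20, F = 30)

n2 <- A %*% n1           # the projection in equation (1)
sum(n2) / sum(n1)        # realized population growth over one time step
Re(eigen(A)$values[1])   # asymptotic growth rate lambda
```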
The independence of consecutive stage-pairs reflects a central assumption in ahMPM analysis: the stage of an individual in the next occasion is influenced only by its current stage. Conceptually, if an organism's stage in the next occasion is entirely determined by its current stage, then its previous states do not influence these transitions. Thus, standard ahMPMs are two dimensional and reflect only the current and next immediate stage of individuals, given by the columns and rows, respectively. This is ultimately an extension of the iid assumption in statistics - that the states of individuals are independent and originate from identically-distributed random variables.

MPMs have been estimated since the 1940s (Caswell 2001). We have never done a meta-analysis of all of these studies, but nonetheless it is safe to say that studies considering individual history are rare. The typical MPM study uses ahMPMs, and so assumes independence of stage transitions across time even from the same individual. In fact, at the time of writing, we are aware of only five examples of studies breaking these assumptions and using a historical approach, meaning that they incorporated some degree of individual history into matrix estimation and analysis (Ehrlén 2000; Shefferson, Warren II, and Pulliam 2014; Shefferson, Mizuta, and Hutchings 2017; Shefferson et al. 2018; deVries and Caswell 2018).

The historical MPM (hMPM) is an extension of the matrix projection model that incorporates information on one previous occasion into the determination of vital rates. Thus, the expected survival-transition probability of an individual in stage \(j\) at occasion t to stage \(k\) at occasion t+1 depends not only on its stage in occasion t but also on its stage in occasion t-1. Population ecologists considering this problem analytically might be inclined to add an extra dimension to the matrix to deal with this, thus creating a 3d array or cube. However, this is mathematically intractable, with many analyses becoming impossible (deVries and Caswell 2018). Instead, we utilize the approach developed by Ehrlén (2000), in which rows and columns represent life history stages paired in consecutive occasions. Thus, columns now represent the From pair of stages (stage in occasions t and t-1), and rows now represent the To pair of stages (stage in occasions t and t+1), as in Figure 1.2. This model is equivalent to the second-order model proposed by deVries and Caswell (2018), although the latter incorporates more history into transitions involving individuals just born (we explain the difference further in this vignette).

Figure 1.2. Historical MPM for Cypripedium candidum, a North American herbaceous plant.

Here, we can refer to matrix elements as \(a_{kjl}\), where the element represents the rate at which individuals transition to stage \(k\) in occasion t+1 after having been stage \(j\) in occasion t and stage \(l\) in occasion t-1. If \(n_{kjl}\) is the number of individuals making this transition, \(n_{.jl}\) is the total number of individuals in stage \(j\) in occasion t and stage \(l\) in occasion t-1 regardless of stage or even status as alive or dead in occasion t+1, \(m\) is the number of stages in the life history model, and \(d\) is the number of stages plus death in the life history model (so \(d=m+1\)), then we have

\[\begin{equation} \tag{2} a _{kjl} = \frac{n _{kjl}}{n _{.jl}} = \frac{n _{kjl}}{\sum_{i=1}^{d} n _{ijl}} \end{equation}\]

These values can be used to determine the associated elements of matrices in ahistorical MPMs, as follows:

\[\begin{equation} \tag{3} a _{kj} = \frac{\sum_{l=1}^{m} n _{kjl}}{\sum_{l=1}^{m} \sum_{i=1}^{d} n _{ijl}} \end{equation}\]

Note that although one can use the actual numbers of individuals making historical transitions to estimate ahistorical MPMs, one cannot use the matrix elements in a historical MPM to calculate the associated ahistorical MPM elements (and vice versa). Historical transitions must be weighted by the numbers of individuals corresponding to each transition from occasion t to t+1 that were in each previous stage to produce the correct ahistorical matrix element. Historical matrix elements are equal only to the ratios of numbers of individuals in these categories, rather than to the numbers of individuals themselves, and contain no information allowing the inference of the latter.

Historical MPMs can be projected forward in the same way that ahMPMs can. Here we see an example of a projection going forward one time step, using the hMPM shown in figure 1.2.
\[\begin{equation} \tag{4} \tiny \left[\begin{array} {rrrrrrrrrrrrrrrrr} n _{2,SS} \\ n _{2,JS} \\ n _{2,JJ} \\ n _{2,VJ} \\ n _{2,DJ} \\ n _{2,FJ} \\ n _{2,VV} \\ n _{2,DV} \\ n _{2,FV} \\ n _{2,VD} \\ n _{2,DD} \\ n _{2,FD} \\ n _{2,VF} \\ n _{2,DF} \\ n _{2,FF} \\ n _{2,SF} \\ n _{2,JF} \end{array}\right] = \left[\begin{array} {rrrrrrrrrrrrrrrrr} S _{SSS} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & F _{SSF} & 0 \\ S _{JSS} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & F _{JSF} & 0 \\ 0 & S _{JJS} & S _{JJJ} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & F _{JJF} \\ 0 & S _{VJS} & S _{VJJ} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & F _{VJF} \\ 0 & S _{DJS} & S _{DJJ} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & F _{DJF} \\ 0 & S _{FJS} & S _{FJJ} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & F _{FJF} \\ 0 & 0 & 0 & S _{VVJ} & 0 & 0 & S _{VVV} & 0 & 0 & S _{VVD} & 0 & 0 & S _{VVF} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & S _{DVJ} & 0 & 0 & S _{DVV} & 0 & 0 & S _{DVD} & 0 & 0 & S _{DVF} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & S _{FVJ} & 0 & 0 & S _{FVV} & 0 & 0 & S _{FVD} & 0 & 0 & S _{FVF} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & S _{VDJ} & 0 & 0 & S _{VDV} & 0 & 0 & S _{VDD} & 0 & 0 & S _{VDF} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & S _{DDJ} & 0 & 0 & S _{DDV} & 0 & 0 & S _{DDD} & 0 & 0 & S _{DDF} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & S _{FDJ} & 0 & 0 & S _{FDV} & 0 & 0 & S _{FDD} & 0 & 0 & S _{FDF} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & S _{VFV} & 0 & 0 & S _{VFD} & 0 & 0 & S _{VFF} & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & S _{DFV} & 0 & 0 & S _{DFD} & 0 & 0 & S _{DFF} & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & S _{FFV} & 0 & 0 & S _{FFD} & 0 & 0 & S _{FFF} & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & F _{SFJ} & 0 & 0 & F _{SFV} & 0 & 0 & F _{SFD} & 0 & 0 & F _{SFF} & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & F _{JFJ} & 0 & 0 & F _{JFV} & 0 & 0 & F _{JFD} & 0 & 0 & F _{JFF} & 0 & 0 \\ \end{array}\right] \left[\begin{array} {rrrrrrrrrrrrrrrrr} n _{1,SS} \\ n _{1,JS} \\ n _{1,JJ} \\ n _{1,VJ} \\ n _{1,DJ} \\ n _{1,FJ} \\ n _{1,VV} \\ n _{1,DV} \\ n _{1,FV} \\ n _{1,VD} \\ n _{1,DD} \\ n _{1,FD} \\ n _{1,VF} \\ n _{1,DF} \\ n _{1,FF} \\ n _{1,SF} \\ n _{1,JF} \end{array}\right] \end{equation}\] Here, \(n _{1,SS}\) is the number of individuals that were in stage \(S\) in both occasions 0 and 1, and \(n _{2,DF}\) is the number of individuals that were in stage \(D\) in occasion 2 and stage \(F\) in occasion 1. Historical MPMs normally require a much larger number of elements to be parameterized than ahMPMs do. Figure 1.2 illustrates this issue, but the function-based matrix analysis in our Cypripedium candidum vignette provides a more realistic example. In that case study, we use a life history with 54 life stages, of which the first is a dormant seed stage, the next four are immature stages, the sixth is adult vegetative dormancy, the next 24 stages are size-classified non-flowering adults, and the final 24 stages are size-classified flowering adults. A life history with 54 stages yields an ahistorical matrix with 54 columns and 54 rows, and so 2,916 elements. A full historical matrix should have \(54 \times 54 = 2,916\) columns and 2,916 rows, consisting of 8,503,056 elements, because the column and row stages represent pairs of stages. In an ahistorical matrix, the only true zeros will be those elements corresponding to biologically impossible transitions, or, in the raw matrix case, those elements without any individuals taking the respective transition in a given time step. 
However, most elements in a historical MPM are structural zeros. This happens because transition elements in hMPMs are only estimable if stage at occasion t is equal in the column and row stage pairs. For example, the transition probability between stage A in occasion t-1 and stage B in occasion t (column stage), to stage C in occasion t and stage D in occasion t+1 (row stage) equals 0, because an organism cannot be in both stages B and C in occasion t. Increasing the number of stages in a life history model causes these structural zeros to increase at a faster rate than the rate at which the number of truly estimable elements increases. In fact, if there are \(m\) stages in a life history, yielding \(m^2\) elements in an ahistorical matrix, then although there will be \(m^4\) elements in the historical matrix, only \(m^3\) will be potentially estimable while \((m-1)m^3\) are structural zeros (we say potentially because some of the logically possible transitions may still be biologically impossible, or may have no data to yield values other than 0). Thus, in the Cypripedium candidum example, all 2,916 elements of the ahMPM are potentially estimable (although some are biologically impossible), while only 157,464 of the 8,503,056 elements of the associated hMPM are potentially estimable and the rest are structural zeros.

Raw vs. function-based matrix models, including integral projection models

Matrix projection models, whether historical or ahistorical, can also be categorized as either raw or function-based. The oldest and most common in the literature is the raw MPM, in which transition probabilities in the matrix are estimated by counting all of the individuals alive in a particular stage at occasion t, and dividing that number into the number of individuals from that set that are still alive in each possible stage in occasion t+1 (equations 2 and 3). For example, if 100 individuals are alive in stage D in year 2010, and 20 of these are alive in stage D in year 2011, as are a further 40 in stage V and a further 25 in stage F, then the associated transition probabilities are 0.20, 0.40, and 0.25, respectively. The overall survival probability for stage D is the sum of these transitions, or 0.85.

Methods for estimating fecundity in raw matrices vary. In the Cypripedium candidum vignette, we used counts of the number of fruits produced by each individual in a given year, and multiplied by the mean number of seeds per fruit and the mean germination probability in the next year. This study was conducted as a pre-breeding census - fecundity was estimated using a count of the seeds or other propagules that the plant produces (or a proxy thereof, such as the number of fruits multiplied by the mean number of seeds per fruit) multiplied by the probability of that propagule surviving to the next year in order to make sure that fecundity reflects demographic processes across a full time step (Kendall et al. 2019). In other systems in which offspring may be observed and tracked, counts of actual recruits may be possible, such as in studies of nesting birds, and as in Lathyrus vernus, an herbaceous perennial plant that is the subject of some vignettes included in this package.
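The worked numbers above, and the element counts for the 54-stage model, are easy to verify in base R; the fecundity values in the middle of the sketch are invented for illustration.

```r
# Raw survival-transition probabilities for the stage-D example: of 100
# stage-D individuals in 2010, 20 were in D, 40 in V, and 25 in F in 2011,
# and the remaining 15 died.
fates <- c(D = 20, V = 40, F = 25)
fates / 100          # transition probabilities: 0.20, 0.40, 0.25
sum(fates) / 100     # overall stage-D survival: 0.85

# A raw pre-breeding fecundity entry of the kind described above, with
# invented values for fruit count, seeds per fruit, and germination
3 * 50 * 0.10        # recruits per flowering plant in the next year

# Element bookkeeping for the 54-stage Cypripedium candidum example
m <- 54
c(ahistorical = m^2, historical = m^4,
  estimable = m^3, structural_zeros = (m - 1) * m^3)
```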
In function-based MPMs, matrix elements are populated by kernels that link together functions representing key demographic processes governing each transition. Typically, demographic datasets are analyzed via linear models to estimate demographic parameters such as survival probability, the probability of becoming a certain size assuming survival, and fecundity. These linear models are developed using nested subsets of the demographic data used in a study in order to estimate conditional probabilities reflecting all of the demographic processes that must occur over the span of a single time step. Matrix elements are then estimated as products of responses from these linear models set to particular inputs corresponding to stage at occasion t and occasion t+1, as well as any other parameters governing the construction of the matrix. Function-based MPMs are more recent inventions than raw MPMs, and their strengths are making them increasingly common in the literature.

Easterling, Ellner, and Dixon (2000) proposed a method of analyzing population dynamics called the integral projection model (IPM), which modeled the life history of an organism using a continuous size metric modeled on a Gaussian distribution. At least theoretically, the aim was to produce a model of population dynamics that used a continuous, function-based kernel rather than a discrete stage classification. In practice, however, size was and is discretized into many fine-scale size classes. The result is that IPMs are actually high-resolution function-based MPMs, breaking up a life history into many fine-scale size classes using a continuous measure of size and then estimating survival-transitions and fecundity according to these fine-scale size classes (Ellner and Rees 2006). Package lefko3 easily handles the creation and analysis of IPMs. We include a vignette in lefko3 showing an example, and also devote a chapter of the free e-book lefko3: a gentle introduction to the topic.

The example from Figures 1.1 and 1.2 may help to illustrate the procedure of creating a function-based MPM. In Cypripedium candidum, the simple life history model shown in Figure 1 includes one adult stage that is not observable (D - vegetative dormancy), one adult stage that is observable but not reproductive (V - vegetative sprouting), and one adult stage that is both observable and reproductive (F - flowering). If \(\sigma _{.jl}\) is the probability of survival from stage \(j\) in occasion t and stage \(l\) in occasion t-1 to any stage in occasion t+1, \(\rho _{.jl}\) is the probability of sprouting in occasion t+1 after being stage \(j\) in occasion t and stage \(l\) in occasion t-1 and surviving to occasion t+1, and \(\gamma _{.jl}\) is the probability of flowering in occasion t+1 after being stage \(j\) in occasion t and stage \(l\) in occasion t-1 and surviving to and sprouting in occasion t+1, then

\[\begin{equation} \tag{5} a _{DDD} = \sigma _{.DD} (1 - \rho _{.DD}) \end{equation}\]

\[\begin{equation} \tag{6} a _{VDD} = \sigma _{.DD} \rho _{.DD} (1 - \gamma _{.DD}) \end{equation}\]

\[\begin{equation} \tag{7} a _{FDD} = \sigma _{.DD} \rho _{.DD} \gamma _{.DD} \end{equation}\]
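Equations (5) through (7) translate directly into code. Below is a minimal sketch with invented conditional probabilities; because the three elements partition survival, their sum recovers \(\sigma _{.DD}\).

```r
# Invented conditional vital rates for the D-D stage pair
sigma_DD <- 0.90   # P(survival to occasion t+1)
rho_DD   <- 0.70   # P(sprouting in t+1, given survival)
gamma_DD <- 0.40   # P(flowering in t+1, given survival and sprouting)

a_DDD <- sigma_DD * (1 - rho_DD)              # equation (5): stay dormant
a_VDD <- sigma_DD * rho_DD * (1 - gamma_DD)   # equation (6): sprout, no flower
a_FDD <- sigma_DD * rho_DD * gamma_DD         # equation (7): sprout and flower

a_DDD + a_VDD + a_FDD   # sums to sigma_DD = 0.90
```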
In Figure 1.1, fecundity is shown in the top-right of the matrix, as the mean production of seeds either dormant or germinating in the next occasion. Ignoring the fecundity elements, we have a survival-transition matrix (symbolized as either U or T), in which the column sums correspond to the expected survival probabilities of individuals in each stage from occasion t to occasion t+1. Ignoring the survival-transition terms, we have the fecundity matrix (symbolized as F), in which the column sums correspond to the expected overall time-specific fecundity of individuals in these respective stages. The full MPM (symbolized as A) is the matrix sum of the survival-transition matrix and the fecundity matrix.

In principle, everything just noted about function-based ahMPMs also applies to function-based hMPMs. However, there is a key difference in terms of the distribution of elements within the matrix. The inclusion of stage or condition in occasion t-1 increases both the number of survival-transition elements and the number of fecundity elements, and also has the effect of dispersing the locations of these elements across the matrix. Whereas estimated transitions in ahMPMs will generally occur in dense patches of estimated elements, in hMPMs, most elements are structural zeros, with estimated elements occurring either individually surrounded by zeros, or in small patches surrounded by zeros. This makes error-checking more challenging, because the user needs to be aware of where elements are supposed to occur in order to check their values properly.

Different parameterizations of the historical MPM

deVries and Caswell (2018) detail different approaches to the development of hMPMs, differing in how state or stage in occasion t-1 is incorporated into the matrix. Full prior stage dependence models deal with history by incorporating prior condition as the exact stage of an organism in occasion t-1, yielding a matrix showing stage pairs in occasions t-1 and t along the columns of the matrix, and stage pairs in occasions t and t+1 along the rows of the matrix. Prior condition models deal with history by making the transition from occasion t to occasion t+1 a function of stage at occasion t and condition in occasion t-1. Prior condition can be determined in the same way that current stage is determined (e.g. size classification), or a different measure of condition can be used, such as growth (i.e. the change in size between occasions t-1 and t).

One key feature proposed by deVries and Caswell (2018) is the addition of a new stage to account for the prior status of newborn individuals. This reflects a slightly different interpretation from Ehrlén (2000) of the historical transition. In Ehrlén (2000), all matrix elements simply reflect the order of the events in a single transition, including both fecundity and survival-transition events. However, in deVries and Caswell (2018), these matrix elements must also reflect the history of specific individuals. In the latter case, the fact that a newborn in occasion t did not exist in occasion t-1 leads to the creation of a new prior stage to account for the time immediately prior to birth. This new stage only exists in the immediate prior occasion, and is only used for newborns in the first survival transition from birth. So, for example, in a three stage MPM where the 1st stage is the newborn stage and only the 3rd stage is reproductive, we add a 4th stage to the prior portion of the row and column.
Thus, we may start with the following hMPM in Ehrlén (2000) format:

\[\begin{equation} \tag{8} \tiny \left[\begin{array} {rrrrrrrrr} n _{2,AA} \\ n _{2,BA} \\ n _{2,CA} \\ n _{2,AB} \\ n _{2,BB} \\ n _{2,CB} \\ n _{2,AC} \\ n _{2,BC} \\ n _{2,CC} \end{array}\right] = \left[\begin{array} {rrrrrrrrr} S _{AAA} & 0 & 0 & S _{AAB} & 0 & 0 & S _{AAC} & 0 & 0 \\ S _{BAA} & 0 & 0 & S _{BAB} & 0 & 0 & S _{BAC} & 0 & 0 \\ S _{CAA} & 0 & 0 & S _{CAB} & 0 & 0 & S _{CAC} & 0 & 0 \\ 0 & S _{ABA} & 0 & 0 & S _{ABB} & 0 & 0 & S _{ABC} & 0 \\ 0 & S _{BBA} & 0 & 0 & S _{BBB} & 0 & 0 & S _{BBC} & 0 \\ 0 & S _{CBA} & 0 & 0 & S _{CBB} & 0 & 0 & S _{CBC} & 0 \\ 0 & 0 & S _{ACA} + F _{ACA} & 0 & 0 & S _{ACB} + F _{ACB} & 0 & 0 & S _{ACC} + F _{ACC} \\ 0 & 0 & S _{BCA} & 0 & 0 & S _{BCB} & 0 & 0 & S _{BCC} \\ 0 & 0 & S _{CCA} & 0 & 0 & S _{CCB} & 0 & 0 & S _{CCC} \\ \end{array}\right] \left[\begin{array} {rrrrrrrrr} n _{1,AA} \\ n _{1,BA} \\ n _{1,CA} \\ n _{1,AB} \\ n _{1,BB} \\ n _{1,CB} \\ n _{1,AC} \\ n _{1,BC} \\ n _{1,CC} \end{array}\right] \end{equation}\]

This hMPM becomes as follows in deVries and Caswell (2018) format:

\[\begin{equation} \tag{9} \tiny \left[\begin{array} {rrrrrrrrrrrr} n _{2,AA} \\ n _{2,BA} \\ n _{2,CA} \\ n _{2,AB} \\ n _{2,BB} \\ n _{2,CB} \\ n _{2,AC} \\ n _{2,BC} \\ n _{2,CC} \\ n _{2,AP} \\ n _{2,BP} \\ n _{2,CP} \end{array}\right] = \left[\begin{array} {rrrrrrrrrrrr} S _{AAA} & 0 & 0 & S _{AAB} & 0 & 0 & S _{AAC} & 0 & 0 & S _{AA,AP} & 0 & 0 \\ S _{BAA} & 0 & 0 & S _{BAB} & 0 & 0 & S _{BAC} & 0 & 0 & S _{BA,AP} & 0 & 0 \\ S _{CAA} & 0 & 0 & S _{CAB} & 0 & 0 & S _{CAC} & 0 & 0 & S _{CA,AP} & 0 & 0 \\ 0 & S _{ABA} & 0 & 0 & S _{ABB} & 0 & 0 & S _{ABC} & 0 & 0 & 0 & 0 \\ 0 & S _{BBA} & 0 & 0 & S _{BBB} & 0 & 0 & S _{BBC} & 0 & 0 & 0 & 0 \\ 0 & S _{CBA} & 0 & 0 & S _{CBB} & 0 & 0 & S _{CBC} & 0 & 0 & 0 & 0 \\ 0 & 0 & S _{ACA} & 0 & 0 & S _{ACB} & 0 & 0 & S _{ACC} & 0 & 0 & 0 \\ 0 & 0 & S _{BCA} & 0 & 0 & S _{BCB} & 0 & 0 & S _{BCC} & 0 & 0 & 0 \\ 0 & 0 & S _{CCA} & 0 & 0 & S _{CCB} & 0 & 0 & S _{CCC} & 0 & 0 & 0 \\ 0 & 0 & F _{AP,CA} & 0 & 0 & F _{AP,CB} & 0 & 0 & F _{AP,CC} & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array}\right] \left[\begin{array} {rrrrrrrrrrrr} n _{1,AA} \\ n _{1,BA} \\ n _{1,CA} \\ n _{1,AB} \\ n _{1,BB} \\ n _{1,CB} \\ n _{1,AC} \\ n _{1,BC} \\ n _{1,CC} \\ n _{1,AP} \\ n _{1,BP} \\ n _{1,CP} \end{array}\right] \end{equation}\]

Note that matrix 9 above can be reduced by two rows and two columns, since transitions from the newborn prior stage (P) can only be to the newborn stage (A).

Package lefko3 generally implements the full prior stage dependence model approach, particularly in the development of raw matrices. However, in principle, function-based matrices developed with lefko3 can be seen as falling within the prior condition model approach where prior condition is determined in the same way as current stage. We also implement both the Ehrlén (2000) format and the deVries and Caswell (2018) format, and use the former as the default. We leave it to the user to decide which approach to use.

deVries and Caswell (2018) also suggest that hMPMs can be organized differently to yield more intuitive models. In particular, matrices may be organized as block matrices in which the component blocks are essentially matrices showing the transitions from all stages in occasion t to all stages in occasion t+1, conditioned on individuals having been in a particular stage in occasion t-1.
We refer to these block component matrices as conditional matrices, since they show transitions from occasion t to occasion t+1 conditional on the same previous stage in occasion t-1. We have developed the function cond_hmpm() to take full hMPMs and decompose them into their associated conditional matrices.

Leslie (age-classified) MPMs

Some population biologists working with lefko3 may wish to produce Leslie matrices, which classify an organism only by age. A Leslie matrix has the following general form:

\[\begin{equation} \tag{10} \tiny \left[\begin{array} {rrrrrrr} n _{2, 0} \\ n _{2, 1} \\ n _{2, 2} \\ n _{2, 3} \\ \vdots \\ n _{2, a-1} \\ n _{2, a} \end{array}\right] = \left[\begin{array} {rrrrrrr} 0 & F _1 & F _2 & F _3 & \cdots & F _{a-1} & F _a \\ S _{1,0} & 0 & 0 & 0 & \cdots & 0 & 0 \\ 0 & S _{2,1} & 0 & 0 & \cdots & 0 & 0 \\ 0 & 0 & S _{3,2} & 0 & \cdots & 0 & 0 \\ \vdots & & & & & & \vdots \\ 0 & 0 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 0 & 0 & \cdots & S _{a,a-1} & S _{>a,a} \\ \end{array}\right] \left[\begin{array} {rrrrrrr} n _{1, 0} \\ n _{1, 1} \\ n _{1, 2} \\ n _{1, 3} \\ \vdots \\ n _{1, a-1} \\ n _{1, a} \end{array}\right] \end{equation}\]

Here, \(n _{2, 0}\) refers to the number of newborn (age = 0) individuals in time 2, \(a\) is the final age in the model, \(S _{1, 0}\) is the survival probability of a newborn individual (age = 0) to the next age, \(F _1\) is the fecundity of an individual in age 1, and \(S _{>a, a}\) is the survival of individuals in the final modeled age to the next age (and later). Note that in Leslie models, survival transitions are always just below the diagonal or in the final lower-right element of the diagonal, and fecundity transitions are always in the top row. Note also that age-classified models are inherently ahistorical, and so no historical models are possible. Package lefko3 estimates both raw and function-based forms of these.

Age-by-stage MPMs

Package lefko3 can also handle the estimation and analysis of age-by-stage MPMs. These matrices incorporate both stages (whether size-classified or not), as well as the ages in which they occur. For example, in a three stage life history with a single newborn stage and two adult stages, only one of which is reproductive (as in equations 1, 4, 8, and 9), and in which age impacts survival and fecundity only for the first 4 years, beyond which survival and fecundity transitions remain the same, we have the following ahistorical age-by-stage matrix.

\[\begin{equation} \tag{11} \tiny \left[\begin{array} {rrrrrrr} n _{2,1A} \\ n _{2,2B} \\ n _{2,2C} \\ n _{2,3B} \\ n _{2,3C} \\ n _{2,4B} \\ n _{2,4C} \end{array}\right] = \left[\begin{array} {rrrrrrr} 0 & 0 & F _{1A,2C} & 0 & F _{1A,3C} & 0 & F _{1A,4+C} \\ S _{2B,1A} & 0 & 0 & 0 & 0 & 0 & 0 \\ S _{2C,1A} & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & S _{3B,2B} & S _{3B,2C} & 0 & 0 & 0 & 0 \\ 0 & S _{3C,2B} & S _{3C,2C} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & S _{4B,3B} & S _{4B,3C} & S _{5+B,4B} & S _{5+B,4C} \\ 0 & 0 & 0 & S _{4C,3B} & S _{4C,3C} & S _{5+C,4B} & S _{5+C,4C} \\ \end{array}\right] \left[\begin{array} {rrrrrrr} n _{1,1A} \\ n _{1,2B} \\ n _{1,2C} \\ n _{1,3B} \\ n _{1,3C} \\ n _{1,4B} \\ n _{1,4C} \end{array}\right] \end{equation}\]

The above matrix is ahistorical. Package lefko3 does not currently support historical age-by-stage matrices, although they are certainly possible.
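To ground equation 10, the following base R sketch builds a small numeric Leslie matrix with final age \(a = 3\); all survival probabilities and fecundities are invented for illustration.

```r
# A 4 x 4 Leslie matrix for ages 0 through 3 (equation 10 with a = 3).
L <- matrix(0, nrow = 4, ncol = 4,
            dimnames = list(paste0("age", 0:3), paste0("age", 0:3)))
L[1, 2:4] <- c(0.5, 2.0, 3.5)  # fecundities F_1, F_2, F_3 in the top row
L[2, 1] <- 0.30                # S_{1,0}: survival of newborns to age 1
L[3, 2] <- 0.55                # S_{2,1}
L[4, 3] <- 0.70                # S_{3,2}
L[4, 4] <- 0.60                # S_{>3,3}: survival within the final age class

n1 <- c(100, 40, 20, 10)       # population vector at occasion 1
n2 <- L %*% n1                 # projection to occasion 2
```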
Step 1: Life history model development

The basic workflow of analysis with package lefko3 starts with the development of a life history model that encapsulates all of the life stages relevant to population dynamics. We do not present a complete discussion of this step, instead sketching the process via the life cycle graph and noting key issues related to this package. We encourage users to explore the literature on this subject, but particularly note chapters 3 and 4 in Caswell (2001) for a good treatment useful for beginners. Stages need to be defined carefully, and should strongly account for variability in vital rates. Statistical approaches such as Jenks natural breaks optimization can help determine ideal "natural" breaks in the distribution of size data to act as borders between size-classified stages in raw MPMs (Jenks 1967). In contrast, stages within function-based MPMs are more straightforward to define, with regular breaks across the size spectrum. We also note Kendall et al. (2019) as a good reference detailing common problems in MPM construction and how to avoid them through proper development and operationalization of life history models. Beissinger and Westphal (1998), Wardle (1998), and Salguero-Gómez and Casper (2010) provide good discussions of the proper application of life history models to understand population dynamics through the MPM approach.

Here we utilize a life cycle graph approach to build a life history model. A life cycle graph shows the life history of an organism as a series of nodes and arrows. The nodes represent unique stages that the individual may go through during development, and individuals are noted as occurring in these stages during monitoring sessions conducted at a roughly or strictly regular frequency, such as at the start of the growing season every year, or over specific dates each year. Arrows represent transitions between stages across consecutive occasions. These may be interpreted either as probabilities of survival when reflecting the individual's stage across time, or as rates when reflecting the production of offspring. A good life cycle graph, and in general a good life history model, must explicitly include representations of all stages possible for the organism to enter, and all biologically plausible transitions between these stages. Figure 1a shows an example of a life cycle graph.

The construction of a good life history model is not as easy as it might appear, particularly because it can be influenced by factors seemingly external to the life history of the organism. Some of the most important considerations outside of the species' biology include the size of the dataset, and the decision of whether to create a raw MPM or a function-based MPM. All else being equal, a larger dataset and a function-based MPM will allow more stages to be used in the life history. Assuming a dataset of fixed size, raw MPMs will include more and more zeros as the number of stages in the life history model increases. This happens because the chance of having any individuals actually move through a specific transition decreases as the number of possible transitions increases. These functional zeros, which become increasingly common with increasing numbers of stages, can cause problems in analysis, because they suggest that some transitions are impossible when, in fact, individuals moving through them were missing just by chance. Even a transition with extremely high probability may be missing in some years just by chance.
One impact of this would be the reduction of the mean transition within an element-wise mean matrix to an unrealistically low level, causing the predicted population growth rate to be artificially low. Function-based MPMs are better able to handle large numbers of matrix elements because the risk of overparameterization is reduced by parsimonious model selection when vital rate models are determined. However, the size of the dataset influences the statistical power of these vital rate models, with smaller datasets yielding larger error associated with matrix elements and hence a reduced ability to detect the effects of potentially important factors. Loss of statistical power in vital rate models can lead to the loss of important process variance, and can make the population appear more static than it really is. Further, developing function-based MPMs for life history models that include stages rarely or never actually observed can yield problematic inferences, such as matrices suggesting that some stages have survival probabilities greater than 1.0. Chapter 6 in Morris and Doak (2002) provides a good description of techniques that may be used to define stages and to determine the exact number to use given the context imposed by a dataset of a given size. Ellner and Rees (2006) and Doak et al. (2021) provide some discussion of this issue within the context of integral projection model development.

Once a life history model is developed, we need to characterize the main life stages in a way that is relevant to the dataset. We do this with the sf_create() function, which creates a stageframe object. This term is a combination of the terms life history stage and data frame, and the object itself is a data frame that describes all of the life history stages. This object is created in a way that shows how stage relates to size, reproductive status, observation status, propagule status, maturity status, entry status (i.e. a stage's ability to serve as the first stage of an individual's life), and presence in the dataset, and in which the description of each life stage is unique. A good stageframe describes stages in ways that matrix-estimating functions utilize to compute elements accurately, and also allows the user to stipulate extra descriptive information on each stage as text. If the user wishes to create an IPM or a function-based MPM with many stages, then this function also allows stage definition to be automated. A minimal sketch of stageframe creation follows.
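The sketch below assumes a simplified Cypripedium-style life history; the stage names, sizes, and bin half-widths are invented for illustration (see the package vignettes for complete worked examples).

```r
library(lefko3)

# A hedged sketch of stageframe creation with sf_create(). Each vector has
# one entry per stage: a dormant seed (SD), a vegetatively dormant adult (D),
# and three size-classified adult stages (XSm, Sm, Lg).
simpleframe <- sf_create(
  sizes = c(0, 0, 1, 3, 6),                    # characteristic sizes
  stagenames = c("SD", "D", "XSm", "Sm", "Lg"),
  repstatus = c(0, 0, 1, 1, 1),                # potentially reproductive?
  obsstatus = c(0, 0, 1, 1, 1),                # observable above ground?
  matstatus = c(0, 1, 1, 1, 1),                # mature?
  immstatus = c(0, 0, 0, 0, 0),                # immature juvenile?
  propstatus = c(1, 0, 0, 0, 0),               # propagule?
  indataset = c(0, 1, 1, 1, 1),                # occurs in the dataset?
  binhalfwidth = c(0.5, 0.5, 0.5, 1.5, 1.5))   # size bin half-widths
```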
Step 2a: Dataset organization

Once the stageframe has been created, the demographic data needs to be organized into a format that lefko3 can use. The proper format is one that we term historically-formatted vertical format, or hfv format for short. We have developed two functions for this purpose, depending on the original format of the data. If the data is arranged horizontally, meaning that individual life histories are recorded in a spreadsheet in which each unique individual's data is organized within a single row, where columns correspond to descriptor variables and to condition at different times, then the verticalize3() function can standardize this dataset into the proper format. If the dataset is in vertical, ahistorical format, in which an individual's condition across time is recorded across rows in a spreadsheet, with a single row corresponding to either a single observation or to a pair of consecutive observations, then the historicalize3() function can standardize the dataset properly. Both functions can also add some new variables that might be of use. For example, both functions can estimate radial density if Cartesian coordinates for each datum and a radius are provided. Both functions are powered by compiled C++ binaries using single control loops to propagate each data frame properly and efficiently, and so run very quickly (typically under 1s on a 2019 MacBook Pro). The case studies included in lefko3 illustrate the usage of these functions on real data. The function summary_hfv() can be used to provide quick details about the resulting standardized datasets.

Step 2b: Provide supplemental information for matrix estimation

MPMs are often estimated only partially from available demographic datasets. Some transitions are parameterized using information gathered from other studies, whether through direct input in the matrix or through the development of kernels contingent on external information. Other transitions might also be estimated via proxy transitions elsewhere in the matrix. In lefko3, this information can be provided in one of two ways. The preferred, and most recently developed, method is the supplement table, which can be created using the supplemental() function. This function allows users to create a data frame detailing: specific transitions to set as constants; specific transitions to estimate via proxy transitions; specific multipliers for fecundity in cases in which fecundity estimates resulting from linear modeling must be modified to characterize the full transition, or for transitions estimated via proxy; and specific multipliers for survival in cases where survival is incorporated by proxy and must be set at a reduced or elevated level relative to the original, directly estimated transition.

Examples might illustrate where this approach is useful. If I lack my own data on subterranean juvenile stages in a plant species, but I have estimates of survival for those stages from another study, then I might use those estimates as constants in the MPM. I can also set some transitions explicitly to 0 if I believe that there is a biological need for such an override. Further, if I lack demographic data on the development of germinated seeds to the seedling or earliest adult stage, but I have reason to believe that the survival probabilities should be similar to the survival within an observed stage of seedlings or small adults, then I can use the latter survival-transitions as proxies (or even decrease them via a multiplier). Finally, if fecundity is a function of seed production, survival to the next year, and germination probability, then germination probability might be estimated via a separate field germination study. I might wish to incorporate this germination probability both as a constant transition, and as a multiplier on estimated fecundity. Supplement tables provide the means to include all of this information.
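A hedged sketch of a supplement table follows, continuing from the illustrative simpleframe object above; the stage names, given rates, and multipliers are all invented.

```r
# Three supplemental rows: a constant dormant seed survival probability, a
# seed-to-smallest-adult transition estimated via a proxy (survival within
# stage XSm), and a fecundity multiplier (e.g. a germination probability)
# applied to transitions from all reproductive stages ("rep") to dormant seed.
simplesupp <- supplemental(
  stage3 = c("SD", "XSm", "SD"),   # stage in occasion t+1
  stage2 = c("SD", "SD", "rep"),   # stage in occasion t
  eststage3 = c(NA, "XSm", NA),    # proxy stage in occasion t+1
  eststage2 = c(NA, "XSm", NA),    # proxy stage in occasion t
  givenrate = c(0.20, NA, NA),     # constant transition value
  multiplier = c(NA, NA, 0.45),    # fecundity multiplier
  type = c(1, 1, 3),               # 1 = survival transition, 3 = fecundity
  stageframe = simpleframe, historical = FALSE)
```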
Step 3: Assess whether historical MPM is justified, and develop models of demographic parameters

Historical MPMs are not necessarily justified in all cases. We urge the reader to use the principle of parsimony in deciding this. For example, users may analyze whether lagged effects of previous stage appear to influence vital rates. Package lefko3 includes one method to do this based on linear modeling. A number of other methods exist, mostly focused on what are known as memory models (Brownie et al. 1993; Pradel, Wintrebert, and Gimenez 2003; Cole et al. 2014). Here, we focus only on the main lefko3 method, and leave it to the reader to explore other options.

In lefko3, function modelsearch() allows users to explore whether history influences component vital rates, and to develop linear models of these vital rates. The results can be used to decide not only whether a historical MPM is justified, but also to develop function-based hMPMs and ahMPMs themselves. Package lefko3 can develop linear models of up to fourteen different vital rates:

1. Survival probability - This is the probability of surviving from occasion t to occasion t+1, given that the individual is in stage \(j\) in occasion t (and, if historical, in stage \(l\) in occasion t-1). In lefko3, this parameter may be modeled as a function of up to three size metrics, reproductive status, patch, year, age, a number of individual or environmental covariates in occasions t and t-1, and individual identity. This parameter is required in all function-based matrices.

2. Observation probability - This is an optional parameter denoting the probability of observation in occasion t+1 of an individual in stage \(k\) given survival from occasion t to occasion t+1. This parameter is only used when at least one stage is not observable. For example, some plants are capable of vegetative dormancy, in which case they are alive but do not necessarily sprout in all years in which they are in this stage. In these cases, the probability of sprouting may be estimated as the observation probability, and its complement is the probability of becoming dormant assuming survival. Note that this probability does not refer to observer effort, and so should only be used to differentiate completely unobservable stages where the observation status refers to an important biological phenomenon, such as when individuals may be alive but have a size of 0. In lefko3, this parameter may be modeled as a function of up to three size metrics, reproductive status, patch, year, age, a number of individual or environmental covariates in occasions t and t-1, and individual identity.

3. Primary size transition probability - This is the probability of becoming size \(k\) in occasion t+1 assuming survival from occasion t to occasion t+1 and observation in that time. In lefko3, this parameter may be modeled as a function of up to three size metrics, reproductive status, patch, year, age, a number of individual or environmental covariates in occasions t and t-1, and individual identity.

4. Secondary size transition probability - This is the probability of becoming size \(k\) in occasion t+1 assuming survival from occasion t to occasion t+1 and observation in that time, within a second size metric used for classification in addition to the primary metric. In lefko3, this parameter may be modeled as a function of up to three size metrics, reproductive status, patch, year, age, a number of individual or environmental covariates in occasions t and t-1, and individual identity.

5. Tertiary size transition probability - This is the probability of becoming size \(k\) in occasion t+1 assuming survival from occasion t to occasion t+1 and observation in that time, within a third size metric used for classification in addition to the primary and secondary metrics. In lefko3, this parameter may be modeled as a function of up to three size metrics, reproductive status, patch, year, age, a number of individual or environmental covariates in occasions t and t-1, and individual identity.
6. Reproduction probability - This is an optional parameter denoting the probability of reproducing in occasion t+1 given survival from occasion t to occasion t+1, and observation in that time. Note that this should be used only if the researcher wishes to separate breeding from non-breeding mature stages. If all adult stages are potentially reproductive and no separation of reproducing from non-reproducing adults is desired, then this parameter should not be estimated. In lefko3, this parameter may be modeled as a function of up to three size metrics, reproductive status, patch, year, age, a number of individual or environmental covariates in occasions t and t-1, and individual identity.

7. Fecundity rate - Under the default setting, this is the rate of successful production of offspring in occasion t by individuals alive, observable, and reproductive in that time, and, if desired and sufficient information is provided in the dataset, the survival of those offspring into occasion t+1 in whatever juvenile class is possible. Thus, the fecundity rate of seed-producing plants might be split between seedlings, meaning plants that germinated within a year of seed production, and dormant seeds. Alternatively, it may be given only as produced fruits or seeds, with the survival and germination of seeds provided elsewhere in the MPM development process, such as within a supplement table. An additional setting allows fecundity rate to be estimated using data provided for occasion t+1 instead of occasion t. In lefko3, this parameter may be modeled as a function of up to three size metrics, reproductive status, patch, year, age, a number of individual or environmental covariates in occasions t and t-1, and individual identity.

8. Juvenile survival probability - This is an optional parameter that is used to model the probability of surviving from juvenile stage \(j\) in occasion t to occasion t+1. It is used only when the user wishes to model juvenile vital rates separately from adults, assuming different relationships with explanatory variables than those used to classify adults. In lefko3, this parameter may be modeled as a function of up to three size metrics, patch, year, a number of individual or environmental covariates in occasions t and t-1, and individual identity.

9. Juvenile observation probability - This is an optional parameter denoting the probability of observation in occasion t+1 of an individual in mature stage \(k\) given survival from a juvenile stage in occasion t to occasion t+1. It is used only when the user wishes to model juvenile vital rates separately from adults, assuming different relationships with explanatory variables than those used to classify adults. In lefko3, this parameter may be modeled as a function of up to three size metrics, patch, year, a number of individual or environmental covariates in occasions t and t-1, and individual identity, and all other caveats noted in (2) above apply.

10. Juvenile primary size transition probability - This is an optional parameter denoting the probability of becoming mature size \(k\) in occasion t+1 assuming survival from juvenile stage \(j\) in occasion t to occasion t+1 and observation in that time. It is used only when the user wishes to model juvenile vital rates separately from adults, assuming different relationships with explanatory variables than those used to classify adults.
In lefko3, this parameter may be modeled as a function of up to three size metrics, patch, year, a number of individual or environmental covariates in occasions t and t-1, and individual identity.

11. Juvenile secondary size transition probability - This is an optional parameter denoting the probability of becoming mature size \(k\) in occasion t+1 assuming survival from juvenile stage \(j\) in occasion t to occasion t+1 and observation in that time, in a secondary size metric in addition to the primary size metric. It is used only when the user wishes to model juvenile vital rates separately from adults, assuming different relationships with explanatory variables than those used to classify adults. In lefko3, this parameter may be modeled as a function of up to three size metrics, patch, year, a number of individual or environmental covariates in occasions t and t-1, and individual identity.

12. Juvenile tertiary size transition probability - This is an optional parameter denoting the probability of becoming mature size \(k\) in occasion t+1 assuming survival from juvenile stage \(j\) in occasion t to occasion t+1 and observation in that time, in a tertiary size metric in addition to the primary and secondary size metrics. It is used only when the user wishes to model juvenile vital rates separately from adults, assuming different relationships with explanatory variables than those used to classify adults. In lefko3, this parameter may be modeled as a function of up to three size metrics, patch, year, a number of individual or environmental covariates in occasions t and t-1, and individual identity.

13. Juvenile reproduction probability - This is an optional parameter denoting the probability of reproducing in mature stage \(k\) in occasion t+1 given survival from juvenile stage \(j\) in occasion t to occasion t+1, and observation in that time. It is used only when the user wishes to model juvenile vital rates separately from adults, assuming different relationships with explanatory variables than those used to classify adults. In lefko3, this parameter may be modeled as a function of up to three size metrics, patch, year, a number of individual or environmental covariates in occasions t and t-1, and individual identity, and all other caveats in (6) apply.

14. Juvenile maturity probability - This is an optional parameter denoting the probability of becoming mature in occasion t+1 given survival from juvenile stage \(j\) in occasion t to occasion t+1. This is a different parameter than (13) above, and is particularly used to weight transition probabilities to juvenile vs. mature stages properly in matrices with defined juvenile stages. It is used only when the user wishes to model juvenile vital rates separately from adults, assuming different relationships with explanatory variables than those used to classify adults. In lefko3, this parameter may be modeled as a function of up to three size metrics, patch, year, a number of individual or environmental covariates in occasions t and t-1, and individual identity, and all other caveats in (1) apply.

Of these fourteen vital rates, most users will estimate at least parameters (1) survival probability, (3) primary size transition probability, and (7) fecundity rate. These three are the default set for function modelsearch(). Parameters (2) observation probability and (6) reproduction probability may be used when some stages are included that are completely unobservable (and so do not have any size), or that are mature but non-reproductive, respectively.
Parameters (8) through (14) should only be added if the dataset contains juvenile individuals transitioning to maturity, and these juveniles live essentially as a single stage before transitioning to maturity, or before transitioning to a stage that is size-classified in the same manner as adult stages.

Function modelsearch() handles the entire modeling process, including the development of global models, exhaustive model building, and the selection of best-fit models via information-theoretic model selection based on AICc and the number of estimated parameters in each model. Users should provide this function with information about the following (a hedged usage sketch follows at the end of this step):

1. Individual history - Are the matrices to be built historical or ahistorical? If the former, then the state of the individual in occasion t-1 will be included in modeling.

2. Modeling approach - Should the models be estimated as generalized linear models (GLMs) or as mixed linear models? While most function-based matrix models are estimated via the former, the latter approach can account for repeated observations of the same individual by including individual identity as a random factor. Mixed models also allow time and patch to be treated as random variables, which allows for broader and more theoretically sound inference.

3. Suite of factors - Should both size and reproductive status be tested as causal factors? Should two-way interactions be included? Alternatively, should models be estimated as constants?

4. Suite of vital rates - Which adult demographic parameters should be estimated? The defaults are (1) survival, (3) primary size, and (7) fecundity. Should (2) observation status, (6) reproductive status, (4) secondary size, or (5) tertiary size also be modeled?

5. Juvenile vital rate estimation - Should juvenile parameters (8) through (14) also be modeled separately from adult parameters?

6. Best-fit criterion - If a model with fewer parameters exists within 2.0 AICc units of the model with the lowest AICc, then should this model be used as the best-fit model (the default), or should the model with the lowest AICc always be chosen?

7. Size distribution - Should size be modeled as a continuous variable under a Gaussian or Gamma distribution, or as a count variable under either a Poisson or negative binomial distribution? If a count variable, then should the distribution be zero-inflated to account for excess zeros, zero-truncated to account for a lack of zeros, or left unaltered?

8. Fecundity distribution - Should fecundity be modeled as a continuous variable under a Gaussian or Gamma distribution, or as a count variable under either a Poisson or negative binomial distribution? If a count variable, then should the distribution be zero-inflated to account for excess zeros, zero-truncated to account for a lack of zeros, or left unaltered?

9. Continuous distribution estimation method - If using a Gaussian or Gamma distribution for size, then should the probability of transition be estimated via the cumulative density function (CDF) method (the default), or the midpoint method? See Doak et al. (2021) for a discussion of the relative merits of both.

10. Timing of fecundity - Function modelsearch() assumes that linear models of fecundity use a metric counted or measured in occasion t as the response. This applies well to most herbaceous plant datasets, where flowers or seeds produced in one year might be the fecundity response measured.
However, users not wishing to follow this default behavior can use the fectime option to stipulate a fecundity metric measured in occasion t+1.

11. Age - Is an age or age-by-stage MPM the main goal? Currently, only age MPMs and ahistorical age-by-stage MPMs are offered in lefko3.

12. Individual covariates and environmental state variables - Should individual or environmental covariates be tested as causal factors on vital rates? If so, then should they be included as fixed, quantitative terms or as random, categorical terms?

13. Stage groups - If stage groups are noted in the stageframe, then should they also be used as fixed, independent categorical predictors in modeling?

14. Missing values for occasions and patches - Should occasions or patches with coefficients that drop to 0 in modeling be estimated as random draws from the corresponding distributions of occasion or patch coefficients?

15. Censoring - Should data points marked as questionable be used or eliminated?

16. Variable names - The names of all relevant variables in the dataset need to be specified. Note that the default behavior assumes variable names output via the historicalize3() or verticalize3() functions, which produce standardized historically-formatted vertical (hfv format) datasets.

Once all decisions are made and associated input terms are provided, modelsearch() goes to work. The result is a lefkoMod object, which is a list in which the first 14 elements are the best-fit models developed for each vital rate. These are followed by an equivalent number of elements showing the full model tables developed and tested, followed by a table detailing the names and definitions of terms used in modeling, an element detailing the best-fit criterion used, and ending with quality control data showing the number of individuals and the number of unique transitions used in the estimation of each model, as well as the overall accuracy of the models. Depending on user choices, linear modeling is handled through the lm() and glm() functions in the stats package, the glm.nb() function in the MASS package, the lmer() and glmer() functions in lme4 (Bates et al. 2015), the glmmTMB() function in glmmTMB (Brooks et al. 2017), the vglm() function in package VGAM (Yee and Wild 1996; Yee 2015), or the zeroinfl() function in package pscl (Zeileis, Kleiber, and Jackman 2008). Exhaustive model building proceeds through the dredge() function in package MuMIn (Bartoń 2014). Model selection is handled through assessment of AICc and the number of parameters estimated per model (see 6. Best-fit criterion above).

If modelsearch() is set for historical analysis (historical = TRUE, the default), then the decision of whether to develop a historical MPM can be made on the basis of whether any best-fit vital rate model includes size or reproductive status in occasion t-1. If at least one vital rate does, then a historical MPM is justified. Otherwise, it is not. Regardless, the output can be used to create a function-based MPM in the next step. Advanced users of lefko3 may wish to create their own models without the package's automated model selection function. In that case, lefko3's matrix functions can accommodate single models developed using the functions and packages mentioned above.
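The following is a minimal, hedged sketch of a modelsearch() call; the dataset name (vertdata) and the variable names are illustrative assumptions, and the full range of settings is described in the function documentation.

```r
# Develop best-fit vital rate models for a historical, function-based MPM.
simplemodels <- modelsearch(vertdata, historical = TRUE,
  approach = "mixed",                            # linear mixed models
  suite = "main",                                # main effects of size and repst
  vitalrates = c("surv", "obs", "size", "fec"),  # vital rates to model
  sizedist = "gaussian", fecdist = "poisson",    # response distributions
  indiv = "individ", patch = "patchid", year = "year2",
  year.as.random = TRUE, patch.as.random = TRUE)

summary(simplemodels)  # inspect best-fit models and quality control output
```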
Step 4: Estimate matrices

MPM creation can be accomplished with eight different functions:

rlefko2() - Creates raw ahistorical stage-classified MPMs given a dataset, a stageframe, and a supplement table.
rlefko3() - Creates raw historical stage-classified MPMs given a dataset, a stageframe, and a supplement table.
arlefko2() - Creates raw ahistorical age-by-stage MPMs given a dataset, a stageframe, and a supplement table.
rleslie() - Creates raw age-classified MPMs given a dataset, a stageframe, and a supplement table.
flefko2() - Creates function-based ahistorical MPMs and IPMs given a dataset, a set of models, a stageframe, and a supplement table.
flefko3() - Creates function-based historical MPMs and IPMs given a dataset, a set of models, a stageframe, and a supplement table.
aflefko2() - Creates function-based ahistorical age-by-stage MPMs given a dataset, a set of models, a stageframe, and a supplement table.
fleslie() - Creates function-based age-classified MPMs given a dataset, a set of models, a stageframe, and a supplement table.

These functions incorporate compiled kernels developed to handle the estimation of matrix elements quickly and efficiently. A single run of flefko3(), for example, should be able to yield all annual matrices for all patches for the Cypripedium candidum dataset provided with lefko3 in under half a minute on most machines (14s or so on RPS' 2019 MacBook Pro with 2.3 GHz 8-Core Intel Core i9). Parallel computing should not be necessary even with the slowest of current machines, provided that the machine is current enough to handle at least R 3.6.3.

The output of each of these functions is a lefkoMat object, which is an S3 object (i.e. a list with pre-defined elements). These lists include elements A, U, and F, which are themselves lists of complete projection matrices, survival-transition matrices, and fecundity matrices, respectively. For example, code such as matobject$A[[1]] would access the first complete projection matrix in a lefkoMat object named matobject. They are followed by elements referred to as ahstages, hstages, and agestages, which provide data frames describing the stages (technically showing the stageframe, though edited by the function for consistency), the historical stage pairs, and the age-stage pairs in the order in which they occur within the matrices. The labels element is a data frame giving a description of each matrix in the order in which it occurs within the A, U, and F elements, including the population, patch, and monitoring time designations. The final elements are quality control outputs that vary in content depending on whether the output matrices are raw or function-based, but nonetheless always include at least the numbers of individuals and individual transitions used for estimation.

Users wishing to check the output MPMs for errors, or simply to understand more about them, may utilize the functions summary() and cond_hmpm(). Function summary() summarizes each lefkoMat object used as input, and also checks the survival of stages, stage-pairs, or age-stages by assessing the column sums of each U matrix, providing warnings if any survival probabilities fall below 0 or rise above 1. Function cond_hmpm() decomposes hMPMs into MPMs showing transitions from occasion t to occasion t+1, but conditioned on the same stage at occasion t-1. Thus, a single hMPM with five stages and 25 stage pairs would be broken down into five conditional MPMs, one for each stage at occasion t-1, with five columns and five rows denoting stage at occasions t and t+1, respectively. These can be examined individually, and the survival of stages from occasion t to occasion t+1 can be assessed as a function of stage in occasion t-1 by taking the associated column sums of these conditional matrices.
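Continuing the hedged sketch from the previous steps, a function-based hMPM might be created and checked as follows (all object names remain illustrative).

```r
# Create function-based historical matrices from the vital rate models,
# stageframe, and supplement table sketched above.
simplemats <- flefko3(stageframe = simpleframe, modelsuite = simplemodels,
  data = vertdata, supplement = simplesupp)

summary(simplemats)    # check dimensions and stage survival column sums
simplemats$A[[1]]      # the first complete projection matrix
cond_hmpm(simplemats)  # conditional matrices, useful for error-checking
```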
Step 4a: Importing matrices and IPMs

Some population ecologists will wish to utilize the power and functionality of lefko3 on already existing matrices. Although many analytical functions work with simple lists of matrices, users may import sets of matrices as lefkoMat objects in order to utilize the most powerful analyses. The create_lM() function allows users to take a list of square matrices of equal dimensions and import them into a lefkoMat object. Users wishing to do this will also need to develop a stageframe, and to input the order of populations, patches, and years associated with the order of matrices in the input list. Function create_lM() runs several quality checks on the input data, and, assuming that there are no obvious inconsistencies, yields the final lefkoMat object for further use in analysis.

Users may also import IPMs. If the characteristics of the kernels propagating an IPM are properly described in a publication, and the kernel is composed of generalized linear models or generalized linear mixed models, then function vrm_import() allows this information to be input into lefko3 and used to recreate the IPMs in discretized form. Please see lefko3: a gentle introduction for more information, or see the function example.

Step 5: MPM analyses

Package lefko3 includes a number of functions to aid analysis once matrices are created. These may be of greater utility in some circumstances than established functions such as eigen(), because our functions are made to handle even extremely large, sparse matrices. Currently, we include functions to estimate element-wise arithmetic mean matrices, deterministic and stochastic population growth rates, stable and long-run stage structure, reproductive value, deterministic and stochastic sensitivities and elasticities, and deterministic and stochastic life table response experiments (LTREs and sLTREs). We also include a basic projection function that can work under deterministic, stochastic, and density dependent assumptions. Plans are in the works for further analysis functions in the future.

Deterministic vs. stochastic analysis

Population ecologists most typically run deterministic projection analyses. These analyses take single matrices, and project them a theoretically infinite number of times. In most cases, such an infinite projection yields a set of asymptotic properties that are measurable using eigen analysis. For example, the long-run population growth rate asymptotically approaches the largest real part of the dominant eigenvalue. Ecologists wishing to assess the influence of environmental variability may instead wish to assume temporal environmental stochasticity. Under this assumption, the population is projected into the future by randomly shuffling, with resampling, all matrices associated with that specific population. Several theorems predict that such a long-run projection will typically yield roughly stable characteristics, including the long-run mean population growth rate and the long-run average stage structure, and that these characteristics may differ strongly from those predicted under deterministic analysis. Package lefko3 contains functions that quickly assess deterministic and stochastic properties. In addition, users may use the projection3() function to conduct user-defined projections of various sorts, including even cyclical and unequally weighted stochastic projections.
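As a hedged sketch, a simple stochastic projection of the illustrative simplemats object might look as follows; the settings shown are assumptions for illustration.

```r
# Project forward 10,000 occasions, shuffling annual matrices at random,
# with three replicates.
simpleproj <- projection3(simplemats, nreps = 3, times = 10000,
  stochastic = TRUE)
summary(simpleproj)
```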
Density independent vs. density dependent analysis

Package lefko3 also provides the ability to conduct density dependent projections. Readers of the density dependence literature will know that there are many forms of density dependent functions that can be applied to matrix projection. The oldest forms typically applied a function such as the logistic function to all matrix elements, generally by multiplying each matrix element by some function of density (e.g., Leslie 1959). More recent forms incorporate complex approaches, often separating density dependent functions by vital rate (e.g., Jensen 1995). To provide the greatest flexibility, lefko3 currently implements density dependence in three different ways.

First, density dependence may be set on matrix elements themselves using the density_input() function in conjunction with the two projection functions, projection3() and f_projection3(). This function allows users to list all matrix elements that should be subject to density dependence with a time lag, and to stipulate the characteristics of the density dependence. Shorthand abbreviations can be used in function density_input() to describe where density dependence applies, such as all for all stages, rep for reproductive stages, nrep for mature but non-reproductive stages, etc. A default time delay of 1 time step is applied, and this can be set to any positive integer. Most users will be interested in this form.

The second form of density dependence is vital rate density dependence, assuming a time lag. This form modifies vital rates developed for function-based MPMs by density, and so is less general than the previous approach. It can be operationalized using the density_vr() function in conjunction with function f_projection3(). The third form is spatial density dependence in vital rate models. This form of density dependence models spatial density rather than population size, and is incorporated into vital rate model development in function-based MPMs.

Currently, lefko3 includes four density dependence functions that can be chosen, with projections capable of including multiple functions and time delays. The first is the Ricker function, which is given as

\[\begin{equation} \tag{12} \phi_{t+1} = \phi_t \times \alpha e^{-\beta n_t} \end{equation}\]

where \(\alpha\) and \(\beta\) are the density dependence parameters, and \(\beta\) in particular gives the strength of density dependence. This equation is among the most interesting in the density dependence literature, because it is capable of producing oscillations, even multi-period oscillations, and chaos. The second density dependence function applied in lefko3 is the Beverton-Holt equation, which is given as

\[\begin{equation} \tag{13} \phi_{t+1} = \phi_t \times \frac{\alpha}{1 + \beta n_t} \end{equation}\]

where \(\alpha\) and \(\beta\) are, once again, the density dependence parameters, and \(\beta\) in particular gives the strength of density dependence. This function generally asymptotes at an equilibrium density, and so is not capable of producing the complex patterns that the Ricker model is capable of.
The third function implemented in lefko3 is the Usher function, given as

\[\begin{equation} \tag{14} \phi_{t+1} = \phi_t \times \frac{1}{1 + e^{\alpha n_t + \beta}} \end{equation}\]

where both \(\alpha\) and \(\beta\) give the strength of density dependence, the former via an interaction with density and the latter via an addition to the exponential effect. Finally, lefko3 also implements a form of the logistic equation, given as

\[\begin{equation} \tag{15} \phi_{t+1} = \phi_t \times (1 - \frac{n_t}{K}) \end{equation}\]

where only one parameter, the carrying capacity \(K\), is needed.

Package lefko3 currently implements these analyses via the projection3() and f_projection3() functions. Settings for these functions also allow the user to stipulate whether the projection should force matrices to remain substochastic. Two forms of substochasticity are allowed: 1) a simple form in which survival-transition probabilities can be prevented from moving outside of the interval [0, 1] and fecundity can be prevented from becoming negative, and 2) a more complicated form that keeps the column sums of survival-transition matrices (equivalent to the survival probabilities of stages, stage-pairs, or age-stages) within the interval [0, 1] and prevents fecundity from becoming negative. We are currently developing functions to analyze density-dependent projection results, including sensitivity and elasticity analyses.

Population growth rate

Package lefko3 allows the estimation of both the deterministic and stochastic population growth rates. The deterministic population growth rate is estimated with function lambda3(), which estimates the population growth rate (\(\lambda\)) as the dominant eigenvalue of each matrix provided (technically, the largest real part of the estimated eigenvalues). Where matrices are large and sparse, as in most historical or age-by-stage cases, lambda3() first converts matrices to sparse format in order to speed up estimation. The function slambda3() estimates the log stochastic population growth rate in its instantaneous form (\(a = \text{log} \lambda _{S}\)). This is estimated as the mean log discrete population growth rate across a user-specified number of random draws of the supplied annual matrices:

\[\begin{equation} \tag{16} a = \text{log} \lambda _{S} = \frac{1}{T} \sum _{t=1}^{T} \text{log} \frac{N _{t}}{N _{t-1}} \end{equation}\]

where \(N _t\) is the population size in occasion \(t\), and \(T\) is the number of occasions projected forward, set to 10,000 by default. Function slambda3() does not shuffle across patches or populations, instead shuffling within patches, or shuffling annual matrices calculated as element-wise means of patch matrices within the same population and the same time. The methodology is based on Morris and Doak (2002), though it accounts for spatial averaging of patches and can easily handle large and sparse matrices.
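As a hedged sketch, both growth rates can be estimated from the illustrative simplemats object as follows.

```r
lambda3(simplemats)                  # deterministic lambda per annual matrix
slambda3(simplemats, times = 10000)  # log stochastic growth rate, a = log lambda_S
```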
Stable stage distribution and reproductive value

In ahistorical analysis, the stable stage distribution and the reproductive values of stages are estimated as the standardized right and left eigenvectors, respectively, associated with the dominant eigenvalue of the matrix. Standardization of the stable stage distribution is handled by dividing each respective element of the right eigenvector by the sum of the elements in that eigenvector. Standardization of the reproductive value vector is handled by dividing each element in the left eigenvector by the value of the first non-zero element in that eigenvector.

The methods mentioned above apply to historical matrices as well. However, as described, they only provide the stable stage distribution and reproductive values of stage pairs. We provide two functions, stablestage3() and repvalue3(), to allow the estimation of these vectors for both ahistorical and historical MPMs. When provided with a historical MPM, these functions also estimate historically-corrected stable stage distributions and reproductive value vectors, which correspond to the original stages in the associated life history model rather than to the stage pairs, and so are comparable against the ahistorical stable stage distributions and reproductive value vectors. The historically-corrected stable stage proportion for stage \(j\) is estimated as the sum of all stable stage proportions for stage \(j\) in occasion t across all stages in occasion t-1, as in:

\[\begin{equation} \tag{17} SS _j = \sum _{l=1}^{m} w _{jl} \end{equation}\]

where \(l\) is stage in occasion t-1, \(m\) is the number of stages, \(SS_j\) is the stable stage proportion of stage \(j\), and \(w _{jl}\) is the stable stage proportion of stage pair \(jl\) given as the standardized right eigenvector corresponding to the dominant eigenvalue of the hMPM. The historically-corrected reproductive value of stage \(j\) is calculated as the sum of all reproductive values for stage \(j\) in occasion t across all stages in occasion t-1, weighted by the stable stage proportion of the respective stage pair divided by the stable stage proportion of stage \(j\) at occasion t (Ehrlén 2000), as in:

\[\begin{equation} \tag{18} RV _j = \sum _{l=1}^{m} v _{jl} (w _{jl} / SS _{j}) \end{equation}\]

where \(RV _j\) refers to the reproductive value of stage \(j\), and \(v _{jl}\) refers to the reproductive value of the stage pair as given by the standardized left eigenvector associated with the dominant eigenvalue of the historical MPM. The influence of history can make these numbers differ quite dramatically from those produced by ahistorical matrices.

Package lefko3 also handles the estimation of stochastic stable stage distribution and reproductive value vectors. It handles this through projection of shuffled temporal matrices, typically 10,000 occasions forward, though the exact number can be set as needed. According to the stochastic strong and weak ergodic theorems, this random shuffling should eventually yield a roughly stationary distribution of stage proportions and stage reproductive values. Thus, the stochastic stable structure is estimated as the arithmetic mean vector of the final set of typically one or two thousand predicted stage proportion vectors in such a projection.
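As a hedged sketch with the illustrative simplemats object:

```r
stablestage3(simplemats)                     # deterministic stable stage structure
repvalue3(simplemats)                        # deterministic reproductive values
stablestage3(simplemats, stochastic = TRUE)  # long-run mean stage structure
```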
The stochastic reproductive value is more complicated. Imagine a vector, \(\mathbf{x}(t)\), which includes the expected number of offspring produced by each stage in occasion t+1. This is referred to as the undiscounted reproductive value vector because it is not standardized, and so is likely to eventually move to an extreme value such as infinity. It turns out that \(\mathbf{x}(t)\) is related to other occasions in this projection via:

\[\begin{equation} \tag{19} \mathbf{x}^\top(t) = \mathbf{x}^\top(t+1)\mathbf{A}_t \end{equation}\]

Standardizing leads to the discounted reproductive value vector, as below:

\[\begin{equation} \tag{20} \mathbf{v}(t) = \frac{\mathbf{x}(t)}{\lVert \mathbf{x}(t) \rVert} \end{equation}\]

The reproductive value vector can then be projected as well, as in

\[\begin{equation} \tag{21} \mathbf{v}^\top(t) = \frac{\mathbf{v}^\top(t+1)\mathbf{A}_t}{\lVert \mathbf{v}^\top(t+1)\mathbf{A}_t \rVert} \end{equation}\]

The reproductive value is projected backwards in time, and package lefko3 takes its average over the final one or two thousand occasions of the backward projection.

Sensitivity and elasticity

Package lefko3 contains functions allowing users to conduct deterministic and stochastic sensitivity and elasticity analyses. Sensitivity and elasticity analyses are forms of perturbation analysis, and we urge readers to consult Caswell (2001) and Caswell (2019) to become fully acquainted with the topic. Here, we discuss just the most important aspects needed to understand these analyses as conducted using lefko3.

The sensitivity of \(\lambda\), the deterministic population growth rate, to a specific element in a projection matrix can be calculated using eigen analysis as

\[\begin{equation} \tag{22} \frac{\partial \lambda}{\partial a _{kj}} = \frac{\bar{v} _{k} w _{j}}{\langle \mathbf{w}, \mathbf{v} \rangle} \end{equation}\]

Here, \(\mathbf{w}\) is the stable stage distribution vector calculated as the right eigenvector of the dominant eigenvalue of the matrix (standardized to sum to 1), \(\mathbf{v}\) is the reproductive value vector calculated as the associated left eigenvector (standardized by dividing by the value of the first non-zero element), and \(\bar{v} _{k}\) is the complex conjugate of element \(k\) of \(\mathbf{v}\). Here, \(k\) is the stage in occasion t+1 and in the ahMPM refers to the corresponding row, while \(j\) refers to stage in occasion t and in an ahMPM refers to the corresponding column. The term \(\langle \mathbf{w}, \mathbf{v} \rangle\) refers to the scalar product of these vectors.

In the hMPM case, the basic method is essentially the same as in equation 22, although the equation itself changes somewhat, with the sensitivity of \(\lambda\) to each matrix element given as

\[\begin{equation} \tag{23} \frac{\partial \lambda}{\partial a _{kjl}} = \frac{\bar{v} _{kj} w _{jl}}{\langle \mathbf{w}, \mathbf{v} \rangle} \end{equation}\]

Here, \(k\) is stage in occasion t+1, \(j\) is stage in occasion t, \(l\) is stage in occasion t-1, \(kj\) refers both to the stage pair in occasions t+1 and t as well as to the corresponding row in the historical matrix, and \(jl\) refers both to the stage pair in occasions t and t-1 as well as to the corresponding column in the historical matrix. Historically-corrected sensitivities may also be estimated for the basic stages of the life history model by using the historically-corrected stable stage distributions and reproductive values given in equations 17 and 18 as input in equation 22.

Sensitivities show how much a small, additive change in a matrix element can alter \(\lambda\), estimating the local slope of \(\lambda\) given in terms of the matrix element (Caswell 2001). Sensitivities are also typically non-zero even for impossible transitions represented by zeros in the matrix.
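A hedged sketch of sensitivity estimation with the illustrative simplemats object follows; the stochastic option is an assumption based on the analyses described above.

```r
simplesens <- sensitivity3(simplemats)                    # deterministic
stochsens  <- sensitivity3(simplemats, stochastic = TRUE) # stochastic version
```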
Elasticities, in contrast, show the influence of small, proportional changes in matrix elements on \(\lambda\), essentially given as local slopes of \(\log \lambda\) in terms of the logarithms of matrix elements. This results in impossible transitions yielding elasticities equal to 0. In the ahistorical case, the elasticity, \(e _{kj}\), of \(\lambda\) to change in a matrix element \(a _{kj}\) is given as \[\begin{equation} \tag{24} e _{kj} = \frac{a _{kj}}{\lambda} \frac{\partial \lambda}{\partial a _{kj}} \end{equation}\] The historical case is just an extension of the above. \[\begin{equation} \tag{25} e _{kjl} = \frac{a _{kjl}}{\lambda} \frac{\partial \lambda}{\partial a _{kjl}} \end{equation}\] Here, we use row and column definitions equivalent to those used in the historical sensitivity case. This makes the elasticity a function of the sensitivity of \(\lambda\) to the matrix element, as well as of the value of the matrix element itself. Thus, structural zeros in the hMPM must have elasticity equal to 0, although it is entirely possible that they have non-zero sensitivities. Elasticities calculated for hMPMs can be compared to those calculated in ahMPMs easily because elasticities may be added by stage or transition type, with all elasticities corresponding to a specific matrix necessarily summing to 1.0. Thus, we can calculate the stage pair elasticities resulting from a historical MPM analysis as \[\begin{equation} \tag{26} e _{kj} = \sum_{l=1}^{m} e _{kjl} \end{equation}\] with all symbol definitions as before. We have provided a summary() function for elasticity output in lefko3 that automatically groups the resulting elasticities by the kind of ahistorical or historical transition.

Stochastic sensitivity and elasticity analyses are also available. Per Caswell (2001), the sensitivity of \(a = \text{log} \lambda _{S}\) to changes in a specific element is given as: \[\begin{equation} \tag{27} \frac{\partial \text{log} \lambda _S}{\partial a _{kj}} = \lim_{T \to \infty} \frac{1}{T} \sum_{t=0}^{T-1} \frac{\mathbf{v}(t+1) \mathbf{w}^\top(t)}{\mathbf{v}^\top(t+1) \mathbf{w}(t+1)} \end{equation}\] where \(t\) refers to a specific monitoring occasion and \(T\) refers to the number of occasions projected forward. Similarly, the stochastic elasticity of \(a = \text{log} \lambda _{S}\) to changes in a specific element is given as: \[\begin{equation} \tag{28} \frac{\partial \text{log} \lambda _S}{\partial \text{log} a _{kj}} = \lim_{T \to \infty} \frac{1}{T} \sum_{t=0}^{T-1} \frac{\bigl(\mathbf{v}(t+1) \mathbf{w}^\top(t) \bigr) \circ \mathbf{A}_t}{\mathbf{v}^\top(t+1) \mathbf{w}(t+1)} \end{equation}\] where \(\mathbf{A}_t\) refers to the A matrix corresponding to occasion t. Stochastic sensitivities for hMPMs may be converted to historically-corrected format as in the deterministic case, and stochastic elasticities may be summed as before.

Users working with elasticities, whether deterministic or stochastic, may find the summary.lefkoElas() function useful. If an elasticity analysis is conducted on a lefkoMat object, then the information in that object can be used to summarize summed elasticity according to different sorts of transitions. In the ahistorical case, elasticities are sorted by growth, stasis, shrinkage, and fecundity. In the historical case, they are sorted by transition pairs across three consecutive occasions (e.g., stasis followed by growth vs. shrinkage followed by stasis vs. growth followed by reproduction).
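The time averages in equations 27 and 28 are naturally computed by simulation. Below is a minimal sketch under simplifying assumptions (annual matrices drawn i.i.d. with equal probability, and a fixed burn-in trimmed from both ends of the projection); it illustrates the formulas rather than reproducing lefko3's internal implementation.

```python
import numpy as np

def stochastic_sens_elas(mats, T=10000, burn=1000, seed=1):
    """Simulation sketch of stochastic sensitivity and elasticity of
    log lambda_S (eqs. 27-28): project w forward and v backward through
    one random matrix sequence, then average the summands over t."""
    rng = np.random.default_rng(seed)
    n = mats[0].shape[0]
    seq = [mats[rng.integers(len(mats))] for _ in range(T)]

    w = np.full(n, 1.0 / n)
    ws = [w]
    for A in seq:                          # forward projection of w
        w = A @ w
        w = w / w.sum()
        ws.append(w)

    v = np.full(n, 1.0 / n)
    vs = [v]
    for A in reversed(seq):                # backward projection of v (eq. 21)
        v = v @ A                          # v^T(t) = v^T(t+1) A_t, renormalized
        v = v / v.sum()
        vs.append(v)
    vs.reverse()                           # vs[t] now corresponds to occasion t

    S = np.zeros((n, n))
    E = np.zeros((n, n))
    for t in range(burn, T - burn):
        num = np.outer(vs[t + 1], ws[t])   # v(t+1) w(t)^T
        den = vs[t + 1] @ ws[t + 1]        # v(t+1)^T w(t+1)
        S += num / den                     # eq. 27 summand
        E += (num * seq[t]) / den          # eq. 28 summand (Hadamard product)
    S /= (T - 2 * burn)
    E /= (T - 2 * burn)
    return S, E
```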
Life table response experiments (LTREs)

MPMs are very useful tools to describe and explore population dynamics. However, sometimes we may wish to know how specific factors or treatments impact the population growth rate, and through what elements and vital rates those impacts are achieved. Life table response experiments (LTREs) provide the means to evaluate these impacts via methodologies that compare matrices for their impacts on the growth rate of some reference matrix or set of matrices. Several forms of LTRE exist. The simplest and earliest developed is the fixed, one-way LTRE, which evaluates the impact of a single factor on the asymptotic population growth rate \(\lambda\). Imagine two matrices, the first a matrix derived from some treatment such as a particular management regime, \(\mathbf{A^{(m)}}\), and a reference matrix from a control regime, \(\mathbf{A^{(R)}}\). Per Caswell (2001), if we estimate a matrix \(\mathbf{A^{(m+R)/2}}\) that is the element-wise arithmetic mean matrix of \(\mathbf{A^{(m)}}\) and \(\mathbf{A^{(R)}}\), then: \[\begin{equation} \tag{29} \Delta \lambda = \lambda^{(m)} - \lambda^{(R)} \approx \sum_{j,i} \left( a_{ji}^{(m)} - a_{ji}^{(R)} \right) \left. \frac{\partial \lambda}{\partial a_{ji}} \right| _{\mathbf{A^{(m+R)/2}}} \end{equation}\] A minimal numerical sketch of this decomposition is given at the end of this section.

Since 2010, several stochastic life table response experiment (sLTRE) methodologies have also been developed. sLTREs measure the impact of one or more treatments on the log stochastic growth rate, \(a = \text{log} \lambda _{S}\). However, the treatment and reference here are not single matrices, but sets of matrices that vary randomly across time. Here, we assume that \(\text{log} \lambda^{(m)}\) and \(\text{log} \lambda^{(R)}\) are the log stochastic growth rates of the treatment MPM and the reference MPM, respectively. Package lefko3 currently implements the Davison et al. (2010) simulated sLTRE, where: \[\begin{equation} \tag{30} \Delta \text{log} \lambda _{S} = \text{log} \lambda^{(m)} - \text{log} \lambda^{(R)} \approx \sum_{j,i} \left( [\text{log} \mu_{ji}^{(m)} - \text{log} \mu_{ji}^{(R)}] E_{ji}^{\mu} + [\text{log} \sigma_{ji}^{(m)} - \text{log} \sigma_{ji}^{(R)}] E_{ji}^{\sigma} \right) \end{equation}\] In the above equation, \(E_{ji}^{\mu}\) and \(E_{ji}^{\sigma}\) are the stochastic elasticities of the log stochastic growth rate to changes in the mean and standard deviation, respectively, of the element at row \(j\) and column \(i\) in the reference matrix set. Function ltre3() handles all of these calculations. Davison et al. (2013) presented an analytical solution to the sLTRE, and we are currently working to implement this version in lefko3 as well.

Further analyses

Users can take the MPMs produced by MPM creation functions in package lefko3 and plug them into MPM or matrix analysis functions in other packages. Matrices produced by lefko3 are stored within lefkoMat objects, and users wishing to work with analysis functions in other packages need only extract the matrices from them. This allows the use of all functions that work with matrices, including functions in base R such as eigen(), as well as in packages such as popbio (Stubben and Milligan 2007) and popdemo (Stott, Hodgson, and Townley 2012). We encourage users to explore whether the packages and functions they wish to use can handle sparse matrices, as well as large matrices; some were not designed to, and can fail or yield unpredictable behavior, particularly when applied to the historical matrices produced by lefko3.
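The following is the sketch referenced above for the fixed, one-way LTRE of equation 29, again in generic Python/numpy terms rather than via lefko3's ltre3(); the treatment and reference matrices are placeholders for real estimates.

```python
import numpy as np

def fixed_ltre(A_m, A_r):
    """One-way fixed LTRE (eq. 29): element-wise contributions to the
    difference in lambda, with sensitivities evaluated at the element-wise
    mean of the treatment (A_m) and reference (A_r) matrices."""
    A_mid = (A_m + A_r) / 2.0
    vals, right = np.linalg.eig(A_mid)
    k = np.argmax(vals.real)
    w = np.abs(right[:, k].real)
    w = w / w.sum()
    vals_l, left = np.linalg.eig(A_mid.T)
    kl = np.argmax(vals_l.real)
    v = np.abs(left[:, kl].real)
    sens = np.outer(v, w) / np.dot(w, v)   # sensitivity of lambda at A_mid
    return (A_m - A_r) * sens              # summing this approximates Delta lambda
```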
We are grateful to two anonymous reviewers whose scrutiny improved the quality of this vignette. The project resulting in this package and this tutorial was funded by Grant-In-Aid 19H03298 from the Japan Society for the Promotion of Science.

Bartoń, Kamil A. 2014. "MuMIn: Multi-Model Inference." https://CRAN.R-project.org/package=MuMIn.
Bates, Douglas, Martin Maechler, Ben Bolker, and Steve Walker. 2015. "Fitting Linear Mixed-Effects Models Using Lme4." Journal of Statistical Software 67 (1): 1–48. https://doi.org/10.18637/jss.v067.i01.
Beissinger, Steven R., and Michael Ian Westphal. 1998. "On the Use of Demographic Models of Population Viability in Endangered Species Management." Journal of Wildlife Management 62 (3): 821–41. https://doi.org/10.2307/3802534.
Brooks, Mollie E., Kasper Kristensen, Koen J. van Benthem, Arni Magnusson, Casper W. Berg, Anders Nielsen, Hans J. Skaug, Martin Mächler, and Benjamin M. Bolker. 2017. "glmmTMB Balances Speed and Flexibility Among Packages for Zero-Inflated Generalized Linear Mixed Modeling." The R Journal 9 (2): 378–400. https://doi.org/10.32614/RJ-2017-066.
Brownie, C., James E. Hines, James D. Nichols, Kenneth H. Pollock, and Jay B. Hestbeck. 1993. "Capture-Recapture Studies for Multiple Strata Including Non-Markovian Transitions." Biometrics 49 (4): 1173–87. https://doi.org/10.2307/2532259.
Caswell, Hal. 2001. Matrix Population Models: Construction, Analysis, and Interpretation. Second edition. Sunderland, Massachusetts, USA: Sinauer Associates, Inc.
———. 2019. Sensitivity Analysis: Matrix Methods in Demography and Ecology. Demographic Research Methods. Cham, Switzerland: Springer Nature. https://doi.org/10.1007/978-3-030-10534-1.
Cole, Diana J., Byron J. T. Morgan, Rachel S. McCrea, Roger Pradel, Olivier Gimenez, and Remi Choquet. 2014. "Does Your Species Have Memory? Analyzing Capture-Recapture Data with Memory Models." Ecology and Evolution 4 (11): 2124–33. https://doi.org/10.1002/ece3.1037.
Davison, Raziel, Hans Jacquemyn, Dries Adriaens, Olivier Honnay, Hans de Kroon, and Shripad Tuljapurkar. 2010. "Demographic Effects of Extreme Weather Events on a Short-Lived Calcareous Grassland Species: Stochastic Life Table Response Experiments." Journal of Ecology 98 (2): 255–67. https://doi.org/10.1111/j.1365-2745.2009.01611.x.
Davison, Raziel, Florence Nicole, Hans Jacquemyn, and Shripad Tuljapurkar. 2013. "Contributions of Covariance: Decomposing the Components of Stochastic Population Growth in Cypripedium Calceolus." The American Naturalist 181 (3): 410–20. https://doi.org/10.1086/669155.
deVries, Charlotte, and Hal Caswell. 2018. "Demography When History Matters: Construction and Analysis of Second-Order Matrix Population Models." Theoretical Ecology 11 (2): 129–40. https://doi.org/10.1007/s12080-017-0353-0.
Doak, Daniel F., Ellen Waddle, Ryan E. Langendorf, Allison M. Louthan, Nathalie Isabelle Chardon, Reilly R. Dibner, Douglas A. Keinath, et al. 2021. "A Critical Comparison of Integral Projection and Matrix Projection Models for Demographic Analysis." Ecological Monographs 91 (2): e01447. https://doi.org/10.1002/ecm.1447.
Easterling, Michael R., Stephen P. Ellner, and Philip M. Dixon. 2000. "Size-Specific Sensitivity: Applying a New Structured Population Model." Ecology 81 (3): 694–708. https://doi.org/10.1890/0012-9658(2000)081[0694:SSSAAN]2.0.CO;2.
Ehrlén, Johan. 2000. "The Dynamics of Plant Populations: Does the History of Individuals Matter?" Ecology 81 (6): 1675–84. https://doi.org/10.1890/0012-9658(2000)081[1675:TDOPPD]2.0.CO;2.
Ellner, Stephen P., and Mark Rees. 2006. "Integral Projection Models for Species with Complex Demography." American Naturalist 167 (3): 410–28. https://doi.org/10.1086/499438.
Jenks, George F. 1967. "The Data Model Concept in Statistical Mapping." International Yearbook of Cartography 7: 186–90.
Jensen, A. L. 1995. "Simple Density-Dependent Matrix Model for Population Projection." Ecological Modelling 77 (1): 43–48. https://doi.org/10.1016/0304-3800(93)E0081-D.
Kendall, Bruce E., Masami Fujiwara, Jasmin Diaz-Lopez, Sandra Schneider, Jakob Voigt, and Soren Wiesner. 2019. "Persistent Problems in the Construction of Matrix Population Models." Ecological Modelling 406 (August): 33–43. https://doi.org/10.1016/j.ecolmodel.2019.03.011.
Leslie, P. H. 1959. "The Properties of a Certain Lag Type of Population Growth and the Influence of an External Random Factor on a Number of Such Populations." Physiological Zoology 32 (3): 151–59. https://doi.org/10.1086/physzool.32.3.30152218.
Morris, William F., and Daniel F. Doak. 2002. Quantitative Conservation Biology: Theory and Practice of Population Viability Analysis. Sunderland, Massachusetts, USA: Sinauer Associates, Inc.
Pradel, Roger, C. Wintrebert, and O. Gimenez. 2003. "A Proposal for a Goodness-of-Fit Test to the Arnason-Schwarz Multisite Capture-Recapture Model." Biometrics 59: 43–53. https://doi.org/10.1111/1541-0420.00006.
Salguero-Gómez, Roberto, and Brenda B. Casper. 2010. "Keeping Plant Shrinkage in the Demographic Loop." Journal of Ecology 98: 312–23. https://doi.org/10.1111/j.1365-2745.2009.01616.x.
Shefferson, Richard P., Tiiu Kull, Michael J. Hutchings, Marc-André Selosse, Hans Jacquemyn, Kimberly M. Kellett, Eric S. Menges, et al. 2018. "Drivers of Vegetative Dormancy Across Herbaceous Perennial Plant Species." Ecology Letters 21 (5): 724–33. https://doi.org/10.1111/ele.12940.
Shefferson, Richard P., Shun Kurokawa, and Johan Ehrlén. 2021. "Lefko3: Analysing Individual History Through Size-Classified Matrix Population Models." Methods in Ecology and Evolution 12 (2): 378–82. https://doi.org/10.1111/2041-210X.13526.
Shefferson, Richard P., Ryo Mizuta, and Michael J. Hutchings. 2017. "Predicting Evolution in Response to Climate Change: The Example of Sprouting Probability in Three Dormancy-Prone Orchid Species." Royal Society Open Science 4 (1): 160647. https://doi.org/10.1098/rsos.160647.
Shefferson, Richard P., Brett K. Sandercock, Joyce Proper, and Steven R. Beissinger. 2001. "Estimating Dormancy and Survival of a Rare Herbaceous Perennial Using Mark-Recapture Models." Ecology 82 (1): 145–56. https://doi.org/10.1890/0012-9658(2001)082[0145:EDASOA]2.0.CO;2.
Shefferson, Richard P., Robert J. Warren II, and H. Ronald Pulliam. 2014. "Life History Costs Make Perfect Sprouting Maladaptive in Two Herbaceous Perennials." Journal of Ecology 102 (5): 1318–28. https://doi.org/10.1111/1365-2745.12281.
Stott, Iain, David J. Hodgson, and Stuart Townley. 2012. "Popdemo: An R Package for Population Demography Using Projection Matrix Analysis." Methods in Ecology and Evolution 3 (5): 797–802. https://doi.org/10.1111/j.2041-210X.2012.00222.x.
Stubben, Chris J., and Brook G. Milligan. 2007. "Estimating and Analyzing Demographic Models Using the Popbio Package in R." Journal of Statistical Software 22 (11): 1–23. https://doi.org/10.18637/jss.v022.i11.
Wardle, Glenda M. 1998. "A Graph Theory Approach to Demographic Loop Analysis." Ecology 79 (7): 2539–49. https://doi.org/10.1890/0012-9658(1998)079[2539:AGTATD]2.0.CO;2.
Yee, Thomas W. 2015. "Vector Generalized Linear and Additive Models: With an Implementation in R." https://CRAN.R-project.org/package=VGAM.
Yee, Thomas W., and C. J. Wild. 1996. "Vector Generalized Additive Models." Journal of the Royal Statistical Society, Series B 58 (3): 481–93.
Zeileis, Achim, Christian Kleiber, and Simon Jackman. 2008. "Regression Models for Count Data in R." Journal of Statistical Software 27 (1): 1–25. https://doi.org/10.18637/jss.v027.i08.
Earthquake interevent time distribution in Kachchh, Northwestern India

Sumanta Pasari & Onkar Dikshit

Earth, Planets and Space, volume 67, article number 129 (2015). Published: 15 August 2015.

Statistical properties of earthquake interevent times have long been a topic of interest to seismologists and earthquake professionals, mainly for hazard-related concerns. In this paper, we present a comprehensive study on the temporal statistics of earthquake interoccurrence times of the seismically active Kachchh peninsula (western India) from thirteen probability distributions. Those distributions are exponential, gamma, lognormal, Weibull, Levy, Maxwell, Pareto, Rayleigh, inverse Gaussian (Brownian passage time), inverse Weibull (Frechet), exponentiated exponential, exponentiated Rayleigh (Burr type X), and exponentiated Weibull distributions. Statistical inferences of the scale and shape parameters of these distributions are discussed from the maximum likelihood estimations and the Fisher information matrices. The latter are used as a surrogate tool to appraise the parametric uncertainty in the estimation process. Model performance was assessed on the basis of two goodness-of-fit tests: the maximum likelihood criterion, with its modification to the Akaike information criterion (AIC), and the Kolmogorov-Smirnov (K-S) minimum distance criterion. These results reveal that (i) the exponential model provides the best fit, (ii) the gamma, lognormal, Weibull, inverse Gaussian, exponentiated exponential, exponentiated Rayleigh, and exponentiated Weibull models provide an intermediate fit, and (iii) the rest, namely Levy, Maxwell, Pareto, Rayleigh, and inverse Weibull, fit poorly to the earthquake catalog of Kachchh and its adjacent regions. This study also analyzes the present-day seismicity in terms of the estimated recurrence interval and conditional probability curves (hazard curves). The estimated cumulative probability and the conditional probability of a magnitude 5.0 or higher event reach 0.8–0.9 by 2027–2036 and 2034–2043, respectively. These values have significant implications in a variety of practical applications including earthquake insurance, seismic zonation, location identification of lifeline structures, and revision of building codes.

The Kachchh (Kutch) province of Gujarat, northwestern India, is a unique stable continental region (SCR) of the world that has experienced two large intraplate earthquakes within a span of 182 years (Mw 7.8 in 1819 and Mw 7.7 in 2001). In addition to these two disastrous events, it has a long history of infrequent but moderate earthquake occurrences, indicating slow but continuous stress release in these regions (Chandra 1977; Gupta et al. 2001; Rajendran and Rajendran 2001; Rastogi 2001, 2004; Jade et al. 2002). The Kachchh peninsula, which mostly falls in Zone V (highest seismicity and potential for magnitude 8.0 earthquakes) on the seismic zoning map of India (BIS: Bureau of Indian Standards 2002), is an assemblage of several active faults. Those faults are the Allah Bund Fault (ABF), Island Belt Fault (IBF), Kachchh Mainland Fault (KMF), Bhuj Fault (BF), Katrol Fault (KF), Wagad Fault (WF), Nagar Parkar Fault (NPF), and the Kathiawar Fault (KTF) (Biswas 1987, 2005; Rajendran and Rajendran 2001; Rastogi 2001; Morino et al. 2008).
The genesis of earthquakes from these faults, their source characterizations, stress-field analyses, and most importantly the vulnerability assessment of such seismically active zones are of significant importance for the safety of human lives and critical infrastructure such as power plants, refineries, schools, hospitals, shopping malls, and other lifeline structures (e.g., archeological monuments) situated in the nearby major cities and industrial hubs in this heartland of northwestern India. Earthquake studies based on physical and geological models are useful; however, the current paper focuses on empirical earthquake recurrence modeling. Such modeling has become an integral part of nationwide hazard assessment and catastrophe insurance programs at many seismological agencies and private organizations (Lee et al. 2011; Working Group on California Earthquake Probabilities 2013). Methods such as paleoseismic investigations of mapped faults or geodetic monitoring to determine strain accumulation patterns are helpful for forecasting earthquakes. However, the primary issue with these approaches is that many disastrous earthquakes do not reappear on previously identified faults (Lee et al. 2011). To address this limitation, statistical seismology of earthquake occurrence and forecasting has become essential for seismic hazard assessment of large geographical areas (Jordan 2006; Shebalin et al. 2014).

In this paper, we analyze earthquake interevent times of magnitude 5.0 or higher events in the seismic-prone Kachchh region from thirteen probability distributions. Those distributions are exponential, gamma, lognormal, Weibull, Levy, Maxwell, Pareto, Rayleigh, inverse Gaussian (Brownian passage time), inverse Weibull (Frechet), exponentiated exponential, exponentiated Rayleigh (Burr type X), and exponentiated Weibull models. We seek the best probability model(s) for earthquake forecasting, and subsequently generate a number of conditional probability curves (hazard curves) to assess the present-day seismicity in the Kachchh region.

Geology and seismotectonic framework of Kachchh

The Kachchh peninsula largely consists of Quaternary/Cenozoic sediments, Deccan volcanic rocks, and Jurassic sandstones (Biswas 1987; Gupta et al. 2001). Geomorphologically, the entire region can be broadly categorized into the following two major zones: the Rann area that often gets submerged by seawater, and the highland zone that comprises uplifts, elevated landforms, and residual depressions (Biswas 1987, 2005; Yadav et al. 2008). The Rann area is essentially an uninhabitable desert of mud flats and salt pans, whereas the highland zone is semi-arid, containing fluviomarine sediments and the Banni plain grasslands (Biswas 1987). Major structural features of the Kachchh region include several EW trending faults, folds, and a rift basin, which is bounded by the following two extensional faults: the south-dipping Nagar Parkar Fault (NPF) in the north and the north-dipping Kathiawar Fault (KTF) in the south (Fig. 1). Other major faults in this region are the north-dipping Allah Bund Fault (ABF), south-dipping Kachchh Mainland Fault (KMF), Katrol Fault (KF), Bhuj Fault (BF), and the south-dipping North Wagad Fault (NWF). These fault zones are the source of many moderate to large intraplate earthquakes in this region.
For instance, the most disastrous Mw 7.7 (intensity X+ on the modified Mercalli intensity (MMI) scale) Republic Day Bhuj earthquake along the NWF (08:46 IST, January 26, 2001; epicentral location: 23.412° N, 70.232° E; focal depth: 23 km) caused 13,819 human deaths and US $10 billion in economic losses, and damaged over one million houses (Rastogi 2001). Another large event, the 1819 Rann of Kachchh earthquake (Mw 7.8), occurred along the ABF and created a fault scarp approximately 4 to 6 m high in this region (Rajendran and Rajendran 2001; Morino et al. 2008). The Anjar 1956 (Mw 6.0) event occurred along the KF (Chung and Gao 1995). In addition to these major faults, a number of uplifts (Fig. 1), namely the Kachchh Mainland Uplift, Kathiawar Uplift, Pachham, Khadir, Bela, Wagad, and Chobar uplifts, and some minor NE/NW trending faults/lineaments characterize the tectonic setting of the Kachchh region (Biswas 1987; Rastogi 2001).

Fig. 1 Seismotectonic map of Kachchh region: a detailed seismotectonic map of the Kachchh region comprising several faults, lineaments, uplifts, and other structural features; the Kachchh Mainland Uplift is in the south of the Kachchh Mainland Fault, whereas the Wagad Uplift is in the north of the Wagad Fault; PU, Pachham Uplift; KU, Khadir Uplift; BU, Bela Uplift; CU, Chobar Uplift. (Inset) Indian plate boundaries are highlighted to show the geographical location of the Kachchh peninsula. AMD, Ahmedabad; CHN, Chennai (modified after Rastogi 2001)

Rationale and previous works

In this paper, we consider earthquake occurrence phenomena to be a statistical process. We assume that the earthquake interevent times are random variables associated with some probability distributions. Various properties of these underlying distributions provide important insights into earthquake recurrence intervals, elapsed time (time elapsed since the last earthquake), residual time (time remaining to an earthquake), and related concepts, which may finally be integrated in a systematic manner to arrive at long-term earthquake forecasting in a specified zone of interest (SSHAC: Senior Seismic Hazard Analysis Committee 1997). Extensive studies of earthquake hazards and their statistical analyses are being undertaken globally to quantify anomalous behavior of earthquake risks in a seismically active region. These studies also facilitate the examination of hypothetical earthquake scenarios that aid decisions on how much money a government should allocate for disaster utility or an insurance company should collect from an individual as a seismic-risk coverage premium (SSHAC: Senior Seismic Hazard Analysis Committee 1997; Lee et al. 2011; Working Group on California Earthquake Probabilities 2013).

Probabilistic earthquake modeling has crossed several milestones over the years. During the development stage of seismic renewal process theory, the exponential distribution (due to the Poissonian assumption of the number of earthquake events) used to be the favored distribution in representing a sequence of earthquake inter-arrival times (Cornell 1968; Hagiwara 1974; Baker 2008). Later, however, many researchers (e.g., Reid 1910; Anagnos and Kiremidjian 1988; Matthews et al. 2002; Baker 2008) pointed out that the time-independent Poissonian model is in some disagreement with the physics of earthquake-generating mechanisms. Therefore, a number of time-dependent renewal models have subsequently evolved (Utsu 1984; Parvez and Ram 1997; Matthews et al. 2002; Tripathi 2006; Yadav et al.
2008, 2010; Working Group on California Earthquake Probabilities 2013; Pasari and Dikshit 2014a, b). A pioneering contribution was made by the Japanese researcher Utsu (1984), who used four renewal probability distributions (Weibull, gamma, lognormal, and exponential) to estimate earthquake recurrence intervals in Japan and its surrounding areas. Later, several researchers (e.g., Parvez and Ram 1997; Tripathi 2006; Yadav et al. 2008, 2010; Yazdani and Kowsari 2011; Chen et al. 2013; Pasari and Dikshit 2014a; Pasari 2015) applied similar methodologies for their respective geographic regions of study. The suitability of other probability models such as the Gaussian distribution (Papazachos et al. 1987), the negative binomial distribution (Dionysiou and Papadopoulos 1992), the Levy distribution (Sotolongo-Costa et al. 2000; Pasari and Dikshit 2014b), the Pareto group of distributions (Kagan and Schoenberg 2001; Ferraes 2003), the generalized gamma distribution (Bak et al. 2002), the Brownian passage time distribution (Matthews et al. 2002; Pasari and Dikshit 2014b), the Rayleigh distribution (Ferraes 2003; Yazdani and Kowsari 2011), the inverse Weibull distribution (Pasari and Dikshit 2014a), and the exponentiated exponential distribution (Pasari and Dikshit 2014c; Pasari 2015) has also been explored for identification of the most suitable probability model for a given earthquake catalog. In this paper, we re-examine the earthquake interevent studies conducted by Tripathi (2006) and Yadav et al. (2008) based on a versatile set of thirteen probability models to provide a fresh perspective on the most suitable probability distribution in this unique intraplate seismogenic zone of northwestern India. The results are discussed in conjunction with the probabilistic assessment of earthquake hazards within this region.

Earthquake data

We used a complete and homogeneous earthquake catalog (Yadav et al. 2008 in Pure and Applied Geophysics 165, 1813–1833) comprising 15 intracontinental earthquake events (M ≥ 5.0) that occurred in the Kachchh region (23–25° N, 68–71° E) during 1819 to 2006. Details of these events are listed in Table 1, and their epicentral distributions are shown in Fig. 2. This catalog is the most recently updated catalog, as no M ≥ 5.0 earthquakes have occurred in the Kachchh region since 2006.

Table 1 List of M ≥ 5.0 earthquakes in Kachchh and its adjoining regions (Yadav et al. 2008)

Fig. 2 Epicentral distributions of earthquakes: epicentral locations of earthquakes of magnitude M ≥ 5 that occurred in the Kachchh and its adjoining regions during 1819–2006 (modified after Yadav et al. 2008); (Inset above) the Indian plate boundaries (dark blue lines) form a triple junction in the northwest of the Kachchh region; the nearest interplate boundaries of Kachchh, namely the Herat-Chaman plate boundary (~400 km) and the Himalayan plate boundary (~1000 km), are also highlighted; AMD, Ahmedabad; CHN, Chennai (after Gupta et al. 2001); (Inset below) time completeness graph for the Kachchh catalog based on a magnitude-frequency-based visual cumulative method test (Mulargia and Tinti 1985)

It should be noted that the present earthquake catalog consists of modern (instrumental) as well as historical (non-instrumental) events of the Kachchh region from various sources. Those sources include the India Meteorological Department (IMD), United States Geological Survey (USGS), Harvard Centroid Moment Tensor (CMT) catalog, and some published literature (Yadav et al. 2008).
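The interevent times analyzed throughout the paper are simply the successive differences of the event occurrence dates. A minimal sketch, with hypothetical decimal-year dates standing in for the actual entries of Table 1:

```python
import numpy as np

# Hypothetical event dates in decimal years; the real values come from Table 1.
event_dates = np.array([1819.45, 1845.46, 1903.12, 1956.55, 2001.07, 2006.21])

interevent = np.diff(event_dates)        # interevent times (years)
print("mean =", interevent.mean(), "sd =", interevent.std(ddof=1))
```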
The difficulty in converting the scale of earthquake data at different stages with different magnitudes or from different sources has been a consistent challenge in homogenizing this catalog (Biswas 1987; Rundle et al. 2003; Yadav et al. 2008). However, to examine the time completeness of the Kachchh catalog, we performed a magnitude-frequency-based visual cumulative method test (Mulargia and Tinti 1985). Under this test, a graph of the cumulative number of earthquake events against time (in years) was constructed. The equation of the best fit line (in a least-squares sense) was determined. A catalog is considered to be complete with respect to time if the trend of the data stabilizes to approximately a straight line (Mulargia and Tinti 1985). This cumulative straight-line approach is based on the fact that earthquake rates and moment releases are ultimately steady over sufficiently long time periods (Mulargia and Tinti 1985). However, one should keep in mind that this assumption may not be correct. The possibility of extended aftershock durations in low-strain-rate intra-continental regions (e.g., Stein and Liu 2009) and the possibility of substantial variations in seismic activity (e.g., Page and Felzer 2015) may raise questions about this assumption. For the Kachchh catalog, it is observed that the graph between time and the cumulative number of events has a linear relationship with an R-square value greater than 0.85 (inset of Fig. 2). Thus, the studied homogeneous catalog may be considered to be complete with respect to time.

The intensities, casualties, and fault characteristics of some of these earthquakes have been studied in detail (Chung and Gao 1995; Rajendran and Rajendran 2001; Rastogi 2001, 2004; Negishi et al. 2002; Mandal et al. 2005; Mandal and Horton 2007; Morino et al. 2008; Kayal et al. 2012). The 16 June 1819 great Rann of Kachchh earthquake (Mw 7.8; intensity XI on the MMI scale) that occurred near the northwestern international border (~100 km northwest of Bhuj town) shook the entire region, causing 1500 deaths in Kachchh and 500 in Ahmedabad (~250 km from the epicenter). This low-angle reverse faulting earthquake formed a fault scarp 4–6 m in height and 90 km in length trending EW along the ABF (Rajendran and Rajendran 2001; Rastogi 2001). The isoseismals of this earthquake were nearly elliptical with a principal axis oriented in the ENE direction (Rajendran and Rajendran 2001; Yadav et al. 2008). The next damaging earthquake (Mw 6.3; intensity VII on the MMI scale) occurred in the Lakhpat region on 19 June 1845. Another well-studied (Chung and Gao 1995) destructive earthquake is the 1956 Anjar earthquake (Mw 6.0; intensity IX on the MMI scale) that occurred along the KF near Anjar and destroyed much of the local infrastructure and buildings, claiming 115 lives (Chung and Gao 1995; Gupta et al. 2001; Rastogi 2001). The most recent catastrophic earthquake (Mw 7.7, intensity X+ on the MMI scale) in the western part of the peninsular Indian shield occurred on 26 January 2001, followed by more than 2000 aftershocks, including a few with magnitudes above 5.0 (Mandal et al. 2005; Yadav et al. 2008; Kayal et al. 2012). This event caused extensive damage in the neighboring areas, leaving nearly 14,000 people dead, 167,000 injured, and one million people homeless (Miyashita et al. 2001; Rastogi 2001).
Apart from structural damages, intense liquefaction and fluidization occurred in an area of 300 km × 200 km covering the Rann, Banni plain, and several other saline-marshy lowland regions (Rastogi 2001; Kayal et al. 2012). The fault plane solutions of the main shock of the 2001 event suggested a reverse faulting (with some strike-slip component) mechanism, which is analogous to the 1956 Anjar earthquake or the 1819 Rann of Kachchh earthquake (Chung and Gao 1995; Rastogi 2001; Kayal et al. 2002; Negishi et al. 2002). In view of the above threats from earthquake damage, several national and international initiatives have been undertaken for earthquake hazard analysis and catastrophic insurance programs in the seismically active Kachchh region (Kayal et al. 2012; Choudhury et al. 2014). The present study contributes to this body of work by estimating earthquake recurrence intervals and associated hazards in a probabilistic environment.

Methods and results

Our methodology in this paper broadly consists of the following three steps: model description, parameter estimation, and model validation. We briefly discuss each method and present the corresponding results alongside for a better visualization of the concepts.

Thirteen different probability models are considered in the present analysis. The probability density functions of these distributions are shown in Table 2. The respective model parameters, their domains, and specific roles are also listed (Johnson et al. 1995; Murthy et al. 2004). With the known density function \(f(t)\) of a positive random variable \(T\), it is straightforward to obtain the cumulative distribution function \(F(t)\), survival function \(S(t)\), hazard function \(h(t)\), and reverse hazard function \(r(t)\) as \(F(t)=\int_0^t f(u)\,du\), \(S(t) = 1 - F(t)\), \(h(t)=f(t)/S(t)\), and \(r(t)=f(t)/F(t)\).

Table 2 Probability distributions, density functions, and associated Fisher Information Matrices (FIM)

From Table 2, we observe that the domains of all distributions except the Pareto distribution are the entire positive real line. These distributions can also take a variety of shapes depending on their shape parameters. In particular, the shapes of the hazard rate function and reversed hazard rate function have significant importance in understanding whether the residual time (time remaining to a future event) is increasing or decreasing for an increasing elapsed time (Sornette and Knopoff 1997; Matthews et al. 2002). On the other hand, the heavy-tailedness property of a model (e.g., lognormal, Levy, Pareto) allows diversification in seismic risk analysis and associated applications (SSHAC: Senior Seismic Hazard Analysis Committee 1997). In order to calculate the conditional probability of an earthquake for a known elapsed time τ, we introduce a random variable \(V\), corresponding to a waiting time \(v\). The conditional probability of an earthquake in the time interval (τ, τ + v), knowing that no earthquake occurred during the previous τ years, is then calculated as

$$ P\left(V\le \tau +v \mid V\ge \tau \right)=\frac{F\left(\tau +v\right)-F\left(\tau \right)}{1-F\left(\tau \right)}\kern1.44em \left(v>0\right) \tag{1} $$

The maximum likelihood estimation (MLE) method was adopted for parameter estimation not only because of its flexibility and wide applicability, but also for its ability to provide an asymptotic uncertainty measure for the estimates. The MLE method yields consistent estimators that are often desirable in any statistical analysis (Johnson et al. 1995).
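Equation 1 is simple to evaluate once a model and its parameter estimates are in hand. A minimal sketch in Python/scipy; the exponential mean used here is the value estimated later in the paper, and any fitted model exposing a CDF works the same way:

```python
from scipy import stats

def conditional_probability(model, tau, v):
    """Eq. 1: P(V <= tau + v | V >= tau) for a fitted distribution."""
    F = model.cdf
    return (F(tau + v) - F(tau)) / (1.0 - F(tau))

# Exponential model with mean recurrence 13.35 years (estimated below).
model = stats.expon(scale=13.35)
print(conditional_probability(model, tau=9.0, v=10.0))
```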
In brief, the MLE method estimates parameter values by maximizing the (log-)likelihood function on the basis of observed sample values, which makes it well suited to practical applications. In recent years, significant research has focused on quantifying the uncertainties and intricacies of the estimation process. Nevertheless, precise uncertainty information is rarely available because exact distributions of the estimated model parameters are mostly unknown (Johnson et al. 1995). Therefore, a method based on the Fisher information matrices (FIM) is frequently utilized as a surrogate tool to appraise the parametric uncertainty in terms of the variances and confidence bounds of the estimated parameters (Hogg et al. 2005). Let \(I_{p\times p}(\theta)\) be the information matrix, where \(\theta = (\theta_1, \theta_2, \cdots, \theta_p)\), for some integer \(p\), denotes the vector of parameters. Then, \(I_{p\times p}(\theta)\) can be calculated (Hogg et al. 2005) as

$$ {I}_{p\times p}\left(\theta \right)={\left({I}_{ij}\left(\theta \right)\right)}_{i,j=1,2,\cdots, p}=E{\left(-\frac{\partial^2 \ln f\left(T;\theta \right)}{\partial {\theta}_i\partial {\theta}_j}\right)}_{i,j=1,2,\cdots, p}=\frac{1}{n}E{\left(-\frac{\partial^2L\left(T;\theta \right)}{\partial {\theta}_i\partial {\theta}_j}\right)}_{i,j=1,2,\cdots, p} $$

Here, \(E\) is the expectation operator and \(L(T; \theta)\) is the log-likelihood function of the \(n\) sample data points \(\{t_1, t_2, t_3, \ldots, t_n\}\). The FIM \(I_{p\times p}(\theta) = (I_{ij}(\theta))_{i,j=1,2,\cdots,p}\) can also be expressed in terms of the density function \(f(t; \theta)\), the hazard function \(h(t; \theta)\) (Efron and Johnstone 1990), or the reversed hazard function \(r(t; \theta)\) (Gupta et al. 2004) as shown below.

$$ \begin{array}{c}{\left({I}_{ij}\left(\theta \right)\right)}_{i,j=1,2,\cdots, p}=E{\left[\left(\frac{\partial \ln f\left(T;\theta \right)}{\partial {\theta}_i}\right)\left(\frac{\partial \ln f\left(T;\theta \right)}{\partial {\theta}_j}\right)\right]}_{i,j=1,2,\cdots, p}\\ {}=E{\left[\left(\frac{\partial \ln h\left(T;\theta \right)}{\partial {\theta}_i}\right)\left(\frac{\partial \ln h\left(T;\theta \right)}{\partial {\theta}_j}\right)\right]}_{i,j=1,2,\cdots, p}\\ {}=E{\left[\left(\frac{\partial \ln r\left(T;\theta \right)}{\partial {\theta}_i}\right)\left(\frac{\partial \ln r\left(T;\theta \right)}{\partial {\theta}_j}\right)\right]}_{i,j=1,2,\cdots, p}\end{array} $$

The FIM defined above is a symmetric and positive semi-definite matrix. It is important to note that although the random variables \((\partial/\partial\theta)\ln f(T; \theta)\), \((\partial/\partial\theta)\ln h(T; \theta)\), and \((\partial/\partial\theta)\ln r(T; \theta)\) have different first moments, their second moments are identically equal to the elements of \(I_{p\times p}(\theta)\). The information matrix is often combined with the Cramér-Rao lower bound theorem (Hogg et al. 2005) to asymptotically estimate the variance-covariance matrix \(\Sigma_{\widehat{\theta}}\) of the estimated parameters \(\widehat{\theta}\) as \(\Sigma_{\widehat{\theta}} \ge {\left[nI\left(\widehat{\theta}\right)\right]}^{-1}\), where \(\widehat{\theta}\) is the maximum likelihood estimate of \(\theta\).
In addition, the 100(1 − δ) % two-sided asymptotic confidence bounds on the parameters are easily obtained from the following inequality:

$$ \widehat{\theta}-{z}_{\delta /2}\,{\sqrt{\left[{I}_{ii}\left(\widehat{\theta}\right)\right]}}_{i=1,2,\cdots, p}<\theta <\widehat{\theta}+{z}_{\delta /2}\,{\sqrt{\left[{I}_{ii}\left(\widehat{\theta}\right)\right]}}_{i=1,2,\cdots, p} $$

Here, \({\left[{I}_{ii}\left(\widehat{\theta}\right)\right]}_{i=1,2,\cdots, p}\) is the vector of diagonal entries of the FIM and \(z_{\delta/2}\) is the critical value corresponding to a significance level of δ/2 on the standard normal distribution (Hogg et al. 2005). The FIMs of 11 distributions are provided in Table 2. We excluded the exponentiated Rayleigh and exponentiated Weibull distributions because the FIM of the exponentiated Rayleigh distribution contains highly non-linear implicit formulae (Kundu and Raqab 2005), and the FIM of the exponentiated Weibull distribution is not completely known (Pal et al. 2006). Table 3 presents the maximum likelihood estimated parameter values along with their asymptotic standard deviations and confidence bounds. For the Pareto distribution, we calculated the exact variances (Quandt 1966) of the estimated parameters using the following formulae:

$$ \begin{array}{c}{\sigma}^2\left(\widehat{\alpha}\right)=\frac{\alpha^2 n\beta }{\left(n\beta -2\right){\left(n\beta -1\right)}^2}\kern1.92em \left(n>\frac{2}{\beta}\right)\\ {}{\sigma}^2\left(\widehat{\beta}\right)=\frac{\beta^2{n}^2}{{\left(n-2\right)}^2\left(n-3\right)}\kern2.76em \left(n>3\right)\end{array} $$

Table 3 Estimated parameter values along with their asymptotic standard deviations and confidence bounds

Table 3 also reveals several insights into the distributional properties. For example, the estimated shape parameters for the gamma, lognormal, Weibull, and exponentiated exponential distributions are greater than 1.0, indicating that the associated hazard curves are monotonically increasing. A similar observation is noted for the exponentiated Weibull distribution, with βγ > 1 (Pal et al. 2006). It should be emphasized that the asymptotic standard deviations of the estimated model parameters, which, in this case, vary from 0.007673 (exponentiated exponential) to 4.725639 (gamma), do not necessarily correspond to a good or a bad fit of the underlying distribution. Rather, they indicate the precision with which the model parameters are estimated (Hogg et al. 2005).

Two popular goodness-of-fit tests, namely the maximum likelihood criterion with its modification, the Akaike information criterion (AIC), and the Kolmogorov-Smirnov (K-S) minimum distance criterion, are used for model selection and validation. A brief description of these methods is presented below.

The maximum likelihood criterion is entirely based on the MLE method. It uses the log-likelihood values to prioritize the competing models: the higher the likelihood value, the better the model (Johnson et al. 1995). The maximum likelihood criterion, however, assumes that the number of parameters in each model is the same. To relax this assumption and to account for model complexity due to a greater number of parameters, the AIC, an extension of the maximum likelihood criterion, was employed. The AIC values are calculated as 2k − 2L, where k is the number of parameters in the model and L is the log-likelihood value. The model with the minimum AIC value is marked as the most economical model (Johnson et al. 1995).
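The estimation-and-ranking pipeline just described is easy to sketch with standard numerical tools. The example below fits a few of the candidate models by MLE and ranks them by AIC; the sample values are hypothetical, the candidate set is a subset of the paper's thirteen models, and location parameters are fixed at zero so each model is fitted on the positive half-line:

```python
import numpy as np
from scipy import stats

def fit_and_rank(t):
    """MLE fits and AIC (= 2k - 2 ln L) for a few candidate models."""
    candidates = {
        "exponential": (stats.expon,       {"floc": 0}, 1),
        "gamma":       (stats.gamma,       {"floc": 0}, 2),
        "lognormal":   (stats.lognorm,     {"floc": 0}, 2),
        "Weibull":     (stats.weibull_min, {"floc": 0}, 2),
    }
    rows = []
    for name, (dist, kwargs, k) in candidates.items():
        params = dist.fit(t, **kwargs)            # maximum likelihood estimates
        lnL = np.sum(dist.logpdf(t, *params))     # log-likelihood at the MLE
        rows.append((name, lnL, 2 * k - 2 * lnL))
    return sorted(rows, key=lambda r: r[2])       # lowest AIC first

# Hypothetical interevent times (years); substitute the real sample.
t = np.array([2.1, 26.0, 5.7, 11.3, 45.0, 0.9, 8.8, 13.5, 3.2, 17.6])
for name, lnL, aic in fit_and_rank(t):
    print(f"{name:12s} lnL = {lnL:8.3f} AIC = {aic:8.3f}")
```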
The K-S minimum distance criterion prioritizes the competing models based on their "closeness" to the empirical distribution function \(F_n(t)\), defined as \(F_n(t)=\frac{\text{number of } t_i\le t}{n}\). \(F_n(t)\) defined in this manner is a step function. Now, if we assume that there are two competitive models F and G, then the corresponding K-S distances are calculated as

$$ \begin{array}{l}{D}_1=\underset{-\infty <t<\infty }{\sup}\left|{F}_n(t)-F(t)\right|\\ {}{D}_2=\underset{-\infty <t<\infty }{\sup}\left|{F}_n(t)-G(t)\right|\end{array} $$

where sup denotes the supremum of the set of distances over t. If \(D_1 < D_2\), model F is chosen; otherwise, we choose model G. Monte Carlo simulations are often utilized to generate thousands of data points to obtain the maximum distance between the empirical model and the fitted model. However, it is recommended to prioritize the competitive models based on their overall fit as revealed in various K-S plots (Johnson et al. 1995; Murthy et al. 2004).

The non-linear equation solver fsolve() in MATLAB 7.10 (MATLAB 2010) was used for the MLE estimations. The initial solution vectors for these estimations are usually determined from graphical parametric approximations (Murthy et al. 2004). In particular, for the initial approximations of the scale and shape parameters of the three-parameter exponentiated Weibull distribution, we used the MLE estimates of the corresponding Weibull parameters. The log-likelihood, AIC, and K-S distance values for each competing probability model were calculated and are presented in Table 4. It is observed that the exponential model has the lowest AIC value, indicating that it is the most economical model for the present data. The AIC values of the gamma, lognormal, Weibull, inverse Gaussian, exponentiated exponential, exponentiated Rayleigh, and exponentiated Weibull distributions are larger than those of the exponential model, and these AIC values themselves are close to each other. Therefore, these seven competitive models may be categorized into a common group of models that provide an intermediate fit to the Kachchh catalog. The rest (Levy, Maxwell, Rayleigh, and inverse Weibull) have larger AIC values, indicating a poor fit to the present catalog. On the other hand, the K-S distances corresponding to the exponential and gamma models are the smallest, implying that these two models provide the best fit. In contrast, the K-S distances corresponding to the Levy, Maxwell, Pareto, Rayleigh, inverse Gaussian, and exponentiated Rayleigh models are the largest, implying that these models have a poor fit. The remaining models (lognormal, Weibull, inverse Weibull, exponentiated exponential, and exponentiated Weibull) have intermediate K-S values, indicating their intermediate fit to the present earthquake catalog. To support and extend this discussion on model fitness and model comparisons, Fig. 3 provides a number of K-S graphs that examine the overall fit of the competitive models.

Table 4 Model selection and validation from two goodness-of-fit tests: the maximum log-likelihood criterion (ln L) and the Kolmogorov-Smirnov (K-S) minimum distance criterion
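Because \(F_n\) is a step function, the supremum in the definitions of \(D_1\) and \(D_2\) is attained at one of the sample points, checked from both sides of the jump. A minimal sketch of the computation (scipy.stats.kstest yields the same statistic):

```python
import numpy as np

def ks_distance(t, cdf):
    """K-S distance between the empirical CDF of sample t and a fitted CDF."""
    x = np.sort(np.asarray(t))
    n = len(x)
    F = cdf(x)
    d_plus = np.max(np.arange(1, n + 1) / n - F)   # F_n just after each jump
    d_minus = np.max(F - np.arange(0, n) / n)      # F_n just before each jump
    return max(d_plus, d_minus)
```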
Fig. 3 K-S graphs for model comparison: K-S plots for the following studied models: (a) exponential and gamma, (b) lognormal and inverse Gaussian, (c) gamma and exponentiated exponential, (d) Weibull and exponentiated Weibull, (e) exponentiated exponential and exponentiated Weibull, (f) exponential and exponentiated Rayleigh, (g) exponential and exponentiated exponential, (h) exponential and exponentiated Weibull, (i) Rayleigh and exponentiated Rayleigh, and (j) all poorly fitting distributions

From Fig. 3, it is observed that some pairs of distributions are very close to each other, making them almost indistinguishable. Examples are the exponential and gamma, the gamma and exponentiated exponential, the Weibull and exponentiated Weibull, and the exponentiated exponential and exponentiated Weibull distributions. However, some other pairs show differences in overall fit. These include the lognormal and inverse Gaussian, the exponential and exponentiated Rayleigh, the exponential and exponentiated exponential, the exponential and exponentiated Weibull, and the Rayleigh and exponentiated Rayleigh distributions. The K-S plots of the Levy, Maxwell, Pareto, Frechet, and Rayleigh distributions clearly indicate a poor fit to the present data. It is also observed that, unlike the Weibull and exponentiated Weibull "parent-child" pair, the exponential and exponentiated exponential pair and the Rayleigh and exponentiated Rayleigh pair do not preserve a close fit to the Kachchh catalog. In addition, the abscissa values where the maximum K-S distances are achieved are not identical for all studied distributions. In summary, it is concluded from Fig. 3 and Table 4 that the exponential model, or the Poissonian random distribution, provides the best fit to the present data. The gamma, lognormal, Weibull, inverse Gaussian, exponentiated exponential, exponentiated Rayleigh, and exponentiated Weibull models provide an intermediate fit. The remaining models, namely the Levy, Maxwell, Pareto, Rayleigh, and inverse Weibull models, provide a poor fit to the present earthquake catalog. One possible reason for such a poor fit of the Levy, Maxwell, Pareto, Rayleigh, and inverse Weibull models could be the non-consideration of smaller magnitude events (M < 5.0) in the present study. In general, heavy-tailed models such as Levy and Pareto, or extreme value distributions such as Frechet, offer a good fit to data in which the frequency of smaller events is higher than the frequency of larger events (Johnson et al. 1995). Thus, for the present catalog, which consists of only larger events (M ≥ 5.0), the heavy-tailed models do not provide a suitable fit.

Earthquake hazard assessment

Having analyzed the relative model fitness of the different probability distributions, we now assess the earthquake hazards of the Kachchh region in terms of the estimated recurrence interval and the conditional probability values. A number of conditional probability curves are also generated for different elapsed times (τ = 0, 5, ⋯, 60 years) to appraise the long-term earthquake forecasting in Kachchh and its adjoining regions. The best fit and the intermediate fit probability models were used in this calculation. The mean recurrence interval for a magnitude 5.0 or higher event in the Kachchh region was calculated to be 13.35 ± 10.91 years (from the exponential distribution), whereas the estimated cumulative probability values were found to be 0.8–0.9 by 2027–2036.
The conditional probability values (using Eq. 1) and the associated conditional probability curves (hazard curves) were also obtained for different elapsed times. The conditional probability values for an elapsed time of 9 years (i.e., March 2015) are tabulated in Table 5, while the conditional probability curves for the Kachchh region are presented in Fig. 4.

Table 5 Estimated conditional probability values for an elapsed time of 9 years (i.e., March 2015)

Fig. 4 Conditional probability curves for the Kachchh region: conditional probability curves (hazard curves) for elapsed times τ = 0, 5, ⋯, 60 years, as deduced from the gamma, lognormal, Weibull, exponentiated exponential, exponentiated Rayleigh, and exponentiated Weibull distributions for moderate earthquake events in the Kachchh region; a dotted line represents the hazard curve for an elapsed time of 9 years (i.e., March 2015)

Table 5 shows that the uncertainties of the parametric estimations are accounted for by providing a range of conditional probability values; the 95 % confidence intervals of the estimated parameters are used (refer to Table 3). For the exponentiated Rayleigh and the exponentiated Weibull distributions, the "absolute" conditional probabilities are presented because of the non-availability of their parametric uncertainties (Pal et al. 2006). It is observed that the conditional probability of a magnitude 5.0 or higher event reaches 0.8–0.9 by 2034–2043. Moreover, it is observed that the upper confidence values of the estimated parameters usually lead to lower conditional probabilities in comparison with the probabilities obtained from the lower confidence values of the estimated parameters. For the gamma and lognormal distributions, the differences between the upper and lower conditional probability values are higher than for the exponential, Weibull, inverse Gaussian, and exponentiated exponential distributions. For smaller waiting times, the conditional probabilities of the lognormal and inverse Gaussian distributions are larger compared with the other competitive distributions. However, for larger waiting times, these probability values gradually become smaller compared with the other distributions. In fact, probability values from all distributions are observed to converge to the highest probability value for large waiting times (about 60 years). In addition to the tabulated conditional probability values, a few conditional probability curves are plotted in Fig. 4 to examine the long-term seismicity of the study region. These hazard curves have many direct and indirect applications in city planning, designing seismic insurance products, location identification of lifeline structures, seismic zonation, and revision of building codes (SSHAC: Senior Seismic Hazard Analysis Committee 1997; Yadav et al. 2008).

It is concluded that the results from the present study are largely consistent with the prior research by Tripathi (2006) and Yadav et al. (2008). Tripathi (2006) conducted a probabilistic hazard assessment for the Kachchh region from three probability models (gamma, lognormal, and Weibull distributions). The forecasting was based on an earthquake catalog with ten M ≥ 5.0 events. The results revealed high probability values of earthquake occurrence after 28–42 years for an M ≥ 5.0 event and after 47–55 years for an M ≥ 6.0 event, with reference to the last event in 2001 (Tripathi 2006).
In a similar effort, Yadav et al. (2008) applied the same set of three probability models to an updated earthquake catalog of the Kachchh region to examine probabilistic earthquake hazards in terms of the estimated recurrence interval and the conditional probability values. MLE was used for parameter estimation. They estimated the recurrence interval to be 13.34 ± 10.91 years and the conditional probability values to reach 0.8–0.9 in 2027–2034. The Weibull model provided the best relative fit (ln L = − 49.954), whereas the gamma had an intermediate fit (ln L = − 49.957) and the lognormal a relatively poor fit (ln L = − 50.384) to the Kachchh catalog. Yadav et al. (2008) mentioned that the difference in the likelihood functions of the gamma and Weibull models was negligible, implying similar model fitness.

Discussion

Kachchh and its adjoining regions suffer from a number of high intensity yet infrequent intraplate earthquakes (Gupta et al. 2001; Rastogi 2001). The recent 2001 Bhuj event (Mw 7.7), which caused a huge loss of about 14,000 lives, reignited questions about our understanding of earthquake genesis (Gupta et al. 2001; Choudhury et al. 2014). Scientists have been trying to examine earthquake processes and associated hazards in Kachchh through many different approaches. These include paleoseismic and active fault mapping (e.g., Rajendran and Rajendran 2001; Morino et al. 2008), seismology (e.g., Mandal et al. 2005; Choudhury et al. 2014), GPS geodesy (e.g., Jade et al. 2001; Miyashita et al. 2001; Reddy and Sunil 2008), and geotechnical investigations (e.g., Vipin et al. 2013). This study, in contrast, focused on stochastic earthquake recurrence modeling from thirteen different probability distributions. A statistical strategy was developed to evaluate earthquake hazards by specifying the estimated recurrence interval and conditional probability values from these probability distributions.

An alternative approach to assessing earthquake hazards is to simulate historical earthquake events and quantify the relative chances of occurrence in each subdivision of the study region. The results could be combined with geodetic observations to determine strain accumulation, or with paleoseismic investigations to reconstruct the chronology of past events (Lee et al. 2011). These combined methods are useful for examining earthquake risks in various parts of the study region, and thus have drawn significant attention from earthquake insurance agencies. Nevertheless, one limitation of these studies is that devastating earthquakes do not always occur on the mapped faults. Therefore, city planners and seismic insurance product designers often require an empirical earthquake hazard model for a large geographical region of interest.

In recent years, there have been enormous efforts in alarm-based earthquake forecasting techniques. Examples are the pattern informatics (PI) approach (e.g., Rundle et al. 2003), which uses a pattern recognition technique to capture the seismicity dynamics of an area; the relative intensity (RI) approach (e.g., Zechar and Jordan 2008), which utilizes smoothed historical seismicity based on the extrapolated rate of occurrence of small events; and a moment ratio (MR) based method (e.g., Talbi et al. 2013), which uses the ratio of first- and second-order moments of earthquake interevent times as a precursory alarm to forecast large earthquakes.
The comparison of these competitive forecasting models encompasses a number of likelihood testing methods, such as the N-test for data consistency in expected-number space, the L-test for data consistency in likelihood space, and the R-test for relative performance checking of seismicity models (Schorlemmer et al. 2007). The working group on Regional Earthquake Likelihood Models (RELM), supported by the Southern California Earthquake Center (SCEC) and the United States Geological Survey (USGS), and the Collaboratory for the Study of Earthquake Predictability (CSEP) group facilitate such testing methods as a part of their earthquake research and forecasting programs (Jordan 2006; Schorlemmer et al. 2007; Shebalin et al. 2014). These simulation-based model-testing strategies usually require a controlled environment with a complete list of small to moderate historical events and a detailed seismotectonic map of the study region. In addition, the test region, comprising smaller grid cells, should experience sufficient earthquake events during the test period to evaluate the alarm-based earthquake predictions (Schorlemmer et al. 2007). In the present study, however, the models could not be compared in the suite of N-, L-, and R-tests because of the lack of detailed seismotectonic understanding and sufficiently detailed records of seismic events in the Kachchh region (e.g., microseismicity, deformation rates, and fault maps). For this reason, two statistical goodness-of-fit tests were employed to examine the performance of each studied distribution: the maximum likelihood criterion, with its modification to AIC, and the K-S minimum distance criterion.

While the methodology described here focuses on a statistical analysis of estimating earthquake interevent times and conditional probabilities for earthquakes of magnitude 5.0 and higher in the Kachchh region, a physical correlation of the obtained results would be valuable and may be considered as our future work. Nevertheless, we believe that the research and results provided in this paper demonstrate the resurgence of statistical seismology and should be utilized for large global seismic databases.

Conclusions

The present investigation led to the following conclusions: The thirteen probability models that were studied may be categorized into three groups on the basis of their performance against the current earthquake catalog of the Kachchh region. The best fit came from the exponential distribution. An intermediate fit came from the gamma, lognormal, Weibull, inverse Gaussian, exponentiated exponential, exponentiated Rayleigh, and exponentiated Weibull models. The remainder of the models, namely Levy, Maxwell, Pareto, Rayleigh, and inverse Weibull, fit poorly to the present data. The hazard curves for different elapsed times reveal high seismicity in the geographic region of interest. The expected mean recurrence interval of a magnitude 5.0 or higher earthquake was calculated to be 13.35 ± 10.91 years, whereas the estimated cumulative probability and conditional probability values reached 0.8–0.9 by 2027–2036 and 2034–2043, respectively. The estimated seismic recurrence intervals and conditional probability values are largely consistent with the previous studies.
Nevertheless, identification of the most suitable probability model and related coverage (e.g., uncertainty estimation, hazard assessment) provides additional support for seismologists and earthquake professionals to quantitatively compare various models and improve earthquake hazard analyses and associated applications in the seismically active Kachchh region.
References
Anagnos T, Kiremidjian AS (1988) A review of earthquake occurrence models for seismic hazard analysis. Probab Eng Mech 3(1):1–11
Bak P, Christensen K, Danon L, Scanlon T (2002) Unified scaling law for earthquakes. Phys Rev Lett 88(17):178501–178504
Baker JW (2008) An introduction to probabilistic seismic hazard analysis. http://www.stanford.edu/~bakerjw/Publications/Baker_(2008)_Intro_to_PSHA_v1_3.pdf. Accessed 15 March 2015
BIS: Bureau of Indian Standards (2002) Indian standard criteria for earthquake resistant design of structures, part 1: general provisions and buildings. IS 1893(Part 1):39
Biswas SK (1987) Regional tectonic framework structure and evolution of the western marginal basins of India. Tectonophysics 135:307–327
Biswas SK (2005) A review of structure and tectonics of Kutch basin, western India with special reference to earthquakes. Current Sci 88(10):1592–1600
Chandra U (1977) Earthquakes of peninsular India: a seismotectonic study. Bull Seismol Soc Am 67:1387–1413
Chen C, Wang JP, Wu YM, Chan CH (2013) A study of earthquake inter-occurrence distribution models in Taiwan. Nat Hazards 69(3):1335–1350
Choudhury P, Chopra S, Roy KS, Rastogi BK (2014) A review of strong motion studies in Gujarat state of western India. Nat Hazards 71:1241–1257
Chung WY, Gao H (1995) Source parameter of the Anjar earthquake of July 21, 1956, India, and its seismotectonic implications for the Kutch rift basin. Tectonophysics 242:281–292
Cornell CA (1968) Engineering seismic risk analysis. Bull Seismol Soc Am 58:1583–1606
Dionysiou DD, Papadopoulos GA (1992) Poissonian and negative binomial modeling of earthquake time series in the Aegean area. Phys Earth Planetary Inter 71:154–165
Efron B, Johnstone I (1990) Fisher information in terms of the hazard function. Ann Stat 18:38–62
Ferraes SG (2003) The conditional probability of earthquake occurrence and the next large earthquake in Tokyo, Japan. J Seismol 7:145–153
Gupta HK, Rao NP, Rastogi BK, Sarkar D (2001) The deadliest intraplate earthquake. Science 291:2101–2102
Gupta RD, Gupta RC, Sankaran PG (2004) Some characterization results based on the (reversed) hazard rate function. Commun Stat: Theory Methods 33(12):3009–3031
Hagiwara Y (1974) Probability of earthquake occurrence as obtained from a Weibull distribution analysis of crustal strain. Tectonophysics 23:313–318
Hogg RV, McKean JW, Craig AT (2005) Introduction to mathematical statistics, 6th edn. PRC Press, p 718
Jade S, Mukul M, Parvez IA, Ananda MB, Kumar PD, Gaur VK (2002) Estimates of co-seismic displacement and post-seismic deformation using global positioning system geodesy for the Bhuj earthquakes of 26 January 2001. Current Sci 82:748–752
Johnson NL, Kotz S, Balakrishnan N (1995) Continuous univariate distributions, vol 2, 2nd edn. Wiley-Interscience, p 752
Jordan TH (2006) Earthquake predictability, brick by brick. Seismol Res Lett 77(1):3–6
Kagan YY, Schoenberg F (2001) Estimation of the upper cutoff parameter for the tapered Pareto distribution. J Appl Probab 38:158–175
Kayal JR, De R, Ram S, Sriram BP, Gaonkar SG (2002) Aftershocks of the 26 January, 2001 Bhuj earthquake in western India and its seismotectonic implications. J Geol Soc India 59:395–417
Kayal JR, Das V, Ghosh U (2012) An appraisal of the 2001 Bhuj earthquake (Mw 7.7, India) source zone: fractal dimension and b value mapping of the aftershock sequence. Pure Appl Geophys 169:2127–2138
Kundu D, Raqab MZ (2005) Generalized Rayleigh distribution: different methods of estimation. Comput Stat Data Anal 49:187–200
Lee YT, Turcotte DL, Holliday JR, Sachs MK, Rundle JB, Chen CC, Tiampo KF (2011) Results of the Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California. Proc Natl Acad Sci 108(40):16533–16538
Mandal P, Horton S (2007) Relocation of aftershocks, focal mechanisms and stress inversion: implications toward the seismo-tectonics of the causative fault zone of the Mw 7.6 2001 Bhuj earthquake (India). Tectonophysics 429:61–78
Mandal P, Chadha RK, Satyamurty C, Raju IP, Kumar N (2005) Estimation of site response in Kachchh, Gujarat, India, region using H/V spectral ratios of aftershocks of the 2001 Mw 7.7 Bhuj earthquake. Pure Appl Geophys 162:2479–2504
MATLAB: Matrix Laboratory, version 7.10.0 (2010) The MathWorks Inc., Natick, Massachusetts, United States
Matthews MV, Ellsworth WL, Reasenberg PA (2002) A Brownian model for recurrent earthquakes. Bull Seismol Soc Am 92(6):2233–2250
Miyashita K, Vijaykumar K, Kato T, Aoki Y, Reddy CD (2001) Postseismic crustal deformation deduced from GPS observations. In: Sato T et al (ed) A comprehensive survey of the 26 January 2001 earthquake (Mw 7.7) in the State of Gujarat, India, pp 46–50. Ministry of Education, Culture, Sports, Science and Technology, Tokyo. http://www.st.hirosaki-u.ac.jp/~tamao/Gujarat/print/Gujarat_4.pdf
Morino M, Malik JN, Mishra P, Bhuiyan C, Kaneko F (2008) Active fault traces along Bhuj fault and Katrol hill fault, and trenching survey at Wandhay, Kachchh, Gujarat, India. J Earth Syst Sci 117(3):181–188
Mulargia F, Tinti S (1985) Seismic sample area defined from incomplete catalogs: an application to the Italian territory. Phys Earth Planetary Sci 40(4):273–300
Murthy DNP, Xie M, Jiang R (2004) Weibull models. John Wiley and Sons, New Jersey, p 383
Negishi H, Mori J, Sato T, Singh R, Kumar S, Hirata N (2002) Size and orientation of the fault plane for the 2001 Gujarat, India earthquake (Mw 7.7) from aftershock observations: a high stress drop event. Geophys Res Lett 29(20):10-1–10-4
Page M, Felzer K (2015) Southern San Andreas fault seismicity is consistent with the Gutenberg–Richter magnitude–frequency distribution. Bull Seismol Soc Am 105(4). doi:10.1785/0120140340
Pal M, Ali MM, Woo J (2006) Exponentiated Weibull distribution. Commun Stat Theory Methods 32:1317–1336
Papazachos BC, Papadimitriou EE, Kiratzi AA, Papaioannou CA, Karakaisis GF (1987) Probabilities of occurrence of large earthquakes in the Aegean and the surrounding area during the period of 1986–2006. Pure Appl Geophys 125:592–612
Parvez IA, Ram A (1997) Probabilistic assessment of earthquake hazards in the north-east Indian Peninsula and Hindukush regions. Pure Appl Geophys 149:731–746
Pasari S (2015) Understanding Himalayan tectonics from geodetic and stochastic modelling. Unpublished PhD thesis, Indian Institute of Technology Kanpur, p 376
Pasari S, Dikshit O (2014a) Impact of three-parameter Weibull models in probabilistic assessment of earthquake hazards. Pure Appl Geophys 171:1251–1281. doi:10.1007/s00024-013-0704-8
Pasari S, Dikshit O (2014b) Distribution of earthquake interevent times in northeast India and adjoining regions. Pure Appl Geophys. doi:10.1007/s00024-014-0776-0
Pasari S, Dikshit O (2014c) Three parameter generalized exponential distribution in earthquake recurrence interval estimation. Nat Hazards 73:639–656. doi:10.1007/s11069-014-1092-9
Quandt RE (1966) Old and new methods of estimation and the Pareto distribution. Metrika 10:55–82
Rajendran CP, Rajendran K (2001) Characteristics of deformation and past seismicity associated with the 1819 Kutch earthquake, northwestern India. Bull Seismol Soc Am 91:407–426
Rastogi BK (2001) Ground deformation study of the Mw 7.7 Bhuj earthquake of 2001. Episodes 24:160–165
Rastogi BK (2004) Damage due to the Mw 7.7 Kutch, India earthquake of 2001. Tectonophysics 390:85–103
Reddy CD, Sunil PS (2008) Post seismic crustal deformation and strain rate in Bhuj region, western India, after the 2001 January 26 earthquake. Geophys J Int 172:593–606
Reid HF (1910) The mechanics of the earthquake, the California earthquake of April 18, 1906, vol 2. Report of the State Investigation Commission, Carnegie Institution of Washington, Washington
Rundle JB, Turcotte DL, Shcherbakov R, Klein W, Sammis C (2003) Statistical physics approach to understanding the multiscale dynamics of earthquake fault systems. Rev Geophys 41(4):1019
Schorlemmer D, Gerstenberger MC, Wiemer S, Jackson DD, Rhoades DA (2007) Earthquake likelihood model testing. Seismol Res Lett 78:17–29
Shebalin PN, Narteau C, Zechar JD, Holschneider M (2014) Combining earthquake forecasts using differential probability gains. Earth Planets Space 66:37
Sornette D, Knopoff L (1997) The paradox of the expected time until the next earthquake. Bull Seismol Soc Am 87:789–798
Sotolongo-Costa O, Antoranz JC, Posadas A, Vidal F, Vazquez A (2000) Levy flights and earthquakes. Geophys Res Lett 27(13):1965–1968
SSHAC: Senior Seismic Hazard Analysis Committee (1997) Recommendations for probabilistic seismic hazard analysis: guidance on uncertainty and use of experts. US Nuclear Regulatory Commission Report CR-6372, UCRL-ID-122160, vol 2, Washington, DC, p 888. http://pbadupws.nrc.gov/docs/ML0800/ML080090004.pdf
Stein S, Liu M (2009) Long aftershock sequences within continents and implications for earthquake hazard assessment. Nature 462:87–89
Talbi A, Nanjo K, Zhuang J, Satake K, Hamadache M (2013) Interevent times in a new alarm-based earthquake forecasting model. Geophys J Int 194(3):1823–1835
Tripathi JN (2006) Probabilistic assessment of earthquake recurrence in the January 26, 2001 earthquake region of Gujarat, India. J Seismol 10:119–130
Utsu T (1984) Estimation of parameters for recurrence models of earthquakes. Bull Earthq Res Inst, Univ Tokyo 59:53–66
Vipin KS, Sitharam TG, Kolathayar S (2013) Assessment of seismic hazard and liquefaction potential of Gujarat based on probabilistic approaches. Nat Hazards 65:1179–1195
Wessel P, Smith WHF (1995) New version of the generic mapping tools released. EOS Trans Am Geophys Union 76:329
Working Group on California Earthquake Probabilities (2013) The uniform California earthquake rupture forecast, version 3 (UCERF 3). USGS open file report 2013-1165 and California Geological Survey special report 228. http://pubs.usgs.gov/of/2013/1165/
Yadav RBS, Tripathi JN, Rastogi BK, Das MC, Chopra S (2008) Probabilistic assessment of earthquake hazard in Gujarat and adjoining region of India. Pure Appl Geophys 165:1813–1833
Yadav RBS, Tripathi JN, Rastogi BK, Das MC, Chopra S (2010) Probabilistic assessment of earthquake recurrence in northeast India and adjoining regions. Pure Appl Geophys 167:1331–1342
Yazdani A, Kowsari M (2011) Statistical prediction of the sequence of large earthquakes in Iran. IJE Trans B: Appl 24(4):325–336
Zechar JD, Jordan TH (2008) Testing alarm-based earthquake predictions. Geophys J Int 172:715–724
We sincerely thank the two anonymous reviewers for their valuable comments and suggestions, which have improved the quality of this paper. We are thankful to the editor-in-chief Prof. Yasuo Ogawa and the associate editor Prof. Azusa Nishizawa for their quick editorial handling and continuous encouragement toward improving the manuscript. We are grateful to Prof. Debasis Kundu, Prof. Teruyuki Kato, Prof. Ronald Burgmann, and Prof. John Rundle for their important suggestions at various stages of this study. We also thank Dr. RBS Yadav from Kurukshetra University for introducing the concept of stochastic earthquake recurrence modeling. We acknowledge the help of Dr. Santiswarup Sahoo in the preparation of some of the figures. The first author (SP) acknowledges IIT Kanpur for the Post-Doctoral Fellowship. The generic mapping tool (GMT) system (Wessel and Smith 1995) and the MATLAB software (MATLAB 2010) were used for plotting and numerical computation purposes.
Author information: Department of Civil Engineering, Indian Institute of Technology, Kanpur, 208016, India (Sumanta Pasari and Onkar Dikshit). Correspondence to Sumanta Pasari.
SP carried out all numerical computations and drafted the initial version of the manuscript. OD edited the manuscript and helped in fine-tuning the results and discussion. Both authors have thoroughly read and approved the final manuscript for submission to the EPS journal.
SP is a Post-Doctoral Fellow in the Department of Civil Engineering, Indian Institute of Technology (IIT) Kanpur, India. He has earned his PhD in Civil Engineering and Masters in Mathematics from IIT Kanpur. He has a keen interest in modeling natural hazards from statistical seismology and geodetic (GPS) viewpoints. OD is a Professor in the Department of Civil Engineering, IIT Kanpur. His major research areas include remote-sensing applications, GIS, GPS, and natural hazard management problems. He has published more than 30 papers in reputed international and national journals. He has supervised more than 70 Masters students and 4 Doctoral students.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Keywords: Kachchh peninsula; Earthquake recurrence; Probability models
Surveys in Geophysics, July 2019, Volume 40, Issue 4, pp 709–734
The Relevance of Forest Structure for Biomass and Productivity in Temperate Forests: New Perspectives for Remote Sensing
Rico Fischer, Nikolai Knapp, Friedrich Bohn, Herman H. Shugart, Andreas Huth
First Online: 04 March 2019
Forests provide important ecosystem services such as carbon sequestration. Forest landscapes are intrinsically heterogeneous, which poses a problem for biomass and productivity assessment using remote sensing. Forest structure constitutes valuable additional information for the improved estimation of these variables. However, surveying forest structure by remote sensing remains a challenge, mainly because structure metrics derived from remote sensing differ from the classical structural metrics obtained from field data. To understand these differences, remote sensing measurements were linked with an individual-based forest model. Forest structure was analyzed by lidar remote sensing using metrics for the horizontal and vertical structures. To investigate the role of forest structure for biomass and productivity estimations in temperate forests, 25 lidar metrics of 375,000 simulated forest stands were analyzed. Among the lidar-based metrics, top-of-canopy height emerged as the best predictor of horizontal forest structure. The standard deviation of the vertical foliage profile was the best predictor of the vertical heterogeneity of a forest. Forest structure was also an important factor for the determination of forest biomass and aboveground wood productivity. In particular, horizontal structure was essential for forest biomass estimation. Predicting aboveground wood productivity must take into account both horizontal and vertical structures. In a case study based on these findings, forest structure, biomass and aboveground wood productivity are mapped for the whole of Germany. The dominant forest type in Germany consists of dense but weakly vertically structured stands. The total biomass of all German forests is 2.3 Gt, and the total aboveground woody productivity is 43 Mt/year. Future remote sensing missions will have the capability to provide information on forest structure (e.g., from lidar or radar). This will lead to more accurate assessments of forest biomass and productivity. These estimations can be used to evaluate forest ecosystems with regard to climate regulation and biodiversity protection.
Keywords: Forest structure; Biomass; Aboveground wood productivity; Remote sensing; Lidar; Forest model
The online version of this article (https://doi.org/10.1007/s10712-019-09519-x) contains supplementary material, which is available to authorized users.
1 Introduction
Forests are crucial components of the Earth system. They represent an important pool in the global carbon cycle as they bind huge amounts of atmospheric carbon (Bonan 2008; Grace et al. 2014; Pan et al. 2011). Quantifying forest carbon stocks (e.g., forest biomass) and forest carbon fluxes (e.g., forest productivity) is important for understanding the effects of land use and climate change (Foley et al. 2005). However, forests can be heterogeneously structured, which leads to variable biomass stocks and carbon fluxes (Rödig et al. 2018; Saatchi et al. 2011). This structural heterogeneity makes it difficult to reliably estimate forest biomass or forest productivity for larger regions. For example, biomass estimates for the Amazon rainforest, the largest intact tropical forest on Earth, range from 39 to 93 GtC (Houghton et al. 2001; Malhi et al. 2006; Saatchi et al.
2007, 2011). The range of estimates arises from different methodological approaches, which take the landscape heterogeneity into account in different ways. Remote sensing is a promising technique that can be used to capture the state of forests with high spatial resolution (e.g., Exbrayat et al. 2019). However, remote sensing cannot directly measure forest biomass and productivity, but it can partly detect the structure of forests. In general, forest structure is related to the spatial distribution of trees and their variability in size (Schall et al. 2018b). Most current estimates of forest biomass from remote sensing are based on relationships between forest structure and biomass. Forest height characterizes one aspect of forest structure. For example, forest canopy height, derived from lidar remote sensing, is often used for forest biomass estimations (Asner and Mascaro 2014; Dubayah et al. 2010; Lefsky et al. 1999). However, more complex structural characteristics of forests are often ignored, and considerable uncertainties remain. Forest structure, however, is an important element in forest ecology, as it is linked to many ecological processes (Pretzsch 2009; Shugart et al. 2010; Snyder 2010). It is also used as an indicator for biodiversity, as vertically structured forests foster the biodiversity of some taxa (Boncina 2000; Ishii et al. 2004; Schall et al. 2018a). Further, horizontal and vertical structural heterogeneities enhance the resistance of forest ecosystems against disturbances (Dobbertin 2002; Pretzsch et al. 2016). Some studies have also explored the effects of forest structure on forest productivity (Bohn and Huth 2017; Dănescu et al. 2016; Hardiman et al. 2011; Liang et al. 2016; Schall et al. 2018b) and found that variables characterizing forest structure, rather than biodiversity variables, are the main drivers of forest productivity. Although forest structure plays an important role in understanding forest dynamics, no global forest structure maps are available. There are a few coarse-resolution maps, but these show only components of forest structure (e.g., forest height from MODIS and ICESat, resolution 1 km; Lefsky 2010; Simard et al. 2011). Clearly, an efficient analysis of the multilayered forest structure for larger regions is required. New satellite missions are being launched (e.g., GEDI, BIOMASS and Tandem-L) which will use novel technologies to measure forest structure on a global scale, including forest height and vertical heterogeneity. A systematic framework for ecologically meaningful structural classes has recently been proposed, which suggests two main elements: the horizontal and the vertical forest structures (Bohn and Huth 2017; Cazcarra-Bes et al. 2017; Fischer et al. 2019; Tello et al. 2014, 2018). However, there are several ways to describe horizontal and vertical forest structures from field data (del Río et al. 2016; Pommerening 2002; Reineke 1933; Shugart 2003; Zenner and Hibbs 2000). It will be difficult to find a suitable definition of forest structure for a wide range of applications, spatial scales and forest types. Moreover, forest structure metrics differ depending on whether they are based on field or remote sensing data. Field-based descriptors of forest structure are derived from individual tree size measurements, while remote sensing descriptors are often based on the heterogeneous canopy structure of a given area (Cazcarra-Bes et al. 2017).
A hybrid approach between field measurements and remote sensing is terrestrial laser scanning, which allows very detailed measurements of single trees and forest canopy structure (Disney 2018). However, due to its high degree of detail, this technique is limited to forests of small extent. Airborne or satellite-based remote sensing data are therefore a suitable choice for capturing forest structure on larger scales. In this study, we combined forest inventory data, forest modeling and airborne lidar remote sensing to answer the question: "How can we estimate forest structure from remote sensing, and what is the role of forest structure for forest biomass and aboveground wood productivity estimations?" The aim is to classify forests into structural categories, using horizontal and vertical structural descriptors that can be measured by remote sensing (Fig. 1). Based on this structural classification, we explore whether forest biomass and aboveground wood productivity can be estimated more accurately if structural information is included. We use an individual-based model to generate hypotheses on the relations among structural variables and ecosystem variables (aboveground wood productivity and biomass). Since the output of individual-based forest models (a tabulation of each tree, its species and its size) resembles standard forest inventories from field data, the obtained results can be directly applied to forest inventories.
Fig. 1: Biomass and aboveground wood productivity estimations are related to forest structure. In this study, we distinguish between horizontal forest structure, which represents the density of trees in a forest stand, and vertical forest structure, which quantifies the vertical heterogeneity of tree heights. Forest structure can be estimated by measurements from field data and from remote sensing
We used the forest model FORMIND (Bohn and Huth 2017; Fischer et al. 2016) to generate thousands of forest stands. With this large data set, we examined in detail the role of forest structure for biomass and aboveground wood productivity estimations. Furthermore, in a case study for Germany, we explore how well forest biomass and aboveground wood productivity can be derived from remote sensing data including information on forest structure (Fig. 2). This presents a conceptual approach that can be applied across large areas where lidar data are available.
Fig. 2: Concept of this study. (1) Applying the forest model FORMIND, a large data set of virtual forest stands was generated ("forest factory data set"). For each forest stand of this data set, a virtual lidar campaign was simulated. These lidar data were used to estimate forest structure from remote sensing data. Then, relationships were explored to determine aboveground biomass and aboveground woody productivity (AWP) using remote sensing-based forest structure metrics. (2) In a case study for Germany, a lidar campaign was simulated for each forest stand from the national forest inventory data set of Germany (BWI). Forest structure for each stand was estimated from these lidar data. Based on the estimated forest structure and the relationships found in the first part of this study, Germany-wide maps of biomass and AWP were produced
2 Methods
2.1 Simulating Temperate Forest Stands with a Forest Model
We applied the individual- and process-based forest gap model FORMIND (Fischer et al. 2016) in the "forest factory mode."
In contrast to classical long-term simulations, this mode allows the creation of thousands of forest stands based on different stem size distributions and species mixtures. For each forest stand, competition for light, space and water was calculated depending on the spatial arrangement and the size of the trees (by applying established processes of the FORMIND model). A full forest inventory is then available for each stand, which can include different arrangements of trees leading to different horizontal and vertical structures. In order to generate such forest stands, trees were randomly selected from the stem size distribution, were given a species identity and were then planted within a stand. Fifteen stem size distributions were applied, which cover a gradient from young to old and disturbed to undisturbed forests (tree heights differ between 5 and 40 m). Species mixtures included all combinations of eight common temperate species, using the species-specific allometries and model algorithms of the FORMIND model version for temperate forests (Bohn et al. 2014). For each combination of age and species, we assembled 100 simulated forest stands, each stand with a size of 20 m × 20 m. This was done under the conditions that (1) there is enough space available for the crowns of every tree and (2) every tree in the simulated forest has positive net primary productivity under environmental conditions of the temperate zone (light, temperature, water). The net primary productivity of each forest in this study was calculated using climate data of a site in Germany (climate station of the Hainich, Germany). In total, five different climate years were applied (2000–2004). For more details regarding the forest factory mode of the FORMIND model, see Bohn and Huth (2017). In this "forest factory mode," we created a total of 375,000 different virtual forest stands, each characterized by an inventory of all trees. We always refer to this data set when the term "field data" is used within this study. For each stand, the simulated forest structure was described by horizontal and vertical structural characteristics (e.g., basal area, maximal tree height). In addition, aboveground biomass (AGB) and aboveground woody productivity (AWP) of each forest stand were calculated using the FORMIND model. The full data set is available in the Electronic supplementary material.
2.2 Remote Sensing Data
For each forest stand, a lidar point cloud was simulated following the approach described by Knapp et al. (2018a). This lidar model uses the positions, heights, crown diameters and crown lengths of all trees to generate a 3D voxel representation of a stand and to simulate lidar measurements. The point clouds obtained for all stands were rasterized to 1 m resolution canopy height models (CHM) by taking the height value of the highest return in each 1 m pixel and setting empty pixels to ground height 0. The data were further aggregated to vertical CHM profiles of 1-m height bins, which served as inputs to calculate vertical foliage profiles (VFP, see section "Calculating the vertical foliage profile from a CHM" of Appendix 1).
2.3 Describing Forest Structure with Field Data and Lidar Remote Sensing
Several structural indices were used to characterize forest structure. For each 20 m × 20 m forest stand from the forest factory data set, we calculated horizontal and vertical descriptors of the structure from field data and from remote sensing data.
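Before the individual metrics are presented, the pre-processing described in Sect. 2.2 can be illustrated with a minimal sketch. It rasterizes a hypothetical lidar point cloud to a 1 m CHM by keeping the highest return per pixel; this is an illustrative Python sketch, not the implementation used in the study.

```python
# Sketch: rasterize lidar returns (x, y, z in meters) of a 20 m x 20 m stand
# to a 1 m canopy height model (CHM); empty pixels default to ground height 0.
import numpy as np

def rasterize_chm(x, y, z, extent=20.0, res=1.0):
    n = int(extent / res)
    chm = np.zeros((n, n))                        # ground height 0 for empty pixels
    col = np.clip((x / res).astype(int), 0, n - 1)
    row = np.clip((y / res).astype(int), 0, n - 1)
    np.maximum.at(chm, (row, col), z)             # keep the highest return per pixel
    return chm

rng = np.random.default_rng(0)                    # hypothetical point cloud
x, y = rng.uniform(0, 20, 500), rng.uniform(0, 20, 500)
z = rng.uniform(0, 35, 500)
chm = rasterize_chm(x, y, z)
print(f"TCH = {chm.mean():.1f} m")                # mean top-of-canopy height (cf. Sect. 2.3.2)
```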
Here, we present the most important metrics used in this study. However, other metrics are also possible. In total, 13 metrics were examined to describe forest structure from field data and 25 metrics which estimate forest structure from lidar remote sensing. A full list of all these indices can be found in sections "Describing forest structure from field data" and "Describing forest structure from remote sensing data" of Appendix 1. 2.3.1 Forest Structure Estimation from Field Data Forest structure can be described by metrics derived from tree-level inventory data—from either real forest inventories or simulated stands. Here, these descriptors of forest structure were inspired by previous studies (Bohn and Huth 2017; Cazcarra-Bes et al. 2017; Fischer et al. 2019). Horizontal structure is described by stand basal area BA (m2), which is the sum of all tree basal area values of one forest stand: $${\text{BA}} = \sum \limits_{\text{trees}} \frac{\pi }{4}d^{2} ,$$ where d (m) is the stem diameter of a tree. Vertical structure is quantified by the standard deviation of tree heights SDth (m): $${\text{SD}}_{\text{th}} = \sqrt {\frac{1}{n - 1}\sum \limits_{\text{trees}} \left( {h - \bar{h}} \right)^{2} } ,$$ where h (m) is the height of a tree and \(\bar{h}\) (m) the mean tree height of a stand. 2.3.2 Forest Structure Estimation from Remote Sensing Horizontal structure for each forest stand from the lidar data set is described by the mean top-of-canopy height (m), which is the mean of the canopy height model (CHM) with 1 m resolution: $${\text{TCH}} = \frac{{\sum \nolimits_{i = 1}^{{i_{\hbox{max} } }} P_{{{\text{CHM}},i}} }}{{i_{\hbox{max} } }},$$ where \(P_{{{\text{CHM}},i}}\) is the height of the CHM in pixel i and imax is the total number of pixels. Vertical structure is quantified by the standard deviation of the vertical foliage profile (SDVFP (m)): $${\text{SD}}_{\text{VFP}} = \sqrt {\frac{{\sum \nolimits_{i = 1}^{{i_{\hbox{max} } }} \left( {p_{i} \cdot \left( {h_{i} - \frac{{\sum \nolimits_{i = 1}^{{i_{\hbox{max} } }} \left( {p_{i} \cdot h_{i} } \right) }}{{\sum \nolimits_{i = 1}^{{i_{\hbox{max} } }} p_{i} }}} \right)^{2} } \right)}}{{\frac{N - 1}{N} \cdot \sum \nolimits_{i = 1}^{{i_{\hbox{max} } }} p_{i} }}} ,$$ where pi is the foliage profile value (leaf area per m2) in height bin hi, imax is the number of height bins and N is the count of all pi that are not zero. The process of creating the required vertical foliage profile (VFP) from a discrete lidar point cloud is described in section "Calculating the vertical foliage profile from a CHM" of Appendix 1. 2.4 Estimating Forest Biomass and Productivity from Forest Structure For estimating forest biomass and aboveground wood productivity, we follow a two-step approach. First, horizontal (i.e., TCH) and vertical (i.e., SDVFP) forest structure was estimated based on remote sensing measurements. Second, the derived information on forest structure was used to estimate forest biomass and aboveground wood productivity. We compare three different approaches which differ in the degree of complexity: Hstruct: We estimate the aboveground biomass (AGB) and aboveground woody productivity (AWP) only with the information from the horizontal forest structure (here TCH). For this, we used the following power-law approach: $${\text{AGB}}_{H} = a \cdot {\text{TCH}}^{b} , {\text{AWP}}_{H} = m \cdot {\text{TCH}}^{n} .$$ Vstruct: We estimate the biomass and productivity only from the vertical forest structure (here SDVFP). 
$${\text{AGB}}_{V} = a \cdot {\text{SD}}_{\text{VFP}}^{b} ,\quad {\text{AWP}}_{V} = m \cdot {\text{SD}}_{\text{VFP}}^{p} .$$ Hstruct + Vstruct: We estimate the biomass and productivity from the horizontal and vertical forest structures. $${\text{AGB}}_{H + V} = a \cdot {\text{TCH}}^{b} \cdot {\text{SD}}_{\text{VFP}}^{p} ,\quad {\text{AWP}}_{H + V} = m \cdot {\text{TCH}}^{n} \cdot {\text{SD}}_{\text{VFP}}^{p} .$$ For all three variants, we applied the artificial forest stand data set described in Sect. 2.1. For each simulated forest stand, a virtual 3D airborne lidar scan was available (Sect. 2.2). Using these lidar data, all structural metrics (e.g., TCH, SDVFP) were calculated for each stand (Sect. 2.3). Besides these metrics, AGB and AWP are also known for all stands. With this information, we fitted the unknown parameters of equations (a)–(c) by linear regression after a double-logarithmic transformation (a sketch of this fitting step is given below).
2.5 Preparation of Nationwide Biomass and Productivity Maps with the German National Forest Inventory Data Set
The German national forest inventory (BWI) records forest stands on a sample basis according to a standardized procedure throughout Germany (Thuenen-Institut 2015). It consists of more than 45,000 field plots and is repeated every 10 years. The field plot size varies (circa 20 m × 20 m) due to the angle count sampling method. For every plot of the BWI data set, the stem diameter (DBH) and species identity of all measured trees were available. Tree height and aboveground biomass were calculated using the same species-specific allometries as applied to the forest factory data set (Sect. 2.1). The BWI plots are distributed on a regular grid across Germany. Plot density varies between regions, but there is at least one plot located within each 4 km × 4 km grid cell. For each forest stand of the BWI, all structural and stand features were determined as described above (cf. section "Describing forest structure from field data" of Appendix 1), a virtual lidar campaign was carried out and the remote sensing-based metrics were then calculated (cf. section "Describing forest structure from remote sensing data" of Appendix 1). Using the information on forest structure from simulated lidar remote sensing for each BWI plot and the equations from Sect. 2.4 (with parameters derived from the forest factory data set), we derived forest biomass and aboveground wood productivity maps for Germany. As the exact positions of the BWI plots are unknown, we used their approximate positions (on the regular grid, i.e., with a precision of at least 4 km) to geolocate the estimated forest attributes for each plot. We used Voronoi tessellation to interpolate between BWI plots. In this procedure, each forested point in Germany was assigned to the closest BWI plot. Gridded maps of the different forest attributes were produced by rasterizing the tessels with 250 m pixel resolution. The rasters were masked with a forest cover mask based on a forest cover map from Hansen et al. (2013; 50% minimum tree cover, resampled to 250 m resolution).
3 Results
3.1 Estimating Forest Structure from Lidar Remote Sensing
Analyzing 375,000 forest stands from the forest factory data set, we investigated whether forest structure can be determined using lidar remote sensing. For this purpose, two field-based metrics were analyzed: basal area (BA) as a proxy for the horizontal forest structure (i.e., the density of the forest) and tree height heterogeneity (SDth) as a proxy for the vertical heterogeneity of a forest stand.
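As an aside, the parameter fitting described in Sect. 2.4 can be sketched as follows for approach (c): after the double-logarithmic transformation, ln(AGB) = ln(a) + b ln(TCH) + p ln(SDVFP) is fitted by ordinary least squares. The stand attributes below are synthetic, not the forest factory data.

```python
# Sketch: fit AGB = a * TCH^b * SD_VFP^p by OLS in double-log space
# (synthetic data; AWP is fitted analogously).
import numpy as np

rng = np.random.default_rng(1)
tch = rng.uniform(2, 30, 1000)                    # hypothetical TCH values (m)
sd_vfp = rng.uniform(0.5, 8, 1000)                # hypothetical SD_VFP values (m)
agb = 7.5 * tch**1.2 * sd_vfp**0.23 * rng.lognormal(0, 0.2, 1000)  # synthetic AGB (t/ha)

X = np.column_stack([np.ones_like(tch), np.log(tch), np.log(sd_vfp)])
coef, *_ = np.linalg.lstsq(X, np.log(agb), rcond=None)
a, b, p = np.exp(coef[0]), coef[1], coef[2]
print(f"AGB ~ {a:.2f} * TCH^{b:.2f} * SD_VFP^{p:.2f}")
```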
In total, we tested 325 correlations between structural descriptors estimated from field data and from remote sensing data. Top-of-canopy height (TCH) as a horizontal descriptor estimated from remote sensing has the highest correlation with the horizontal field-based metric basal area (r2 = 0.86, see Fig. 3a). The quadratic top-of-canopy height (r2 = 0.76, see Fig. 4) and the fractional canopy cover with a threshold of 10 m (r2 = 0.81) show similar results and are also suitable for estimating the horizontal structure.
Fig. 3: Remote sensing-based estimation of forest structure based on the analysis of the forest factory data set. The figure shows a the estimate of the horizontal forest structure (basal area) using top-of-canopy height (TCH) derived from lidar and b the estimate of the vertical forest structure (tree height heterogeneity) using the standard deviation of the vertical foliage profile (SDVFP), also derived from lidar. For this study, all forest inventories from the forest factory data set (in total 375,000 temperate forest stands with a size of 20 m × 20 m) were analyzed
Fig. 4: Correlations (coefficient of determination r2) between field-based structural metrics and remote sensing-based metrics. Numbers and gray scale indicate the coefficient of determination. For each of the 325 correlations, in total 375,000 forest stands from the forest factory data set (Sect. 2.1) were analyzed. For every virtual forest stand, a forest inventory was available, allowing the calculation of the aboveground biomass (AGB) and several structural indices (e.g., maximum tree height, standard deviation of tree heights). The productivity AWP for each stand was determined by applying the FORMIND model. All structural indices from remote sensing are derived from the virtual lidar campaign for each forest stand in the forest factory data set. Highlighted are the two correlations from Fig. 3 for the horizontal structure (blue) and vertical structure (green). All structural metrics are explained in section "Describing forest structure from field data" of Appendix 1 (field-based) and section "Describing forest structure from remote sensing data" of Appendix 1 (remote sensing-based)
For the vertical forest structure, the standard deviation of the vertical foliage profile (SDVFP) shows the strongest correlation with the field-based metric SDth (r2 = 0.75, see Fig. 3b). The Shannon index of the vertical foliage profile also reveals a relevant correlation (r2 = 0.53, see Fig. 4), but the analysis did not identify any further remote sensing metric that could be used to describe the vertical structure. A full list of all analyzed correlations between field-based structural metrics and remote sensing-based metrics is shown in Fig. 4. For the analyses that follow, the remote sensing-based metrics with the highest correlations were used, i.e., TCH as a proxy for the horizontal structure and SDVFP as a proxy for the vertical structure.
3.2 Analyzing the Role of Remote Sensing-Based Forest Structure for Biomass and Productivity Estimations
All 375,000 temperate forest stands from the forest factory data set were analyzed to investigate the role of forest structure in the determination of forest biomass and aboveground wood productivity. For this purpose, the forest stands were grouped into sixteen forest structure classes with equal spacing, consisting of four horizontal (TCH 0–6 m, 6–12 m, 12–18 m, > 18 m) and four vertical (SDVFP 0–2 m, 2–4 m, 4–6 m, > 6 m) structure classes; a sketch of this classification is given below.
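The following minimal sketch assigns stands to the sixteen structure classes using the class boundaries given above; the input descriptor values are hypothetical.

```python
# Sketch: map (TCH, SD_VFP) pairs to one of 16 structure classes
# (4 horizontal x 4 vertical classes, boundaries as in Sect. 3.2).
import numpy as np

tch = np.array([4.2, 9.7, 15.1, 22.3])       # horizontal descriptor (m), hypothetical
sd_vfp = np.array([1.1, 3.5, 5.2, 7.0])      # vertical descriptor (m), hypothetical

h_class = np.digitize(tch, [6, 12, 18])       # 0..3: open ... dense
v_class = np.digitize(sd_vfp, [2, 4, 6])      # 0..3: homogeneous ... heterogeneous
structure_class = 4 * h_class + v_class       # 0..15, one of the 16 classes
print(structure_class)                        # -> [ 0  5 10 15]
```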
It turns out that forest structure is a key factor in estimating forest biomass and aboveground wood productivity. Forest biomass increases with increasing TCH, which is a proxy for the horizontal structure (Fig. 5a). However, biomass in forests with an open horizontal structure (TCH < 12 m) is much more influenced by the vertical structure than in closed forests (TCH > 12 m).
Fig. 5: Role of forest structure for biomass and aboveground wood productivity (AWP) estimations. For all 375,000 forest stands, forest structure was estimated from remote sensing. As horizontal forest structure descriptor, the top-of-canopy height (TCH) was used, and as vertical structure descriptor, the standard deviation of the vertical foliage profile (SDVFP) was used. All forest stands were grouped in 16 structure classes (four horizontal classes and four vertical classes). Shown are a the observed mean aboveground biomass and b the observed mean aboveground wood productivity (AWP) in relation to the forest structure classes. Biomass and AWP values are taken directly from the forest factory data set. Error bars indicate the standard deviation
The relation between aboveground wood productivity and forest structure is more complex. For stands with a homogeneous vertical structure (SDVFP between 0 and 2 m), productivity increases with the density of the forest (high horizontal structure; Fig. 5b). The highest productivity is achieved for these single-layer forest stands (SDVFP < 2 m) at a medium-dense horizontal structure (TCH around 15 m). However, as the vertical structure becomes more heterogeneous (SDVFP > 2 m), the positive effect of the horizontal forest structure on productivity is reduced. For forest stands with a heterogeneous vertical structure (SDVFP > 4 m), productivity increases only slightly with increasing density of the horizontal structure and can even decline for forests with a dense horizontal structure (TCH > 18 m). Forests with a heterogeneous vertical structure represent multilayered forests in which large trees shade smaller trees, which reduces the aboveground wood productivity of the smaller trees. To summarize, following the analysis of 375,000 forest stands from the forest factory data set, TCH (as a proxy for the horizontal forest structure) plays an important role in forest state monitoring (like forest biomass). However, both horizontal and vertical forest structures are relevant for aboveground wood productivity estimates (Fig. 5b; Table 1).
Table 1: Aboveground biomass (AGB) and aboveground wood productivity (AWP) determined using structural metrics from 3D remote sensing (TCH as horizontal descriptor, SDVFP as vertical descriptor)
Hstruct:
  Forest biomass AGB (t/ha): \({\text{AGB}}_{H} = 9.49 \cdot {\text{TCH}}^{1.22}\) (r2 = 0.90, rmse = 28 t/ha (30%))
  Forest productivity AWP (t/(ha year)): \({\text{AWP}}_{H} = 1.68 \cdot {\text{TCH}}^{0.31}\) (r2 = 0.14, rmse = 2.1 t/(ha year) (75%))
Vstruct:
  Forest biomass AGB (t/ha): \({\text{AGB}}_{V} = 34.77 \cdot {\text{SD}}_{\text{VFP}}^{0.48}\) (r2 = 0.01, rmse = 98 t/ha (158%))
  Forest productivity AWP (t/(ha year)): \({\text{AWP}}_{V} = 4.03 \cdot {\text{SD}}_{\text{VFP}}^{ - 0.34}\) (r2 = 0.09, rmse = 2.1 t/(ha year) (75%))
Hstruct + Vstruct:
  Forest biomass AGB (t/ha): \({\text{AGB}}_{H + V} = 7.55 \cdot {\text{TCH}}^{1.20} \cdot {\text{SD}}_{\text{VFP}}^{0.23}\) (r2 = 0.90, rmse = 29 t/ha (31%))
  Forest productivity AWP (t/(ha year)): \({\text{AWP}}_{H + V} = 2.55 \cdot {\text{TCH}}^{0.34} \cdot {\text{SD}}_{\text{VFP}}^{ - 0.39}\) (r2 = 0.31, nRMSE = 64%)
For the calibration of the approaches, all forest stands (20 m × 20 m) from the forest factory data set were used. R-squared values indicate the correlation between the observed value from the forest factory data set and the estimated value using the corresponding approach. For all approaches we obtained a p value ≤ 0.01. Given are the root-mean-square error (RMSE) and the normalized RMSE (nRMSE; normalized by the mean value of the observation). The detailed scatterplots can be found in Appendix 1 (Figs. 9 and 10)
To quantify the relation between forest structure and biomass/productivity, three statistical approaches were investigated (cf. Sect. 2.4): two using information on only one structural dimension, and one using information on both horizontal and vertical forest structures (Table 1). The best results for biomass estimation are obtained with the Hstruct and Hstruct + Vstruct approaches (r2 = 0.90, nRMSE = 30%; Table 1 and Fig. 9), while only the Hstruct + Vstruct approach is reasonably suitable for aboveground wood productivity estimation (r2 = 0.31, nRMSE = 64%; Table 1 and Fig. 10). The error of the productivity estimate for forest stands with a size of 20 m × 20 m can be reduced from 75 to 64% by taking both structural dimensions into account (Fig. 6; Table 1).
Fig. 6: Histograms of estimated forest biomass (a) and aboveground woody productivity (AWP, b) for all forest stands from the forest factory data set. Each of these forest attributes was estimated with the three approaches: Hstruct (horizontal structure), Vstruct (vertical structure) and Hstruct + Vstruct (horizontal and vertical structures). The histograms of the estimated values were compared with the observed values from all forest stands (black line). The detailed relationships between estimated and observed biomass/productivity are presented in Figs. 9 and 10
3.3 Case Study: Estimating Forest Biomass and Productivity at Country Level
We estimated forest structure for the whole of Germany based on 45,000 plots from the German national forest inventory (BWI data set) and related this information to forest biomass and aboveground wood productivity. For this country-wide analysis, we used lidar data derived from a virtual lidar campaign over all BWI forest stands (for more details see Sect. 2.5; a sketch of the subsequent map production is given below). With this lidar data set, we created two maps of the estimated horizontal and vertical structures of forests in Germany (Fig. 7).
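The map production step of Sect. 2.5, in which every 250 m forest pixel receives the attribute value of the nearest inventory plot (equivalent to rasterizing the Voronoi tessellation of the plot locations), can be sketched as follows. Coordinates and attribute values are hypothetical, not the BWI data.

```python
# Sketch: nearest-plot assignment on a 250 m grid (Voronoi rasterization).
import numpy as np
from scipy.spatial import cKDTree

plots_xy = np.random.default_rng(2).uniform(0, 100_000, (500, 2))  # plot positions (m)
plot_agb = np.random.default_rng(3).uniform(50, 400, 500)          # AGB estimate per plot (t/ha)

gx, gy = np.meshgrid(np.arange(0, 100_000, 250), np.arange(0, 100_000, 250))
pixels = np.column_stack([gx.ravel(), gy.ravel()])

_, nearest = cKDTree(plots_xy).query(pixels)      # index of the closest plot per pixel
agb_map = plot_agb[nearest].reshape(gx.shape)     # 250 m biomass raster
print(agb_map.shape, agb_map.mean())
```

In practice, this raster would then be masked with the forest cover map, as described in Sect. 2.5.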
According to this analysis, 19% of all forest stands in Germany have a dense structure (horizontal structure with TCH > 18 m), which corresponds to tall and closed forests. Only 10% of all forests have an open structure (horizontal structure with TCH < 6 m), which means that these stands are either low in height or have a low tree density. The mean value of TCH, the proxy for horizontal structure, for Germany is 12.4 m. The share of forest area with a heterogeneous vertical structure (SDVFP > 6 m) is, at 6%, low compared to the share of forest area with a simple vertical structure (20%, SDVFP < 2 m). Most stands in Germany show an intermediate vertical heterogeneity (60%, SDVFP between 2 and 4 m), which corresponds to forests with a low number of layers. For the whole of Germany, the mean value of SDVFP, the proxy for the vertical forest structure, is 3.1 m. Overall, according to this study, Germany has large areas of dense forests, but these forests are often weakly vertically structured, with only one to two layers.
Fig. 7: Horizontal and vertical forest structures for Germany estimated from 3D remote sensing. As horizontal index of forest structure, we used top-of-canopy height (as a proxy for the density of a forest stand), and as index of the vertical heterogeneity, the standard deviation of the vertical foliage profile was used. Low values of the vertical structure stand for a homogeneous structure, higher values for a heterogeneous structure
The densest forests can be found in the northern and southern parts of Germany. Within Germany, there is a trend for the horizontal structure to become increasingly dense from north to south (Fig. 16a). Looking at the east–west gradient of the horizontal structure, however, one finds the densest stands in the middle region; at the outer edges, the horizontal structure is rather open (Fig. 16c). A similar pattern emerges for the vertical structural index, with the most heterogeneous stands in the southern part of Germany (Fig. 16b). Regarding the relationship between horizontal forest structure and topographic gradients, no clear trend was observable (Fig. 16e). What can be detected, however, is that the variability between open and dense forest stands increases significantly at altitudes above 1000 m (Fig. 16e). The same applies to the vertical structure (Fig. 16f). In addition, a clear trend can be identified, with vertically homogeneous stands at lower altitudes and vertically heterogeneous stands at higher altitudes (Fig. 16f). Three regions in Germany show a high structural heterogeneity: the Black Forest, the Bavarian Forest and the mid-Alpine foothills. Comparing these structural characteristics between forest areas inside and outside national parks reveals only minor differences (Fig. 17). The horizontal structure of these forest stands is similar (TCH national park: 11.7 m vs. TCH non-national park: 12.4 m), as is the vertical structure (SDVFP national park: 3.3 m vs. SDVFP non-national park: 3.1 m). Using the estimated horizontal and vertical forest structures of each stand in Germany (cf. Fig. 7), it is now possible to estimate forest biomass and aboveground wood productivity. Applying the best approach, Hstruct + Vstruct (as shown in Table 1), we produced a forest biomass and an aboveground wood productivity map for Germany (Fig. 8). In southern Germany, we found higher biomass values compared to the north.
The total biomass of all forested areas in Germany is 2.3 Gt, and the mean biomass of forested areas is 209 t/ha. We estimated a total aboveground wood productivity of 43 Mt/year with a mean value of 4 t/ha/year.
Fig. 8: Estimation of a forest biomass and b aboveground wood productivity for Germany by applying the Hstruct + Vstruct approach. The Hstruct + Vstruct approach was calibrated using all forest stands from the forest factory data set (cf. Table 1). Structural information for each forest stand was taken from Fig. 7
We obtained a good agreement when comparing the estimated biomass (Hstruct + Vstruct approach) with the observed biomass from the BWI plots (r2 = 0.76, RMSE = 65 t/ha; cf. Fig. 14). Using only horizontal structural information also produces good results for the biomass estimation (r2 = 0.76, RMSE = 66 t/ha, cf. Fig. 14). However, if only the vertical structure is used as input, the results are poor (r2 = 0.14, RMSE = 125 t/ha, cf. Fig. 14). This shows that for the estimation of biomass, the information about the horizontal structure is sufficient.
4 Discussion
Forest structure is usually quantified using tree sizes from inventory measurements. In this study, we estimated forest structure from remote sensing (lidar) and provide a workflow to generate forest structure maps for Germany.
4.1 Estimating Horizontal and Vertical Forest Structures
For each structural dimension, a metric was found that could be used to determine the horizontal and vertical forest structures. It was harder to find a metric for the vertical structure. For the measurement of vertical forest structure by remote sensing, the standard deviation of the vertical foliage profile is very well suited, as it reflects the heterogeneity of the vertical leaf distribution. For the estimation of the horizontal structure, top-of-canopy height (TCH) was selected, which is a height measurement and at first glance not a structural measure. However, TCH is a quite general metric that also contains information on forest structure. On the one hand, it is affected by the height and shape of the largest trees in the upper canopy and is therefore closely linked to the basal area of these canopy trees. On the other hand, TCH also provides information about the horizontal vegetation density and the openness of a forest stand if, for example, canopy gaps occur (Lu et al. 2016). For these reasons, TCH is suited to describe the density of a forest and thus its horizontal structure. However, the horizontal structure can also be described by other approaches that more closely include the positions of all trees. For example, the concepts of point pattern analysis could be used for this purpose (Wiegand et al. 2013). This would require identifying all individual trees from remote sensing, which has so far only been tested for small areas (e.g., Ferraz et al. 2016).
4.2 The Role of Forest Structure for Biomass and Productivity
For the estimation of forest biomass and productivity, the concept of forest structure classification in combination with forest modeling and remote sensing has a high potential for applications on larger scales. For 375,000 forest stands, we investigated the relations between structure, biomass and aboveground wood productivity in forest ecosystems. Forest structure has been shown to be an important factor for estimating biomass and productivity from remote sensing.
In particular, the horizontal forest structure seems to be a good predictor of forest biomass, while the vertical forest structure showed only weak relationships with forest biomass. Hence, metrics describing the horizontal structure of forests might be a good choice for forest biomass estimations, which is in accordance with other studies (Asner and Mascaro 2014; Knapp et al. 2018a). Metrics describing the vertical structure (e.g., the standard deviation of the vertical foliage profile) are useful for forest productivity estimations. The vertical foliage profile showed a much higher correlation with the ground-based standard deviation of tree heights than the standard deviation of the classical canopy height model profile. In the VFP, larger and smaller canopy trees have the same weight, while classical lidar profiles are dominated by upper canopy trees. Small canopy trees may make only a minor contribution to stand biomass, but can play an important role for stand productivity, which emphasizes the role of vertical structure for productivity estimations (Bohn and Huth 2017). In a study by Stark et al. (2012), the Shannon index of the lidar profile was successfully related to forest dynamics like productivity and mortality rates. In our study, this Shannon index also performed very well as a vertical metric, as it measures the heterogeneity of leaf densities in different layers. Our aim was to estimate forest structure from a single remote sensing measurement and use this information for improved biomass and productivity estimations. An alternative approach for productivity estimations would be to analyze the change in forest structure and relate it to biomass change or productivity. For example, with lidar or radar it is possible to detect larger changes in forest height over a certain time period. However, an exact detection of the height change via remote sensing is often not possible, because old-growth forests in particular rarely change their canopy height, and the detection error of the sensor is often larger than the actual change in height (e.g., Knapp et al. 2018b). In addition, data are often only available from a single remote sensing campaign (e.g., country-wide airborne lidar campaigns). Therefore, there is a strong need for the estimation of forest productivity from forest structure. We assumed a simplified Central European climate and sites with homogeneous soil conditions, without any spatial heterogeneities. This has the advantage of eliminating the influence of changing climate and soil properties on biomass and productivity estimations. It allowed us to study the fundamental role of forest structure for biomass and productivity. Future studies will investigate the influence of changing environmental factors on these estimations. Looking ahead, information about forest structure can be used to distinguish different forest types, for example natural forests, forests that are disturbed by natural hazards or forests that have been managed with different strategies (Dieler et al. 2017; Müller et al. 2000; Peck et al. 2014; Young et al. 2017). It is also worthwhile to consider forest structure in the context of biodiversity research (Jetz et al. 2016; Pereira et al. 2013; Pettorelli et al. 2016). Forest structure can be a valuable indicator of biodiversity, since habitat structure and habitat heterogeneity can be correlated with animal and plant species diversity (e.g., Boncina 2000; Ishii et al. 2004; Schall et al. 2018a; Tews et al. 2004).
All of this emphasizes the need for large-scale forest structure estimations from remote sensing.
4.3 Linking Forest Models with Remote Sensing
In this study, the role of forest structure for biomass and productivity estimations was investigated by analyzing a large data set generated with a forest model. The synergy of forest models and remote sensing is a promising approach. Structural realism in a model is a requirement for applying such a concept, which was realized here by the individual-based approach of the FORMIND model (Fischer et al. 2016; Shugart et al. 2015). In particular, forest gap models are helpful tools for understanding forest responses to climate change, modified disturbance regimes and structural changes (Shugart et al. 2018). A few studies have tried to link remote sensing and forest modeling, covering model parameterization and initialization (Falkowski et al. 2010; Hurtt et al. 2004; Ranson et al. 2001), the exploration of several remote sensing metrics for ecosystem service estimations (Knapp et al. 2018a; Köhler and Huth 2010; Palace et al. 2015) and error quantification (Frazer et al. 2011; Hurtt et al. 2010). To sum up, forest models can help to find reliable estimates of forest biomass and productivity, as presented in this study. Gap models are particularly valuable because they take into account not only large-scale disturbances but also small-scale variations of forest structures (due to gap-building processes). Linking forest models with remote sensing can help to extrapolate local findings to larger scales and to better understand ecosystem patterns and processes (Knapp et al. 2018b; Rödig et al. 2017, 2018). Here, we used information from airborne lidar remote sensing; however, this analysis is not limited to airborne lidar, but can also be applied to other remote sensing techniques such as lidar and radar satellite missions. Therefore, the forest structure classification presented here can also be applied to other regions. Measurements of the upcoming satellite missions will offer a unique opportunity to reduce the uncertainties in the estimation of aboveground carbon emissions from forests (resulting from photosynthesis, respiration, mortality, human disturbances). These missions will also provide the opportunity to identify changes in forest structure, which can be relevant for the estimation of forest productivity. This will improve our understanding of the global carbon cycle, which is relevant for climate modeling and policy adaptation.
5 Conclusions
In this study, we applied a novel approach of linking remote sensing measurements with dynamic forest models to study the role of forest structure for biomass and productivity. While information on the horizontal structure seems to be sufficient for estimating biomass, information on both horizontal and vertical structures is required for estimating aboveground wood productivity. In a case study for Germany, maps of forest structure, biomass and aboveground wood productivity were provided. The presented workflow is transferable to other regions where remote sensing data are available. Future satellite missions that measure forest structure (like GEDI from 2018, BIOMASS from 2020 and Tandem-L expected from 2022) will allow the derivation of more accurate estimations of forest biomass, productivity and other forest ecosystem services.
This study originates from the workshop "Space-based Measurement of Forest Properties for Carbon Cycle Research" at the International Space Science Institute in Bern during November 2017.
Acknowledgements

We thank the Thünen Institute for providing the German national forest inventory data. We also want to thank Hans Pretzsch, Peter Biber and Michael Heym (TUM) for their input on forest structure and structure metrics. Kostas Papathanassiou, Victor Cazcarra-Bes, Matteo Pardini and Marivi Tello Alonso (DLR) gave useful insights into linking forest structure and remote sensing. We also thank the anonymous reviewers for their insightful comments and suggestions. This study was part of the HGF-Helmholtz-Alliance "Remote Sensing and Earth System Dynamics" HA-310 under the funding reference RA37012. NK was funded by the German Federal Ministry for Economic Affairs and Energy (BMWi) under the funding reference 50EE1416. FB was funded by the Deutsche Forschungsgemeinschaft (DFG) within the research unit FOR1246 (Kilimanjaro ecosystems under global change: linking biodiversity, biotic interactions and biogeochemical ecosystem processes). HHS was funded by NASA grants 14-TE14-0085 and 16-ESUSPI-16-0015.

Supplementary material 1 (RDS, 70.1 MB)

Appendix 1: Estimation of Forest Attributes Using Structural Information

Describing Forest Structure from Field Data

The study examined a total of 13 field-based metrics to describe forest structure, which are listed in the following. Forest structure was described, for example, by the basal area BA [m2], which is the sum of the basal area values BAi of all trees in a forest stand:

$$\text{BA} = \sum_i \text{BA}_i = \sum_i \frac{\pi}{4} d_i^2,$$

where \(d_i\) (m) is the stem diameter of tree \(i\) (in total \(n\) trees in a stand). Alternative metrics to describe the horizontal and vertical structures of a forest stand are:

standard deviation of stem diameters: \(\text{SD}_{\text{DBH}} = \sqrt{\frac{1}{n-1}\sum_i (d_i - \bar{d})^2}\)

coefficient of variation of all stem diameters: \(\text{CV}_{\text{DBH}} = \text{SD}_{\text{DBH}} / \bar{d}\)

skewness of the diameter distribution: \(\text{Skew}_{\text{DBH}} = \frac{\frac{1}{n}\sum_{i=1}^{n} (d_i - \bar{d})^3}{\left(\frac{1}{n}\sum_{i=1}^{n} (d_i - \bar{d})^2\right)^{3/2}}\)

Gini coefficient of the diameter distribution: \(\text{Gini}_{\text{DBH}} = \frac{2\sum_i i \cdot d_i}{n \sum_i d_i} - \frac{n+1}{n}\), where \(d_i\) is the sorted list of stem diameters and \(\bar{d}\) is the mean diameter of all trees within a stand.

The same metrics can also be calculated for the tree height distribution (where \(H_i\) (m) is the height of a tree) or the basal area distribution. For the tree height distribution in particular, we calculated further metrics:

maximum height: \(H_{\max} = \max(H_i)\)

mean height: \(H_{\text{mean}} = \frac{1}{n}\sum_i H_i\)

quadratic mean height: \(H_{\text{quad.mean}} = \sqrt{\frac{1}{n}\sum_i H_i^2}\)

Lorey's height: \(H_{\text{Lorey's}} = \frac{\sum_i H_i \cdot \text{BA}_i}{\sum_i \text{BA}_i}\)

A short code sketch of these field-based metrics is given after the next paragraph.

Describing Forest Structure from Remote Sensing Data

Estimating forest structure from remote sensing is more challenging, as remote sensing data are not tree-based as in the field-based case. This study examined a total of 25 remote sensing-based metrics to describe forest structure.
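Picking up the forward reference above, here is a minimal Python sketch of the field-based metrics (our own illustration, not code from the paper; the function name and return layout are assumptions):

```python
import numpy as np

def field_structure_metrics(dbh, height):
    """Field-based structural metrics for one stand (dbh and height in m).

    A minimal sketch of the Appendix 1 formulas; names are illustrative.
    """
    dbh = np.sort(np.asarray(dbh, dtype=float))   # sorted ascending for the Gini index
    h = np.asarray(height, dtype=float)
    n = dbh.size
    ba_i = np.pi / 4.0 * dbh**2                   # per-tree basal area [m^2]
    d_mean = dbh.mean()
    sd_dbh = dbh.std(ddof=1)                      # SD of stem diameters
    skew = np.mean((dbh - d_mean)**3) / np.mean((dbh - d_mean)**2)**1.5
    ranks = np.arange(1, n + 1)
    gini = 2.0 * np.sum(ranks * dbh) / (n * dbh.sum()) - (n + 1.0) / n
    return {
        "BA": ba_i.sum(),                          # stand basal area
        "SD_DBH": sd_dbh,
        "CV_DBH": sd_dbh / d_mean,                 # coefficient of variation
        "Skew_DBH": skew,
        "Gini_DBH": gini,
        "H_max": h.max(),
        "H_mean": h.mean(),
        "H_quad_mean": np.sqrt(np.mean(h**2)),     # quadratic mean height
        "H_Loreys": np.sum(h * ba_i) / ba_i.sum()  # basal-area-weighted height
    }
```

For a toy three-tree stand, field_structure_metrics([0.12, 0.25, 0.31], [9.0, 18.0, 22.0]) returns the full metric dictionary.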
The basis for most metrics is the lidar-derived canopy height model (CHM) with a spatial resolution of 1 m × 1 m. In this study, we described the horizontal structure of each 20 m × 20 m forest stand mainly by the mean top-of-canopy height TCH (m), which is the mean of the canopy height model (CHM):

$$\text{TCH} = \frac{\sum_{i=1}^{n} P_{\text{CHM},i}}{n},$$

where \(P_{\text{CHM},i}\) is the forest height of the CHM in pixel \(i\) and \(n\) is the number of pixels. Alternative metrics based on the CHM are:

maximum height: \(H_{\max} = \max(P_{\text{CHM},i})\)

quadratic TCH: \(\text{QTCH} = \sqrt{\frac{\sum_{i=1}^{n} P_{\text{CHM},i}^2}{n}}\)

relative height of the CHM: \(\text{RH}_q = \text{quantile}_q(P_{\text{CHM},i})\)

It is also possible to calculate the standard deviation, the coefficient of variation and the skewness of the CHM (the functions are described above in the field-based section). In this study, we considered further advanced metrics based on the CHM:

Shannon index of the CHM: \(\text{Shannon}_{\text{CHM}} = -\sum_{i=1}^{i_{\max}} \text{CHM}(h_i) \cdot \ln(\text{CHM}(h_i))\), with \(\text{CHM}(h_i)\) being the CHM profile value (pixel count) in bin \(i\); \(\text{CHM}(h_i)\) has to be > 0, and bins with \(\text{CHM}(h_i) = 0\) are ignored.

kurtosis of the CHM: \(\text{Kurtosis}_{\text{CHM}} = n \cdot \frac{\sum_{i=1}^{n} (P_{\text{CHM},i} - \overline{P_{\text{CHM}}})^4}{\left(\sum_{i=1}^{n} (P_{\text{CHM},i} - \overline{P_{\text{CHM}}})^2\right)^2}\), with \(n\) being the total pixel number, \(P_{\text{CHM},i}\) the value of pixel \(i\) and \(\overline{P_{\text{CHM}}}\) the mean value of the CHM (which is the same as TCH).

p–h ratio of the CHM: \(P{:}H_{\text{CHM}} = \frac{h\left(\max_{i \in [1, i_{\max}]} \text{CHM}(h_i)\right)}{\max_{i \in [1, i_{\max}]} h_i}\), with \(\text{CHM}(h_i)\) being the pixel count in height bin \(h_i\) and \(i_{\max}\) the highest height bin.

Another class of metrics calculates the fractional canopy cover above a certain threshold \(x\) (m) using the CHM: \(\text{FCC}_x = \frac{\sum_{h_i = x}^{h_{\max}} \text{CHM}(h_i)}{\sum_{h_i = 0}^{h_{\max}} \text{CHM}(h_i)}\), with \(\text{CHM}(h_i)\) the count of CHM pixels in height bin \(h_i\) and \(x\) the height threshold to distinguish canopy from gap.

Instead of using the CHM as the basic information for calculating all these lidar metrics, we used the vertical foliage profile (VFP) for a second class of metrics. All the above-described metrics can be calculated using the VFP. For this purpose, the VFP was divided into 1-m height classes. These height classes can then be used in the equations described above by replacing the CHM. The generation of a VFP profile from a CHM is described below.
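Before turning to the VFP, a compact Python sketch of the CHM-based metrics just listed may help; the quantile levels, the canopy-cover threshold and all names are illustrative assumptions, not values from the paper:

```python
import numpy as np

def chm_metrics(chm, bin_width=1.0, fcc_threshold=2.0):
    """Lidar metrics from a CHM (2-D array of pixel heights in m)."""
    p = np.asarray(chm, dtype=float).ravel()
    n = p.size
    tch = p.mean()                                  # mean top-of-canopy height
    qtch = np.sqrt(np.mean(p**2))                   # quadratic TCH
    rh = {q: np.quantile(p, q) for q in (0.50, 0.75, 0.95)}  # example RH_q levels
    # Height-bin pixel counts CHM(h_i) for the profile-based metrics
    edges = np.arange(0.0, p.max() + bin_width, bin_width)
    counts, _ = np.histogram(p, bins=edges)
    rel = counts[counts > 0] / counts.sum()         # bins with CHM(h_i) = 0 ignored
    shannon = -np.sum(rel * np.log(rel))            # Shannon index of the CHM
    kurtosis = n * np.sum((p - tch)**4) / np.sum((p - tch)**2)**2
    ph_ratio = edges[np.argmax(counts)] / p.max()   # mode height / maximum height
    fcc = np.mean(p >= fcc_threshold)               # fractional canopy cover FCC_x
    return {"TCH": tch, "QTCH": qtch, "RH": rh, "Shannon": shannon,
            "Kurtosis": kurtosis, "PH_ratio": ph_ratio, "FCC": fcc}
```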
Calculating the Vertical Foliage Profile from a CHM

The vertical foliage profile (VFP) was reconstructed from the CHM profile at 1 m vertical resolution following the approach described by Harding et al. (2001):

$$\text{VFP}(h_i) = \frac{1}{k \cdot \Delta h} \cdot \ln\!\left(\frac{P(h_i)}{P(h_{i+1})}\right),$$

with \(k\) being the light extinction coefficient, \(\Delta h\) the width of one height bin and \(P(h_i)\) the value of the cumulative CHM profile in height bin \(h_i\). The method reconstructs the vertical leaf profile by giving more weight to lower parts of the profile. All pixels below 5 m height were regarded as ground, and the light extinction coefficient was set to 0.3, which has been shown to result in good LAI estimations (Getzin et al. 2017).
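The transformation above is straightforward to implement. The following Python sketch applies it to a CHM; the clipping against log(0) and the parameter names are our own assumptions:

```python
import numpy as np

def vertical_foliage_profile(chm, k=0.3, dh=1.0, ground_cutoff=5.0):
    """Reconstruct the VFP from a CHM following the equation above.

    P(h) is taken as the fraction of canopy pixels at or above height h;
    pixels below ground_cutoff are treated as ground and k = 0.3 as in
    the text. Returns the lower bin edges and the foliage density per bin.
    """
    p = np.asarray(chm, dtype=float).ravel()
    p = p[p >= ground_cutoff]                       # discard ground returns
    edges = np.arange(0.0, p.max() + dh, dh)
    P = np.array([np.mean(p >= h) for h in edges])  # cumulative CHM profile
    P = np.clip(P, 1e-9, None)                      # avoid log(0) at the canopy top
    vfp = np.log(P[:-1] / P[1:]) / (k * dh)
    return edges[:-1], vfp
```

Vertical metrics such as SD_VFP can then be computed by applying the formulas of the previous subsections to this profile instead of the CHM, as described above.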
Estimation of Forest Biomass and Productivity Using Forest Structure

See Figs. 9 and 10.

Fig. 9: Relationship between observed biomass and estimated biomass derived by three different approaches (see Table 1). Each point represents one of 375,000 forest stands from the forest factory data set. The observed biomass was derived by summing up the biomass values of all trees in the 20 m × 20 m stand. The estimated biomass was determined using the structural information for each forest stand. a Estimation of biomass using only information from the horizontal structural index TCH (AGB = 9.49 · TCH^1.22, r² = 0.90), b using the vertical structural index SD_VFP (AGB = 34.77 · SD_VFP^0.48, r² = 0.01) and c using the vertical and horizontal structural indices (AGB = 7.55 · TCH^1.20 · SD_VFP^0.23, r² = 0.90). A comparison of the estimated biomass values for the different approaches is shown in Fig. 6a.

Fig. 10: Relationship between observed and estimated aboveground woody productivity (AWP) for 375,000 forest stands (forest factory data set). Each dot represents one forest stand. a Estimation of productivity using only the horizontal structural index TCH (AWP = 1.68 · TCH^0.31, r² = 0.14), b only the vertical structural index SD_VFP (AWP = 4.03 · SD_VFP^−0.34, r² = 0.09) and c using the vertical and horizontal structural indices (AWP = 2.55 · TCH^0.34 · SD_VFP^−0.39, r² = 0.31). A comparison of the estimated productivity values with the different approaches is shown in Fig. 6b.

Appendix 2: Analysis of the German Forest Inventory Data Set

All analyses so far referred to the forest factory data set. This appendix reproduces all analyses with the empirical BWI data set. For each forest stand of the BWI data set, a virtual lidar campaign was carried out and the remote sensing-based metrics were then calculated. See Figs. 11, 12, 13, 14, 15, 16 and 17.

Fig. 11: Remote sensing-based estimation of forest structure using the BWI data set. Each dot represents one stand of the BWI. The figure shows the estimate of a the horizontal forest structure (basal area) from lidar using top-of-canopy height and b the vertical forest structure (tree height heterogeneity) from lidar using the standard deviation of the vertical foliage profile.

Fig. 12: Overview of all correlations between field-based structural metrics and remote sensing-based metrics based only on the BWI data set. Numbers and gray scale indicate the coefficient of determination. All structural metrics are explained in Appendix 1.

Fig. 13: Role of forest structure for biomass, derived from the BWI data set (more than 45,000 field plots, 20 m × 20 m). Forest structure is estimated from remote sensing. The top-of-canopy height (TCH) was used as horizontal forest structure descriptor and the standard deviation of the vertical foliage profile (SD_VFP) as vertical structure descriptor. Shown is the mean aboveground biomass in relation to the forest structure classes. Error bars indicate the standard deviation.

Fig. 14: Relationship between observed biomass and estimated biomass. Each point represents one forest stand from the forest inventory data set BWI. The observed biomass was taken from the BWI data set. The estimated biomass values were determined using different approaches (cf. Table 1) and information on forest structure. a Estimation of biomass using only the horizontal structural index TCH, b only the vertical structural index SD_VFP and c using the vertical and horizontal structural indices. A comparison of the estimated values with the different approaches is shown in Fig. 15.

Fig. 15: Histogram of forest biomass estimates for Germany based on the BWI data set. The biomass was estimated using three different approaches: H (horizontal structure, blue), V (vertical structure, green) and H + V (horizontal and vertical structures, red). The histogram was compared with the measured values from the BWI data set (black line).

Fig. 16: Forest structure of Germany over different gradients. Mean value of the horizontal structure (TCH) a from south to north, c from west to east in Germany and e over the altitudinal gradient. Mean value of the vertical structure (SD_VFP) b from south to north, d from west to east in Germany and f over the altitudinal gradient. The structure values correspond to Fig. 7 (forest structure maps of Germany).

Fig. 17: Histograms of structural metrics for forests outside of national parks and inside national parks. The top-of-canopy height (TCH, left) was used as horizontal forest structure descriptor, and the standard deviation of the vertical foliage profile (SD_VFP, right) as vertical structure descriptor.

References

Asner GP, Mascaro J (2014) Mapping tropical forest carbon: calibrating plot estimates to a simple LiDAR metric. Remote Sens Environ 140:614–624
Bohn FJ, Huth A (2017) The importance of forest structure to biodiversity–productivity relationships. R Soc Open Sci 4:160521
Bohn FJ, Frank K, Huth A (2014) Of climate and its resulting tree growth: simulating the productivity of temperate forests. Ecol Model 278:9–17
Bonan GB (2008) Forests and climate change: forcings, feedbacks, and the climate benefits of forests. Science 320:1444–1449
Boncina A (2000) Comparison of structure and biodiversity in the Rajhenav virgin forest remnant and managed forest in the Dinaric region of Slovenia. Glob Ecol Biogeogr 9:201–211
Cazcarra-Bes V, Tello-Alonso M, Fischer R, Heym M, Papathanassiou K (2017) Monitoring of forest structure dynamics by means of L-band SAR tomography. Remote Sens 9:1229
Dănescu A, Albrecht AT, Bauhus J (2016) Structural diversity promotes productivity of mixed, uneven-aged forests in southwestern Germany. Oecologia 182:319–333
del Río M, Pretzsch H, Alberdi I, Bielak K, Bravo F, Brunner A, Condés S, Ducey MJ, Fonseca T, von Lüpke N, Pach M, Peric S, Perot T, Souidi Z, Spathelf P, Sterba H, Tijardovic M, Tomé M, Vallet P, Bravo-Oviedo A (2016) Characterization of the structure, dynamics, and productivity of mixed-species stands: review and perspectives. Eur J For Res 135:23–49
Dieler J, Uhl E, Biber P, Müller J, Rötzer T, Pretzsch H (2017) Effect of forest stand management on species composition, structural diversity, and productivity in the temperate zone of Europe. Eur J For Res 136:739–766
Disney M (2018) Terrestrial LiDAR: a three-dimensional revolution in how we look at trees. New Phytol. https://doi.org/10.1111/nph.15517
Dobbertin M (2002) Influence of stand structure and site factors on wind damage—comparing the storms Vivian and Lothar. For Snow Landsc Res 77:187–205
Dubayah RO, Sheldon SL, Clark DB, Hofton MA, Blair JB, Hurtt GC, Chazdon RL (2010) Estimation of tropical forest height and biomass dynamics using lidar remote sensing at La Selva, Costa Rica. J Geophys Res Biogeosci 115:G00E09
Exbrayat J-F, Bloom AA, Carvalhais N, Fischer R, Huth A, MacBean N, Williams M (2019) Understanding the land carbon cycle with space data: current status and prospects. Surv Geophys. https://doi.org/10.1007/s10712-019-09506-2
Falkowski MJ, Hudak AT, Crookston NL, Gessler PE, Uebler EH, Smith AMS (2010) Landscape-scale parameterization of a tree-level forest growth model: a k-nearest neighbor imputation approach incorporating LiDAR data. Can J For Res 40:184–199
Ferraz A, Saatchi S, Mallet C, Meyer V (2016) Lidar detection of individual tree size in tropical forests. Remote Sens Environ 183:318–333
Fischer R, Bohn F, Dantas de Paula M, Dislich C, Groeneveld J, Gutiérrez AG, Kazmierczak M, Knapp N, Lehmann S, Paulick S, Pütz S, Rödig E, Taubert F, Köhler P, Huth A (2016) Lessons learned from applying a forest gap model to understand ecosystem and carbon dynamics of complex tropical forests. Ecol Model 326:124–133
Fischer R, Knapp N, Bohn F, Huth A (2019) Remote sensing measurements of forest structure types for ecosystem service mapping. In: Schröter M, Bonn A, Klotz S, Seppelt R, Baessler C (eds) Atlas of ecosystem services: drivers, risks, and societal responses. Springer, Cham, pp 63–67
Foley JA, DeFries R, Asner GP, Barford C, Bonan G, Carpenter SR, Chapin FS, Coe MT, Daily GC, Gibbs HK, Helkowski JH, Holloway T, Howard EA, Kucharik CJ, Monfreda C, Patz JA, Prentice IC, Ramankutty N, Snyder PK (2005) Global consequences of land use. Science 309:570–574
Frazer GW, Magnussen S, Wulder MA, Niemann KO (2011) Simulated impact of sample plot size and co-registration error on the accuracy and uncertainty of LiDAR-derived estimates of forest stand biomass. Remote Sens Environ 115:636–649
Getzin S, Fischer R, Knapp N, Huth A (2017) Using airborne LiDAR to assess spatial heterogeneity in forest structure on Mount Kilimanjaro. Landsc Ecol 32:1881–1894
Grace J, Mitchard E, Gloor E (2014) Perturbations in the carbon budget of the tropics. Glob Change Biol 20:3238–3255
Hansen MC, Potapov PV, Moore R, Hancher M, Turubanova SA, Tyukavina A, Thau D, Stehman SV, Goetz SJ, Loveland TR, Kommareddy A, Egorov A, Chini L, Justice CO, Townshend JRG (2013) High-resolution global maps of 21st-century forest cover change. Science 342:850–853
Hardiman BS, Bohrer G, Gough CM, Vogel CS, Curtis PS (2011) The role of canopy structural complexity in wood net primary production of a maturing northern deciduous forest. Ecology 92:1818–1827
Harding DJ, Lefsky MA, Parker GG, Blair JB (2001) Laser altimeter canopy height profiles: methods and validation for closed-canopy, broadleaf forests. Remote Sens Environ 76:283–297
Houghton RA, Lawrence KT, Hackler JL, Brown S (2001) The spatial distribution of forest biomass in the Brazilian Amazon: a comparison of estimates. Glob Change Biol 7:731–746
Hurtt GC, Dubayah R, Drake J, Moorcroft PR, Pacala SW, Blair JB, Fearon MG (2004) Beyond potential vegetation: combining lidar data and a height-structured model for carbon studies. Ecol Appl 14:873–883
Hurtt GC, Fisk J, Thomas RQ, Dubayah R, Moorcroft PR, Shugart HH (2010) Linking models and data on vegetation structure. J Geophys Res Biogeosci. https://doi.org/10.1029/2009JG000937
Ishii HT, Tanabe S, Hiura T (2004) Exploring the relationships among canopy structure, stand productivity, and biodiversity of temperate forest ecosystems. For Sci 50:342–355
Jetz W, Cavender-Bares J, Pavlick R, Schimel D, Davis FW, Asner GP, Guralnick R, Kattge J, Latimer AM, Moorcroft P, Schaepman ME, Schildhauer MP, Schneider FD, Schrodt F, Stahl U, Ustin SL (2016) Monitoring plant functional diversity from space. Nat Plants 2:16024
Knapp N, Fischer R, Huth A (2018a) Linking lidar and forest modeling to assess biomass estimation across scales and disturbance states. Remote Sens Environ 205:199–209
Knapp N, Huth A, Kugler F, Papathanassiou K, Condit R, Hubbell SP, Fischer R (2018b) Model-assisted estimation of tropical forest biomass change: a comparison of approaches. Remote Sens 10:731
Köhler P, Huth A (2010) Towards ground-truthing of spaceborne estimates of above-ground life biomass and leaf area index in tropical rain forests. Biogeosciences 7:2531–2543
Lefsky MA (2010) A global forest canopy height map from the moderate resolution imaging spectroradiometer and the geoscience laser altimeter system. Geophys Res Lett 37:L15401
Lefsky MA, Harding D, Cohen WB, Parker G, Shugart HH (1999) Surface lidar remote sensing of basal area and biomass in deciduous forests of eastern Maryland, USA. Remote Sens Environ 67:83–98
Liang JJ, Crowther TW, Picard N, Wiser S, Zhou M, Alberti G, Schulze ED, McGuire AD, Bozzato F, Pretzsch H, de-Miguel S, Paquette A, Hérault B, Scherer-Lorenzen M, Barrett CB, Glick HB, Hengeveld GM, Nabuurs GJ, Pfautsch S, Viana H, Vibrans AC, Ammer C, Schall P, Verbyla D, Tchebakova N, Fischer M, Watson JV, Chen HYH, Lei XD, Schelhaas MJ, Lu HC, Gianelle D, Parfenova EI, Salas C, Lee E, Lee B, Kim HS, Bruelheide H, Coomes DA, Piotto D, Sunderland T, Schmid B, Gourlet-Fleury S, Sonké B, Tavani R, Zhu J, Brandl S, Vayreda J, Kitahara F, Searle EB, Neldner VJ, Ngugi MR, Baraloto C, Frizzera L, Balazy R, Oleksyn J, Zawiła-Niedźwiecki T, Bouriaud O, Bussotti F, Finér L, Jaroszewicz B, Jucker T, Valladares F, Jagodzinski AM, Peri PL, Gonmadje C, Marthy W, O'Brien T, Martin EH, Marshall AR, Rovero F, Bitariho R, Niklaus PA, Alvarez-Loayza P, Chamuya N, Valencia R, Mortier F, Wortel V, Engone-Obiang NL, Ferreira LV, Odeke DE, Vasquez RM, Lewis SL, Reich PB (2016) Positive biodiversity–productivity relationship predominant in global forests. Science 354:196
Lu D, Chen Q, Wang G, Liu L, Li G, Moran E (2016) A survey of remote sensing-based aboveground biomass estimation methods in forest ecosystems. Int J Digit Earth 9:63–105
Malhi Y, Wood D, Baker TR, Wright J, Phillips OL, Cochrane T, Meir P, Chave J, Almeida S, Arroyo L, Higuchi N, Killeen TJ, Laurance SG, Laurance WF, Lewis SL, Monteagudo A, Neill DA, Vargas PN, Pitman NCA, Quesada CA, Salomão R, Silva JNM, Lezama AT, Terborgh J, Martinez RV, Vinceti B (2006) The regional variation of aboveground live biomass in old-growth Amazonian forests. Glob Change Biol 12:1107–1138
Müller S, Ammer C, Nüsslein S (2000) Analyses of stand structure as a tool for silvicultural decisions—a case study in a Quercus petraea—Sorbus torminalis stand. Forstwiss Cent 119:32–42
Palace MW, Sullivan FB, Ducey MJ, Treuhaft RN, Herrick C, Shimbo JZ, Mota-E-Silva J (2015) Estimating forest structure in a tropical forest using field measurements, a synthetic model and discrete return lidar data. Remote Sens Environ 161:1–11
Pan YD, Birdsey RA, Fang JY, Houghton R, Kauppi PE, Kurz WA, Phillips OL, Shvidenko A, Lewis SL, Canadell JG, Ciais P, Jackson RB, Pacala SW, McGuire AD, Piao SL, Rautiainen A, Sitch S, Hayes D (2011) A large and persistent carbon sink in the world's forests. Science 333:988–993
Peck JE, Zenner EK, Brang P, Zingg A (2014) Tree size distribution and abundance explain structural complexity differentially within stands of even-aged and uneven-aged structure types. Eur J For Res 133:335–346
Pereira HM, Ferrier S, Walters M, Geller GN, Jongman RHG, Scholes RJ, Bruford MW, Brummitt N, Butchart SHM, Cardoso AC, Coops NC, Dulloo E, Faith DP, Freyhof J, Gregory RD, Heip C, Hoft R, Hurtt G, Jetz W, Karp DS, McGeoch MA, Obura D, Onoda Y, Pettorelli N, Reyers B, Sayre R, Scharlemann JPW, Stuart SN, Turak E, Walpole M, Wegmann M (2013) Essential biodiversity variables. Science 339:277–278
Pettorelli N, Wegmann M, Skidmore A, Mücher S, Dawson TP, Fernandez M, Lucas R, Schaepman ME, Wang T, O'Connor B, Jongman RHG, Kempeneers P, Sonnenschein R, Leidner AK, Böhm M, He KS, Nagendra H, Dubois G, Fatoyinbo T, Hansen MC, Paganini M, de Klerk HM, Asner GP, Kerr JT, Estes AB, Schmeller DS, Heiden U, Rocchini D, Pereira HM, Turak E, Fernandez N, Lausch A, Cho MA, Alcaraz-Segura D, McGeoch MA, Turner W, Mueller A, St-Louis V, Penner J, Vihervaara P, Belward A, Reyers B, Geller GN (2016) Framing the concept of satellite remote sensing essential biodiversity variables: challenges and future directions. Remote Sens Ecol Conserv 2:122–131
Pommerening A (2002) Approaches to quantifying forest structures. Forestry 75:305–324
Pretzsch H (2009) Forest dynamics, growth and yield. Springer, Berlin
Pretzsch H, del Río M, Schütze G, Ammer C, Annighöfer P, Avdagic A, Barbeito I, Bielak K, Brazaitis G, Coll L, Drössler L, Fabrika M, Forrester DI, Kurylyak V, Löf M, Lombardi F, Matović B, Mohren F, Motta R, den Ouden J, Pach M, Ponette Q, Skrzyszewski J, Sramek V, Sterba H, Svoboda M, Verheyen K, Zlatanov T, Bravo-Oviedo A (2016) Mixing of Scots pine (Pinus sylvestris L.) and European beech (Fagus sylvatica L.) enhances structural heterogeneity, and the effect increases with water availability. For Ecol Manag 373:149–166
Ranson KJ, Sun G, Knox RG, Levine ER, Weishampel JF, Fifer ST (2001) Northern forest ecosystem dynamics using coupled models and remote sensing. Remote Sens Environ 75:291–302
Reineke LH (1933) Perfecting a stand-density index for even-aged forests. J Agric Res 46:627–638
Rödig E, Cuntz M, Heinke J, Rammig A, Huth A (2017) Spatial heterogeneity of biomass and forest structure of the Amazon rain forest: linking remote sensing, forest modelling and field inventory. Glob Ecol Biogeogr 26:1292–1302
Rödig E, Cuntz M, Rammig A, Fischer R, Taubert F, Huth A (2018) The importance of forest structure for carbon fluxes of the Amazon rainforest. Environ Res Lett 13:054013
Saatchi SS, Houghton RA, Alvala RCDS, Soares JV, Yu Y (2007) Distribution of aboveground live biomass in the Amazon basin. Glob Change Biol 13:816–837
Saatchi SS, Harris NL, Brown S, Lefsky M, Mitchard ETA, Salas W, Zutta BR, Buermann W, Lewis SL, Hagen S, Petrova S, White L, Silman M, Morel A (2011) Benchmark map of forest carbon stocks in tropical regions across three continents. Proc Natl Acad Sci USA 108:9899–9904
Schall P, Gossner MM, Heinrichs S, Fischer M, Boch S, Prati D, Jung K, Baumgartner V, Blaser S, Böhm S, Buscot F, Daniel R, Goldmann K, Kaiser K, Kahl T, Lange M, Müller J, Overmann J, Renner SC, Schulze ED, Sikorski J, Tschapka M, Türke M, Weisser WW, Wemheuer B, Wubet T, Ammer C (2018a) The impact of even-aged and uneven-aged forest management on regional biodiversity of multiple taxa in European beech forests. J Appl Ecol 55:267–278
Schall P, Schulze E-D, Fischer M, Ayasse M, Ammer C (2018b) Relations between forest management, stand structure and productivity across different types of Central European forests. Basic Appl Ecol 32:39–52
Shugart HH (2003) A theory of forest dynamics. The Blackburn Press, Caldwell
Shugart HH, Saatchi S, Hall FG (2010) Importance of structure and its measurement in quantifying function of forest ecosystems. J Geophys Res Biogeosci. https://doi.org/10.1029/2009JG000993
Shugart HH, Asner GP, Fischer R, Huth A, Knapp N, Le Toan T, Shuman JK (2015) Computer and remote-sensing infrastructure to enhance large-scale testing of individual-based forest models. Front Ecol Environ 13:503–511
Shugart HH, Wang B, Fischer R, Ma J, Fang J, Yan X, Huth A, Armstrong AH (2018) Gap models and their individual-based relatives in the assessment of the consequences of global change. Environ Res Lett 13:033001
Simard M, Pinto N, Fisher JB, Baccini A (2011) Mapping forest canopy height globally with spaceborne lidar. J Geophys Res Biogeosci 116:G04021
Snyder M (2010) What is forest stand structure and how is it measured? North Woodl 64:15
Stark SC, Leitold V, Wu JL, Hunter MO, de Castilho CV, Costa FRC, McMahon SM, Parker GG, Shimabukuro MT, Lefsky MA, Keller M, Alves LF, Schietti J, Shimabukuro YE, Brandao DO, Woodcock TK, Higuchi N, de Camargo PB, de Oliveira RC, Saleska SR (2012) Amazon forest carbon dynamics predicted by profiles of canopy leaf area and light environment. Ecol Lett 15:1406–1414
Tello M, Pardini M, Papathanassiou K, Fischer R (2014) Towards forest structure characteristics retrieval from SAR tomographic profiles. In: Electronic proceedings EUSAR 2014; 10th European conference on synthetic aperture radar, 03–05 June 2014, Berlin, Germany. VDE Verlag, Berlin, pp 1425–1428
Tello M, Cazcarra-Bes V, Fischer R, Papathanassiou K (2018) Multiscale forest structure estimation from SAR tomography. In: Electronic proceedings EUSAR 2018; 12th European conference on synthetic aperture radar, 04–07 June 2018, Aachen, Germany. VDE Verlag, Berlin, pp 600–603
Tews J, Brose U, Grimm V, Tielbörger K, Wichmann MC, Schwager M, Jeltsch F (2004) Animal species diversity driven by habitat heterogeneity/diversity: the importance of keystone structures. J Biogeogr 31:79–92
Thuenen-Institut (2015) Dritte Bundeswaldinventur - Basisdaten (Stand 20.03.2015)
Wiegand T, He F, Hubbell SP (2013) A systematic comparison of summary characteristics for quantifying point patterns in ecology. Ecography 36:92–103
Young BD, D'Amato AW, Kern CC, Kastendick DN, Palik BJ (2017) Seven decades of change in forest structure and composition in Pinus resinosa forests in northern Minnesota, USA: comparing managed and unmanaged conditions. For Ecol Manag 395:92–103
Zenner EK, Hibbs DE (2000) A new method for modeling the heterogeneity of forest structure. For Ecol Manag 129:75–87

© Springer Nature B.V. 2019

Author affiliations:
1. Department of Ecological Modeling, Helmholtz Centre for Environmental Research GmbH – UFZ, Leipzig, Germany
2. Institute for Meteorology and Climate Research, Atmospheric Environmental Research, Karlsruhe Institute of Technology, Garmisch-Partenkirchen, Germany
3. Department of Environmental Sciences, University of Virginia, Charlottesville, USA
4. German Centre for Integrative Biodiversity Research (iDiv) Halle-Jena-Leipzig, Leipzig, Germany
5. Institute of Environmental Systems Research, University of Osnabrück, Osnabrück, Germany

Fischer, R., Knapp, N., Bohn, F. et al. Surv Geophys (2019) 40:709. https://doi.org/10.1007/s10712-019-09519-x. Received 17 August 2018; accepted 14 February 2019; first online 4 March 2019.
Research article | Open Access | Published: 30 January 2015

Forecasting the development of boreal paludified forests in response to climate change: a case study using Ontario ecosite classification

Benoit Lafleur, Nicole J. Fenton & Yves Bergeron

Forest Ecosystems, volume 2, Article number: 3 (2015)

Abstract

Successional paludification, a dynamic process that leads to the formation of peatlands, is influenced by climatic factors and by site features such as surficial deposits and soil texture. In boreal regions, projected climate change and corresponding modifications in natural fire regimes are expected to influence the paludification process and forest development. The objective of this study was to forecast the development of boreal paludified forests in northeastern North America in relation to climate change and modifications in the natural fire regime for the period 2011–2100. A paludification index was built using static (e.g. surficial deposits and soil texture) and dynamic (e.g. moisture regime and soil organic layer thickness) stand scale factors available from forest maps. The index considered the effects of three temperature increase scenarios (i.e. +1°C, +3°C and +6°C) and of a progressively decreasing fire cycle (from 300 years for 2011–2041 to 200 years for 2071–2100) on peat accumulation rate and soil organic layer (SOL) thickness at the stand level, and on paludification at the landscape level. Our index shows that in a context where, in the absence of fire, the landscape continues to paludify, the negative effect of climate change on peat accumulation resulted in little modification to SOL thickness at the stand level, and in no change in the paludification level of the study area between 2011 and 2100. However, including a decreasing fire cycle in the index resulted in declines in paludified area. Overall, the index predicts a slight to moderate decrease in the area covered by paludified forests in 2100, with slower rates of paludification. Slower paludification rates imply greater forest productivity and a greater potential for forest harvest, but also a gradual loss of open paludified stands, which could impact the carbon balance in paludified landscapes. Nonetheless, as the thick Sphagnum layer typical of paludified forests may protect the soil organic layer from drought and deep burns, a significant proportion of the territory has a high potential to remain a carbon sink.

Background

In boreal forest ecosystems, successional paludification is described as a dynamic process driven by forest succession between fire events that leads to peat accumulation, a concomitant thickening of the soil organic layer (SOL), and the formation of waterlogged conditions on a formerly dry mineral soil (Simard et al. 2007). Paludification is influenced by climatic factors and permanent site features, such as surficial deposits and soil texture, as well as by natural fire regimes (Lecomte et al. 2006; Simard et al. 2009; Payette et al. 2013). In boreal regions, in the extended absence of fire, paludification leads to the formation of paludified forests and can reduce forest productivity by up to 50%–80% (Simard et al. 2007).

According to the most recent report of the Intergovernmental Panel on Climate Change, warming of the climate system is unequivocal (IPCC 2013). This changing climate is expected to increase drought severity in boreal regions (Girardin and Mudelsee 2008), and therefore to influence the natural fire regime, resulting in an increase of fire severity and burn rate (Flannigan et al.
2005; de Groot et al. 2009; Bergeron et al. 2010; van Bellen et al. 2010). Fire plays an important role in landscape level paludification processes, as fire can "depaludify" forest stands if most of the SOL is burnt (Dyrness and Norum 1983; Greene et al. 2005). However, if the fire is not severe and a relatively thick SOL remains after fire, the regenerating forest stands may remain paludified (Lecomte et al. 2005). Because boreal peatlands represent important carbon reservoirs (it is estimated that boreal peatlands, including paludified forests, store 455 Pg of carbon, i.e. approximately 15% of the Earth's terrestrial carbon; Gorham 1991; Lavoie et al. 2005), any modification to the fire cycle may have important consequences for the carbon cycle and the global climate.

Throughout the boreal region, paludified forests support important forest industries. It is expected that any modifications to the climate and natural fire regimes will, in all likelihood, require industries to adapt to new ecosystem conditions and, presumably, to modify their practices. Depending on the type of harvest practices, forest harvest in paludified forests can either promote or reduce paludification (Lafleur et al. 2010a, 2010b) and, as is the case for fire, have important effects on the C budget at the landscape level. This context provides strong incentives for the development of simple tools that can be used to rapidly and easily forecast the combined effects of climate change and a modified fire regime on paludification and forest development.

In North America, forest mapping is commonly used to describe the forest mosaic at the regional scale. Forest maps provide information on stand scale environmental factors (e.g. surficial deposits, soil texture, moisture regime, slope), as well as on stand species composition, height and density. This information presents a great potential for research in forest ecology and management. For instance, it can be used by forest managers to forecast the effects of silvicultural practices or wildfire on stand regeneration, composition and productivity. Because some stand scale environmental factors provided in forest maps are intrinsically dynamic (e.g. soil moisture regime and SOL thickness) and potentially influenced by climate variables, forest maps could also be used to forecast the effects of climate change on forest development.

In this context, the main objective of this study was to use information commonly available on forest maps in Canada to evaluate the potential for paludification of boreal forest stands in relation to climate change and modifications to the natural fire regime. To achieve this objective we developed a dynamic paludification index. First, we developed a base index projecting the development of paludified forests over time without considering climate change. Then, we added the effects of climate change on the thickening of the SOL to this base index in order to forecast the development of paludified forests in response to projected climate change. Finally, forecasted climate change and modifications to the natural fire regime were combined in the base index to further explore the development of paludified forests in the context of climate change. Based on information commonly available on forest maps, the paludification index can therefore estimate the effects of both climate change and natural fire regimes on the development of paludified forests at both the local and regional scales.
Methods

Study area

A territory in eastern Canada was used to model the development of paludified forests in relation to climate change and changes in the natural fire regime. The Gordon Cosens Forest is a 20,000 km2 forest management unit (17,360 km2 of which is covered in forest, the remaining area being water bodies) located in the Ontario ecodistrict 3E-1, a region also known as the Clay Belt (Figure 1). Cold climate, flat topography, and surficial deposits that are resistant to water penetration all make this region favorable to the development of paludified forests (Jeglum 1991; Riley 1994). The southern part of the Clay Belt is covered by thick (>10 m) glaciolacustrine clay and silt deposited by glacial Lake Ojibway, while the northern part, known as the Cochrane till, is covered by a compact till made up of a mixture of clay and gravel, created by a southward ice flow approximately 8000 years BP (Veillette 1994). Soils of the study area are mostly classified as Gleysols and Luvisols (Soil Classification Working Group 1998). Nonetheless, organic deposits (i.e. a surficial deposit consisting of a SOL > 40 cm thick) are found in many locations in both the southern and northern parts of the study area. Black spruce (Picea mariana [Mill.] BSP.) is the dominant tree species of the study area.

Figure 1: Location of the Clay Belt and the Gordon Cosens Forest in northeastern Ontario (inset).

According to the local weather station (Kapuskasing, Ontario), from 1981 to 2010 the average annual temperature was 1.3°C and the average annual precipitation was 830 mm, with 30% falling during the growing season (Environment Canada 2014). The average number of degree-days (>5°C) is 1430, and the frost-free season lasts about 100 days; frost can occasionally occur during the growing season. In this region, according to different scenarios (A1 and B2) and simulation models (Canadian Centre for Climate Modelling and Analysis [CGCM 3.1], the Australian-based Commonwealth Scientific and Industrial Research Organisation [CSIROMk3.5], National Institute for Environmental Studies [MIROC 3.2 medres], and National Center for Atmospheric Research [NCAR-CC SM3]), climate warming is projected to be between 3°C and 6°C and precipitation is projected to increase by 10%–20% by the end of the 21st century (McKenney et al. 2010). In the study area, fire frequency has diminished from a 100-year cycle to an approximately 400-year cycle since the Little Ice Age (ca. 1850; Bergeron et al. 2004). As a result of anthropogenic climate change, Bergeron et al. (2010) predict a doubling of fire frequency in this region by the end of the 21st century.

Data for individual forest stand polygons, used to forecast the development of boreal forested peatlands, were retrieved from the Ontario ecosite classification system (Taylor et al. 2000). This classification seeks to classify the province's ecosystems, such as non-forested uplands (prairie, cliff top, dunes), forested ecosystems (both upland and lowland), and non-forested wetlands (marshes, swamps, fens, bogs). For forest polygons, ecosites are defined as homogeneous landscape areas (i.e. polygons typically 10–100 hectares) with common surficial deposits, soil moisture regime, soil texture, SOL thickness, humus form, and tree cover. Figure 2 illustrates the forest stand mosaic of the Gordon Cosens Forest, whereas Table 1 shows the size distribution of forest stand polygons.
Ecosite classification is meant to be a practical tool for resource managers and can be used for a variety of forest- and site-level applications, including timber supply analysis, harvest planning, wildlife habitat studies and assessments, and successional studies.

Figure 2: Forest mosaic of the Gordon Cosens Forest. BF = balsam fir, BP = balsam poplar, BS = black spruce, JP = jack pine, L = larch, TA = trembling aspen, WB = white birch, WC = white cedar, WS = white spruce.

Table 1: Size and area distribution of polygons of the Gordon Cosens Forest map.

The forest stand polygon data used in this study were retrieved from the 2004 forest inventory reassessment, the most recent reassessment available for the study area. The analysis of this inventory reveals that in 2004, organic deposits (i.e. surficial deposits consisting of a SOL > 40 cm thick) covered 42% of the study area, and nearly 70% of the area had moist to wet soils. While black spruce stands covered 55% of the study area, trembling aspen (Populus tremuloides Michx.) and mixed black spruce-trembling aspen stands covered 5% and 25% of the area, respectively. Jack pine (Pinus banksiana Lamb.), white spruce (Picea glauca Moench), and white birch (Betula papyrifera Marsh.) were the other dominant species of the regional landscape.

Development of the paludification index

Using static (e.g. surficial deposit and soil texture) and dynamic (e.g. moisture regime and SOL thickness) stand scale factors available from forest maps (Table 2; Taylor et al. 2000), we developed a dynamic paludification index for each forest stand polygon. Each static and dynamic factor was divided into classes (2 to 9 according to site feature), each of which was attributed a score related to its paludification "power" (between 0 and 5; 0 = null paludification "power" and 5 = high paludification "power"). Table 2 lists the factor classes with their related paludification "power" scores. Adding up the scores of all factors gives the paludification index, which describes the level of paludification of each forest stand polygon (hereafter referred to as stand). The maximum score an ecosite could achieve is 14. The paludification index was calculated as follows:

$$\mathrm{PI} = \mathrm{Static\ factors} + \mathrm{Dynamic\ factors}$$

$$\mathrm{PI} = (D + T + H) + (\mathrm{SOL} + M + O),$$

where PI is the paludification index, D is surficial deposit, T is soil texture, H is humus form, SOL is soil organic layer thickness, M is moisture regime, and O is overstorey composition. Although we recognize the dynamic nature of surficial deposits and soil texture, we treated these factors as static because their transformation rates are slow relative to the projection time (i.e. 100 years) used in this study. Similarly, humus form was considered to be static because of its slow rate of transformation (Yu et al. 2001). Stands with a PI ≥ 13 were classified as paludified, whereas stands with a PI ≥ 7 and < 13 were classified as nearly paludified. Stands with a PI ≤ 6 were classified as not paludified. Although we recognize the possibility of interactions among variables and of nonlinearity in the effects, the use of a simple index with linear effects was justified by the lack of information about possible interactions among some variables.

Table 2: Paludification scores for static and dynamic stand scale factors.
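To make the scoring concrete, here is a minimal Python sketch of the index. Because Table 2 itself is not reproduced here, the class-to-score mappings below are placeholder assumptions that merely respect the 0–5 score range, the maximum total of 14, and the classification thresholds:

```python
# Minimal sketch of the paludification index PI = (D + T + H) + (SOL + M + O).
# The class-to-score mappings are placeholders standing in for Table 2.
SCORES = {
    "deposit":    {"lacustrine": 3, "clay till": 3, "coarse till": 1, "rock": 0},
    "texture":    {"clay": 2, "fine loam": 2, "medium loam": 1, "sand": 0},
    "humus":      {"fibric": 1, "humic": 1, "mull": 0},
    "moisture":   {"dry": 0, "fresh": 1, "moist": 2, "very moist": 3, "wet": 3},
    "overstorey": {"black spruce": 2, "mixed": 1, "aspen": 0},
}

def sol_score(sol_cm):
    """Placeholder scoring of soil organic layer thickness (cm)."""
    return 0 if sol_cm < 10 else 1 if sol_cm < 20 else 2 if sol_cm < 40 else 3

def paludification_index(stand):
    """Return (PI, class) for one stand polygon described as a dict."""
    pi = (SCORES["deposit"][stand["deposit"]]          # static factors D, T, H
          + SCORES["texture"][stand["texture"]]
          + SCORES["humus"][stand["humus"]]
          + sol_score(stand["sol_cm"])                 # dynamic factors SOL, M, O
          + SCORES["moisture"][stand["moisture"]]
          + SCORES["overstorey"][stand["overstorey"]])
    if pi >= 13:
        status = "paludified"
    elif pi >= 7:
        status = "nearly paludified"
    else:
        status = "not paludified"
    return pi, status
```

For example, a clay-textured lacustrine stand with a fibric humus, 25 cm of SOL, a Very moist regime and a black spruce overstorey scores 13 under these placeholder values and is classified as paludified.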
Dynamic factors were allowed to vary over time according to certain rules. In the base index (i.e. the index not considering climate change or fire), SOL thickness increased with time, following peat accumulation rates determined by Lecomte et al. (2006) for nearby sites located in the Clay Belt of Quebec. According to Lecomte et al. (2006), for the past ca. 200 years the peat accumulation rate varied between 10 and 20 cm per century. Lecomte et al. (2006) also showed that peat tends to accumulate at a faster rate where the SOL is > 20 cm deep. As a result, the base index was adjusted to allow stands with a median SOL depth > 20 cm to accumulate peat at a rate of 20 cm per century, whereas stands with a median SOL depth < 20 cm were allowed to accumulate peat at a rate of 10 cm per century. In the same vein, stands with an initial SOL depth < 20 cm had their peat accumulation rate adjusted to 20 cm per century when their SOL depth reached 20 cm. Simard et al. (2009) and Drobyshev et al. (2010) observed an important decline in tree growth when SOL depth is 20 cm or greater, further confirming the pertinence of the 20 cm cutoff point. Furthermore, in the base index, moisture regime was allowed to vary in stands where soil texture was finer than medium loam, but only if SOL depth was ≥ 20 cm. Hence, in all cases when SOL depth reached ≥ 20 cm, the moisture regime stepped one class ahead. For example, stands with a fine loam soil texture and a moisture regime classified as Moist had their moisture regime changed to Very moist when their SOL depth reached 20 cm.

In the index considering climate change, the peat accumulation rate (hence SOL thickness) was allowed to vary according to an adaptation of the Peat Accumulation Model (PAM; Hilbert et al. 2000) made by Wu (2012). Wu (2012) modified PAM to study the response of peatland development and carbon cycling to climate change and to answer several research questions, among which was the following: How does peat accumulation respond to changes in precipitation and temperature? In its most basic form, PAM considers peat accumulation (i.e. SOL thickness) as a simple equation: "peat production minus the sum of oxic decomposition and anoxic decomposition equals peat accumulation" (i.e. change in SOL thickness = peat production − peat decomposition). In Wu's (2012) adaptation of PAM, the peat accumulation rate in ombrotrophic bogs could decrease by up to 70% within the first 100 years following the initiation of climate change. In fact, Wu's (2012) estimation suggested that for the 30-year period between 2011 and 2040, which corresponds to a 1°C increase in temperature, the peat accumulation rate would drop by 15%. For the 30-year period between 2041 and 2070 (2°C increase in temperature) the peat accumulation rate would drop by an additional 25%. Finally, for the 30-year period between 2071 and 2100 (3°C increase in temperature) the peat accumulation rate would drop by an additional 30%. Hence, according to our estimations based on Wu's (2012) relationship between temperature increase and decrease in peat accumulation rate, this relationship fits the following equation:

$$\mathrm{Peat_{AR}} = (16.0414 \times T) + (-5.7383 \times T^2) + (2.7230 \times T^3),$$

where Peat_AR is the reduction (%) of the peat accumulation rate and T the temperature increase (°C).
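As a quick arithmetic check (ours, not from the paper), evaluating this polynomial at whole-degree warming levels gives

$$R(1) \approx 13.0\%, \qquad R(2) \approx 30.9\%, \qquad R(3) \approx 70.0\%,$$

which reproduces the cumulative 70% reduction at a 3°C increase; the values at 1°C and 2°C are close to, though not identical with, the stepwise 15% and 40% cumulative reductions quoted above, reflecting the fit of the curve.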
As a result, the peat accumulation rate decreases rapidly, to the point where it becomes negative (i.e. the peat decomposition rate is greater than the production rate). Consequently, a temperature increase between 3°C and 4°C results in a reduction of SOL thickness of approximately 0.5 mm·yr−1, between 4°C and 5°C a reduction of 1.8 mm·yr−1, and between 5°C and 6°C a reduction of 3.8 mm·yr−1. Hence, once a 4°C increase is reached, the peat decomposition rate is greater than the production rate, leading to a reduction in SOL thickness. Although we recognize that this extension is simplistic and should be interpreted with greater caution, we believe it to be representative of the effect of temperature increases beyond 3°C on the peat accumulation rate and hence on SOL thickness. Furthermore, although we acknowledge that ombrotrophic bogs and paludified forests support a different aboveground vegetation structure (i.e. shrub- vs. tree-dominated aboveground vegetation for ombrotrophic bogs and paludified forests, respectively), we assumed that peat decomposition in ombrotrophic bogs and paludified forests would show similar responses to temperature increase.

From this information, we projected peat accumulation and SOL thickness according to three temperature increase scenarios (i.e. +1°C, +3°C and +6°C, which respectively correspond to 5%, 40% and 70% reductions in peat accumulation rate) and for four initial SOL thicknesses (5 cm, 10 cm, 20 cm, and 40 cm). For each scenario, average temperature was progressively increased during the simulations so that by 2100 the temperature increase amounted to +1°C, +3°C or +6°C relative to the average temperature as of 2004. For these four initial SOL thicknesses, the peat accumulation rate varied according to Lecomte et al. (2006), i.e. where the SOL is < 20 cm thick, peat accumulated at a rate of 10 cm per century, whereas where the SOL is > 20 cm thick, peat accumulated at a rate of 20 cm per century. At each time step, SOL thickness was estimated as follows:

$$\mathrm{SOL_t} = \mathrm{SOL_i} + (30 \times \mathrm{Peat_{ARt}}),$$

where SOL_i is the thickness of the soil organic layer at the beginning of the reference period and Peat_ARt is the peat accumulation rate for the reference period. Peat_ARt was calculated as follows:

$$\mathrm{Peat_{ARt}} = \mathrm{Peat_{ARi}} - \left(\mathrm{Peat_{ARi}} \times \frac{R_t}{100}\right),$$

where Peat_ARi is the initial peat accumulation rate and R_t is the reduction (%) of the peat accumulation rate for the period of reference, i.e. for each 30-year period. For each SOL thickness class (i.e. 5 cm, 10 cm, 20 cm, and 40 cm), we calculated the peat accumulation rate and the SOL thickness at each time step under the three climatic scenarios (i.e. +1°C, +3°C and +6°C).
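The following Python sketch strings these pieces together. It is our own annualized reading of Eqs. (3)–(5) under stated assumptions (a linear warming ramp to 2100, baseline rates from Lecomte et al. 2006, SOL bounded at zero), not the authors' code:

```python
def peat_reduction_pct(t):
    """Cumulative % reduction of the peat accumulation rate after a warming
    of t degrees C (Eq. 3); beyond roughly +3.7 C it exceeds 100%, i.e. net
    peat loss, matching the tipping behavior described in the text."""
    return 16.0414 * t - 5.7383 * t**2 + 2.7230 * t**3

def project_sol(sol0_cm, total_warming_c, n_years=90):
    """Project SOL thickness from 2011 to 2100 for one warming scenario."""
    sol = sol0_cm
    for year in range(n_years):
        t = total_warming_c * (year + 1) / n_years           # warming reached so far
        base = 0.20 if sol > 20.0 else 0.10                  # cm/yr (Lecomte et al. 2006)
        rate = base * (1.0 - peat_reduction_pct(t) / 100.0)  # Eq. 5, annualized
        sol = max(sol + rate, 0.0)                           # Eq. 4 in yearly steps
    return sol

# Example: an initial SOL of 10 cm under the three scenarios
for dT in (1.0, 3.0, 6.0):
    print(dT, round(project_sol(10.0, dT), 1))
```

With a base rate of 0.10 cm·yr−1, the polynomial reproduces the mm-per-year losses quoted above (e.g., at +6°C the rate is 0.10 × (1 − 4.78) ≈ −0.38 cm·yr−1).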
Finally, in the index considering both climate change and the natural fire regime, we first projected forest paludification considering the current fire cycle of 400 years as determined by Bergeron et al. (2004). Then we projected forest paludification allowing the fire cycle to decrease according to projections made by Bergeron et al. (2010). In this sub-index, the fire cycle decreased to 300 years for 2011–2041, to 250 years for 2041–2071, and to 200 years for 2071–2100. Furthermore, based on our own observations, we considered that currently 50% of fire ignitions resulted in high-severity fires, i.e. fires that left < 5 cm of residual SOL over the mineral soil (Simard et al. 2007). In light of the greater uncertainty surrounding the impact of climate change on fire severity, we ran our projections considering three proportions of high-severity fires, i.e. 25%, 50% and 75%. Furthermore, we considered that stands with SOL > 120 cm could not be subjected to high-severity fires, because it is highly unlikely that a fire would leave < 5 cm of residual SOL over the mineral soil in such stands. At each time step, fire events and severity were randomly attributed to the forest stands of the study area.

The three indices were run in the following sequence: (1) base index, (2) base index + climate change, and (3) base index + climate change + natural fire regime modifications. This sequence allowed us to first explore the effect of climate change alone and then the combined effects of climate change and modifications of the fire regime on the potential for paludification of the forest stands of the study area.

Results

Base paludification index

According to the Ontario forest map data, 42.2% of the area of the Gordon Cosens Forest was already paludified in 2004, whereas 57.8% of the area was not paludified (Table 3).

Table 3: Paludification level (% of the territory) of the Gordon Cosens Forest according to the base index and the combination of the base index with climate change.

As paludification is a relatively slow process, the index did not forecast any change in the level of paludification of the territory for the 2041 time step compared to the current state of the Gordon Cosens Forest (Table 3). As an illustration, Figure 3 shows the slow but constant thickening of the SOL for four different initial SOL thickness scenarios. When initial SOL thickness was set at 5 cm or 10 cm, the SOL thickness projected for 2100 remained below 20 cm (Figure 3). However, when initial SOL thickness was set at 20 cm, SOL thickness reached 30 cm around 2060 and nearly 40 cm in 2100 (Figure 3). Similarly, when initial SOL thickness was set at 40 cm, SOL thickness reached 50 cm around 2060 and nearly 60 cm in 2100 (Figure 3). As a result, for the 2071 time step, the proportion of the study area occupied by nearly paludified stands increased from 0 to 11.7%, whereas that occupied by not paludified stands decreased to 46.1% (Table 3); the proportion of the study area classified as paludified did not change. For the 2100 time step, the index forecasted that the proportion of the study area classified as nearly paludified increased to 26.4% (Table 3), while the proportion of area classified as not paludified decreased to 31.4%. At this time step, nearly 70% of the forest stands of the Gordon Cosens Forest could be classified as paludified or nearly paludified, which corresponds to a 166% increase in the cover of paludified or nearly paludified areas.

Figure 3: Soil organic layer (SOL) thickening with time without considering the effects of climate change. Initial SOL thickness (cm) is illustrated for four different scenarios: 5 cm, 10 cm, 20 cm, and 40 cm.

SOL thickness and climate change

For every initial SOL thickness (i.e. 5 cm, 10 cm, 20 cm and 40 cm), SOL thickness tended to increase with time for the +1°C and +3°C scenarios (Figure 4), the increase being greater for the +1°C scenario. For the +6°C scenario, an increase was observed until ca. 2060, when SOL thickness started to decrease. This tipping point corresponds to the moment where the temperature increase was ca. 4°C. For the stands where initial SOL thickness was 5 cm or 10 cm, the +6°C scenario led to an almost complete disappearance of the SOL by 2100.
For the stands where initial SOL thickness was 20 cm or 40 cm, the same scenario ended with SOL thickness similar to what was observed initially.

Figure 4: Soil organic layer (SOL) thickness (cm) variation with time in response to climate change. Each panel represents a different initial SOL thickness (i.e. 5 cm, 10 cm, 20 cm, and 40 cm) at the start of the projection. For each panel, upper dashed line = +1°C scenario; middle solid line = +3°C scenario; lower dashed line = +6°C scenario.

These negative effects of climate change on peat accumulation resulted in no change in the proportion of the study area classified as paludified, nearly paludified and not paludified between 2011 and 2100 (Table 3).

Combining climate change with the natural fire regime

The inclusion of the current fire cycle (400 years) in our index resulted in a slight decrease (<6%) in the paludified area, regardless of the proportion of burnt stands submitted to high-severity fires (Figure 5a). Nonetheless, the decrease in paludified area was lower (ca. 2%) when 25% of the burned stands were submitted to high-severity fires and steeper (ca. 6%) when 75% were submitted to high-severity fires.

Figure 5: Projected effects of climate change and fire severity on paludified area for the period 2011–2100. A) Fire cycle was maintained at 400 years throughout the projection. B) Fire cycle was adjusted according to projections by Bergeron et al. (2010) for a region covering northeastern Ontario and northwestern Quebec; in 2041, fire cycle = 300 years; in 2071, fire cycle = 250 years; in 2100, fire cycle = 200 years.

When the fire cycle was allowed to decrease according to the projections made by Bergeron et al. (2010), the declines in paludified area were steeper for all three proportions of high-severity fires (Figure 5b). Declines in paludified area were ca. 9%, 25% and 40% for the 25%, 50% and 75% high-severity fire scenarios, respectively (Figure 5b).

Discussion

The boreal forest represents one of the Earth's largest biomes, encompassing an area of approximately 14 × 10^6 km2 (Wieder et al. 2006). According to Bhatti et al. (2003), it is also one of the Earth's biomes most affected by global warming. About 25% of the boreal forest region is occupied by peatlands (Wieder et al. 2006), a large proportion of which are forested peatlands. Overall, peatlands store approximately 15% of the Earth's terrestrial carbon (Gorham 1991; Lavoie et al. 2005). Consequently, at the global scale, any modification to the natural fire regime in response to climate change may have a significant impact on the carbon sequestered in paludified forests. Our results suggest that in this context paludified forests may turn from a C sink into an atmospheric C source (in turn increasing atmospheric CO2 concentration and providing a positive feedback on climate warming) if the increase in forest productivity does not compensate for carbon losses. However, it is important to note that the thick Sphagnum layer typical of paludified forests may protect the SOL from drought and a potential increasing depth of burn (and hence CO2 emissions), and therefore that a significant proportion of the territory has a high potential to remain paludified (Magnan et al. 2012; Terrier et al. 2014a).

The base paludification index produced slow but noticeable increases in the proportion of the study area classified as paludified or nearly paludified. In the absence of fire, in 2100 nearly 70% of the area of the Gordon Cosens Forest could be classified as nearly paludified (26%) or paludified (42%).
This change is attributable to increased SOL thickness and the concomitant change in moisture regime. Stands most susceptible to paludification were generally located on lacustrine surficial deposits or clay till, had fine-textured soil (clay to coarse loam), a moisture regime classified as Moist to Very moist, a median SOL depth of 20 cm, and an overstorey comprised of black spruce (sometimes accompanied by trembling aspen) (data not shown). Over the projected time frame considered with this base index, most of the stands of the study area were classified as nearly paludified or paludified.

Furthermore, as expected, temperature increase and the concomitant reduction in peat accumulation rate had an effect on SOL thickness. For the +1°C and +3°C scenarios, SOL thickness increased with time, although at a slower rate than in the model not considering climate change. For the +6°C scenario, however, peat accumulation ceased and SOL thickness started to decrease around the year 2060, indicating that the peat production rate was subsequently lower than its decomposition rate. In a long-term simulation, Ise et al. (2008) observed that a 4°C air temperature rise caused a 15% loss of soil organic carbon (measured in kg C·m⁻²) for a reference period of 100 years, which corresponds to a 10 cm reduction in SOL thickness. This is similar to the difference we observed (ca. 15 cm) between the +1°C and the +6°C scenarios. These results suggest that paludified stands (i.e. stands where SOL thickness > 40 cm) will remain paludified in 2100. In consequence, the introduction of the effects of temperature increase in the paludification index did not produce any modifications in the proportions of the study area classified as paludified, nearly paludified or not paludified compared to the base index. Hence, despite increasing temperature between 2011 and 2100, the level of paludification of the study area did not change over time.

Although our model did not consider the effects of changes in precipitation on paludification, a recent modeling study suggested that in the boreal forest of eastern Canada precipitation could increase by 10%–20% by the end of the 21st century (McKenney et al. 2010). An increase in precipitation would logically lead to an increase in peat accumulation rate via the influence of higher water tables and a subsequent decrease in peat decomposition rate (Silvola et al. 1996; Ise et al. 2008). However, increased temperature is expected to lead to an increase in evapotranspiration, therefore leaving less moisture in the system (Soja et al. 2007; Wu 2012) and increasing risks of drought. A decrease in moisture availability in the system would, in turn, induce a lowering of the water table, a thickening of the oxic layer, and an increase in substrate temperature. Together these modifications would depress the peat accumulation rate but, more importantly, increase the peat decomposition rate, resulting in a steady SOL thickness over time. Therefore, as two factors act in opposite directions (i.e. peat accumulation with time vs. increase in peat decomposition rate in response to climate change), the expected effects of climate change (not considering modifications to the current fire cycle) in our study area could result in a territory similar to the present in terms of paludified area.

Introducing wildfire into the index produced a quite different picture of the study area. Keeping the fire cycle constant and at the current level (i.e.
400 years) produced only a slight decrease in paludified area. However, allowing the fire cycle to decrease with time (as projected by Bergeron et al. 2010) reduced the paludified area by ca. 9% to 40%, depending on the proportion of high-severity fires. The decreased fire cycle and reduced paludified area could be attributed to the fact that the projected increase in precipitation (McKenney et al. 2010) may not fully compensate for the increase in temperature, thus creating conditions that are more prone to fire occurrence (Terrier et al. 2013). Yet, we feel that an increase in the proportion of high-severity fires in areas already paludified is unlikely, as a recent study conducted in the forested peatlands of the Clay Belt during an extreme drought year failed to detect any effect of drought on soil moisture (measured as gravimetric water content [%]; Terrier et al. 2014a) where thick SOL (>40 cm) occurs. In the same study, the authors projected for the period 2071–2100 the effects of extreme drought on potential SOL depth of burn, and concluded that increased drought conditions should not be sufficient to greatly modify SOL depth of burn in areas where thick SOL prevails, with a potential depth of burn up to 0.7 cm (Terrier et al. 2014a) for any individual fire event occurring between early spring and late fall. This resistance of SOL to drought and burn is related to the presence of Sphagnum species, whose hyaline cells store large amounts of water (Silvola 1991) and limit its evaporation (Busby and Whitfield 1978), and therefore reduce potential depth of burn. However, Terrier's study (Terrier et al. 2014a) also suggested that in areas where SOL is < 40 cm and where Pleurozium schreberi is the dominant ground-covering moss, drought was able to depress soil moisture to the extent that they projected a potential depth of burn up to 3.2 cm for the period 2071–2100. These results suggest that areas that are in the process of being paludified exhibit a relatively high potential to revert to an unpaludified state, and that areas that are already paludified show a low potential for depaludification. These results are also in accordance with Magnan et al. (2012), who found no major changes in boreal peatlands despite evidence of slowed peat accumulation rates due to fire, and with Magnan et al. (2014), who showed that Sphagnum-dominated bogs located in a maritime environment have persisted over millennia and that fires had few impacts on their vegetation dynamics. Thus, at the landscape level, when the fire cycle is kept constant at the current level, the slow rate of paludification results in a slight decrease in paludified area regardless of the proportion of high-severity fires, suggesting either that the rate of peat accumulation in the index is too conservative, or that the current highly paludified landscape is not completely in balance with the current climate and fire regime (Payette 2001). Similarly, a decreasing fire cycle is likely to limit the development of paludified stands. Overall, our models predict a slight (no modification to the current fire cycle) to moderate (decreasing fire cycle over time) decrease in the area covered by paludified forests within the Gordon Cosens Forest in 2100. This might have several implications for the forest industries of the Clay Belt, as slower paludification rates imply greater forest productivity and potentially greater opportunities for forest harvest.
Furthermore, at the global scale, increasing fire frequency in boreal paludified forests may have important consequences for carbon storage and climate if increases in forest productivity do not compensate for carbon losses. However, as the thick Sphagnum layer typical of paludified forests may protect the soil organic layer from drought and deep burns, a significant proportion of the territory has a high potential to remain a carbon sink. In this context, our findings are supported by those of Terrier et al. (2014b) who, in an area adjacent to our study area, modeled the impacts of climate change on fire regime, vegetation dynamics and SOL depth of burn. They concluded that although climate change is likely to increase burn rates, the moist and cool conditions in these forests would prevent high depth of burn and the landscape would remain paludified. This simple tool could be used by forest managers to forecast the development of paludified forests and to plan forest operations and conservation areas, and by policy makers to plan carbon management at the regional scale. Nonetheless, we recognize that in order to strengthen the predictions made by our index, the next step would be to construct a mechanistic model that includes sensitivity analysis as well as fire sub-indices that take drought and wildfire risk into account.

Bergeron Y, Gauthier S, Flannigan M, Kafka V (2004) Fire regimes at the transition between mixedwood and coniferous boreal forest in northwestern Quebec. Ecology 85:1916–1932
Bergeron Y, Cyr D, Girardin MP, Carcaillet C (2010) Will climate change drive 21st century burn rates in Canadian boreal forest outside of its natural variability: collating global climate model experiments with sedimentary charcoal data. Int J Wildl Fire 19:1127–1139
Bhatti JS, Van Kooten GC, Apps MJ, Laird LD, Campbell ID, Campbell C, Turetsky MR, Yu Z, Banfield E (2003) Forest management planning based on natural disturbance and forest dynamics. In: Burton PJ, Messier C, Smith DW, Adamowicz WL (eds) Towards sustainable management of the boreal forest. NRC Research Press, Ottawa, pp 799–855
Busby JR, Whitfield DWA (1978) Water potential, water content, and net assimilation of some boreal forest mosses. Can J Bot 56:1551–1558
de Groot WJ, Pritchard JM, Lynham TJ (2009) Forest floor fuel consumption and carbon emissions in Canadian boreal forest fires. Can J For Res 39:367–382
Drobyshev I, Simard M, Bergeron Y, Hofgaard A (2010) Does soil organic layer thickness affect climate-growth relationships in the black spruce boreal ecosystem? Ecosystems 13:556–574
Dyrness CT, Norum RA (1983) The effects of experimental fires on black spruce forest floors in interior Alaska. Can J For Res 13:879–893
Environment Canada (2014) Canadian climate normals 1981–2010. http://www.climate.weather.gc.ca/climate_normals/index_e.html. Accessed 20 Oct 2014
Flannigan M, Logan KA, Amiro B, Skinner W, Stocks BJ (2005) Future area burned in Canada. Clim Change 72:1–16
Girardin MP, Mudelsee M (2008) Past and future changes in Canadian boreal wildfire activity. Ecol Appl 18:391–406
Gorham E (1991) Northern peatlands: role in the carbon cycle and probable responses to climatic warming. Ecol Appl 1:182–195
Greene DF, Macdonald SE, Cumming S, Swift L (2005) Seedbed variation from the interior through the edge of a large wildfire in Alberta. Can J For Res 35:1640–1647
Hilbert DW, Roulet N, Moore T (2000) Modelling and analysis of peatlands as dynamical systems. J Ecol 88:230–242
IPCC (2013) Climate change 2013: The physical science basis. In: Stocker TF, Qin D, Plattner G-K, Tignor M, Allen SK, Boschung J, Nauels A, Xia Y, Bex V, Midgley PM (eds) Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press, Cambridge, UK
Ise T, Dunn AL, Wofsy SC, Moorcroft PR (2008) High sensitivity of peat decomposition to climate change through water-table feedback. Nat Geosci 1:763–766
Jeglum JK (1991) Definition of trophic classes in wooded peatlands by means of vegetation types and plant indicators. Ann Bot Fenn 28:175–192
Lafleur B, Fenton NJ, Paré D, Simard M, Bergeron Y (2010a) Contrasting effects of season and method of harvest on soil properties and the growth of black spruce regeneration in the boreal forested peatlands of eastern Canada. Silva Fenn 45:799–813
Lafleur B, Paré D, Fenton NJ, Bergeron Y (2010b) Do harvest and soil type impact the regeneration and growth of black spruce stands in northwestern Quebec? Can J For Res 40:1843–1851
Lavoie M, Paré D, Bergeron Y (2005) Impact of global change and forest management on carbon sequestration in northern forested peatlands. Environ Rev 13:199–240
Lecomte N, Simard M, Bergeron Y, Larouche A, Asnong H, Richard PJH (2005) Effects of fire severity and initial tree composition on understorey vegetation dynamics in a boreal landscape inferred from chronosequence and paleoecological data. J Veg Sci 16:665–674
Lecomte N, Simard M, Fenton N, Bergeron Y (2006) Fire severity and long-term ecosystem biomass dynamics in coniferous boreal forests of eastern Canada. Ecosystems 9:1215–1230
Magnan G, Lavoie M, Payette S (2012) Impact of fire on long-term vegetation dynamics of ombrotrophic peatlands in northwestern Québec, Canada. Quat Res 77:110–121
Magnan G, Garneau M, Payette S (2014) Holocene development of maritime ombrotrophic peatlands of the St. Lawrence North Shore in eastern Canada. Quat Res 82:96–106
McKenney DW, Pedlar JH, Lawrence K, Gray PA, Colombo SJ, Crins WJ (2010) Current and projected future climatic conditions for ecoregions and selected natural heritage areas in Ontario. Climate Change Res. Rep. CCRR-016. Ontario Forest Research Institute, Ministry of Natural Resources, Sault Ste. Marie, ON, Canada
Payette S (2001) Les principaux types de tourbières. In: Payette S, Rochefort L (eds) Écologie des tourbières du Québec-Labrador. Les Presses de l'Université Laval, Québec, pp 39–89
Payette S, Garneau M, Delwaide A, Schaffhauser A (2013) Forest soil paludification and mid-Holocene retreat of jack pine in easternmost North America: Evidence for a climatic shift from fire-prone to peat-prone conditions. The Holocene 23:494–503
Riley JL (1994) Peat and peatland resources of northeastern Ontario. Ministry of Northern Development and Mines, Ontario Geological Survey. Misc. Paper No. 153
Silvola J (1991) Moisture dependence of CO2 exchange and its recovery after drying in certain boreal forest and peat mosses. Lindbergia 17:5–10
Silvola J, Alm J, Ahlholm U, Nykanen H, Martikainen PJ (1996) CO2-fluxes from peat in boreal mires under varying temperature and moisture conditions. J Ecol 84:219–228
Simard M, Lecomte N, Bergeron Y, Bernier PY, Paré D (2007) Forest productivity decline caused by successional paludification of boreal soils. Ecol Appl 17:1619–1637
Simard M, Bernier PY, Bergeron Y, Paré D, Guérine L (2009) Paludification dynamics in the boreal forest of the James Bay Lowlands: effect of time since fire and topography. Can J For Res 39:546–552
Sims RA, Baldwin KA (1996) Forest humus forms in northwestern Ontario. Natural Resources Canada, Canadian Forest Service, Great Lakes Forestry Centre, Technical Report TR-28
Sims RA, Towill WD, Baldwin KA, Wickware GM (1989) Field guide to the forest ecosystem classification for northwestern Ontario. Forestry Canada and Ontario Ministry of Natural Resources, p 191
Soil Classification Working Group (1998) The Canadian system of soil classification, 3rd edn. Agriculture and Agri-Food Canada Publication 1646, Canada
Soja AJ, Tchebakova NM, French NHF, Flannigan MD, Shugart HH, Stocks BJ, Sukhinin AI, Parfenova EI, Chapin FS III, Stackhouse PW (2007) Climate-induced boreal forest change: Predictions versus current observations. Global Planet Change 56:274–296
Taylor KC, Arnup RW, Merchant BG, Parton WJ, Nieppola J (2000) A field guide to forest ecosystems of Northeastern Ontario, 2nd edn. Queen's Printer for Ontario, Canada
Terrier A, Girardin MP, Périé C, Legendre P, Bergeron Y (2013) Potential changes in forest composition could reduce impacts of climate change on boreal wildfires. Ecol Appl 23:21–35
Terrier A, de Groot WJ, Girardin MP, Bergeron Y (2014a) Dynamics of moisture content in spruce-feather moss and spruce-Sphagnum organic layers during an extreme fire season and implications for future depths of burn in Clay Belt black spruce forests. Int J Wildl Fire 23:490–502
Terrier A, Girardin MP, Cantin A, de Groot WJ, Anyomi KA, Gauthier S, Bergeron Y (2014b) Disturbance legacies and paludification mediate the ecological impact of an intensifying wildfire regime in the Clay Belt boreal forest of eastern North America. J Veg Sci. doi:10.1111/jvs.12250
van Bellen S, Garneau M, Bergeron Y (2010) Impact of climate change on forest fire severity and consequences for carbon stocks in boreal forest stands of Quebec, Canada: A synthesis. Fire Ecol 6:16–44
Veillette JJ (1994) Evolution and paleohydrology of glacial lakes Barlow and Ojibway. Quat Sci Rev 13:945–971
Wieder RK, Vitt DH, Benscoter BW (2006) Peatlands and the boreal forest. In: Wieder RK, Vitt DH (eds) Boreal peatland ecosystems. Springer, New York, NY, pp 1–8
Wu J (2012) Response of peatland development and carbon cycling to climate change: a dynamic system modeling approach. Environ Earth Sci 65:141–151
Yu Z, Campbell ID, Vitt DH, Apps MJ (2001) Modelling long-term peatland dynamics. I. Concepts, review, and proposed design. Ecol Model 145:197–210

This study was made possible by funding from the Ontario Ministry of Natural Resources. We thank Rachelle Lalonde for providing forest maps, Mélanie Desrochers for map production, and Aurélie Terrier for helpful discussion on fire modeling.

Institut de recherche sur les forêts, Université du Québec en Abitibi-Témiscamingue, 445 boul. de l'Université, Rouyn-Noranda, QC, J9X 5E4, Canada: Benoit Lafleur, Nicole J Fenton & Yves Bergeron
Centre d'étude de la forêt, Université du Québec à Montréal, 141 Avenue du Président-Kennedy, Montréal, QC, H2X 1Y4, Canada

Correspondence to Benoit Lafleur.

BL developed the models, carried out the analysis, and wrote the manuscript; NJF and YB were instrumental in conceptualizing and refining the models, and in revising the manuscript. All authors read and approved the final manuscript.

Keywords: Paludification, Peat forests, Forest harvest
A quick introduction to orbital mechanics

Orbital motion? That is just what happens when gravity is the only force that applies. In space, which mostly consists of space, things are seldom touching each other. As gravity is the only way objects then interact, orbital mechanics becomes the tool of choice to analyse what is going on.

The following terms are important when describing an orbit:
Periapsis: The point where the orbiting craft is closest to the object it orbits.
Apoapsis: The point where the orbiting craft is farthest from the object it orbits.
Semi-major axis: The largest central radius of an ellipse. This is half the distance between the periapsis and apoapsis. Often shortened to $a$.

Let us then get started with the math! We have a spacecraft orbiting in a circular orbit. What is its velocity?
$$v=\sqrt{\frac{\mu}{r}}$$
$v$ is of course the velocity. $\mu$ is the gravitational parameter. You would often see that as $GM$, which is the same thing. That is the mass of the object you are orbiting ($M$) multiplied by the gravitational constant ($G=6.674 \cdot 10^{-11} \frac{m^3}{kg s^2}$). To save you some time:
$\mu _{Earth}=3.98600441 \cdot 10^{14} \frac{m^3}{s^2}$
$\mu _{Moon}=4.9048695 \cdot 10^{12} \frac{m^3}{s^2}$
$\mu _{Sun}=1.32712440018 \cdot 10^{20} \frac{m^3}{s^2}$
Using that information, we can calculate things like the orbital velocity in low Earth orbit: $\sqrt{\frac{\mu _{Earth}}{r_{LEO}}}=\sqrt{\frac{3.986 \cdot 10^{14} \frac{m^3}{s^2}}{6600000m}}=7771 \frac{m}{s}$

But wait a minute, not all orbits are circles! How can we calculate the orbital velocity in an elliptical orbit? Well, first of all, the velocity varies depending on where in the orbit you are, having a larger velocity at the periapsis than at the apoapsis. But there is an equation here too:
$$v=\sqrt{\mu \left(\frac{2}{r}-\frac{1}{a}\right)}$$
$r$ is here your current distance to the centre of the object you are orbiting, and $a$ is the semi-major axis that we just talked about. Notice that in the case of a circular orbit, $r=a$, so this equation simplifies back to $v=\sqrt{\frac{\mu}{r}}$

Transfer Orbits

We can already start to put the things we have learned together. Take a look at the figure below. Imagine you are orbiting in the blue orbit, and want to change to the red orbit: What do you do? Well, take a look at the point where the orbits touch each other. Why are we in the blue orbit there, and not in the red one? After all, you could be in that location if you were in the red orbit too. But not everything is the same. What matters here is that you would have a different velocity, depending on which of the orbits you are in. Let us calculate that. We say $\mu = 1$ for simplicity:
$$v_{blue}=\sqrt{\frac{1}{1}}=1$$
$$v_{red}=\sqrt{1\left(\frac{2}{1}-\frac{1}{2}\right)}=1.225$$
Below is an illustration of the velocity vectors. The pink part is the $\Delta v$ we have to spend. Note that the difference is smallest when the vectors point in the same direction.

Hohmann transfer orbits

A very common type of orbital transfer is between two circular orbits with different radius. The most efficient transfer is almost always an elliptical orbit touching both circles. Below is an illustration. The $\Delta v$ cost would be the elliptical velocity at periapsis minus the velocity of the smaller circle, plus the velocity of the greater circle minus the elliptical velocity at apoapsis. More math!
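To make these formulas concrete, here is a minimal Python sketch that combines the circular-orbit and vis-viva equations to cost out a Hohmann transfer. The $\mu$ value is the Earth constant quoted above; the target radius (roughly geostationary) is just an illustrative example.

```python
import math

MU_EARTH = 3.98600441e14  # gravitational parameter of Earth [m^3/s^2], as given above


def v_circular(mu, r):
    """Orbital velocity on a circular orbit of radius r."""
    return math.sqrt(mu / r)


def v_visviva(mu, r, a):
    """Velocity at distance r on an orbit with semi-major axis a (vis-viva)."""
    return math.sqrt(mu * (2.0 / r - 1.0 / a))


def hohmann_delta_v(mu, r1, r2):
    """Total delta-v for a Hohmann transfer between circular orbits r1 -> r2 (r1 < r2)."""
    a_transfer = (r1 + r2) / 2.0  # semi-major axis of the transfer ellipse
    dv1 = abs(v_visviva(mu, r1, a_transfer) - v_circular(mu, r1))  # burn at periapsis
    dv2 = abs(v_circular(mu, r2) - v_visviva(mu, r2, a_transfer))  # burn at apoapsis
    return dv1 + dv2


# Example: low Earth orbit (6600 km) to geostationary radius (42164 km).
print(v_circular(MU_EARTH, 6.6e6))                 # ~7771 m/s, matching the LEO figure above
print(hohmann_delta_v(MU_EARTH, 6.6e6, 42.164e6))  # ~3.9 km/s total
```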
Let us take another look at the equation for velocity in an elliptical orbit:
$$v=\sqrt{\mu \left(\frac{2}{r}-\frac{1}{a}\right)}$$
What happens if we try to travel "to infinity"? We start at distance $r$, and travel to $a=\infty$:
$$v=\sqrt{\mu \left(\frac{2}{r}-\frac{1}{\infty}\right)}$$
That simplifies to
$$v=\sqrt{\mu \left(\frac{2}{r}-0\right)}$$
and that is equal to
$$v_e=\sqrt{\frac{2\mu}{r}}$$
"$v_e$" is here short for "escape velocity". It turns out that travelling as far away as you would like has a limited cost. Of course, reaching infinity takes an ehm… infinite time… From this point, when I say infinity, I mean sufficiently far away. You can also notice that the escape velocity at a given distance is $\sqrt{2}$ times larger than the orbital velocity of a circular orbit: $v_e = \sqrt{2}v_c$.

Escape Velocity: Beyond Infinity

You have now seen 3 of the 4 different shapes an orbit can have: circular, elliptical and parabolic. The parabolic orbit is the one where we reach infinity. The remaining orbit is the one shaped like a hyperbola. This one also reaches infinity, like the parabola, but with a major difference: In a parabolic orbit, we only "just reach infinity", that is, we are slowed down all the time by gravity, and the velocity approaches zero as we get farther away. That is not always what we want. For instance, if we only "just escape Earth", our relative velocity to it is zero, and that means we have the same orbit around the Sun as Earth! In many cases, we want to have some extra velocity once we have escaped, for example for heading towards Mars, Jupiter, or for escaping the solar system entirely. An example closer to home is when we escape the Moon for a return trip. We do not want to end up in the same orbit as the Moon; instead, we want to head back home. How can we achieve that? The answer is naturally to get a velocity higher than the escape velocity. But how exactly does that relate to the final velocity we end up with once we have escaped? Behold. The Pythagoras formula of orbital mechanics:
$$v^2 = v_e^2 + v_\infty^2$$
Here $v$ is your speed at the burn, $v_e$ the escape velocity at that point, and $v_\infty$ the "hyperbolic excess velocity" you are left with once you have escaped.

Delta-v: The core idea of rockets

We have seen that the way of controlling our trajectory through space is to change the spacecraft's velocity. How is that done? Rocket propulsion depends upon conservation of momentum. At first, this looks troubling. Surely, if our momentum is going to stay the same, so is our velocity, as momentum is simply velocity multiplied by mass. But velocity is a vector quantity. If two equal masses are going with the same speed in opposite directions, their combined momentum is still zero. So as a first working model, we can imagine two spacecraft starting together, then pushing off from each other. One goes in the direction we want, the other one… wherever stuff ends up while drifting through space. Sacrificing a second spacecraft for every maneuver would be quite wasteful. Indeed, this way of 'propulsion' does not depend upon the other vehicle being a spacecraft; it just has to be something with mass. That allows us to pick something that is, for example, convenient to store onboard. Another concern of efficiency is the speed at which the reaction mass is ejected. Low speeds give us only a little momentum, so in order to get the most out of our propellant we wish the 'exhaust velocity' to be as high as possible. One more thing to consider is that while the last bit of ejected mass only has to push our empty spacecraft, all the mass ejected before it has to push more or less of the propellant along with the vehicle.
This means a velocity change costs exponential amounts of propellant. There is a straightforward formula to find the total velocity change from what we know about the propulsion system:
$$\Delta v=\ln{\frac{m_0}{m_1}}v_e$$
Change in velocity equals the natural logarithm of start mass divided by end mass, multiplied by the exhaust velocity.
The inverse equation is also convenient when we wish to find the required mass ratio for a given velocity change:
$$\frac{m_0}{m_1} = e^{\frac{\Delta v}{v_e}}$$
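As a quick sanity check of these two formulas, here is a minimal Python sketch; the exhaust velocity and the delta-v target are illustrative numbers, not the figures of any particular rocket.

```python
import math


def delta_v(m0, m1, ve):
    """Tsiolkovsky rocket equation: delta-v from start/end mass and exhaust velocity."""
    return math.log(m0 / m1) * ve


def mass_ratio(dv, ve):
    """Inverse form: required start/end mass ratio for a given delta-v."""
    return math.exp(dv / ve)


# Example (illustrative numbers): an engine with 4500 m/s exhaust velocity
# delivering 3900 m/s of delta-v, e.g. the LEO-to-GEO transfer computed earlier.
ve = 4500.0   # m/s, assumed exhaust velocity
dv = 3900.0   # m/s, assumed delta-v target
ratio = mass_ratio(dv, ve)
print(ratio)              # ~2.38, the required start/end mass ratio
print(1.0 - 1.0 / ratio)  # ~0.58: roughly 58% of the start mass is propellant
```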
May 2013, 18(3): 741-751. doi: 10.3934/dcdsb.2013.18.741
Multidimensional stability of planar traveling waves for an integrodifference model
Judith R. Miller 1, and Huihui Zeng 2,
Dept. of Mathematics and Statistics, Georgetown University, Washington DC 20057, United States
Mathematical Sciences Center, Tsinghua University, Beijing 100084, China
Received December 2011 Revised September 2012 Published December 2012
This paper studies the multidimensional stability of planar traveling waves for integrodifference equations. It is proved that for a Gaussian dispersal kernel, if the traveling wave is exponentially orbitally stable in one space dimension, then the corresponding planar wave is stable in $H^m(\mathbb{R}^N)$, $N\ge 4$, $m\ge [N/2]+1$, with the perturbation decaying at algebraic rate.
Keywords: Traveling waves, stability, integrodifference equation.
Mathematics Subject Classification: Primary: 45P05, 39A30; Secondary: 47G10, 92D2.
Citation: Judith R. Miller, Huihui Zeng. Multidimensional stability of planar traveling waves for an integrodifference model. Discrete & Continuous Dynamical Systems - B, 2013, 18 (3) : 741-751. doi: 10.3934/dcdsb.2013.18.741

O. Diekmann and H. Kaper, On the bounded solutions of a nonlinear convolution equation, Nonlinear Anal., 2 (1978), 721. doi: 10.1016/0362-546X(78)90015-9.
P. Fife and J. McLeod, The approach of solutions of nonlinear diffusion equations to travelling front solutions, Arch. Ration. Mech. Anal., 65 (1977), 335.
R. Gardner and K. Zumbrun, The gap lemma and geometric criteria for instability of viscous shock profiles, Comm. Pure Appl. Math., 51 (1998), 797. doi: 10.1002/(SICI)1097-0312(199807)51:7<797::AID-CPA3>3.0.CO;2-1.
M. Gil', "Difference Equations in Normed Spaces. Stability and Oscillations," North-Holland Mathematics Studies, 206 (2007).
J. Goodman, Stability of viscous scalar shock fronts in several dimensions, Trans. Amer. Math. Soc., 311 (1989), 683. doi: 10.2307/2001146.
D. Henry, "Geometric Theory of Semilinear Parabolic Equations," Lecture Notes in Mathematics, 840 (1981).
S. Hsu and X. Zhao, Spreading speeds and traveling waves for nonmonotone integrodifference equations, SIAM J. Math. Anal., 40 (2008), 776. doi: 10.1137/070703016.
T. Kapitula, Multidimensional stability of planar travelling waves, Trans. Amer. Math. Soc., 349 (1997), 257. doi: 10.1090/S0002-9947-97-01668-1.
M. Kot and W. Schaffer, Discrete-time growth-dispersal models, Math. Biosci., 80 (1986), 109. doi: 10.1016/0025-5564(86)90069-6.
M. Kot, Discrete-time travelling waves: Ecological examples, J. Math. Biol., 30 (1992), 413. doi: 10.1007/BF00173295.
M. Kot, M. Lewis and P. van den Driessche, Dispersal data and the spread of invading organisms, Ecology, 77 (1996), 2027.
C. Levermore and J. Xin, Multidimensional stability of traveling waves in a bistable reaction-diffusion equation. II, Comm. Partial Differential Equations, 17 (1992), 1901. doi: 10.1080/03605309208820908.
B. Li, M. Lewis and H. Weinberger, Existence of traveling waves for integral recursions with nonmonotone growth functions, J. Math. Biol., 58 (2009), 323. doi: 10.1007/s00285-008-0175-1.
G. Lin and W. Li, Spreading speeds and traveling wavefronts for second order integrodifference equations, J. Math. Anal. Appl., 361 (2010), 520. doi: 10.1016/j.jmaa.2009.07.035.
G. Lin, W. Li and S. Ruan, Asymptotic stability of monostable wavefronts in discrete-time integral recursions, Sci. China Math., 53 (2010), 1185. doi: 10.1007/s11425-009-0123-6.
R. Lui, A nonlinear integral operator arising from a model in population genetics. I. Monotone initial data, SIAM J. Math. Anal., 13 (1982), 913. doi: 10.1137/0513064.
R. Lui, A nonlinear integral operator arising from a model in population genetics. II. Initial data with compact support, SIAM J. Math. Anal., 13 (1982), 938. doi: 10.1137/0513065.
R. Lui, Existence and stability of travelling wave solutions of a nonlinear integral operator, J. Math. Biol., 16, 199. doi: 10.1007/BF00276502.
R. Lui, A nonlinear integral operator arising from a model in population genetics. III. Heterozygote inferior case, SIAM J. Math. Anal., 16 (1985), 1180. doi: 10.1137/0516087.
J. Miller and H. Zeng, Stability of travelling waves for systems of nonlinear integral recursions in spatial population biology, Discrete Contin. Dyn. Syst. Ser. B, 16 (2011), 895. doi: 10.3934/dcdsb.2011.16.895.
M. Neubert, M. Kot and M. Lewis, Dispersal and pattern-formation in a discrete-time predator-prey model, Theoretical Population Biology, 48 (1995), 7.
D. Sattinger, On the stability of waves of nonlinear parabolic systems, Advances in Math., 22 (1976), 312.
H. Weinberger, Asymptotic behavior of a model in population genetics, in "Nonlinear Partial Differential Equations and Applications", Lecture Notes in Mathematics, 648, Springer (1978).
H. Weinberger, Long-time behavior of a class of biological models, SIAM J. Math. Anal., 13 (1982), 353. doi: 10.1137/0513028.
J. Xin, Multidimensional stability of traveling waves in a bistable reaction-diffusion equation. I, Comm. Partial Differential Equations, 17 (1992), 1889. doi: 10.1080/03605309208820907.
K. Zumbrun and P. Howard, Pointwise semigroup methods and stability of viscous shock waves, Indiana Univ. Math. J., 47 (1998), 741. doi: 10.1512/iumj.1998.47.1604.

Adèle Bourgeois, Victor LeBlanc, Frithjof Lutscher. Dynamical stabilization and traveling waves in integrodifference equations. Discrete & Continuous Dynamical Systems - S, 2018, 0 (0) : 0-0. doi: 10.3934/dcdss.2020117
Grégory Faye. Multidimensional stability of planar traveling waves for the scalar nonlocal Allen-Cahn equation. Discrete & Continuous Dynamical Systems - A, 2016, 36 (5) : 2473-2496. doi: 10.3934/dcds.2016.36.2473
Hua Chen, Ling-Jun Wang. A perturbation approach for the transverse spectral stability of small periodic traveling waves of the ZK equation. Kinetic & Related Models, 2012, 5 (2) : 261-281. doi: 10.3934/krm.2012.5.261
Aslihan Demirkaya, Milena Stanislavova. Numerical results on existence and stability of standing and traveling waves for the fourth order beam equation. Discrete & Continuous Dynamical Systems - B, 2019, 24 (1) : 197-209. doi: 10.3934/dcdsb.2018097
Luyi Ma, Hong-Tao Niu, Zhi-Cheng Wang. Global asymptotic stability of traveling waves to the Allen-Cahn equation with a fractional Laplacian. Communications on Pure & Applied Analysis, 2019, 18 (5) : 2457-2472. doi: 10.3934/cpaa.2019111
Xiaojie Hou, Wei Feng. Traveling waves and their stability in a coupled reaction diffusion system. Communications on Pure & Applied Analysis, 2011, 10 (1) : 141-160. doi: 10.3934/cpaa.2011.10.141
Zhaosheng Feng. Traveling waves to a reaction-diffusion equation. Conference Publications, 2007, 2007 (Special) : 382-390. doi: 10.3934/proc.2007.2007.382
Joseph Thirouin. Classification of traveling waves for a quadratic Szegő equation. Discrete & Continuous Dynamical Systems - A, 2019, 39 (6) : 3099-3122. doi: 10.3934/dcds.2019128
Yaping Wu, Niannian Yan. Stability of traveling waves for autocatalytic reaction systems with strong decay. Discrete & Continuous Dynamical Systems - B, 2017, 22 (4) : 1601-1633. doi: 10.3934/dcdsb.2017033
Fengxin Chen. Stability and uniqueness of traveling waves for system of nonlocal evolution equations with bistable nonlinearity. Discrete & Continuous Dynamical Systems - A, 2009, 24 (3) : 659-673. doi: 10.3934/dcds.2009.24.659
Grigori Chapiro, Lucas Furtado, Dan Marchesin, Stephen Schecter. Stability of interacting traveling waves in reaction-convection-diffusion systems. Conference Publications, 2015, 2015 (special) : 258-266. doi: 10.3934/proc.2015.0258
Je-Chiang Tsai. Global exponential stability of traveling waves in monotone bistable systems. Discrete & Continuous Dynamical Systems - A, 2008, 21 (2) : 601-623. doi: 10.3934/dcds.2008.21.601
Zigen Ouyang, Chunhua Ou. Global stability and convergence rate of traveling waves for a nonlocal model in periodic media. Discrete & Continuous Dynamical Systems - B, 2012, 17 (3) : 993-1007. doi: 10.3934/dcdsb.2012.17.993
Judith R. Miller, Huihui Zeng. Stability of traveling waves for systems of nonlinear integral recursions in spatial population biology. Discrete & Continuous Dynamical Systems - B, 2011, 16 (3) : 895-925. doi: 10.3934/dcdsb.2011.16.895
Tong Li, Jeungeun Park. Stability of traveling waves of models for image processing with non-convex nonlinearity. Communications on Pure & Applied Analysis, 2018, 17 (3) : 959-985. doi: 10.3934/cpaa.2018047
Xiao-Biao Lin, Stephen Schecter. Traveling waves and shock waves. Discrete & Continuous Dynamical Systems - A, 2004, 10 (4) : i-ii. doi: 10.3934/dcds.2004.10.4i
Yuqian Zhou, Qian Liu. Reduction and bifurcation of traveling waves of the KdV-Burgers-Kuramoto equation. Discrete & Continuous Dynamical Systems - B, 2016, 21 (6) : 2057-2071. doi: 10.3934/dcdsb.2016036
Rui Huang, Ming Mei, Yong Wang. Planar traveling waves for nonlocal dispersion equation with monostable nonlinearity. Discrete & Continuous Dynamical Systems - A, 2012, 32 (10) : 3621-3649. doi: 10.3934/dcds.2012.32.3621
Emile Franc Doungmo Goufo, Abdon Atangana. Dynamics of traveling waves of variable order hyperbolic Liouville equation: Regulation and control. Discrete & Continuous Dynamical Systems - S, 2020, 13 (3) : 645-662. doi: 10.3934/dcdss.2020035
Guy V. Norton, Robert D. Purrington. The Westervelt equation with a causal propagation operator coupled to the bioheat equation. Evolution Equations & Control Theory, 2016, 5 (3) : 449-461. doi: 10.3934/eect.2016013
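As a concrete illustration of the class of recursions studied in the paper above, namely $u_{n+1}(x)=\int k(x-y)\,g(u_n(y))\,dy$ with a Gaussian dispersal kernel $k$, here is a minimal Python sketch. It is only a sketch: the Beverton-Holt growth map, the grid, and all parameters are illustrative assumptions, not the setting of the paper, and the FFT implements the convolution on a periodic grid.

```python
import numpy as np

# Periodic grid and Gaussian dispersal kernel, centered at x = 0.
L, n = 100.0, 2048                     # half-width of domain, grid size (illustrative)
x = np.linspace(-L, L, n, endpoint=False)
dx = x[1] - x[0]
sigma = 1.0                            # kernel standard deviation (illustrative)
kernel = np.exp(-x**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
k_hat = np.fft.fft(np.fft.ifftshift(kernel))   # kernel spectrum, center moved to index 0

def growth(u, r=1.8):
    """Beverton-Holt map g(u) = r*u / (1 + (r-1)*u), a standard monotone choice."""
    return r * u / (1 + (r - 1) * u)

u = np.where(np.abs(x) < 5.0, 1.0, 0.0)        # compactly supported initial datum

for gen in range(1, 41):
    # One generation: u_{n+1}(x) = \int k(x - y) g(u_n(y)) dy, via FFT convolution.
    u = dx * np.real(np.fft.ifft(k_hat * np.fft.fft(growth(u))))
    if gen % 10 == 0:
        # The 0.5-level set advances a roughly constant distance per generation,
        # i.e. the solution settles into traveling fronts.
        print(gen, np.max(x[u > 0.5]))
```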
A Refined Measure of Conditional Maximum Drawdown
Rossello, Damiano
Risks associated with maximum drawdown have recently been formalized as the tail mean of maximum drawdown distributions, called Conditional Expected Drawdown (CED). In fact, the special case of average maximum drawdown is widely used in the fund management industry, also in association with performance management. As a path-dependent deviation measure, it lacks relevant information on worst-case scenarios over a fixed horizon, mostly represented by the all-time minimum of cumulative returns. Formulating a refined version of CED, we are able to add this piece of information to the risk measurement of drawdown, obtaining a new path-dependent risk measure that preserves all the good properties of the CED while supporting more prudent regulatory and management assessments, also in terms of the marginal risk contribution attributed to factors.

A fractional-order difference Cournot duopoly game with long memory
Baogui Xin, Wei Peng, Yekyung Kwon
We reconsider the Cournot duopoly problem in light of the theory of long memory. We introduce the Caputo fractional-order difference calculus to classical duopoly theory to propose a fractional-order discrete Cournot duopoly game model, which allows participants to make decisions while making full use of their historical information. Then we discuss Nash equilibria and local stability by using linear approximation. Finally, we detect chaos in the model by employing a 0-1 test algorithm.

Advances in Incremental Valuation of Financial Contracts and Definition of the Economic Meaning of the Capital Value Adjustment (KVA)
Castagna, Antonio
We extend the analysis we sketched in Castagna [5] and we provide an application of the framework we introduced to incrementally evaluate financial contracts within a financial institution's balance sheet.

Affine term structure models: a time-changed approach with perfect fit to market curves
Cheikh Mbaye, Frédéric Vrins
We address the so-called calibration problem, which consists of fitting in a tractable way a given model to a specified term structure like, e.g., yield or default probability curves. Time-homogeneous jump-diffusions like Vasicek or Cox-Ingersoll-Ross (possibly coupled with compound Poisson jumps, JCIR) are tractable processes but have limited flexibility; they fail to replicate actual market curves. The deterministic shift extension of the latter (Hull-White or JCIR++) is a simple yet efficient solution that is widely used by both academics and practitioners. However, the shift approach is often not appropriate when positivity is required, which is a common constraint when dealing with credit spreads or default intensities. In this paper, we tackle this problem by adopting a time change approach. On top of providing an elegant solution to the calibration problem under positivity constraint, our model features additional interesting properties in terms of implied volatilities. It is compared to the shift extension on various credit risk applications such as credit default swap, credit default swaption and credit valuation adjustment under wrong-way risk. The time change approach is able to generate much larger volatility and covariance effects under the positivity constraint. Our model offers an appealing alternative to the shift in such cases.

Aircraft Financing and Leasing in India Challenges & Opportunities: An Exploratory Study of Developing Aircraft Financing and Leasing in India
Shah, Dipesh; Chugan, Pawan K.
The aircraft leasing industry is a significant industry in the world today. Activities of leasing firms have increased drastically over the last few decades. The Indian aviation industry has seen unprecedented growth in air passenger traffic, both international and domestic, over the past decade. Over the past few years, the geography of aircraft leasing operations has been shifting rapidly from western to eastern nations, owing to the presence of strong and stable financial markets and active state involvement. Today, five of the twelve biggest lessors are based in Asia. They have grown in size because of frequent acquisitions, collaborations and partnerships, China being the most prominent example of such development. The synergies from inorganic growth have been pivotal in the drastic growth of aircraft leasing companies in China. India is a laggard in this segment of the aviation industry and must soon explore and derive strategies to develop a robust aircraft leasing industry in the country. Having experienced a surge in aircraft demand, many Indian airlines are meeting this demand through offshore aircraft lessors and paying them in dollars. An attempt has been made in this paper to identify the challenges in aircraft leasing and financing, which present good opportunities for India to develop this industry. For that, the paper first discusses the global scenario of the aircraft leasing industry, followed by the aircraft leasing industry in India. To draw some lessons from the Chinese experience, it then covers in brief the Chinese aircraft leasing industry. This is followed by details covering challenges and opportunities for this sector, and the paper suggests ways to make India a favourable destination for the aircraft leasing business.

Anti-Takeover Provisions and Investment Timing
Guthrie, Graeme; Hobbs, Cameron
We show how directors can set the strength of a firm's anti-takeover provisions in order to influence the investment-timing decision of an empire-building CEO. The prospect of future takeovers, which terminate the CEO's control benefits, affects the CEO's willingness to invest in low-value projects. If takeover defenses are too strong then the market for corporate control imposes insufficient discipline on the CEO, who invests too soon. If they are too weak then shareholders incur too many costs due to managerial distraction and the CEO invests too late. The optimal strength of anti-takeover provisions depends on the volatility of the value added to the firm's assets by alternative management teams. Consistent with our theory, firms with market-book ratios that vary more widely from those for the industry as a whole have fewer anti-takeover provisions.

Does Risk Sorting Explain Bubbles?
Kiss, Hubert Janos; Koczy, Laszlo A.; Pintér, Ágnes; Sziklai, Balazs
A recent stream of experimental economics literature studies the factors that contribute to the emergence of financial bubbles. We consider a setting where participants sorted according to their degree of risk aversion trade in experimental asset markets. We show that risk sorting is able to explain bubbles partially: Markets with the most risk-tolerant traders exhibit larger bubbles than markets with the most risk-averse traders. In our study risk aversion does not correlate with gender or cognitive abilities, so it is an additional factor that helps understand bubbles.
Earnings Manipulation Benchmark for Nonfinancial Listed Companies in Vietnamese Stock Market
Nguyen, Anh Huu; Linh, Nguyen Ha; Yoon, Sung Wook
The paper examines earnings management detection using the Beneish M-score benchmark model on a sample of 468 non-financial Vietnamese companies listed on the Hochiminh Stock Exchange (HOSE) and Hanoi Stock Exchange (HNX) during 2013-2014. The results show that 40% of non-financial Vietnamese-listed companies were involved in earnings management, and the sampled observations do fit the Beneish M-score model. This study suggests that the M-score model is a useful technique for detecting the earnings manipulation behaviors of companies in Vietnam. The M-score model is also a reliable tool for investors to use when making decisions and verifying the reliability of accounting information found in financial reports.

Fine Properties of the Optimal Skorokhod Embedding Problem
Mathias Beiglböck, Marcel Nutz, Florian Stebegg
We study the problem of stopping a Brownian motion at a given distribution $\nu$ while optimizing a reward function that depends on the (possibly randomized) stopping time and the Brownian motion. Our first result establishes that the set $\mathcal{T}(\nu)$ of stopping times embedding $\nu$ is weakly dense in the set $\mathcal{R}(\nu)$ of randomized embeddings. In particular, the optimal Skorokhod embedding problem over $\mathcal{T}(\nu)$ has the same value as the relaxed one over $\mathcal{R}(\nu)$ when the reward function is semicontinuous, which parallels a fundamental result about Monge maps and Kantorovich couplings in optimal transport. A second part studies the dual optimization in the sense of linear programming. While existence of a dual solution failed in previous formulations, we introduce a relaxation of the dual problem and establish existence of solutions as well as absence of a duality gap, even for irregular reward functions. This leads to a monotonicity principle which complements the key theorem of Beiglböck, Cox and Huesmann [Optimal transport and Skorokhod embedding, Invent. Math., 208:327-400, 2017]. We show that these results can be applied to characterize the geometry of optimal embeddings through a variational condition.

Firm Performance and Managing Downside Business Risk
Pandher, Gurupdesh S.; Sun, Jerry Y.; Currie, Russell R.
This paper focuses on the ultimate impact of hedging on firm performance. Successful risk management based on derivatives or operational activities should lead to a lesser occurrence of lower business outcomes than higher outcomes (i.e., positive skewness in quarterly earnings per share) compared to identical firms that do not manage adverse business scenarios. The study, based on 5,586 non-financial firms in Compustat, empirically detects the hedging profile from the distribution of quarterly firm earnings per share and relates it to firm performance (Tobin's Q, excess stock returns, ROE, ROA). We find a significant positive effect of the hedging profile on firm valuation and profitability. For example, Tobin's Q, ROE and ROA rise by 0.035, 2.6% and 1.4%, respectively, when EPS skewness rises by one (14% of firms), and these findings are robust to the level of firm earnings, firm size, leverage, market risk, earnings management, and firm-industry fixed effects. These results support the shareholder maximization and signaling theories of risk management.
General Stopping Behaviors of Naive and Non-Committed Sophisticated Agents, with Application to Probability Distortion
Yu-Jui Huang, Adrien Nguyen-Huu, Xun Yu Zhou
We consider the problem of stopping a diffusion process with a payoff functional that renders the problem time-inconsistent. We study stopping decisions of naive agents who reoptimize continuously in time, as well as equilibrium strategies of sophisticated agents who anticipate but lack control over their future selves' behaviors. When the state process is one-dimensional and the payoff functional satisfies some regularity conditions, we prove that any equilibrium can be obtained as a fixed point of an operator. This operator represents strategic reasoning that takes the future selves' behaviors into account. We then apply the general results to the case when the agents distort probability and the diffusion process is a geometric Brownian motion. The problem is inherently time-inconsistent as the level of distortion of a same event changes over time. We show how the strategic reasoning may turn a naive agent into a sophisticated one. Moreover, we derive stopping strategies of the two types of agent for various parameter specifications of the problem, illustrating rich behaviors beyond the extreme ones such as "never-stopping" or "never-starting".

Heterogeneous Impact of the Minimum Wage: Implications for Changes in Between- and Within-group Inequality
Tatsushi Oka, Ken Yamada
Workers who earn at or below the minimum wage in the United States are mostly either less educated, young, or female. This paper shows that changes in the real value of the minimum wage over recent decades have affected the relationship of hourly wages with education, experience, and gender. Changes in the real value of the minimum wage account in part for the patterns of changes in education, experience, and gender wage differentials and mostly for the patterns of changes in within-group wage differentials.

ICO vs IPO: Empirical Findings, Market Frictions and the Appropriate Regulatory Framework
Ofir, Moran; Sadeh, Ido
Initial coin offerings (ICOs) have been a prominent focus of legal and economic studies in recent years, which analyze their characteristics and the determinants of their success. In this paper, we review these studies and identify key ICO success factors. We then compare the results with the empirical literature on initial public offerings (IPOs) and crowdfunding and offer theoretical explanations for the differences found. The results of this comparison are important for two reasons. Firstly, because there is no single formal data source, and there is evidence of inconsistencies across the different data sources available. Secondly, our results show in what circumstances ICO investors and initiators behave like IPO investors and initiators, and hence contribute to the literature on tokens as securities. Subsequently, we identify market frictions in ICOs, with a focus on information asymmetry and investor sentiment and biases. Finally, we discuss the regulatory implications of our findings.

Institutional Order Handling and Broker-Affiliated Trading Venues
Anand, Amber; Samadi, Mehrdad; Sokobin, Jonathan S.; Venkataraman, Kumar
Using detailed order handling data over the life of 330 million institutional orders, we study whether order routing by brokers to Alternative Trading Systems (ATSs) that they own affects execution quality.
In a multivariate regression specification that controls for stock attributes, order characteristics and market conditions, orders handled by brokers with high affiliated ATS routing are associated with lower fill rates. Trading costs based on the implementation shortfall approach are higher when clients select a broker with high affiliated ATS routing. Broker outcomes are highly persistent, suggesting that improved disclosures on order handling could help institutional clients with broker selection.

Italia Economia a Fine 2018 (Italy - At the Close of 2018)
Mazziero, Maurizio; Lawford, Andrew; Serafini, Gabriele
Italian Abstract (translated): Research on the Italian economic situation based on official economic data; public debt, official reserves, GDP, inflation and unemployment are analysed and compared with the past. English Abstract: Research into the state of the Italian economy based on official economic data; the current Sovereign Debt, Official Reserves, GDP, Inflation and Unemployment situation is presented and compared with the past.

On occupation times in the red of Lévy risk models
David Landriault, Bin Li, Mohamed Amine Lkabous
In this paper, we complement the existing literature on the occupation time in the red (below level $0$) of a spectrally negative Lévy process, and later extend the analysis to the refracted spectrally negative Lévy process. For both classes of processes, we derive an explicit expression for the distribution of such occupation time up to an independent exponential time. As an application, we consider the inverse occupation time (also known as the time of cumulative Parisian ruin in Guérin and Renaud (2015)), where ruin is deemed to occur at the earliest time the risk process cumulatively stays below a critical level over a pre-determined time-threshold. Some particular examples of spectrally negative Lévy processes are also examined in more detail.

On the Global Impact of Risk-off Shocks and Policy-put Frameworks
Caballero, Ricardo J.; Kamber, Güneş
Global risk-off shocks can be highly destabilizing for financial markets and, absent an adequate policy response, may trigger severe recessions. Policy responses were more complex for developed economies with very low interest rates after the Global Financial Crisis (GFC). We document, however, that the unconventional policies adopted by the main central banks were effective in containing asset price declines. These policies impacted long rates and inspired confidence in a policy-put framework that reduced the persistence of risk-off shocks. We also show that domestic macroeconomic and financial conditions play a key role in benefiting from the spillovers of these policies during risk-off episodes. Countries like Japan, which already had very low long rates, benefited less. However, Japan still benefited from the reduced persistence of risk-off shocks. In contrast, since one of the main channels through which emerging markets are historically affected by global risk-off shocks is through a sharp rise in long rates, the unconventional monetary policy phase has been relatively benign to emerging markets during these episodes, especially for those economies with solid macroeconomic fundamentals and deep domestic financial markets.
We also show that unconventional monetary policy in the US had strong effects on long interest rates in most economies in the Asia-Pacific region (which helps during risk-off events but may be destabilizing otherwise; we do not take a stand on this tradeoff).

Optimal Execution in a Multiplayer Model of Transient Price Impact
Elias Strehle
Trading algorithms that execute large orders are susceptible to exploitation by order anticipation strategies. This paper studies the influence of order anticipation strategies in a multi-investor model of optimal execution under transient price impact. Existence and uniqueness of a Nash equilibrium are established under the assumption that trading incurs quadratic transaction costs. A closed-form representation of the Nash equilibrium is derived for exponential decay kernels. With this representation, it is shown that while order anticipation strategies raise the execution costs of a large order significantly, they typically do not cause price overshooting in the sense of Brunnermeier and Pedersen.

Player-Compatible Equilibrium
Drew Fudenberg, Kevin He
Player-Compatible Equilibrium (PCE) imposes cross-player restrictions on the magnitudes of the players' "trembles" onto different strategies. These restrictions capture the idea that trembles correspond to deliberate experiments by agents who are unsure of the prevailing distribution of play. PCE selects intuitive equilibria in a number of examples where trembling-hand perfect equilibrium (Selten, 1975) and proper equilibrium (Myerson, 1978) have no bite. We show that rational learning and some near-optimal heuristics imply our compatibility restrictions in a steady-state setting.

Predicting Default Risk under Asymmetric Binary Link Functions
Dendramis, Yiannis; Tzavalis, Elias; Varthalitis, Petros; Athanasiou, Eleni
In this paper we propose the use of an asymmetric binary link function to extend the proportional hazard model for predicting loan default. The rationale behind this approach is that the symmetry assumption, which has been widely used in the literature, could be considered quite restrictive, especially during periods of financial distress. In our approach we allow for a flexible level of asymmetry in the probability of default by the use of the skewed logit distribution. This enables us to estimate the actual level of asymmetry that is associated with the data at hand. We implement our approach on both simulated data and a rich micro dataset of consumer loan accounts. Our results provide clear-cut evidence that ignoring the actual level of asymmetry leads to seriously biased estimates of the slope coefficients, inaccurate marginal effects of the covariates of the model, and overestimation of the probability of default. Regarding the predictive power of the covariates of the model, we have found that loan-specific covariates contain considerably more information about loan default than macroeconomic covariates, which are often used in practice to carry out macroprudential stress testing.

Pricing Formulae of Power Binary and Normal Distribution Standard Options and Applications
Hyong-Chol O, Dae-Sung Choe
In this paper, Buchen's pricing formulae of (higher order) asset and bond binary options are incorporated into the pricing formula of power binary options, and a pricing formula of "the normal distribution standard options" with the maturity payoff related to a power function and the density function of the normal distribution is derived.
As applications, pricing formulae for savings plans that provide a choice of indexing and for discrete geometric average Asian options are derived, and it is proved that the price of the discrete geometric average Asian option converges to the price of the continuous geometric average Asian option when the largest distance between neighboring monitoring times goes to zero.

Pro-Cyclicality of Traditional Risk Measurements: Quantifying and Highlighting Factors at its Source
Marcel Bräutigam, Michel Dacorogna, Marie Kratz
Since the introduction of risk-based solvency regulation, pro-cyclicality has been a subject of concern for all market participants. Here, we lay down a methodology to evaluate the amount of pro-cyclicality in the way financial institutions measure risk, and identify factors explaining this pro-cyclical behavior. We introduce a new indicator based on the Sample Quantile Process (SQP, a dynamic generalization of Value-at-Risk), conditioned on realized volatility, to quantify the pro-cyclicality, and evaluate its amount in the markets, considering 11 stock indices as realizations of the SQP. Then we determine two main factors explaining the pro-cyclicality: the clustering and return-to-the-mean of volatility, as could have been anticipated but not quantified before, and, more surprisingly, the very way risk is measured, independently of this return-to-the-mean effect.

Risk-Adjusted Performance of Convertible Venture Investments: Failure and Valuation Risks & Search by Entrepreneurs
Pandher, Gurupdesh S.
The paper studies the risk-adjusted performance of convertible venture investments subject to venture failure risk and valuation uncertainty. The analysis uses a double-hazard agency framework where VCs maximize their expected investment return and entrepreneurs search for the best deal from VCs with unknown productivities. Results from the model and numerical study provide new insights on important venture finance outcomes. VCs with a diverse set of funding capabilities and productivities can coexist in the venture finance industry since their risk-adjusted return is similar, while venture failure risk and valuation uncertainty are the most salient determinants of the Sharpe ratio. On the venture funding cycle, the model predicts that VCs at later stages will experience higher risk-adjusted returns as valuation uncertainty declines and funding capacity enables them to wait till larger investments are required (although it does not impact the Sharpe ratio). When entrepreneurs search for the best deal, an equilibrium arises that reduces the disadvantage of lower-productivity VCs (their expected return rises). Lastly, the model shows that the convertible contract largely mitigates venture failure risk (as opposed to valuation risk), as it converges to the pure equity contract when the failure probability or recovery diminish to zero.

Social Justice in EU Financial Consumer Law
Fejős, Andrea
This paper considers how social justice influences EU financial consumer law. It provides a new way of looking at social justice in consumer law by showing that equality-of-status-based social justice has increasingly come to the fore in modern EU financial consumer law.
This emergent and complex set of private and regulatory rules on credit, insurance, investment and payment products has responded to the consequences of inequality between financial firms and consumers by engaging in product and rights regulation that balances the parties' rights and duties and protects consumers from the consequences of status-based inequality. Looking forward, the paper recommends that this social justice approach be made transparent and become an express part of EU law and policy, both in order to raise consumer trust in the internal market and to more clearly set the future law and policy agenda.

Stackelberg Independence
Toomas Hinnosaar
The standard model of sequential capacity choices is the Stackelberg quantity leadership model with linear demand. I show that under the standard assumptions, leaders' actions are informative about market conditions and independent of leaders' beliefs about the arrivals of followers. However, this Stackelberg independence property relies on all standard assumptions being satisfied. It fails to hold whenever the demand function is non-linear, marginal cost is not constant, goods are differentiated, firms are non-identical, or there are any externalities. I show that small deviations from the linear demand assumption may make the leaders' choices completely uninformative.

The Benefits of Being Friends with the Boss
Balsam, Steven; Kwack, So Yean
We investigate whether connections with the CEO established outside the firm increase the likelihood of an individual being promoted to the top management team (TMT), i.e., becoming one of the firm's top five executives, and whether those connections increase the likelihood of a member of the TMT being appointed to the board of directors. We find evidence consistent with a connection increasing the likelihood of an individual being appointed to the TMT and the likelihood of a member of the TMT being appointed to the board of directors. We also find evidence consistent with connections between the executive and the CEO reducing that executive's likelihood of turnover. In cross-sectional analysis we provide some evidence consistent with our turnover results being associated with CEO power, as well as firm complexity, and our board appointment results being associated with CEO power.

The Effect of Market Regimes on the Performance of Market Capitalization-Weighted and Smart Beta Shariah-Compliant Equity Portfolios
Raza, Muhammad Wajid; L'Huillier, Barbara; Ashraf, Dawood
Shariah-compliant investment guidelines, while explicit on screening criteria for stock selection, are silent on the weighting methods to be used in the construction of Shariah-compliant equity portfolios. The market capitalization-weighted strategy and smart beta strategies (fundamental value-weighted, equal-weighted, and low-risk weighted strategies) exhibit different risk and return characteristics. This paper investigates whether the choice of weighting strategy affects the performance of Shariah-compliant equity portfolios under different market conditions. The sample consists of active constituent data from the S&P 500 for the period 1986-2016. By utilizing a Markov regime-switching model, it is possible to categorize market returns into two regimes: high regimes for bullish market conditions and low regimes for bearish market conditions. The empirical results indicate a significant difference between the performances of Shariah-compliant equity portfolios in high and low regimes following different weighting strategies.
The evidence suggests that market capitalization and fundamental value-weighted strategies perform better during market rallies. On the other hand, a low-risk strategy can be used as a hedge during periods of maximum drawdown and is associated with relatively lower value-at-risk and expected shortfall as compared to alternative strategies.

The Impact of Brexit on Francophone Africa
Kohnert, Dirk
Whereas the impact of Brexit on Anglophone Africa was a major issue in the controversial British discussions on the pros and cons of Brexit, possible repercussions on French-speaking Africa have rarely been mentioned up to now. If at all, mostly indirect general effects were cited, both concerning the former British Empire in Africa and, a fortiori, the former French colonies as well. Yet the range of possible Brexit effects is impressive. It ranges from direct influence on farm-gate cocoa prices in the CFA-currency regions and subsequent repercussions on the state budgets of these countries, through more indirect effects, e.g. on the cooperation between CEMAC, WAEMU and the EU concerning EDF programs, of which Great Britain has been a major contributor so far, as well as enforced re-negotiation of controversial EPAs, to the revival of progressive social networks in Francophone Africa. The latter are already demanding more political and economic sovereignty, for example with respect to the increasingly anachronistic F CFA currency. Yet, in view of the lack of countervailing power of Britain within the EU in the case of Brexit, the murky network of Françafrique could be re-vitalized and consolidated as well. Besides, direct effects of Brexit could also develop, for example on cocoa farmers in Francophone West Africa, because their product is traditionally traded in Pound Sterling. Thus, any fall in the value of the Pound Sterling against the Euro once Britain leaves the EU would have damaging consequences, not only for the producers but also for public finances, because cocoa is priced in Sterling and the CFA franc is linked to the Euro. This also bears on the revival of the long-standing controversy over the ill-adapted and increasingly anachronistic F CFA. African activists already demand a genuine African debate and a referendum on these issues similar to the Brexit vote. Last, but not least, the British application for EU membership had been vetoed twice by France, in 1963 and November 1967. Arguably, this veto has direct links to the British Brexit vote of 2016.

Trading Strategies Generated Pathwise by Functions of Market Weights
Ioannis Karatzas, Donghan Kim
Almost twenty years ago, E.R. Fernholz introduced portfolio generating functions which can be used to construct a variety of portfolios, solely in terms of the individual companies' market weights. I. Karatzas and J. Ruf recently developed another methodology for the functional construction of portfolios, which leads to very simple conditions for strong relative arbitrage with respect to the market. In this paper, both of these notions of functional portfolio generation are generalized in a pathwise, probability-free setting; portfolio generating functions are substituted by path-dependent functionals, which involve the current market weights, as well as additional bounded-variation functions of past and present market weights.
This generalization leads to a wider class of functionally-generated portfolios than was heretofore possible, and yields improved conditions for outperforming the market portfolio over suitable time-horizons.

WACC and CAPM According to Utilities Regulators: Confusions, Errors and Inconsistencies
Fernandez, Pablo
Regulators of many countries try to find the "true" WACC of Electricity, Gas, Water… activities. All their documents share a main confusion: they do not differentiate among expected, required, historical, and regulator-allowed returns, which are four very different concepts. Most of the documents contain several conceptual errors (they apply the CAPM and the WACC incorrectly), several inconsistencies in estimating parameters, and multiply the WACC by the depreciated book value of assets. Two other common peculiarities of many regulators are a) their penchant for calculating averages and averages of averages, and b) their argument of doing strange calculations "because many other regulators do so." We show how a European regulator arrives at a "WACC before taxes of the electricity regulated activities" of 5.58%. We also show that, using the same data and the same method but the criteria of other regulators for the calculation of the parameters, one may justify any "WACC before taxes" in the interval 2.4%-7.4%. After reading the mentioned documents of the regulators a question arises: are they fiction or science fiction? We show some confusions, errors, inconsistencies and useless arguments in the documents of regulators and propose solutions. Regulator reports (such as the one in Section 1) are very helpful to discover conceptual mistakes, valuation errors… and to think about the meaning of several concepts widely used in corporate finance and financial markets.

Which Related Party Transactions Should Be Subject to Ex Ante Review? Evidence from Germany
Engert, Andreas; Florstedt, Tim
The amended EU shareholder rights directive introduces a comprehensive regime of ex ante review for potentially conflicted transactions between listed companies and their major shareholders, downstream entities, and managers. Such 'related party transactions', if considered material, will have to be evaluated in advance by the board of directors, the shareholders meeting, or the stock market. The paper offers an empirical basis for implementation in Germany and other continental European jurisdictions that lack experience with an ex ante procedural approach to related party transactions. Besides documenting ownership in and shareholdings of German listed companies, we use hand-collected data based on IAS 24 reporting of related party transactions to estimate the number of companies affected by different materiality thresholds based on accounting assets, sales, market capitalisation, and other financials. The main recommendations derived from the analysis are to use more than one single quantitative criterion, to adopt a more generous standard for transactions with downstream entities, and to abstain from imposing a specialised threshold for transactions with managers.
[Solutions] HKUST Undergraduate Math Competition 2018

Find $\displaystyle \int_{0}^{\infty} \frac{dx}{\left(x^{2}+1\right)\left(1+x^{8}\right)}$.

Let $A$ be a real orthogonal $n \times n$ matrix. Determine for which positive integers $n$ there exists a real orthogonal $n \times n$ matrix $B$ such that $A+B$ is a real orthogonal matrix.

Let $f:[0,1] \rightarrow (0,+\infty)$ be continuous and strictly decreasing. Prove that $$\frac{\int_{0}^{1} x f^{2}(x)\, dx}{\int_{0}^{1} x f(x)\, dx} \leq \frac{\int_{0}^{1} f^{2}(x)\, dx}{\int_{0}^{1} f(x)\, dx}.$$

In $\mathbb{R}^{2}$, let $D$ be a closed disk with positive radius and center at $(0,0)$. Prove that for every $(a, b)$ in $\mathbb{R}^{2}$, there exists a positive integer $n$ such that the set $S=\{(x+n a, y+n b):(x, y) \in D\}$ contains an element $(p, q)$, where $p$ and $q$ are integers.

Let $f:(-1,1) \rightarrow \mathbb{R}$ be a function of the form $\displaystyle f(x)=\sum_{i=0}^{\infty} c_{i} x^{i}$, where each coefficient $c_{i} \in\{0,1,2\}$. If $f\left(\frac{4}{5}\right)=\frac{5}{4}$, then prove that $f\left(\frac{1}{3}\right)$ is an irrational number.

Let $k_{1}, k_{2}, k_{3}, \ldots$ be a sequence of strictly increasing positive integers such that $\displaystyle \lim_{n \rightarrow \infty} \frac{k_{n}}{n}=+\infty$. Prove that $\displaystyle \sum_{n=1}^{\infty} \frac{(-1)^{[\sqrt{n}]}}{k_{n}}$ converges, where $[x]$ is the greatest integer less than or equal to $x$.

Let $n$ be a positive integer and $A, B$ be $n \times n$ matrices over the complex numbers. Prove that $A$ and $B$ have a common eigenvalue if and only if $A X=X B$ for some $n \times n$ matrix $X \neq 0$.

Find all continuous functions $y:[0, \infty) \rightarrow \mathbb{R}$ such that $y(0)=0$, $y$ is differentiable on $(0, \infty)$, and $y^{\prime}(x)=\int_{0}^{x} \sin (y(u))\, du+\cos x$ for all $x>0$.

Let $a$ and $b$ be positive integers with $a>1$. If $a$ and $b$ are both odd or both even, then prove that $2^{a}-1$ does not divide $3^{b}-1$.

Let $f:[0,1] \times \mathbb{R} \rightarrow \mathbb{R}$ be continuous such that for every $x \in[0,1]$ and $y_{0} \neq y_{1}$ in $\mathbb{R}$, we have $$\frac{1}{2} \leq \frac{f\left(x, y_{0}\right)-f\left(x, y_{1}\right)}{y_{0}-y_{1}} \leq \frac{3}{2}.$$ Prove that there exists a unique real-valued continuous function $h$ on $[0,1]$ such that $f(x, h(x))=0$ for all $x \in[0,1]$.

Let $u \cdot v$ denote the usual inner product of $u, v \in \mathbb{R}^{n}$. For a positive integer $k<n$, let $G(k, n)$ be the set of all $k$-dimensional linear subspaces in $\mathbb{R}^{n}$. For $v \in \mathbb{R}^{n}$ and a linear subspace $S$ in $\mathbb{R}^{n}$, let $d(v, S)$ denote the usual distance from $v$ to $S$. For $V \in G(k, n)$, let $B(V)=\{v \mid v \in V, v \cdot v=1\}$. For $V, U \in G(k, n)$, let $d(V, U)=\max \{d(v, U) \mid v \in B(V)\}$.
a) Prove that for $V, W, U \in G(k, n)$, $d(V, U) \leq d(V, W)+d(W, U)$.
b) Let $\left\{v_{1}, v_{2}, \ldots, v_{k}\right\}$ and $\left\{w_{1}, w_{2}, \ldots, w_{k}\right\}$ be orthonormal bases of $V, W \in G(k, n)$ respectively. Let $A$ be the $k \times k$ matrix with $(i, j)$ entry equal to $v_{i} \cdot w_{j}$. Let $\lambda$ be the smallest eigenvalue of $A A^{T}$. Determine the value of $d(V, W)$ in terms of $\lambda$.
c) Prove that $d(V, W)=d(W, V)$ for all $V, W \in G(k, n)$.

Let $S=\{z \in \mathbb{C} : 0<|z|<2\}$ and $f: S \rightarrow \mathbb{C}$ be a holomorphic function such that $\operatorname{Re} f(z) \geq 0$ and $\operatorname{Im} f(z) \geq 0$ for all $z \in S$. Prove that $f$ has a removable singularity at $0$.

Source: MOlympiad.NET, [Solutions] HKUST Undergraduate Math Competition 2018, https://www.molympiad.net/2019/01/hkust-undergraduate-math-competition-2018.html
3rd Grade / Unit 2: Multiplication and Division, Part 1

Topic A: The Meaning of Multiplication and Division
- Identify and create situations involving equal groups and describe these situations using the language and notation of multiplication.
- Identify and create situations involving arrays and describe these situations using the language and notation of multiplication.
- Identify and create situations involving unknown group size and find group size in situations.
- Identify and create situations involving an unknown number of groups and find the number of groups in situations.
- 3.OA.B.6 Relate multiplication and division and understand that division can represent situations of unknown group size or an unknown number of groups.

Topic B: Multiplication and Division by 2, 5, and 10
- 3.OA.C.7 Build fluency with multiplication facts using units of 2, 5, and 10.
- Demonstrate the commutativity of multiplication.
- Build fluency with division facts using units of 2, 5, and 10.
- Solve one-step word problems involving multiplication and division using units of 2, 5, and 10.

Topic C: Multiplication and Division by 3 and 4
- Build fluency with multiplication and division facts using units of 3.
- Solve one-step word problems involving multiplication and division using units of 3 and 4.

Topic D: More Complex Multiplication and Division Problems
- 3.OA.D.8 Determine the unknown whole number in a multiplication or division equation relating three whole numbers, including equations with a letter standing for the unknown quantity.
- Solve one-step word problems involving multiplication and division and write problem contexts to match expressions and equations.
- Solve two-step word problems involving multiplication and division and assess the reasonableness of answers.
- Solve two-step word problems involving all four operations and assess the reasonableness of answers.

3.OA.A.4 — Determine the unknown whole number in a multiplication or division equation relating three whole numbers. For example, determine the unknown number that makes the equation true in each of the equations 8 × ? = 48, 5 = ? ÷ 3, 6 × 6 = ?.

3.OA.C.7 — Fluently multiply and divide within 100, using strategies such as the relationship between multiplication and division (e.g., knowing that 8 × 5 = 40, one knows 40 ÷ 5 = 8) or properties of operations. By the end of Grade 3, know from memory all products of two one-digit numbers.

3.OA.D.8 — Solve two-step word problems using the four operations. Represent these problems using equations with a letter standing for the unknown quantity. Assess the reasonableness of answers using mental computation and estimation strategies including rounding. This standard is limited to problems posed with whole numbers and having whole-number answers; students should know how to perform operations in the conventional order when there are no parentheses to specify a particular order (Order of Operations).

Criteria for Success:
- Write all possible equations relating the same three whole numbers (e.g., for 2, 5, 10, write 2 × 5 = 10, 5 × 2 = 10, 10 ÷ 2 = 5, etc.).
- Determine the unknown divisor in a division equation.
- Determine the unknown dividend in a division equation.
- Determine the unknown whole number in a multiplication or division equation relating three whole numbers in all other cases not yet mentioned. (Spiral)
- Use a letter to represent an unknown quantity in an equation relating three whole numbers.
Criteria for Success #3 is a spiral, since students have found quotients and products throughout the unit and know how to find an unknown factor since they've come to see its relationship to division. Solving for an unknown quantity in Grade 3 should not involve an algebraic approach, i.e., students should not be recording $$c \times 3 = 24 \rightarrow \frac{c \times 3}{3} = \frac{24}{3} \rightarrow c = 8$$. Instead, Anchor Task #1 encourages students to think of a complete fact family for three numbers that are related by multiplication and division, and then use those relationships to rewrite equations that make it easier to determine the unknown. Algebraic approaches to solving equations will come in the middle grades.

If you think students would benefit from some straightforward computational practice before the lesson or before jumping into the Problem Set, you can play "Multiplication Madness," from Building Conceptual Understanding and Fluency Through Games by the North Carolina Department of Public Instruction. This game is also often referred to as "Four in a Row." You'll need to adapt the game so that the factors listed are 2, 3, 4, 5, and 10, and then change the sequence of products listed on the game board to be 4, 6, 8, 9, 10, 12, 15, 16, 20, 25, 30, 40, 50, and 100.

Students have had far more practice with finding unknowns in multiplication sentences, since they've come to understand division as an unknown-factor problem, but in case you'd like students to have more practice with problems of that type, they can play an analogous game to the one above called "Missing Numbers with Multiplication," from 3.OA.4 - About the Math, Learning Targets, and Rigor by the Howard County Public School System. Similar to above, you'll need to eliminate facts and cards related to 0 and 1.

Write all of the multiplication and division equations you can think of that represent the following array.

Tehya and Kenneth are trying to figure out which number could be placed in the box to make this equation true: $$10 \div \square = 5$$ Tehya insists that 50 is the only number that will make this equation true. Kenneth insists that 2 is the only number that will make this equation true. Who is right? Why?

Illustrative Mathematics, Finding the Unknown in a Division Equation, accessed on Oct. 10, 2018, 3:50 p.m., is licensed by Illustrative Mathematics under either the CC BY 4.0 or CC BY-NC-SA 4.0. For further information, contact Illustrative Mathematics.

Now, Tehya and Kenneth are trying to figure out which number could be placed in the box to make this next equation true: $$2 = \square \div 10$$

Instead of a question mark or a box, we can use a letter to represent the unknown quantity. For example, the equation in Anchor Task #3 could have been written as $$2 = a \div 10$$, with $$a$$ standing in for the unknown. Knowing that, determine the values of the unknowns that make each equation below true.
a. $$c\times 3=24$$
b. $$5=50 \div g$$
c. $$w \div 3=6$$
d. $$24=4\times r$$
e. $$15 \div g=3$$
f. $$8= w \div 2$$

How did you find the missing value in #4? What was the solution to the riddle? How should we record our solutions when we determine the value of an unknown that's represented with a letter? How does the relationship between multiplication and division help to determine the value of the unknowns in equations like these?

Find the value of each unknown below.
1. $$z = 5 \times 9$$   $$z$$ = _____
2. $$20\div v = 5$$   $$v$$ = _____
3. $$3 \times w = 24$$   $$w$$ = _____
4. $$7 = y\div 4$$   $$y$$ = _____

EngageNY Mathematics, Grade 3 Mathematics > Module 3 > Topic A > Lesson 3 — Exit Ticket, Questions #1-4, from the New York State Common Core Mathematics Curriculum from EngageNY and Great Minds. © 2015 Great Minds. Licensed by EngageNY of the New York State Education Department under the CC BY-NC-SA 3.0 US license. Accessed Dec. 2, 2016, 5:15 p.m.
Instability of coupled systems with delay
Reinhard Racke, Department of Mathematics and Statistics, University of Konstanz, 78457 Konstanz
Communications on Pure & Applied Analysis, September 2012, 11(5): 1753-1773. doi: 10.3934/cpaa.2012.11.1753
Received January 2011; Revised July 2011; Published March 2012

We consider linear initial-boundary value problems that are coupled systems like second-order thermoelasticity, the thermoelastic plate equation, or its generalization (the $\alpha$-$\beta$-system introduced in [1, 26]). Now, there is a delay term given in part of the coupled system, and we demonstrate that the expected inherent damping will not prevent the system from being unstable; indeed, the systems will be shown to be ill-posed: a sequence of bounded initial data may lead to exploding solutions (at any fixed time).

Keywords: $\alpha$-$\beta$-system, partial delay, heat equation, hyperbolic-parabolic system, plate equation, well-posedness, stability.
Mathematics Subject Classification: Primary: 35B, 35K20, 35K55, 35L20, 35Q, 35R25; Secondary: 80A2.
Citation: Reinhard Racke. Instability of coupled systems with delay. Communications on Pure & Applied Analysis, 2012, 11 (5) : 1753-1773. doi: 10.3934/cpaa.2012.11.1753

References:
F. Ammar Khodja and A. Benabdallah, Sufficient conditions for uniform stabilization of second order equations by dynamical controllers, Dyn. Contin. Discrete Impulsive Syst., 7 (2000), 207.
K. Ammari, S. Nicaise and C. Pignotti, Feedback boundary stabilization of wave equations with interior delay, preprint.
G. Avalos and I. Lasiecka, Exponential stability of a thermoelastic system without mechanical dissipation, Rend. Instit. Mat. Univ. Trieste Suppl., 28 (1997), 1.
A. Bátkai and S. Piazzera, "Semigroups for Delay Equations," Research Notes in Mathematics, 10 (2005).
D. S. Chandrasekharaiah, Hyperbolic thermoelasticity: A review of recent literature, Appl. Mech. Rev., 51 (1998), 705. doi: 10.1115/1.3098984.
R. Denk and R. Racke, $L^p$ resolvent estimates and time decay for generalized thermoelastic plate equations, Electronic J. Differential Equations, 48 (2006), 1.
R. Denk, R. Racke and Y. Shibata, $L_p$ theory for the linear thermoelastic plate equations in bounded and exterior domains, Advances Differential Equations, 14 (2009), 685.
R. Denk, R. Racke and Y. Shibata, Local energy decay estimate of solutions to the thermoelastic plate equations in two- and three-dimensional exterior domains, J. Analysis Appl., 29 (2010), 21.
M. Dreher, R. Quintanilla and R. Racke, Ill-posed problems in thermomechanics, Appl. Math. Letters, 22 (2009), 1374. doi: 10.1016/j.aml.2009.03.010.
E. Fridman, S. Nicaise and J. Valein, Stabilization of second order evolution equations with unbounded feedback with time-dependent delay, SIAM J. Control Optim., 48 (2010), 5028. doi: 10.1137/090762105.
S. Jiang and R. Racke, "Evolution Equations in Thermoelasticity," $\pi$ Monographs Surveys Pure Appl. Math., 112 (2000).
P. M. Jordan, W. Dai and R. E. Mickens, A note on the delayed heat equation: Instability with respect to initial data, Mech. Research Comm., 35 (2008), 414. doi: 10.1016/j.mechrescom.2008.04.001.
J. U. Kim, On the energy decay of a linear thermoelastic bar and plate, SIAM J. Math. Anal., 23 (1992), 889. doi: 10.1137/0523047.
M. Kirane, B. Said-Houari and M. N. Anwar, Stability result for the Timoshenko system with a time-varying delay term in the internal feedbacks, Comm. Pure Appl. Anal., 10 (2011), 667. doi: 10.3934/cpaa.2011.10.667.
J. Lagnese, "Boundary Stabilization of Thin Plates," SIAM Studies Appl. Math., 10 (1989).
I. Lasiecka and R. Triggiani, Two direct proofs on the analyticity of the S.C. semigroup arising in abstract thermoelastic equations, Adv. Differential Equations, 3 (1998), 387.
I. Lasiecka and R. Triggiani, Analyticity, and lack thereof, of thermo-elastic semigroups, ESAIM, 4 (1998), 199.
I. Lasiecka and R. Triggiani, Analyticity of thermo-elastic semigroups with coupled hinged/Neumann boundary conditions, Abstract Appl. Anal., 3 (1998), 153. doi: 10.1155/S1085337598000487.
I. Lasiecka and R. Triggiani, Analyticity of thermo-elastic semigroups with free boundary conditions, Annali Scuola Norm. Sup. Pisa, 27 (1998), 457.
Z. Liu and M. Renardy, A note on the equation of a thermoelastic plate, Appl. Math. Lett., 8 (1995), 1. doi: 10.1016/0893-9659(95)00020-Q.
K. Liu and Z. Liu, Exponential stability and analyticity of abstract linear thermoelastic systems, Z. angew. Math. Phys., 48 (1997), 885. doi: 10.1007/s000330050071.
Z. Liu and J. Yong, Qualitative properties of certain $C_0$ semigroups arising in elastic systems with various dampings, Adv. Differential Equations, 3 (1998), 643.
Z. Liu and S. Zheng, Exponential stability of the Kirchhoff plate with thermal or viscoelastic damping, Quart. Appl. Math., 53 (1997), 551.
Z. Liu and S. Zheng, "Semigroups Associated with Dissipative Systems," $\pi$ Research Notes Math., 398 (1999).
J. E. Muñoz Rivera and R. Racke, Smoothing properties, decay, and global existence of solutions to nonlinear coupled systems of thermoelastic type, SIAM J. Math. Anal., 26 (1995), 1547. doi: 10.1137/S0036142993255058.
J. E. Muñoz Rivera and R. Racke, Large solutions and smoothing properties for nonlinear thermoelastic systems, J. Differential Equations, 127 (1996), 454. doi: 10.1006/jdeq.1996.0078.
S. Nicaise and C. Pignotti, Stability and instability results of the wave equation with a delay term in the boundary or internal feedbacks, SIAM J. Control Optim., 45 (2006), 1561. doi: 10.1137/060648891.
S. Nicaise and C. Pignotti, Stabilization of the wave equation with boundary or internal distributed delay, Diff. Integral Equations, 21 (2008), 935.
J. Prüß, "Evolutionary Integral Equations and Applications," Monographs Math., 87 (1993).
R. Quintanilla, A well posed problem for the dual-phase-lag heat conduction, J. Thermal Stresses, 31 (2008), 260. doi: 10.1080/01495730701738272.
R. Quintanilla, A well posed problem for the three dual-phase-lag heat conduction, J. Thermal Stresses, 32 (2009), 1270. doi: 10.1080/01495730903310599.
R. Quintanilla, Some solutions for a family of exact phase-lag heat conduction problems, Mech. Research Communications, 38 (2011), 355. doi: 10.1016/j.mechrescom.2011.04.008.
R. Racke, Thermoelasticity with second sound -- exponential stability in linear and nonlinear 1-d, Math. Meth. Appl. Sci., 25 (2002), 409. doi: 10.1002/mma.298.
R. Racke, Asymptotic behavior of solutions in linear 2- or 3-d thermoelasticity with second sound, Quart. Appl. Math., 61 (2003), 315.
R. Racke, "Thermoelasticity," Chapter in: Handbook of Differential Equations. Evolutionary Equations, 5 (2009).
D. L. Russell, A general framework for the study of indirect damping mechanisms in elastic systems, J. Math. Anal. Appl., 173 (1993), 339. doi: 10.1006/jmaa.1993.1071.
B. Said-Houari and Y. Laskri, A stability result of a Timoshenko system with a delay term in the internal feedback, Appl. Math. Comp., 217 (2010), 2857.
Exploring science-and-technology-led innovation: a cross-country study
Viju Raghupathi & Wullianallur Raghupathi

Countries can enhance endogenous innovation using multifaceted incentives for science and technology indicators. We explore country-level innovation using OECD data for research and development (R&D), patents, and exports. We deploy a dual methodology of descriptive visualization and panel regression analysis. Our results highlight industry variances in R&D spending. As a nation develops, governmental expenditure on R&D decreases, and businesses take on an increasing role to fill the gap, increasing local innovation. Our analysis of local versus foreign resident ownership of patents highlights implications for taxation and innovation policies. Countries with high foreign ownership of patents have low tax revenues due to the lack of costs associated with, and the mobility of income from, patents. We call on these countries to devise targeted policies encouraging local patent ownership. Policy makers should also recognize factors influencing high-technology exports for innovation. Lastly, we call on countries to reinstate horizontal and vertical policies, and design national innovation ecosystems that integrate disparate policies in an effort to drive economic growth through innovation.

Innovation is a key driver of economic growth and a prime source of competition in the global marketplace (Organization for Economic Cooperation and Development (OECD) 2005); at least 50% of growth is attributable to it (Kayal 2008; Organization for Economic Cooperation and Development (OECD) 2005). Notable in this regard are the levels of adoption and creation of technological innovation (Grupp and Mogee 2004; Niosi 2010) and of technological learning (Koh and Wong 2005; Organization for Economic Cooperation and Development (OECD) 2005) in creating this expansion. A country's economic growth progresses through three stages of technological change and productivity, namely factor-driven growth, investment-driven growth, and innovation-driven growth (Koh and Wong 2005; Rostow 1959; World Economic Forum (WEF) 2012). Factor-driven economies produce goods based on natural endowments and low labor cost; investment-driven economies accumulate capital (technological, physical and human) and offer investment incentives; and innovation-driven economies emphasize research and development (R&D), entrepreneurship, and innovation. Progressing from one growth stage to another involves transitioning from a technology-importing economy that relies on endowments, capital accumulation, infrastructure, and technology imitation, to a technology-generating economy that focuses on creating new products or knowledge using state-of-the-art technology (Akcali and Sismanoglu 2015).

In the creation of new products or knowledge, economies use a systemic approach to represent the interaction of public and private institutions with policies, incentives, and initiatives (Organization for Economic Cooperation and Development (OECD) 1997). Institutions include research entities, public laboratories, innovative enterprises, venture capital firms, and organizations that finance, regulate, and enable the production of science and technology (Malerba 2004; Mazzoleni and Nelson 2007; Niosi 2010). The government also plays a key role in supporting these institutions through policies, targeted incentives, R&D collaboration, and coordinated infrastructure.
According to the Oslo Manual (Organization for Economic Cooperation and Development (OECD) 2005), innovation has been defined as the implementation of a new or significantly improved product/service, process, marketing method, or organizational method in business practices, workplace organization, or external relations. Though many innovation studies use variations of this definition, the common thread is Schumpeter (2008). Schumpeter proposes that the crux of capitalism lies in production for the mass market through creative destruction, that is, the continuous process of generating new products, processes, markets, and organizational forms that make existing ones obsolete (Lee 2015). In today's digital era, technology is integral to advancing such creative destruction. It is no surprise, then, that innovation, particularly technological innovation, drives economic growth.

Technological innovation is measured using science and technology (S&T) indicators. These indicators include resources devoted to R&D, patents, technology balance of payments, and international trade in R&D-intensive industries. The importance given to S&T indicators increased with the call for a comprehensive analysis of the economy that incorporates not only economic indicators, but also those that represent knowledge creation (Lepori et al. 2008). However, for S&T to translate to improved economic development (represented by improved quality of life, as well as wealth and employment creation), it needs to be geared towards bringing new products/processes into the marketplace—that is, towards innovation (Siyanbola et al. 2016). Attaining national development goals requires evidence-based and informed policy-making. Incorporating S&T indicators offers the scientific evidence needed to effectively design, formulate, and implement national innovation policies that contribute to economic development. We base our study on this premise and utilize the S&T indicators of R&D, patents, and exports to explore country-level technological innovation for policy analysis. We offer research-based suggestions for governments to compare innovation policy initiatives, seek insights into solving national development problems, identify outstanding best practices, and work collaboratively.

The rest of the paper is organized as follows: section 2 describes the research background; section 3 covers the methodology; section 4 presents the results and discussion; section 5 offers scope and limitations; section 6 covers contributions and implications for future research; and section 7 presents the conclusions of our research.

Research background

There are several theoretical models that offer rationales for technological innovation. The technology gap model developed by Posner (1961) explains how countries that are technologically advanced introduce new products into a market and enjoy the innovative advantage (Krugman 1986). However, this comparative advantage is transient and shifts over time to other countries that show sustained innovative activities due to technological improvements. The product life cycle hypothesis shows that industrialized countries with a high degree of human capital and R&D investment produce more technical innovations and new products (Maksimovic and Phillips 2008). These countries enjoy the comparative advantage early in the product's life cycle. But as the country exports and the product becomes more standardized, other countries are able to reproduce it at a lower cost (with advanced technology) and gain market share.
The endogenous growth model shows that technology, knowledge, and human capital are endogenous and primary contributors to innovation and economic growth (Gocer et al. 2016). It is clear that technological innovation offers a country a competitive edge that contributes to economic development and growth (Gocer et al. 2016). Therefore, measuring this phenomenon takes on increasing significance at national and global levels. We now describe our research framework and conceptualization of the innovation phenomenon.

Research framework for innovation

In this research, we adapt the comprehensive framework presented at the OECD workshop for national innovation capability using S&T indicators (Qiquan et al. 2006). According to Miles and Huberman (1994), a conceptual framework explains either graphically or in narrative form the main things to be studied—the key factors, concepts, or variables—and the presumed relationships among them. Using this definition, we have laid out the key concepts in our research and how they relate to the overall phenomenon of innovation. We conceptualize innovation using the three components of inputs, knowledge creation and absorption, and outputs. Inputs to innovation are represented by efforts at research and development, including the expenditure and personnel hired for R&D. Knowledge creation and absorption represents national efforts at motivating and rewarding the innovative process. The outputs of innovation represent exports of products and services. Figure 1 shows the research framework.

Fig. 1 Framework for national innovation

Inputs—R&D expenditure and R&D personnel

R&D is an important input to national innovation. It includes creative work undertaken systematically to increase the stock and the use of knowledge to devise new applications—both of which have the potential to influence innovation. R&D expenditure is often used to encourage innovation and provide a stimulus to national competitiveness. Research has used R&D expenditure as a percentage of GDP (referred to as R&D intensity) to explain the relationship between firm size and innovative effort (Cohen 2010) and as an input to innovation (Kemp and Pearson 2007). In general, developed countries have higher R&D intensity than developing countries. For a country, the gross domestic expenditure on R&D (GERD), which represents the expenditure on scientific research and experimental development, offers an indication of the allocation of financial resources to R&D in terms of its share in GDP. Sufficient R&D funding is essential for innovation, economic growth, and sustainable development. Changes in R&D expenditure suggest evolving long-term strategies and policies related to innovation for economic development. GERD can be broken down among the performance sectors of business enterprise, government, higher education, and private not-for-profit institutions serving households.

Business enterprise expenditure on R&D (BERD) is an important indicator of business commitment to innovation. Although not all business investments yield positive results, efforts towards R&D signal a commitment to the generation and application of new ideas that lead to new or improved products/services for innovation. Research suggests that R&D spending is associated with productivity and GDP growth. An increase of 0.1 percentage point in a nation's BERD-to-GDP ratio could eventually translate to a 1.2% increase in GDP per capita (Expert Panel on Business Innovation 2009).
Also, over the last few years, R&D intensity in the business sector has varied considerably between countries (Falk 2006). It is, therefore, useful to analyze this at a global level.

The government intramural expenditure on R&D (GOVERD) represents efforts by the government to invest in R&D. Several motivations have been proposed for this. Endogenous theories present such investment as the foundation for economic growth (Griliches 1980). Governments respond to market failures in which firms under-invest due to the risk of externalities and information issues (Arrow 1962). Additionally, government funding can stimulate corporate R&D activities (Audretsch et al. 2002; Görg and Strobl 2007). The average government-funded R&D expenditure in 24 OECD countries doubled in three decades, from $6.04 billion in 1981 to $12.3 billion in 2008 (in US dollars, constant prices) (Kim 2014).

Higher education expenditure on R&D (HERD) has been the focus of much research since the 1980s. Research has studied the transfer of knowledge and technology between the university and industry, using firm- or industry-level data (Adams et al. 2001; Collins and Wakoh 2000; Furman and MacGarvie 2007; Grady and Pratt 2000; Siegel et al. 2001). Since the 1990s, higher education institutions have played an increasingly important role in regional and national development in OECD countries, owing to the growth of strategic alliances across industry, research institutions, and knowledge-intensive business services (Eid 2012). There is growing recognition that R&D in higher education institutions is an important stimulus to economic growth and improved social outcomes. Funding for these institutions comes mainly from the government, but businesses, too, fund some activities (Eid 2012).

Private, not-for-profit organizations are those that do not generate income, profits, or other financial gain. These include voluntary health organizations and private philanthropic foundations. Expenditure on R&D here represents the component of GERD incurred by units belonging to this sector.

In addition to expenditure, R&D personnel are an input to the innovation phenomenon in a country. R&D personnel refer to all human capital, including direct service personnel such as managers, administrators, and clerical staff, who are involved in the creation of new knowledge, products, processes, and methods, and who can be employed in the public, private, or academic sectors.

Knowledge creation and absorption—patents

In the innovation framework, knowledge creation is the process of coming up with new ideas through formal R&D (Organization for Economic Cooperation and Development (OECD) 2005). Knowledge absorption is the process of acquiring and utilizing knowledge from entities such as universities, public research organizations, or domestic and international firms (Organization for Economic Cooperation and Development (OECD) 2005). Factors influencing knowledge absorption include human capital, R&D, and linkages with external knowledge sources. On a national level, the creation and absorption of knowledge is manifested through the evolution of intellectual property (IP). Countries institute regulatory frameworks in the form of patents and copyrights to protect intellectual property and innovation (Blind et al. 2004). The rationale for protection arises from the fact that innovation amounts to knowledge production, which is inherently non-rival and non-excludable.
Non-rival refers to the notion that the amount of knowledge does not decrease when used by others, and non-excludable refers to the unlimited ability of others to use and benefit from the knowledge once it is produced. Countries, therefore, institute legal systems to protect the rights of inventors and patent holders. An example is the Bayh-Dole Act in the USA. By calibrating the strength of patent protection rights, policymakers can influence national innovation systems (Raghupathi and Raghupathi 2017). Legal protection and exclusivity of the use of knowledge allow investment in R&D and lead to the production of knowledge and innovation.

Research in the area of patents has often centered on whether stronger IP rights lead to more innovation (Hall 2007; Hu and Jaffe 2007; Jaffe 2000) and on the endogenous effects of patent rights on industries (Chen 2008; Moser 2005; Qian 2007; Sakakibara and Branstetter 2001). As patent rights change at a national level, industries within a country may react differently according to the importance of such rights to the respective industries (Rajan and Zingales 1998). Exploring patent applications or their distribution by industry is therefore a key estimator of the extent of national innovation (Qian 2007). Patent laws have a significant effect on the direction of technological innovation in terms of which industries have more innovations (Moser 2005).

In addition to using individual patent applications as a measure of patent activity, national innovation research also uses patent families, which are sets of patents/applications covering the same invention in one or more countries. These applications relate to each other by way of one or several common priority filings (Organisation for Economic Cooperation and Development OECD 2009). Patent families reflect the international diffusion of technology and represent an excellent measure of national innovativeness (Dechezleprêtre et al. 2017). In the current research, in contrast to studies that look at domestic patent families in an American (Hegde et al. 2009) or European context (Gambardella et al. 2008; Harhoff 2009; van Zeebroeck and van Pottelsberghe 2011), we adopt a global approach and consider patent families in all three major patent systems: the United States Patent and Trademark Office, the European Patent Office, and the Japan Patent Office. This examination allows us to identify possible international patent-based indicators that enable rigorous cross-country comparisons of innovation performances at national and sectoral levels.

Outputs—exports

Exports represent an output of the innovative activity of a country. Endogenous growth models suggest that firms must innovate to meet stronger competition in foreign markets (Aghion and Howitt 1998; Grossman and Helpman 1991; Hobday 1995). Firms enhance their productivity prior to exporting in order to reach an optimum level that qualifies them to compete in foreign markets (Grossman and Helpman 1995). Upon entry into the export market, continued exposure to foreign technology and knowledge endows a "learning-by-exporting" effect that offers economies of scale, which further enable covering the cost of R&D (Harris and Moffat 2012).

In the context of our research framework, we explore the following research questions using a descriptive visualization and an econometric panel regression approach:

How do the S&T indicators (R&D expenditure, patents, and exports) influence country-level innovation?
How do countries around the world differ in terms of innovation on these S&T indicators?

We now discuss our methodology in studying science and technology indicators for national innovation. Table 1 summarizes the research methodology.

Table 1 Research methodology

Data collection and variable selection

We downloaded innovation data from the Main Science and Technology Indicators (MSTI) database of the Organization for Economic Cooperation and Development (OECD), for the years 2000 to 2016 (https://stats.oecd.org). The MSTI database contains indicators that reflect efforts towards science and technology of OECD member countries and non-member countries. The data covers provisional or final results, as well as forecasts from public authorities. MSTI indicators include, among others, resources dedicated to research and development, patents (including patent families), and international trade in R&D-intensive industries. Table 2 shows the variables in the research.

Table 2 Variables in the research

Research and development variables include gross domestic expenditure on R&D (GERD) for the sectors of business enterprise, higher education, government, and private/non-profit households; funding sources include industry, government, abroad, and other national sources. For business enterprise, we included the industries of aerospace, computer/electronic/optical, and pharmaceutical. We also look at R&D personnel as a percentage of the national total in the abovementioned business sectors. For patents, we consider indicators such as the number of triadic patent families, which consist of patents filed at the European Patent Office, the United States Patent and Trademark Office, and the Japan Patent Office for the same invention by the same applicant or inventor. Patent variables also include those representing the international flow of patents and cross-border ownerships in the inventive process. Patents are considered for the technology sectors of biotechnology, information and communication technology (ICT), environmental technology, and pharmaceutical. As exports represent the outputs of the innovative process, we consider total exports in the three industries of aerospace, computer/electronic/optical, and pharmaceutical.

We use panel data that includes variables for multiple indicators spanning multiple countries and time periods. There are several ways to group country-level data. We use the income-level classification of the World Bank of high, upper-middle, lower-middle, and low income. However, due to the lack of availability of data, we focus only on the upper-middle- and high-income categories. In addition, we use the region classification of East Asia and Pacific, Europe and Central Asia, Latin America and the Caribbean, Middle East and North Africa, North America, South Asia, and Sub-Saharan Africa.

Analytics platform/tools selection

For the data analysis, we use the platforms of Tableau, which is an advanced business intelligence and data mining tool, for visualization and descriptive analysis, and R for the panel regression analysis.

Analytics implementation

We use the visualization tools in Tableau to analyze and get insight from the innovation data. The goal of visualization is to analyze, explore, discover, illustrate, and communicate information. In today's digital era, users are overwhelmed with myriad information. Visualization provides models of the information (Khan and Khan 2011; North 2005) and makes intelligible huge amounts of complex information.
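Because Tableau dashboards are interactive and not directly reproducible in print, a static counterpart to one such view can be sketched in R, the other platform used in this study. The sketch below is purely illustrative: the file name and the column names (region, sector, share) are hypothetical stand-ins for an extract of the MSTI data, not the study's actual files.

```r
# Minimal illustrative sketch, assuming a pre-aggregated CSV extract of the
# MSTI data with hypothetical columns: region, sector, share (of GERD, in %).
library(ggplot2)

gerd <- read.csv("msti_gerd_by_region.csv")  # hypothetical file name

# Grouped bar chart of GERD shares by region and performance sector,
# roughly analogous to the dashboard view described for Fig. 6 below.
ggplot(gerd, aes(x = region, y = share, fill = sector)) +
  geom_col(position = "dodge") +
  labs(x = "Region", y = "Share of GERD (%)", fill = "Performance sector") +
  theme_minimal() +
  theme(axis.text.x = element_text(angle = 45, hjust = 1))
```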
The visual analytics capabilities of the business intelligence software Tableau represent data graphically, filter out what is not relevant, drill down into lower levels of detail, and highlight subsets of data across multiple graphs—and do all of this simultaneously. This level of customization results in insights unmatched by traditional approaches. Static graphs delivered on paper or electronically via computer screen help communicate information in a clear and enlightening way, an enormous benefit. But users derive the greatest benefits from Tableau's visual analytics. Visual representation allows one to identify patterns that would otherwise be buried in vast, unconnected datasets. Unlike the traditional hypothesis-and-test method of inquiry, which relies on asking the right questions, data visualizations bring themes and ideas to the surface, without pre-conceived assumptions, where they can be easily discerned. In the current context of international or global data, we integrate and display dimensions of information that are needed to rapidly monitor aspects of global innovation on interactive dashboards. In this way, we discover patterns and seek relationships in various areas of innovation inquiry for policymaking and national development.

In addition to the visualization approach to analysis, we also deploy econometric panel regression analysis (a minimal illustrative sketch of such a specification is given below, after the discussion of Fig. 2). Our research contains panel data of observations of multiple phenomena obtained over multiple time periods for different regions. Using panel analysis allows us to address analytical questions that regular cross-sectional or longitudinal analysis cannot. In addition, it provides means to handle missing data; without this, problems such as the correlation of omitted variables with explanatory variables can undermine the accuracy of the estimated effects.

We now discuss the results of our analysis and offer a comprehensive summary for each component in the research. We first discuss the visualization approach and then the panel analysis approach.

Visualization and descriptive analytics

Research and development expenditure

Expenditure on R&D encompasses gross domestic expenditure (GERD), business enterprise expenditure (BERD), government intramural expenditure (GOVERD), and higher education expenditure (HERD). We analyze the distribution of GERD by country (Fig. 2).

Fig. 2 GERD as a percentage of GDP for economies

As seen in Fig. 2, there are variations between countries in the expenditure on R&D. Some countries take the lead in GERD. Israel, for example, leads with 25% more than the expenditure of Japan, double that of China, and quadruple that of Chile. Others include South Korea, Finland, Austria, Denmark, Chinese Taipei, and the USA. In terms of the trend of expenditure, some countries show a sharp increase over the years—South Korea shows an increase from 2.18 to 4.23% from 2000 to 2015; China from 0.885% in 2000 to 2.067% in 2015; and Denmark from 2.32 to 2.95% between the years 2001 and 2015. Some countries show a consistent range of R&D expenditure for this same time span. The USA, for example, shows between 2.62 and 2.79% for the period 2000 to 2015. Mexico shows a minimal change from 0.33 to 0.49% between the years 2000 and 2015. In terms of R&D intensity, countries like Chile, Mexico, and Romania show a consistently low rate (between 0.2 and 0.4%), while others like Canada actually show a decrease over the years (1.865% in 2000 to 1.605% in 2014).
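As referenced above, the panel regression side of the methodology can be made concrete with a brief sketch. This is not the authors' exact specification; the data frame `innov` and its columns (country, year, exports, gerd, patents) are hypothetical placeholders for the MSTI-derived variables.

```r
# Minimal illustrative sketch of a fixed-effects panel regression in R,
# assuming a hypothetical data frame `innov` with columns:
# country, year, exports, gerd, patents.
library(plm)

pdata <- pdata.frame(innov, index = c("country", "year"))

# "Within" (fixed-effects) estimator: high-technology exports regressed on
# R&D expenditure and triadic patent families, all in logs.
fe <- plm(log(exports) ~ log(gerd) + log(patents),
          data = pdata, model = "within")
summary(fe)
```

A fixed-effects specification of this kind absorbs time-invariant country characteristics, which is one reason panel analysis can answer questions that a single cross-section cannot.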
A country's increase in expenditure on R&D reflects a dedicated effort towards national innovation. We analyze the distribution of the expenditure for each of the performance sectors of business, government, and higher education. Figure 3 shows the distribution of business enterprise expenditure on R&D (BERD) as a percentage of GDP among countries. Business enterprise expenditure on R&D (BERD) BERD encompasses all R&D activities performed by enterprises and is expressed as a percentage of GDP to account for the size of the country's economy. As seen in Fig. 3, most countries show an increasing trend in BERD for the years 2000 to 2015. Israel takes the lead in BERD, followed by South Korea and Japan. Chile, Romania, and Latvia have the lowest expenditure. The distribution of government intramural expenditure on R&D (GOVERD) is shown in Fig. 4. Government intramural expenditure on research and development (GOVERD) As Fig. 4 shows, most countries show a decreasing trend in government expenditure on R&D for the years 2000 to 2015. Iceland shows a large drop, from 0.66% in 2000 to 0.11% in 2010. Chinese Taipei also shows a decrease, from a relatively high 0.55% in 2003 to 0.38% in 2015. Other countries with a relatively low share of GOVERD also demonstrate a decreasing trend over these years. For example, Denmark shows a fall from 0.27 to 0.07% by 2015. In general, we see that government expenditure on R&D takes a decreasing pattern over time. A possible explanation is that research carries a degree of uncertainty and risk, initially keeping businesses and private investors at bay and forcing the government to step in to fill this void and encourage innovation; as business investment in R&D matures, the government share recedes (Walwyn and Cloete 2016). The implications will be discussed in detail in the "Conclusions" section. Figure 5 shows the distribution of higher education expenditure (HERD). Higher education expenditure on R&D As Fig. 5 illustrates, a majority of countries show an increase over the years. Denmark, in particular, shows a large increase, from 0.44% in 2000 to 0.99% in 2015. Sweden, Switzerland, Portugal, and the Czech Republic show similar increases. A few countries demonstrate a decrease; Hungary is one of them (0.25% in 2003 to 0.17% in 2015). For the most part, the emphasis among countries on HERD is a promising trend. It reflects increasing awareness that R&D in higher education institutions offers integral stimulus to economic growth and innovation. Relatedly, most higher education institutions receive their funding from national governments and businesses (Eid 2012). Using regional and income-level classifications, we looked for differences in the distribution of expenditure (GERD) in each of the performance sectors of business, government, and higher education. Figure 6 shows the sectoral distribution by region. Percentage of GERD by region and sector Figure 6 shows that for all regions, the business sector shows the highest expenditure, followed by the education and government sectors. The private non-profit sector shows the lowest expenditure. In Europe and Central Asia, there is a greater proportion of expenditure from the higher education sector than the government sector. In terms of government expenditure on R&D, the Sub-Saharan region shows the highest spending, while the region of North America shows the lowest. We then mapped the sectoral distribution by income level (Fig. 7). GERD by income level and sector In both the high- and upper-middle-income countries, the business sector has the highest expenditure on R&D.
This is followed by the higher education sector in the high-income countries and the government sector in the upper-middle-income countries. Expenditure on R&D from the private non-profit sector is negligible for both income levels. It can be seen that the governments of high-income countries are less focused on spending on R&D. The implication of this finding is discussed in our conclusions. The source of funds for GERD is an important factor in analyzing the pattern of investments and expenditure for R&D. We wanted to see if there were any identifiable patterns in country-level funding for R&D. Figure 8 shows the source of funds for R&D expenditure for regions around the world. Source of funding for GERD by region Figure 8 shows the expenditure financed by government, industry, and other national and foreign sources. Funding from abroad represents an important source for expenditure because R&D is an activity that entails significant transfer of resources between such entities as organizations and countries. Our analysis shows notable regional variation in the structure of funding. For instance, in East Asia and Pacific, industry accounts for about two thirds of total GERD funding (66.51%), while government, other national, and foreign sources account for relatively low shares. In the Sub-Saharan region, 44.54% of expenditure is funded by industry and an almost equal share is sourced from the government (41.31%). Contrary to our hypothesis, therefore, only the developed regions show more funding from industry, while the developing (upper-middle-income) regions show equal or greater funding from the government. Industries play a vital role in business R&D expenditure and are pivotal to improving the innovative landscape of a country. Figure 9 shows the distribution of BERD among regions for the industries of service, aerospace, computer/electronic/optical, and pharmaceutical. Business enterprise expenditure on R&D (BERD) by industry and region Figure 9 shows that in East Asia and Pacific, the highest expenditure is by the computer/electronic/optical industry; service follows. In Europe and Central Asia, expenditure in the year 2000 is equal across all four industries, but during later years, the service industry dominates expenditures. The distribution is different in Latin America and the Caribbean, where more than 95% of expenditure is concentrated on the service and computer industries over the span we are studying. In North America, the service and computer industries dominate throughout the time span, with the aerospace industry taking the smallest share, at less than 20%. In the Sub-Saharan region, the service industry plays a dominant role. Note that our analysis for this region is limited because data is missing for the years 2010 to 2015; we nevertheless include the data that is available in order to provide as holistic an outlook as possible. In general, it appears that developed regions focus more on the technology sectors while developing regions focus more on service sectors. We looked at the number of researchers engaged in each performance sector as a percentage of the national total. Regions and countries differ in the allocation of personnel in the different sectors of business, government, and higher education. Since the government invests more in R&D in developing regions, we expected the same trend to be reflected in personnel.
Figure 10 shows the sectoral distribution of R&D personnel as a percentage of national totals for each region. R&D personnel by region and sector Figure 10 clearly displays structural variations in the regions for R&D personnel. The regions of East Asia and Pacific, Europe and Central Asia, and North America reveal a steady business sector pattern, engaging the highest share of personnel; the government sector engages the lowest. In other regions, such as Latin America and the Caribbean, the education sector leads in personnel. The business sector is significant as it shows a steady annual increase. It appears that in developed regions, the business sector engages a higher share of R&D personnel than in developing regions. We now move on to analyze patents. The average number of patent applications filed under the Patent Cooperation Treaty (PCT) reflects the extent of technological innovation in a country (Fig. 11). Figure 11 shows the patent applications filed under PCT for the years 2000 to 2016. Average number of patent applications filed under PCT In the figure, green denotes a high number and red a low number of applications. The USA has the highest number of patent applications, followed by Japan and China. By comparison, Russia, Argentina, and other countries see a very low number of patent applications, signaling a need for innovative focus. As for the number of patent applications filed under the European Patent Office (EPO) (Fig. 12), the USA leads Japan, Germany, and France. We now turn to applications for triadic patent families (Fig. 13). Average number of patent applications filed under EPO Applications for triadic patent families A triadic patent family is defined as a set of patents registered in patent offices of various countries to protect the same invention. The European Patent Office (EPO), the Japan Patent Office (JPO), and the United States Patent and Trademark Office (USPTO) are the three major patent offices worldwide. Triadic patent families are counted by the inventor's country of residence and the first date of registration of the patent. Indicators based on patent families offer better international comparability and quality than counts of individual filings. The greatest number of triadic patent families originated in the USA, followed by Japan, Germany, and France. In general, countries like the USA, Japan, Germany, and France are high in the number of patent filings under EPO and PCT. Co-inventions reflect international collaboration and represent the flow of information and knowledge across borders. They also indicate the flow of funds from multinational companies for R&D. Figure 14 shows the analysis for the percentage of patents owned by foreign residents under EPO. Percentage of patents owned by foreign residents under EPO In Fig. 14, countries represented in green indicate a high percentage of ownership of patents by foreign residents; the darker the green, the higher the percentage. Countries in red indicate a low percentage of such ownership. The USA, Japan, and most EU economies have a relatively low share of patents owned by foreign residents, and most of the patent ownership in these countries is local. By contrast, countries like Argentina, Russia, and Mexico show a high percentage of patents owned by foreign residents. These countries rely on foreign collaboration to strengthen their resources and facilities for innovation (Raghupathi and Raghupathi 2017).
This signals a need to strengthen local innovation by targeting education systems to offer relevant skills and knowledge that foster growth. The split of patent ownership between local and foreign residents is an interesting revelation that carries implications for national policies on taxation and innovation, and will be discussed further in our conclusions. In the analysis of the percentage of patents invented abroad (Fig. 15), the variation across countries is not as wide as it is for patents owned by foreign residents (as shown earlier, in Fig. 14). Percentage of patents invented abroad under EPO As Fig. 15 illustrates, the majority of countries in our study show that less than 1% of patents are invented abroad. Only Switzerland (1.16%) and Ireland (1.28%) show relatively high levels. We looked next at the differences in the distribution of patents by sector. We considered the technology domains of environmental technology, pharmaceutical, ICT, and biotechnology. Figure 16 shows the comparison of the number of patents in each technology domain to the benchmark of the total number of patent applications under PCT. Distribution of patents under PCT for various technology domains Environmental technology is the application of environmental science, green chemistry, environmental monitoring, and electronic technology to monitor and conserve the natural environment and resources and to mitigate the negative effects of humans on the environment. Sustainable development is the primary objective of environmental technology. Figure 16 shows the number of patents in the environmental technology sector and indicates a steady increase from 2000 to 2011 and a sudden decrease from 2011 to 2013. That said, the total number of PCT patents over the years shows an increasing trend. The biotechnology sector includes technological applications that use biological systems or living organisms to develop or make products for specific use. The number of patents in the pharmaceutical and biotechnology sectors has been increasing over the years. Among all the sectors and throughout the time span we studied, ICT saw a high number of patents and a consistently positive trend. The number of patents in this sector closely tracks the total number under PCT, highlighting the overall dominance of the ICT sector in patenting. We then looked for significant associations among sets of innovation indicators. We started with expenditure on R&D and R&D personnel in the sectors of business, government, and higher education. Association between R&D expenditure and R&D personnel Governments use R&D statistics collected by countries from businesses to make and monitor policy related to national science and technology. These statistics also feed into national economic statistics, such as GDP and net worth. Different performance sectors may have different kinds of associations between R&D expenditure and personnel. Figure 17 shows the associations between BERD and R&D personnel in the business sector. Association between BERD and R&D personnel in business enterprise sector Figure 17 shows a significant positive association (p < 0.0001) in the business sector between expenditure on R&D (BERD) and R&D personnel. The implication is that where business expenditure rises, we should also see an increase in R&D personnel in this sector. But this does not hold uniformly when we break down the analysis by region.
In Latin America and the Caribbean, while the percentage of researchers in the business sector is high, the expenditure is consistently low for all countries in the region. The explanation is that in developing regions such as these, while there is recognition of the need to focus on R&D by employing more personnel, the cost of deployment is relatively low. By contrast, in North America, both the R&D expenditure and R&D personnel are high, likely because of the high cost of deployment of personnel. In East Asia and Pacific, while some countries are high in both business expenditure and personnel, others are low in both. Across all regions, some countries show a large business expenditure on R&D with no associated increase in personnel. Examples include Israel in the Middle East and North Africa, Japan in East Asia and Pacific, and Slovenia in Europe and Central Asia. On the flip side, there are countries, Romania and Ireland (in the region of Europe and Central Asia) among them, that show a large fluctuation in personnel with little change in expenditure. In general, there is a positive association between R&D expenditure and R&D personnel in the business sector. Figure 18 shows the analysis of R&D personnel and intramural expenditure on R&D (GOVERD) for the government sector. Association between GOVERD and R&D personnel in government sector Figure 18 shows a significant positive association (p < 0.0001) between GOVERD and R&D personnel. Though the region of East Asia and Pacific is similar to Latin America and the Caribbean in government expenditure on R&D, it has a lower percentage of R&D personnel. This can be attributed to a relatively high cost of labor in East Asia and Pacific compared to Latin America and the Caribbean. In North America, both expenditure and personnel are lower than those of East Asia and Pacific. This is due to the increased emphasis on R&D by the business sector over the government. Figure 19 shows the relationship between expenditure and personnel in the higher education sector (HERD). Association between HERD and R&D personnel in higher education In the case of higher education (Fig. 19), we did not find a significant association between the expenditure on R&D and the percentage of researchers (p > 0.05). It is important to analyze gross domestic expenditure on R&D (GERD) because it represents an aggregate of the sectors of business, government, and higher education and because it is considered the preferred method for international comparisons of overall R&D expenditure. Figure 20 shows the relationship between expenditure (GERD) and R&D personnel in all the sectors. Association between GERD and R&D personnel in the sectors The relationship between GERD and the percentage of researchers is significant and positive (p < 0.0001) for the business sector, but significant and negative (p < 0.001) for the government and higher education sectors. This means that in the business sector, an increase in expenditure is associated with an increase in R&D personnel (the percentage of researchers). In the government and higher education sectors, an increase in expenditure is associated with a decrease in personnel. This highlights the fact that, in general, most of the R&D expenditure and personnel come from the business sector and not from the government or higher education sectors. We searched for associations between national exports and business expenditure on R&D (BERD) by industry to see if there were dominant patterns.
Association between exports and BERD by industry We analyze business expenditure on R&D and exports for different industries. Figure 21 shows the analysis for the aerospace industry. Exports and BERD in the aerospace industry Figure 21 depicts the exports and BERD for the aerospace industry for each country. The intensity of the color indicates the quantity of exports, while grid size denotes expenditure. Only countries for which data can be adequately mapped are shown in the diagram. The USA is the leader in exports and expenditure in this industry. France, Germany, the UK, and others show high exports and relatively low expenditure. Japan and China are low in both expenditure and exports in the aerospace industry. Figure 22 shows the exports and business expenditure on R&D for the computer/electronic/optical industry. Exports and BERD in computer, electronic, and optical industry In the computer/electronic/optical industry, the USA leads China, Japan, and Korea in terms of expenditure. In terms of exports, China is the leader. While the UK performed well in exports in the aerospace industry (shown in Fig. 21), it fares poorly in both exports and expenditure in the computer/electronic/optical industry. Italy and Israel show very low exports and expenditure in this industry. Figure 23 illustrates the analysis of exports and business expenditure for the pharmaceutical industry. Exports and BERD in pharmaceutical industry Figure 23 shows the USA leading in business expenditure on R&D in the pharmaceutical industry. In exports, Germany takes the lead, followed by the USA, the UK, France, Switzerland, and Belgium. China and Japan have very low exports but moderate expenditure, while Canada and Korea are low in both exports and expenditure in the pharmaceutical industry. Overall, the USA ranks high in exports and expenditure on R&D in all three industries; China does best in the computer industry; Japan does better in the computer and pharmaceutical industries than in aerospace; and the UK fares best in the pharmaceutical industry. Countries can use these findings to adjust their allocation of R&D resources across industries. Association between patents and R&D expenditure We analyzed the association between R&D expenditure and international cooperation in patents (Figs. 24, 25, and 26). Association between patents with foreign co-inventors and R&D expenditure Association between patents owned by foreign residents and R&D expenditure Association between patents invented abroad and R&D expenditure Figure 24 shows the association between the percentage of patents invented with foreign co-inventors and the four kinds of R&D expenditure. The association is significant (p < 0.0001) and negative for all types of R&D expenditure (GERD, BERD, GOVERD, HERD). With an increase in the percentage of R&D expenditure from any sector, the percentage of patents invented with foreign co-inventors decreases, implying more local innovation. This holds promise for countries trying to enhance innovation in terms of its contribution to GDP. For patents owned by foreign residents (Fig. 25), there is a significant negative relationship (p < 0.0001) with all four types of R&D expenditure. With an increase in expenditure on R&D in any sector, there is a decrease in the percentage of patents owned by foreign residents.
Specifically, our analyses reveal that with a 1% increase in gross expenditure on R&D, the percentage of patents owned by foreign residents decreases by 12.27%; for business expenditure, the decrease is 14.07%; for government and higher education expenditure, the decreases are larger still, at 58.95% and 46.25%, respectively. However, the association of R&D expenditure with patents invented abroad (Fig. 26) differs from the associations with patents invented with foreign co-inventors and patents owned by foreign residents. There are significant and moderately positive associations between patents invented abroad and R&D expenditure in all sectors, with the exception of government. As R&D expenditure in the business or higher education sector increases, the percentage of patents invented abroad increases. In the face of increasing expenditure, local innovation becomes more expensive, leading to more foreign collaboration. The exception is government expenditure. In this case, the relationship is significantly negative: as government expenditure on R&D increases, the percentage of patents invented abroad decreases. This finding has important policy implications for governments of developing countries, which should direct more resources to R&D with a view to improving local innovation and its contribution to GDP. This is discussed in detail in the conclusions section. We now discuss our second approach of econometric panel analysis. Econometric panel analysis Our panel analysis follows a threefold structure: first, we perform a regression analysis on exports for each industry; we then analyze the influence of R&D expenditure on patents; and lastly, we explore the international ownership of and investment in patents and the influence on exports. We deploy the plm package of R for all the analyses. There are certain steps that need to be taken in order to prepare the data for panel analysis. As a first step to ensuring integrity, we inspected the dataset for missing data (Table 3). As the results show, some variables had more than 20% missing values. We deleted these and used the Random Forest algorithm to fill in values for the remaining variables. Table 3 Missing value percentages for variables The descriptive statistics for the complete dataset are depicted in Table 4. Table 4 Descriptive statistics for the variables The next step was to ensure that the data is stationary and usable for panel analysis. For this, we did unit root testing with the Augmented Dickey-Fuller (ADF) test (Table 5). As seen in Table 5, the ADF values are all significant (p < 0.01), confirming the appropriateness of the data for panel analysis. Table 5 ADF test results The next test was to check for multicollinearity among variables. Since the preliminary correlation analysis revealed high correlation between certain variables, we computed the variance inflation factor (VIF) for the variables within each industry (Table 6). The results showed some VIFs above 10, confirming multicollinearity. We therefore deleted these variables and reran the test for the remaining ones. The results were now satisfactory, with all VIFs below 10. Table 6 shows the results before and after the multicollinearity analysis for each industry. The variables are now ready to be deployed into a regression model for each industry. Table 6 VIF tests before and after multicollinearity analysis In panel analysis, the commonly used approaches include the independently pooled model, the fixed effects model (closely related to the first-differenced model), and the random effects model.
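To make these preparation steps concrete, the sketch below outlines them in R under stated assumptions: missForest for the Random Forest imputation, tseries::adf.test applied series by series as a simplified stand-in for a dedicated panel unit root test, and car::vif for multicollinearity. The variable names are the hypothetical ones used earlier; this is an illustrative sketch, not the authors' exact script.

# A sketch of the data-preparation pipeline, under the assumptions noted above
library(missForest)  # Random Forest imputation
library(tseries)     # adf.test()
library(car)         # vif()

num_vars <- panel_df[, sapply(panel_df, is.numeric)]

# 1. Drop variables with more than 20% missing values
miss_share <- colMeans(is.na(num_vars))
num_vars   <- num_vars[, miss_share <= 0.20]

# 2. Impute the remaining gaps with Random Forests
num_vars <- missForest(num_vars)$ximp

# 3. Stationarity check: ADF test per series (a dedicated panel unit root
#    test such as plm::purtest would be the more rigorous choice)
adf_p <- sapply(num_vars, function(x) adf.test(x)$p.value)

# 4. Multicollinearity: inspect VIFs and drop variables until all fall
#    below 10 (EXPORT_COMPUTER is a hypothetical dependent variable)
fit <- lm(EXPORT_COMPUTER ~ ., data = num_vars)
vif(fit)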
The equation for the independently pooled model is shown below:
$$ y_{it} = a + \sum_{k=1}^{K} \beta_k x_{kit} + \varepsilon_{it} $$
In this equation, a represents the intercept, β_k represents the coefficient of attribute k, x_kit represents the value of attribute k for individual i at time t, and ε_it represents the residual of the model. The equation for the fixed effects model is as follows:
$$ y_{it} = \lambda_i + \sum_{k=1}^{K} \beta_k x_{kit} + \varepsilon_{it} $$
In the equation above, λ_i represents the intercept for each individual in the panel dataset, β_k represents the coefficient of attribute k, x_kit represents the attributes, and ε_it represents the residual of the model. The equation for the random effects model is shown below:
$$ y_{it} = a + \lambda_i + \sum_{k=1}^{K} \beta_k x_{kit} + \varepsilon_{it} $$
In the equation, a represents the common intercept of the model, λ_i represents the individual-specific component for each individual in the panel dataset, β_k represents the coefficient of attribute k, x_kit represents the attributes, and ε_it represents the residual of the model. However, as seen from the results of the ADF test (Table 5), all the variables satisfied the stationarity assumption, thereby eliminating the need for first-differenced models. We therefore decide on the final model through a two-step comparison: between the pooled and fixed effects models, and between the fixed and random effects models. We first use the analysis of variance (ANOVA) F test for both the time and country dimensions. Then, we use the LM test to check for random effects, and the Hausman test to compare the leading influence of fixed or random effects models. We now discuss the regression analysis by industry. Regression analysis for the computer industry We first did a regression analysis on exports for the computer industry. Table 7 shows the results for the comparison of the pooled and fixed effects models for the industry. Table 7 Comparison of pooling and fixed effects models for computer industry As shown in Table 7, the individual fixed effects model is better than the pooled model (p < 0.0001) and the time fixed effects model (p < 0.0001). We used the LM test (Table 8) to check for random effects, and the Hausman test (Table 9) to analyze the significance of the effects. Table 8 Test results of random effects for computer industry Table 9 Comparison of fixed effects and random effects model for computer industry The LM test (Table 8) confirms the significance of the random effects (p < 0.0001). The results of the Hausman test (Table 9) indicate that the random effects model is better than the fixed effects model. Accordingly, we ran the random effects model for the computer industry (Table 10). Table 10 Random effects model for the computer industry As Table 10 shows, five variables are significant in influencing exports: business expenditure (BERD), government expenditure (GOVERD), income level, gross expenditure on R&D performed by higher education (GERD_HIGH_PERFORM), and patent filings under EPO (PATENT_EPO). Of these, government expenditure is the most significant: a unit increase in government expenditure is associated with an almost 3-unit (2.8649) increase in exports. Time and country are significant influencers in this industry, along with government and business expenditure, income level, and educational performance.
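The two-step comparison just described maps directly onto functions in the plm package. The sketch below assumes the hypothetical panel_df with country and year identifiers and an illustrative formula; pFtest compares fixed effects against pooling, plmtest is the (Breusch-Pagan) LM test for random effects, and phtest is the Hausman test.

# A sketch of the model-selection workflow with plm, using an illustrative
# formula; the variable names are the hypothetical ones used throughout
library(plm)

pdata <- pdata.frame(panel_df, index = c("Country", "Year"))
f <- EXPORT_COMPUTER ~ BERD + GOVERD + INCOME_LEVEL +
     GERD_HIGH_PERFORM + PATENT_EPO

pooled <- plm(f, data = pdata, model = "pooling")
fixed  <- plm(f, data = pdata, model = "within", effect = "individual")
random <- plm(f, data = pdata, model = "random")

pFtest(fixed, pooled)         # F test: individual fixed effects vs pooling
plmtest(pooled, type = "bp")  # Breusch-Pagan LM test for random effects
phtest(fixed, random)         # Hausman test: fixed vs random effects

summary(random)               # final model if the tests favor random effects

A significant F test and LM test, combined with a Hausman test that does not reject the random effects specification, would point to the random effects model, the pattern reported above for the computer industry.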
Even though patents have a positive influence on the final exports, the small coefficient (0.0000794) indicates that the effect is much weaker than that of the other variables. Regression analysis for the aerospace industry Table 11 shows the comparison between the pooled and fixed effects models for the aerospace industry. Table 11 Comparison of pooling and fixed effects models for aerospace industry As shown in Table 11, the individual fixed effects model is better than the pooled model (p < 0.0001). In comparing the pooled and time fixed effects models, we see that time is not a significant influencer of exports in the industry. The results of the comparison between the fixed effects models also support the conclusion that the individual fixed effects model is better (p < 0.0001). Next, we performed the LM test to check for random effects (Table 12) and the Hausman test to compare the random and the fixed effects models (Table 13). Table 12 Test results of random effects for aerospace Table 13 Comparison of fixed effects and random effects model for aerospace The results of the LM test (Table 12) show the random effects to be significant (p < 0.0001) in the analysis for the industry. The results of the Hausman test (Table 13) show the fixed effects model to be better than the random effects model (p < 0.0001). We therefore ran a fixed effects model analysis for the aerospace industry (Table 14). Table 14 Aerospace individual fixed effects model result However, the results in Table 14 show that the model is not significant (p > 0.05) in explaining the relationships in the panel dataset for the industry. In light of this, we performed a random effects model analysis for the aerospace industry (Table 15). Table 15 Aerospace random effects model result In the random effects model shown in Table 15, the variables that have a positive influence on exports include business expenditure (BERD), government expenditure (GOVERD), income level, gross expenditure on R&D performed by higher education (GERD_HIGH_PERFORM), patent filings under EPO (PATENT_EPO), ICT patents (PATENT_ICT), triadic patents (PATENT_TRIADIC), patents in environmental technology (PATENT_ENV), and OECD membership (OECD_MEM). In terms of patents, we see that all four types of patents have a positive influence on exports, and the influence is much higher here than in the computer industry. Income level and OECD membership show a positive influence on exports in this industry. Exports analysis for the pharmaceutical industry The comparison between the fixed and pooled effects for the pharmaceutical industry (Table 16) shows that the individual fixed effects model is better (p < 0.0001). Time does not seem to be a significant factor in this industry (p > 0.05). In the comparison between the fixed effects models, the results also support the conclusion that the individual fixed effects model is better (p < 0.0001). Table 16 Comparison of pooling and fixed effects models for pharmaceutical We did the LM test to see if random effects are significant (Table 17) and the Hausman test to compare the random and fixed effects models (Table 18). Table 17 Test results of random effects for pharmaceutical Table 18 Comparison of fixed effects and random effects models for pharmaceuticals From the LM test results (Table 17), we see that the random effects are significant (p < 0.0001), and from the Hausman test (Table 18), we see that the random effects model better explains the relationships in this industry.
Accordingly, we ran the random effects model for the pharmaceutical industry (Table 19). Table 19 Pharmaceutical random effects model result The results in Table 19 show that patents, in terms of the number of applications as well as ownership and funding from abroad, play a more prominent role in exports of this industry than in the others. Patents and government-related variables In this section, we analyze the relationship between the number of patent applications and the variables relating to government, such as expenditure and personnel. The dependent variable is the number of patents under PCT (PATENT_PCT). The six independent variables include GOVERD, GOVER_RESEARCHER, INCOME_LEVEL, OECD_MEM, GERD_GOV_PERFORM, and GERD_GOV_FINANCE. Using a correlation matrix, we found that most of the coefficients are less than 0.5 for the variables (Table 20). We also calculated the variance inflation factor (VIF) to test for multicollinearity (also shown in Table 20). The results of the table show all variables having a VIF of less than 5, thereby indicating no multicollinearity. Table 20 Correlation coefficients and VIF test results Table 21 shows the comparison between the pooled and the fixed effects models. The results indicate that the individual fixed effects model is better than both the pooled (p < 0.0001) and the time fixed effects models (p < 0.0001). Table 21 Comparison of pooling and fixed effects models for the patent analysis We did the LM test (Table 22) and confirmed that the random effects model is significant (p < 0.0001) in this analysis. From the Hausman test (Table 23), we confirmed that the individual fixed effects model is better than the random effects model (p < 0.0001). Accordingly, we ran the individual fixed effects model for patents (Table 24). Table 22 Comparison of pooling and random effects model Table 23 Comparison of fixed and random effects model Table 24 Individual fixed effects model result The results of the individual fixed effects model (Table 24) show government expenditure on R&D (GOVERD) and gross expenditure on R&D in the government sector (GERD_GOV_PERFORM) as having a significant positive effect on the number of patent applications (PATENT_PCT). GOVERD, in particular, has a very large influence: a unit increase is associated with an increase of about 7040 units in patents. This reflects the importance of government investment in R&D as a driving factor for patents and for innovation. International patents and gross expenditure on R&D We now explore the influence of variables relating to gross expenditure on R&D (GERD, GERD_BUS_PERFORM, GERD_GOV_PERFORM, GERD_OTHER_FINANCE, GERD_IND_FINANCE, GERD_ABROAD_FINANCE, and GERD_HIGH_PERFORM) on the percentage of patents owned by foreign residents (COOP_OWN). We checked for multicollinearity with the VIF test and resolved it by deleting the affected variables and rerunning the test. Table 25 shows the results before and after the multicollinearity analysis. Table 25 VIF tests before and after resolving multicollinearity Table 26 shows the comparison between the pooled and fixed effects models. Table 26 Comparison of pooling and fixed effects models The results in Table 26 show that the fixed effects model is better (p < 0.0001) than the pooled model. Also, between the fixed effects models, the individual effects model is better (p < 0.0001). This means the influence of R&D expenditure on patents varies among countries. By contrast, time fixed effects are not significant (p > 0.05).
Table 27 shows the LM test, and Table 28 shows the Hausman test. Table 27 Test results of random effects Table 28 Comparison of fixed effects and random effects models From the results of the LM test (Table 27), we see that the random effects model is significant (p < 0.0001). The Hausman test (Table 28) shows that the individual fixed effects model is preferred for this analysis (p < 0.0001). Therefore, we constructed the individual fixed effects model for the variables (Table 29). Table 29 Fixed effects model result According to the model results in Table 29, GERD financing from abroad (GERD_ABROAD_FINANCE) and gross expenditure on R&D by the higher education sector (GERD_HIGH_PERFORM) have negative coefficients. This is the case with business (BERD) and government expenditures (GOVERD) as well. Of the variables, government expenditure has the largest influence on patent ownership by foreign residents: a unit increase in government expenditure on R&D is associated with a decrease of 27.94 units in patents owned by foreign residents. This is a large impact. In summary, a few things stand out from the panel analysis. First, the random effects model is better at explaining the relationship between exports and the independent variables in all three industries. Time and country factors are significant in the model. Government expenditure on R&D has a positive influence on exports in all three industries (coefficients = 2.865, 0.085, and 0.563, respectively), implying that increased government expenditure will positively influence innovation through exports. Business expenditure, government expenditure, and gross expenditure in higher education are all key drivers of country-level innovation. Second, the computer industry has the largest R-squared value among the three industries (0.41 in computer, 0.02 in aerospace, and 0.005 in pharmaceutical), indicating that the independent variables have a stronger influence on exports in the computer industry than in the others. The results also point out that, of the three, investment in R&D in the computer industry will have a stronger influence on national innovation. Third, for all three industries, in terms of patents, government expenditure and government financing have a positive correlation with the number of patent applications. For instance, a one-unit increase in government expenditure (GOVERD) is associated with a 7040-unit increase in the number of patent applications. This reflects the potential for government investment in R&D to influence innovative efforts. Fourth, there is a negative correlation between R&D expenditure financed from abroad and patents owned by foreign residents. This reflects the necessity for countries to ramp up internal financing for R&D so as to encourage local innovation. Scope and limitations There are some limitations to our study. First, although we cover a period of 16 years, future studies can look at a longer span, facilitating the prospect of uncovering more trends and patterns in the data. Second, we explore associations but not causality in the relationships among S&T indicators for innovation. Third, we consider a small segment of innovation indicators relating to S&T, whereas there is a gamut of variables that can be incorporated in future research. Fourth, the data are extracted from a secondary data source (OECD); aggregated data from multiple sources inherently pose some limitations. For instance, data is missing for some years or regions: either it was not collected, or it was collected but not reported.
We use patents as an indicator of innovation, but patents are subject to certain drawbacks. Many inventions are not patented, or inventors utilize alternative methods of protection such as secrecy and lead time. Furthermore, the propensity for patenting varies across countries and industries. Differences in patent regulations and laws pose a challenge for analyzing trends and patterns across countries and over time. Contributions and future research Despite the limitations, our study contributes in many ways to the literature on innovation and policy making. While most studies adopt a firm or enterprise level of innovation analysis, we deploy a country-level analysis using a large and comprehensive dataset from the OECD. The breadth of the indicators in the dataset allows for in-depth, multi-dimensional analysis. We utilize a dual methodology of visualization and panel analysis, each of which offers a suite of benefits in terms of research insights and knowledge of a phenomenon. Visualization is an assumption-free and data-driven approach, allowing the data to speak for itself. With no pre-conceived notions, the methodology allows previously undetected patterns and relationships to emerge from the data. Panel analysis provides the researcher with a large number of data points and reduces the issue of multicollinearity among the explanatory research variables. It therefore improves the efficiency of econometric estimates and allows for multidimensional investigation of a phenomenon. In addition to the contributions in terms of methodologies, the research adds to the literature on empirical innovation studies that deploy an analytic approach. By comparing innovation indicators at a national level, this study calls on policy makers to design appropriate horizontal or vertical S&T policies. The analysis of R&D expenditure by sector and by industry, along with R&D personnel, allows for effective and optimum resource allocation and talent distribution. Patent analysis is done incorporating individual applications as well as triadic families, thereby offering both an individual and a holistic perspective. The study presents insights on the phenomenon of international collaboration in the inventive process. By focusing on cross-border ownership of patents, the study highlights the international flow of knowledge and research funds between countries, offering valuable lessons for global policy making for innovation. While the study offers a panorama of results for innovation using S&T indicators, more theoretical and empirical work is needed to further advance our understanding of country-level innovation. An important direction for future work is to incorporate other forms of IP in addition to patents in analyzing national innovation. More research is also needed on the relationship between innovation indicators and important policy concepts, and on their performance in different domains. We view the role of culture in innovation as a particularly fertile area. It is not only important for a nation to be creative in imagining, developing, and commercializing new technologies, products, and services, but also to be adaptable in its attitudes towards change and its willingness to take risks (Hofstede 2001; Strychalska-Rudzewicz 2016). Our analysis shows differences between developed (high-income) and developing (middle-income) countries in terms of sectoral R&D expenditure. While in developing countries R&D expenditure is predominantly from the government sector, in developed countries it is from the business sector.
As a nation develops, governmental expenditure on R&D decreases. Businesses and education take on greater roles to fill the gap. It is no surprise that most higher education institutions receive funding from governments and businesses. The analysis of patents offers interesting revelations. Countries (such as the USA, Japan, Germany, and France) with a high share of patents (individual and patent families) show a low percentage of patents owned by foreign residents. The portfolio of local versus foreign resident ownership of patents offers major implications for taxation and innovation policies at a national level. Countries with high foreign ownership of patents have low tax revenues as a percentage of GDP (Raghupathi and Raghupathi 2017). This is because patents can be registered in countries other than those in which they were created, at little cost, and because income from IP is mobile (Griffith et al. 2014). Naturally, multinational companies routinely search for tax havens to locate their IP (Lipsey 2010). In order to spur more local innovation, countries with a large percentage of patents with foreign ownership introduce "patent boxes" that lower the tax rate on income derived from patents. For example, Belgium reduced the tax rate from 34 to 6.8% in 2007, the Netherlands from 31.5 to 10% in 2007, Luxembourg from 30.4 to 5.9% in 2008, and the UK from 30 to 24% in 2013 (Griffith et al. 2014). We call on countries that have a high proportion of foreign ownership of patents to devise policies and incentives to encourage ownership by local residents and boost innovation and tax revenues. Exports are an important aspect of innovation, and there is a relationship between exports and R&D expenditure in an industry. A high level of expenditure on R&D enables more exports by meeting higher standards, while a high level of exports allows countries to recover sufficient capital to focus on R&D. Countries' promotion efforts for exports should parallel ways to maximize innovation creation and economic development (Leonidou et al. 2011). Policy makers need to recognize that the extent of experience within a country affects which factors influence high-technology exports. Additionally, different innovation characteristics can be identified and encouraged to support national innovation. Lastly, countries need to institute horizontal and vertical S&T policies for innovation (Niosi 2010). Horizontal policies apply equally to all sectors (e.g., a tax credit for R&D). While these policies are easy to implement and can strengthen existing sectors, they do not contribute to the creation of new sectors. For new sectors to emerge, specifically high-technology ones that contribute to growth, resources have to be concentrated in that direction. This is very important for developing countries that are looking to reap comparative advantages in targeted sectors. These countries need to develop and apply vertical policies directed to selected sectors. Since we show in our analysis that the ICT sector takes the lead in the number of patents over the years for most countries, it would be worthwhile to target resources and efforts in this sector to boost national innovation. Stimulating local innovation lowers both dependence on foreign collaboration and foreign patent ownership. This holds promise for countries looking to enhance innovation's contribution to GDP.
Additionally, with globalization, economies are moving towards service-based and knowledge-based industries that are primarily ICT-driven, encouraging new patterns of growth and innovation (Raghupathi et al. 2014). Countries can attain a level of endogenous innovation using multifaceted incentives tied to science and technology indicators. However, these policies and reforms need to be constantly evaluated and revised in light of the evolving economic and educational infrastructure. We call on countries to design sophisticated national innovation ecosystems that integrate disparate policies of science, technology, finance, education, tax, trade, intellectual property, government spending, labor, and regulations in an effort to drive economic growth by fostering innovation. OECD: Organisation for Economic Cooperation and Development S&T: Science and technology Adams, J. D., Chiang, E. P., & Starkey, K. (2001). Industry–university cooperative research centers. The Journal of Technology Transfer, 26(1–2), 73–86. Aghion, P., & Howitt, P. (1998). Endogenous growth theory. Cambridge, MA: MIT Press. Akcali, B. Y., & Sismanoglu, E. (2015). Innovation and the effect of research and development (R&D) expenditure on growth in some developing and developed countries. Procedia-Social and Behavioral Sciences, 195, 768–775. Arrow, K. (1962). Economic welfare and the allocation of resources for invention. In: The Rate and Direction of Inventive Activity: Economic and Social Factors. Princeton University Press, Princeton, 609–626. Audretsch, D. B., Link, A. N., & Scott, J. T. (2002). Public/private technology partnerships: evaluating SBIR-supported research. Research Policy, 31(1), 145–158. Blind, K., Bührlen, B., Kotz, C., Menrad, K., & Walz, R. (2004). New products and services: analysis of regulations shaping new markets. Luxembourg: European Commission DG Enterprise. Chen, Q. (2008). The effect of patent laws on invention rates: evidence from cross country panels. Journal of Comparative Economics, 36(4), 694–704. Cohen, W. M. (2010). Fifty years of empirical studies of innovative activity and performance. Handbook of the Economics of Innovation, 1, 129–213. Collins, S., & Wakoh, H. (2000). Universities and technology transfer in Japan: recent reforms in historical perspective. The Journal of Technology Transfer, 25, 213–222. Dechezleprêtre, A., Ménière, Y., & Mohnen, M. (2017). International patent families: from application strategies to statistical indicators. Scientometrics, 111, 793–828. Eid, A. (2012). Higher education R&D and productivity growth: an empirical study on high-income OECD countries. Education Economics, 20(1), 53–68. Expert Panel on Business Innovation. (2009). Innovation and Business Strategy: Why Canada Falls Short. Ottawa: Council of Canadian Academies. Available at http://www.conferenceboard.ca/hcp/details/innovation/berd.aspx#_ftn2, Retrieved Sept 24, 2017. Falk, M. (2006). What drives business research and development (R&D) intensity across Organisation for Economic Co-operation and Development (OECD) countries? Applied Economics, 38, 533–547. Furman, J. L., & MacGarvie, M. J. (2007). Academic science and the birth of industrial research laboratories in the U.S. pharmaceutical industry. Journal of Economic Behavior & Organization, 63, 756–776. Gambardella, A., Harhoff, D., & Verspagen, B. (2008). The value of European patents. European Management Review, 5(2), 69–84. Gocer, I., Alatas, S., & Peker, O. (2016).
Effects of R&D and innovation on income in EU countries: new generation panel cointegration and causality analysis. Theoretical and Applied Economics, XXIII, 4(609), 153–164. Görg, H., & Strobl, E. (2007). The effect of R&D subsidies on private R&D. Economica, 74(294), 215–234. Grady, R., & Pratt, J. (2000). The UK technology transfer system: calls for stronger links between higher education and industry. The Journal of Technology Transfer, 25(2), 205–211. Griffith, R., Miller, H., & O'Connell, M. O. (2014). Ownership of intellectual property and corporate taxation. Journal of Public Economics, 112, 12–23. Griliches, Z. (1980). R&D and productivity slowdown. American Economic Review, 70, 343–348. Grossman, G. M., & Helpman, E. (1991). Innovation and growth in the global economy. Cambridge, MA: The MIT Press. Grossman, G. M., & Helpman, E. (1995). Technology and trade. In G. M. Grossman & K. Rogoff (Eds.), Handbook of international economics (Vol. 3). Amsterdam; New York: Elsevier. Grupp, H., & Mogee, M. E. (2004). Indicators for national science and technology policy: how robust are composite indicators? Research Policy, 33, 1373–1384. Hall, B. H. (2007). Patents and patent policy. Oxford Review of Economic Policy, 23(4), 568–587. Harhoff, D. (2009). Economic cost-benefit analysis of a unified and integrated European patent litigation system. Final report to the European Commission http://ec.europa.eu/internal/_market/indprop/docs/patent/studies/litigation/_system/_en.pdf. Harris, R. I., & Moffat, J. D. (2012). R & D, innovation & exporting in Britain: an empirical analysis (pp. 1–43). University of Glasgow: Centre for Public Policy for Regions. Hegde, D., Mowery, D. C., & Graham, S. (2009). Pioneers, submariners, or thicket-builders: Which firms use continuations in patenting? Management Science, 55(7), 1214–1226. Hobday, M. (1995). East Asian latecomer firms: learning the technology of electronics. World Development, 23, 1171–1193. Hofstede, G. (2001). Culture's consequences: comparing values, behaviors, institutions and organizations across nations (2nd ed.). Thousand Oaks, CA: Sage Publications. Hu, A. G. Z., & Jaffe, A. B. (2007). IPR, innovation, economic growth and development. Department of Economics, National University of Singapore. Jaffe, A. B. (2000). The U.S. patent system in transition: policy innovation and the innovation process. Research Policy, 29, 531–557. Kayal, A. A. (2008). National innovation systems: a proposed framework for developing countries. International Journal of Entrepreneurship and Innovation Management, 8(1), 74–86. Kemp, R., & Pearson, P. (2007). Final report of the MEI project measuring economic innovation. UM MERIT: Maastricht. Khan, M., & Khan, S. S. (2011). Data and information visualization methods, and interactive mechanisms: a survey. International Journal of Computer Applications (0975–8887), 34(1), 1–14. Kim, S. Y. (2014). Government R&D funding in economic downturns: testing the varieties of capitalism conjecture. Science and Public Policy, 41, 107–118. Koh, W. T. H., & Wong, P.-K. (2005). Competing at the frontier: the changing role of technology policy in Singapore's economic strategy. Technological Forecasting and Social Change, 72(3), 255–285. Krugman, P. (1986). Pricing to market when the exchange rate changes. NBER Working Papers: National Bureau of Economic Research Inc. Lee, Y. (2015). Evaluating and extending innovation indicators for innovation policy. Research Evaluation, 24, 471–488. Leonidou, L.
C., Palihawadana, D., & Theodosiou, M. (2011). National export promotion programs as drivers of organizational resources and capabilities: effects on strategy, competitive advantage, and performance. Journal of International Marketing, 19(2), 1–29. Lepori, B., Barre, R., & Filliatreau, G. (2008). New perspectives and challenges for the design and production of S&T indicators. Research Evaluation, 17(1), 33–44. Lipsey, R. (2010). Measuring the location of production in a world of intangible productive assets, FDI, and intrafirm trade. Review of Income and Wealth, 56, 99–110. Maksimovic, V., & Phillips, G. (2008). The industry life cycle, acquisitions and investment: does firm organization matter? The Journal of Finance, 63(2), 673–708. Malerba, F. (Ed.). (2004). Sectoral systems of innovation. Cambridge: Cambridge University Press. Mazzoleni, R., & Nelson, R. R. (2007). Public research institutions and economic catching up. Research Policy, 36(10), 1512–1528. Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: an expanded sourcebook. Thousand Oaks, CA: Sage Publications. Moser, P. (2005). How do patent laws influence innovation? Evidence from nineteenth-century world fairs. American Economic Review, 95(4), 1214–1236. Niosi, J. (2010). Rethinking science, technology and innovation (STI) institutions in developing countries. Innovation: Management, Policy & Practice, 12, 250–268. North, C. (2005). Toward measuring visualization insight. IEEE Computer Graphics and Applications, 11(4), 443–456. Organisation for Economic Cooperation and Development (OECD). (2009). Patent statistics manual. Paris: OECD. http://dx.doi.org/10.1787/9789264056442-en. Retrieved 1 Oct 2017. Organization for Economic Cooperation and Development (OECD). (1997). The measurement of scientific and technological activities, proposed guidelines for collecting and interpreting technological innovation data: Oslo manual (2nd ed.) http://www.oecd.org/dataoecd/35/61/2367580.pdf. Retrieved Oct 15, 2017. Organization for Economic Cooperation and Development (OECD). (2005). Guidelines for collecting and interpreting innovation data: Oslo manual (3rd ed.) http://unstats.un.org/unsd/EconStatKB/Attachment336.aspx?AttachmentType=1, Retrieved Oct 1, 2017. Posner, M. V. (1961). International trade and technical change. Oxford Economic Papers, 13(3), 323–341. Qian, Y. (2007). Do additional national patent laws stimulate domestic innovation in a global patenting environment: a cross-country analysis of pharmaceutical patent protection, 1978–2002. Review of Economics and Statistics, 89(3), 436–453. Qiquan, Y., Changlin, G., Weiguo, S., & Chen, W. (2006). Benchmarking national innovation capability: indicators framework and primary findings, OECD Workshop, Oct 19–20. Raghupathi, V., & Raghupathi, W. (2017). Innovation at country-level: association between economic development and patents. Journal of Innovation and Entrepreneurship, 6(4), 1–20. https://doi.org/10.1186/s13731-017-0065-0. Raghupathi, W., Wu, S., & Raghupathi, V. (2014). The role of information and communication technologies in global sustainability. Journal of Management for Global Sustainability, 2(1), 123–145. Rajan, R. G., & Zingales, L. (1998). Financial dependence and growth. American Economic Review, 88(3), 559–586. Rostow, W. W. (1959). The stages of economic growth. The Economic History Review, 12(1), 1–16. Sakakibara, M., & Branstetter, L. (2001). Do stronger patents induce more innovation? Evidence from the 1988 Japanese patent law reforms.
RAND Journal of Economics, 32, 77–100. Schumpeter, J. A. (2008). The theory of economic development: an inquiry into profits, capital, credit, interest and the business cycle. New Brunswick (U.S.A.) and London (U.K.): Transaction Publishers. Journal of Comparative Research in Anthropology and Sociology, 3, 137–148. Siegel, D., Thursby, J., Thursby, M., & Ziedonis, A. (2001). Organizational issues in university industry technology transfer: an overview of the symposium issue. The Journal of Technology Transfer, 26(1–2), 5–11. Siyanbola, W., Adeyeye, A., Olaopa, O., & Hassan, O. (2016). Science, technology and innovation indicators in policy-making: the Nigerian experience. Palgrave Communications. https://doi.org/10.1057/palcomms.206.15. Strychalska-Rudzewicz, A. (2016). The impact of national culture on the level of innovation. Journal of Intercultural Management, 8(1), 121–145. https://doi.org/10.1515/joim-2016-0006. Van Zeebroeck, N., & van Pottelsberghe, B. (2011). Filing strategies and patent value. Economics of Innovation and New Technology, 20(6), 539–561. Walwyn, D., & Cloete, L. (2016). Universities are becoming major players in the national system of innovation. South African Journal of Science, 112(7/8), 121–128. World Economic Forum (WEF). (2012). The global competitiveness report 2012–2013. http://www3.weforum.org/docs/WEF_GlobalCompetitivenessReport_2012-13.pdf, Retrieved Sep 29, 2017. There was no funding for the research. The datasets supporting the conclusions of this article are available in the OECD repository, https://stats.oecd.org. Koppelman School of Business, Brooklyn College of the City University of New York, 2900 Bedford Ave, Brooklyn, NY, 11210, USA Viju Raghupathi Gabelli School of Business, Fordham University, 140 W. 62nd Street, New York, NY, 10023, USA Wullianallur Raghupathi Both authors read and approved the final manuscript. Correspondence to Viju Raghupathi. Viju Raghupathi is an Associate Professor at the Koppelman School of Business, Brooklyn College of the City University of New York. She received her PhD in Information Systems from The Graduate Center, City University of New York. Her research interests include business analytics, social media, big data, innovation/entrepreneurship, sustainability, corporate governance, and healthcare. She has published in academic journals including Communications of the AIS, Journal of Electronic Commerce Research, IEEE Access, Health Policy and Technology, International Journal of Healthcare Information Systems and Informatics, Information Resources Management Journal, and Information Systems Management. Wullianallur Raghupathi is a Professor of Information Systems at the Gabelli School of Business, Fordham University, New York; Program Director of the M.S. in Business Analytics Program; and Director of the Center for Digital Transformation (http://www.fordhamcdt.org/). He is co-editor for North America of the International Journal of Health Information Systems & Informatics. He has also guest edited (with Dr. Joseph Tan) a special issue of Topics in Health Information Management (1999) and a special section on healthcare information systems for Communications of the ACM (1997). He was the founding editor of the International Journal of Computational Intelligence and Organizations (1995–1997). He also served as an Ad Hoc Editorial Review Board Member, Journal of Systems Management of the Association for Systems Management, 1996–1997. Prof.
Raghupathi has published 40 journal articles and written papers in refereed conference proceedings, abstracts in international conferences, book chapters, editorials, and reviews, including several in the healthcare IT field. Raghupathi, V., Raghupathi, W. Exploring science-and-technology-led innovation: a cross-country study. J Innov Entrep 8, 5 (2019) doi:10.1186/s13731-018-0097-0 Panel analysis
CommonCrawl
A company charges $2.50 per bottle when a certain beverage is bought
Edmund Adams 2021-11-16 Answered
A company charges $2.50 per bottle when a certain beverage is bought in lots of 120 bottles or less, with a price per bottle of $2.25 if more than 120 bottles are purchased. Let C(x) represent the cost of x bottles. Find the cost for the following numbers of bottles.
a) 90
b) 120
c) 150
Greg Snyder Answered 2021-11-17 Author has 8000 answers
a) Here the number of bottles is less than 120, so the price per bottle is $2.50. The price of 90 bottles is
(90) × ($2.50) = $225
b) The number of bottles is exactly 120, so the price per bottle is still $2.50. The price of 120 bottles is
(120) × ($2.50) = $300
c) Here the number of bottles is more than 120, so the price per bottle is $2.25. The price of 150 bottles is
(150) × ($2.25) = $337.50
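The piecewise pricing rule can also be written directly as a cost function. A minimal Python sketch (the function name cost is ours, not part of the original answer):

```python
def cost(x):
    """Cost C(x) in dollars for x bottles: $2.50 each in lots of
    120 bottles or less, $2.25 each if more than 120 are purchased."""
    price = 2.50 if x <= 120 else 2.25
    return x * price

# Reproduces the worked answers above:
print(cost(90))   # 225.0
print(cost(120))  # 300.0
print(cost(150))  # 337.5
```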
CommonCrawl
A whole canopy gas exchange system for the targeted manipulation of grapevine source-sink relations using sub-ambient CO2
Jason P. Smith ORCID: orcid.org/0000-0002-9841-36091,2 nAff3, Everard J. Edwards4, Amanda R. Walker4, Julia C. Gouot1,5, Celia Barril1,5 & Bruno P. Holzapfel1,6
Elucidating the effect of source-sink relations on berry composition is of interest for wine grape production as it represents a mechanistic link between yield, photosynthetic capacity and wine quality. However, the specific effects of carbohydrate supply on berry composition are difficult to study in isolation, as leaf area or crop adjustments can also change fruit exposure, or lead to compensatory growth or photosynthetic responses. A new experimental system was therefore devised to slow berry sugar accumulation without changing canopy structure or yield. This consisted of six transparent 1.2 m³ chambers to enclose large pot-grown grapevines, and large soda-lime filled scrubbers that reduced the carbon dioxide (CO2) concentration of daytime supply air to approximately 200 ppm below ambient. In the first full scale test of the system, the chambers were installed on mature Shiraz grapevines for 14 days from the onset of berry sugar accumulation. Three chambers were run at sub-ambient CO2 for 10 days before returning to ambient. Canopy gas exchange and juice hexose concentrations were determined. Net CO2 exchange was reduced from 65.2 to 30 g vine⁻¹ day⁻¹, or by 54%, by the sub-ambient treatment. At the end of the 10 day period, total sugar concentration was 77 g L⁻¹ under sub-ambient supply compared to 95 g L⁻¹ under ambient, from an average starting value of 23 g L⁻¹, representing a 25% reduction in the sugar accumulated over the period. Scaling to a per vine basis, it was estimated that 223 g of berry sugars accumulated under ambient supply compared to 166 g under sub-ambient, amounts equivalent to 50 and 72% of total C assimilated, respectively. Through supply of sub-ambient CO2 using a whole canopy gas exchange chamber system, an effective method was developed for reducing photosynthesis and slowing the rate of berry sugar accumulation without modifying yield or leaf area. While in this case developed for further investigations of grape and wine composition, the system has broader applications for the manipulation and study of grapevine source-sink relations.
In the wine industry, where high yields are often perceived to be negatively associated with quality, there is a need to better understand how source-sink relations and the availability of carbohydrates during ripening influence berry composition and wine quality attributes. Firstly, because significant resources can be devoted to the management of canopy structure, fruit light exposure and yield [1,2,3], and knowledge of the processes being targeted is important for predicting potential benefits for fruit composition and wine quality. Secondly, where the viticulture system, water availability, and climate allow it, higher yields provide an opportunity to increase production efficiency if quality can otherwise be maintained. A key challenge in understanding the yield-quality relationship is determining the extent to which berry metabolites are directly influenced by the availability of carbohydrates [4, 5], and separating these from responses to fruit exposure that may occur when canopy or crop adjustments are made [6,7,8,9]. Crop reductions in the form of shoot or bunch thinning have been demonstrated to improve aspects of berry composition and prevent delays in ripening in particularly high-yielding seasons [10, 11].
Conversely, as many wine producing regions face advanced ripening as the climate warms, the rate of sugar accumulation can be slowed by reducing the leaf area and photosynthetic output of the canopy [12,13,14]. However, in applying such treatments it can be difficult to ensure the fruit microclimate remains exactly the same across treatments, and compensatory responses may offset the intended effect on source-sink relations. Photosynthesis may be upregulated following leaf removal, while stored reserves can provide an alternative source of carbohydrates if photosynthetic supply is insufficient [15, 16]. If fruit is instead removed to increase sugar accumulation in remaining berries, this has been shown to have the desired effect in some studies [10], but minimal effect in other situations where yield or photosynthetic carbohydrate supply are apparently not limiting [17, 18]. An alternative method for the manipulation of whole plant carbon balance involves scaling up the system widely employed with single leaf gas exchange instruments, and modifying net canopy carbon assimilation by varying the amount of CO2 available for photosynthesis. Such an approach has been used for short term gas exchange measurements of whole tree canopies [19], with CO2 added to the air supply of two large chambers after scrubbing through columns of soda-lime. A similar capacity to vary CO2 supply above or below ambient for more extended periods with grapevines could provide a method of modifying photosynthetic carbohydrate supply relative to berry demand without the need to remove fruit or leaves. This would address the issue of fruit exposure changes, and combined with measurements of whole canopy gas exchange, the effects on net canopy photosynthesis in relation to effects on berry sugar accumulation could also be quantified. The aim of this study was to build and validate a multi-chamber system with CO2 scrubbing capacity to enclose the canopies of mature fruiting grapevines that were grown outside in large pots. The chambers were built to complement field based studies into crop load and fruit composition, and to provide a method to examine the relationship between berry sugar accumulation and key primary and secondary metabolites in the absence of fruit exposure differences. Although CO2 re-injection was tested, and could be readily added to the system for routine use, the initial series of experiments undertaken were run at ambient and sub-ambient CO2 concentrations. In development of the CO2 scrubbing system, a target reduction of 200 ppm, approximately half of ambient, was conceived to match the crop load variation that could be generated with commercially realistic crop or leaf area adjustments in the field. It also orientated the work towards the question of high yield effects on berry composition. In this paper, the design details of the system and its operation are described. To demonstrate the effectiveness of the approach for reducing photosynthesis and slowing ripening, gas exchange and berry sugar results are presented from the first experiment with the system, conducted at the onset of véraison.
Chamber environmental conditions
For whole canopy gas exchange chambers previously used with grapevine studies [20,21,22,23,24,25,26,27], designs have ranged from simpler 'balloon' type designs that enclose the entire canopy with a transparent plastic film [24, 25], to fully framed chambers that provide greater resistance to wind [26, 27].
The design of the chambers in the present study was primarily influenced by the canopy shape, which was wide relative to the length of the cordon, and therefore most effectively enclosed by a rectangular shape (Fig. 1a). Fabrication of the chamber tops from solid plastic panel avoided issues with damage from wind and trimmed shoots on the outside edge of the canopy. From observations during smoke tests, air mixing and movement through the chambers was uniform despite the rectangular shape, and the horizontal air entry from the six internal fans mixed the air underneath the vine before rising vertically through the canopy. The time taken for smoke to clear the chamber was approximately 1 min with the 2.6 m³ min⁻¹ daytime air flow rate, and the air flow did not result in any obvious leaf movement except where lower shoots hung in the immediate vicinity of the fan outlets. Although no attempt was made to describe the effects of solar elevation and reflection from the panels in detail, a comparison between photosynthetic photon flux density (PPFD) measured above the wire of the bird-proof enclosure with a sensor placed underneath a horizontal sheet of the same acrylic plastic showed just over 80% transmission at solar noon. With PPFD values of 2100 μmol m⁻² s⁻¹ usually attained on a clear summer day, light conditions within the chambers would be at, or close to, light saturation.
Fig. 1 Ambient air supplied gas exchange chambers installed on potted Shiraz grapevines (a). Design details (b) showing the two-part chamber base (1), two-part transparent acrylic chamber (2), six internal blower fans (3), air exhaust cover and baffle (4), ambient air intake (5) and removable lower side panels for fruit sampling (6). Additional text indicates the position of the air velocity sensor, air sampling and temperature sensor on the inlet side, and two points of internal air sampling and air temperature measurement
Air flow rates through open gas exchange chambers represent a compromise between minimizing internal temperature rise above ambient and maintaining a sufficient CO2 differential for accurate photosynthesis measurements. In a previous study with field-grown grapevines it was estimated that the internal temperature of a sun exposed 8 m³ chamber could be expected to rise +3 °C above ambient with two full air volume exchanges per minute [26]. Measured values with chambers designed to this specification were found to be consistent with this estimate, with temperatures at canopy height inside the chamber within 2.5 °C of the same position outside the chambers. Similarly, in another study with field grown grapevine, Pagay [23] reported a maximum rise of +3 °C under clear conditions for 2.5 m³ volume chambers. For chambers built for large pot-grown vines, Poni et al. [21] reported an average temperature rise for 0.6 m³ volume chambers of +1.8 °C and +2.4 °C for well-watered and water stress treatments, respectively. The authors of the latter study also suggested an alternative design guideline of a minimum of approximately 3 to 4 L s⁻¹ of air flow per square meter of leaf area to avoid overheating. For the three chambers configured for ambient air supply in the present study, with 2.2 air volume changes per minute, the mean maximum temperature rise over 14 days was +3.5 °C above the inlet temperature, and the mean daily temperature rise +1.4 °C (Fig. 2).
With an average leaf area of 4.49 m² in the ambient treatment (Table 1), the air flow rate relative to leaf area was 9.7 L m⁻² s⁻¹, more than double that suggested [21]. However, given the otherwise similar temperature rise within the chambers to previously described systems, higher air flow rates appeared to be required for the location of the study, which is characterized by high summer temperatures and solar radiation. The chamber temperature increase closely followed light intensity, reaching a peak at solar noon and then cooling below the inlet air temperature as the PPFD fell below 1000 μmol m⁻² s⁻¹ in the late afternoon. While the maximum temperature rise was higher than reported for other chamber systems, the mean inlet chamber temperature was 1.3 °C cooler than the air temperature measured at approximately the same height outside the chambers. The mean internal chamber temperature of 24.0 °C across the 14 days was therefore close to the mean ambient temperature of 23.8 °C. This may reflect the greater height of the intake chimneys above the concrete base of the bird-proof enclosure, which at 1.8 m above the ambient temperature sensor, was in more open conditions above the canopy.
Fig. 2 Diurnal pattern of ambient intake air temperature (a), temperature change from inlet to outlet (b), and vapour pressure deficit (c) of supply air for the three ambient air supplied chambers from December 26, 2013 to January 8, 2014. Shaded area ± standard deviation
Fig. 3 Gas exchange chamber with CO2 scrubber connected (a, b). Design details (c) indicating the intake for soda lime scrubbing tubes (1), ambient by-pass air and manual control valve (2, 3), lower chamber of fully scrubbed air (4), open-cell foam air filters (5), mixing chamber for by-pass and scrubbed air (6), centrifugal fan (7) and outlet to chamber (8). The lower pipe fitting below the by-pass valve is only included for support of the intake chimney, and is not continuous with the lower chamber of the scrubber
Table 1 Summary of reproductive and vegetative growth characteristics of three grapevines in each CO2 supply treatment
CO2 scrubber performance and operation
With fresh soda lime, and all connections fully sealed, the scrubbers had the capacity to deliver air that had CO2 completely removed. However, to facilitate refilling the tubes after the soda lime was exhausted, the pipe fittings that were used to construct the scrubbing beds were left as a push-on fit only. When connected to a chamber for normal operation (Fig. 3a), the lowest CO2 concentration obtainable was therefore ~15 ppm (Fig. 4a). With all flow being directed through the highest resistance pathway of the scrubbing beds, the scrubbers could deliver air with this concentration of CO2 at a volume of 1.5 m³ min⁻¹. At the other end of the operating range, when unrestricted flow was allowed through the inlet chimney and lowest resistance pathway, the scrubbers could deliver ~300 ppm CO2 at 3.5 m³ min⁻¹. If intermediate values between 300 ppm and ambient were required, then the scrubber tube inlets could be progressively covered to prevent residual air flow through the soda lime. To achieve the targeted 200 ppm reduction from ambient air required for the present study, and to match the intake of the chamber fans such that air flow was not modified relative to the ambient chambers, the scrubbers were run at approximately 50% bypass and close to full fan speed. Previously, Lloyd et al.
[19] provided a brief description of a scrubbing system that used 80 kg of soda lime to scrub air at a rate of 12 m³ min⁻¹ for large whole tree chambers. This equates to 6.6 kg of soda lime per 1 m³ min⁻¹ of air flow, which is comparable to the scrubbers in the present study with a ratio of 5.5 kg of soda lime per 1 m³ min⁻¹ of air when running at 50% bypass. The six fans within each chamber did have the capacity to pull air through the soda lime beds without the assistance of the scrubber fan if the duty cycle of their control signal was increased. However, the approach used here avoided a large vacuum developing between the chamber and scrubber, and allowed adjustments to be made with the manual scrubber controls rather than changing the microcontroller program that ran the chamber fans.
Fig. 4 Operating range (grey shaded area) of CO2 scrubbers as determined by fan speed and ambient air mixing (a). Each fitted line indicates a pre-programmed fan speed, and the points down each curve the progressive closure of the ambient air by-pass valve. Concentration difference of CO2 from ambient for the 10 day scrubbing period of the véraison experiment described in this paper (b), and an example from a subsequent experiment in 2015 (not presented) where the scrubbing tubes were re-wet on a daily basis to replace water lost due to evaporation (c)
With the first extended period of use of the scrubbers, a stable 200 ppm reduction in CO2 from ambient air could be maintained for 1 to 2 days depending on weather conditions. Forcing a higher proportion of air through the scrubbing beds could prolong this period to some extent, but after 3 or 4 days the soda lime needed replacement (Fig. 4b). Thus, the target reduction of 200 ppm could be achieved with fresh soda lime, but the average treatment difference was reduced to 177 ppm (Table 2). It was subsequently recognized that this loss of scrubbing effectiveness was due to drying of the soda lime rather than chemical exhaustion of the CO2 absorption reactants. For the soda lime reaction, water is required for the first stage, when CO2 from the air dissolves to form carbonic acid. Subsequent reactions with calcium hydroxide, catalysed by sodium or potassium hydroxide, lead to the production of calcium carbonate. In the presence of adequate water, the CO2 absorption capacity of the soda lime is therefore determined by the utilization of the calcium hydroxide. The Sofnolime® used in the present study had a moisture content of 16–20% according to the manufacturer's specifications, meaning that each scrubber tube would contain an average of 608 mL of water if filled with fresh absorbent. During the current study, the average water loss from each tube over the 10 day period was 278 mL based on the difference in water vapour concentration between the scrubber air outlet and the intake of the ambient chambers. In a later experiment with the system, where a daily addition of approximately 250 mL of water was made to each tube, a period of 12 days of effective scrubbing was maintained without the need for new soda lime (Fig. 4c). This period ended with the completion of the sub-ambient treatment rather than exhaustion of the soda lime.
Table 2 Summary of sub-ambient CO2 effects on daily canopy gas exchange parameters and water use efficiency (WUE) from December 26 to January 8
In addition to a gain in water vapour, the temperature of supply air also increased in transit through the CO2 scrubbers.
While not specifically apportioned to either effect, this was likely due to a combination of solar heating on the scrubber itself and the exothermic soda lime reaction. To compare the possible implications of this temperature rise for the fruit microclimate, an average of the inlet and outlet temperature has been used to approximate the temperature at the fruiting zone, approximately halfway between the fan outlets and the measurement point on the trellis foliage wire. For the 10 day scrubbing period, the mean day time temperature for the fruiting zone was 27.2 °C for the ambient chambers and 28.2 °C for the sub-ambient chambers, with a difference between treatments of 1.04 °C (Fig. 5). Including overnight temperatures, where the scrubbers were disconnected from the chambers, the mean overall temperature difference between treatments was 0.68 °C.
Fig. 5 Difference in average daytime chamber air temperature from ambient during the 10 day period of CO2 scrubbing and 4 day post-scrubbing period
Whole canopy photosynthesis and transpiration
During the 10 day scrubbing period, the mean CO2 depletion on transit through the chambers for one hour either side of solar noon was 23.6 and 12.1 ppm respectively for the ambient and sub-ambient treatments. An example period of gas exchange measurements for a full cycle of 6 chambers is shown in Fig. 6. For the ambient chambers, individual CO2 concentration differences between the intake and outlet reached 32 ppm, and for the sub-ambient chambers, 16 ppm. Under ambient CO2 supply, the maximum daily average photosynthesis rates were mostly between 9 and 11 μmol m⁻² s⁻¹ over the 14 day period, with occasional individual values above 12 μmol m⁻² s⁻¹. These rates are consistent with those reported for field-grown Cabernet Sauvignon with a similar training system to the one used in the present study [20], and intermediate to values of 2 to 6 μmol m⁻² s⁻¹ for shaded leaves and 12 to 17 μmol m⁻² s⁻¹ for sun-exposed leaves reported for field-grown vines under comparable irrigated hot climate growing conditions [28, 29]. During the 10 day period of sub-ambient CO2, which included two heavily overcast days, the average maximum photosynthesis rates were 6.5 and 11.4 μmol m⁻² s⁻¹ respectively for the sub-ambient and ambient treatments. For the whole canopy, these values were 29 and 54 μmol vine⁻¹ s⁻¹, respectively. For other studies with grapevine that have reported assimilation rates on a canopy basis, values of up to 60.7 μmol vine⁻¹ s⁻¹ for field grown Sauvignon Blanc were recorded by Petrie et al. [27], and ~30 μmol vine⁻¹ s⁻¹ for pot-grown Sangiovese by Poni et al. [21]. With an average leaf area of 4.5 m² for the vines in the present study, and ~4.6 and 2.87 m² respectively for these earlier studies, the CO2 assimilation rates would also be comparable on a leaf area basis.
Fig. 6 Example 30 min measurement cycle of the six chamber system showing inlet and outlet CO2 concentration (a) and corresponding water vapour concentrations (b). Filled symbols indicate values used for subsequent gas exchange calculations
The daily time courses of CO2 and H2O fluxes are shown in Figs. 7 and 8, respectively. The corresponding average daily sums for CO2 exchange and water use are shown in Table 2. The average net daily CO2 assimilation over the 10 days was 65.2 g for the ambient vines and 30 g for the sub-ambient, representing a reduction in carbon assimilation of 54%.
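As a rough cross-check, the quoted midday flux can be recovered from the mean CO2 depletion, the daytime chamber flow rate and the mean leaf area. A back-of-envelope sketch in Python, using only the rounded values quoted above (so the output is approximate):

```python
# Back-of-envelope check of midday canopy photosynthesis (ambient chambers).
# All inputs are rounded values quoted in the text.
flow_l_s = 2.6 * 1000 / 60      # daytime chamber flow: 2.6 m3 min-1 -> L s-1
molar_volume = 24.45            # L mol-1 of air at ~25 degC and 101.3 kPa
molar_flow = flow_l_s / molar_volume   # ~1.77 mol air s-1
delta_co2 = 23.6e-6             # mean midday CO2 depletion (mol mol-1)
leaf_area = 4.49                # mean leaf area, ambient treatment (m2)

a_vine = molar_flow * delta_co2 * 1e6  # umol CO2 vine-1 s-1 -> ~41.8
a_leaf = a_vine / leaf_area            # ~9.3 umol m-2 s-1
print(round(a_vine, 1), round(a_leaf, 1))  # within the 9-11 umol m-2 s-1 reported
```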
By slowing the airflow through the chambers at night, the system was also able to resolve the 1 to 3 ppm rise in CO2 due to respiration. These rates were higher early in the night than pre-dawn, although this is not clearly visible at the scale shown in Fig. 6. With mean night temperature used as a covariate, which was significant at P = 0.029, there was also a weakly significant difference in total respiration between treatments, with vines in the sub-ambient chambers losing an average of 3.1 g of CO2 compared to 3.8 g in the ambient chambers. At 10.5 and 6.0% of the day time CO2 assimilation, respectively, this had minimal effect on the net daily balance of CO2 exchange, but indicates a possible effect of carbohydrate (CHO) availability on night respiration. It has previously been demonstrated that dark respiration is correlated with leaf CHO fractions and in turn photosynthesis in the preceding light period [30]. Under elevated CO2, leaf CHO and dark respiration can increase in some species [31], suggesting that the reduced dark respiration of vines following the day period at sub-ambient CO2 in the present study could be due to a lower concentration of leaf CHO at the end of the day.
Fig. 7 Average diurnal canopy photosynthesis rates for ambient and sub-ambient CO2 supplied chambers (n = 3) for 10 days from December 26, 2013, and 4 days following the return of the sub-ambient chambers to ambient air supply on January 5. Measurement points shown in relation to Australian Eastern Daylight Time (AEDT). The corresponding supply CO2 difference for sub-ambient to ambient is shown in Fig. 4b
Fig. 8 Average diurnal canopy transpiration rates for ambient and sub-ambient CO2 supplied chambers (n = 3) for 10 days from December 26, 2013, and 4 days following the return of the sub-ambient chambers to ambient air supply on January 5
Under elevated CO2, stomatal conductance has in most cases been observed to decrease, leading to reduced water use on both a leaf and whole canopy basis [31, 32]. Reduced stomatal conductance has also been observed with mature field-grown grapevines under elevated CO2 [29, 33]. Conversely, if CO2 concentrations are lowered, stomatal conductance would normally be expected to increase [34, 35]. Although stomatal conductance was not measured in the current study, there was a trend for increased canopy water use across the 10 days of sub-ambient CO2 treatment, which would suggest an underlying stomatal response (Table 2). As the air supply from the scrubber was slightly warmer, with a day time average of 28.2 °C compared to 26.8 °C for the ambient supply chambers for the first 10 days shown in Fig. 6, it is possible that the transpiration increase could partly reflect the differences in chamber environmental conditions. However, with the evaporation of water from the soda lime beds, supply air water vapour concentration co-varied with temperature. The mean increase in vapour pressure deficit (VPD) was therefore offset to some extent, with average day time values of 2.87 and 2.68 kPa for the sub-ambient and ambient chambers respectively. The identical daily water use for both treatments as the sub-ambient chambers returned to ambient air supply in the last 4 days of the experiment would indicate that the 11% increase in transpiration during the scrubbing period was real.
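The paper does not state how VPD was calculated, but values of the magnitude quoted above follow from air temperature and humidity with a standard Tetens-type saturation vapour pressure formula. A sketch, assuming relative humidity as the second input (the formula choice is ours, not confirmed as the one used in the study):

```python
import math

def vpd_kpa(t_air_c, rh_percent):
    """Vapour pressure deficit (kPa) from air temperature (degC) and relative
    humidity (%), using the Tetens approximation for saturation vapour
    pressure. A standard formulation, not necessarily the exact one used."""
    e_sat = 0.6108 * math.exp(17.27 * t_air_c / (t_air_c + 237.3))
    return e_sat * (1.0 - rh_percent / 100.0)

# Illustrative input only: ~2.87 kPa at 28.2 degC and 25% relative humidity
print(round(vpd_kpa(28.2, 25), 2))
```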
For future studies where stomatal responses and transpiration were to be the main focus, CO2 could be scrubbed down to the sub-ambient target across all chambers and then re-injected to avoid the small temperature differences associated with the CO2 scrubbers. There may be some remaining effect of stomatal responses to CO2 on transpiration and VPD, which can be managed in chambers with re-circulated air [36], but would need further consideration with the open design used here. Irrespective of cause and significance, the slight rise in water use, coupled with the large decrease in photosynthesis under sub-ambient CO2, meant that water use efficiency was more than halved over the 10 day period, from an average of 7.1 to 3.0 g of CO2 assimilated per litre of water transpired (Table 2). As with all other parameters there was no significant difference between treatments on return of all chambers to ambient CO2 supply, which would indicate that the results here reflect the gas exchange responses of the initial canopy, and that the 10 days was not sufficient for acclimation or shoot growth responses when the canopy was already fully established.
Berry sugar accumulation
At the onset of ripening, unloading of sucrose from the phloem in grape berries transitions from symplastic to apoplastic [37]. Following import from the phloem, this sucrose is split into fructose and glucose by cell wall or vacuolar invertases and then stored in the vacuole. The combined total concentration of these sugars typically reaches 200 to 300 g L⁻¹ at maturity [38]. In the juice collected from the berry samples, fructose and glucose concentrations were significantly lowered by 10 days of sub-ambient CO2 supply (Table 3), indicating a reduction in sucrose translocation to berries as a result of reduced photosynthesis. However, despite the 54% reduction in net canopy photosynthesis, the sub-ambient treatment only reduced the gain in sugar concentration by 36%, with a gain of 30.8 g L⁻¹ over the 10 days compared to 48.5 g L⁻¹ under ambient CO2. On a whole vine basis, the sugar accumulated by the fruit in the ambient treatment was 223 g, representing just over 50% of assimilated C during the period. In contrast, fruit in the sub-ambient treatment accumulated 166 g of sugars, an amount equivalent to 72% of the C assimilated by photosynthesis during the period.
Table 3 Summary of berry composition changes following 10 days of sub-ambient CO2 supply, and comparison of total canopy carbon assimilation with berry sugar accumulation during the corresponding period
Although not assessed in the current study, carbohydrate reserves, and particularly those of the root system, can provide an alternative C supply for ripening fruit [39, 40]. Under heavy fruit load, the growth of new fine roots and C allocation to the roots can also be reduced [41]. Thus, the mobilization of stored C, or diversion of a greater percentage of current assimilate away from vegetative parts of the vines, may have been sufficient to offset some of the effects of reduced photosynthesis on berry sugar accumulation. While upregulation of photosynthesis was a possibility based on findings of earlier studies [e.g. 15], net canopy C assimilation was measured here and shown to be reduced by over 50%. The relative increase in the proportion of C accumulated by the fruit therefore highlights the high sink strength of developing berries at véraison, and their capacity to compete for C under source limitation.
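The 50% figure for the ambient treatment can be verified by converting both sides of the balance to carbon: CO2 is 12/44 carbon by mass, and hexoses (C6H12O6) are 72/180 = 0.40. A sketch of that arithmetic using the rounded values quoted in the text (the same calculation for the sub-ambient treatment deviates somewhat from the published 72%, since the quoted daily sums are rounded):

```python
# Fraction of assimilated C recovered as berry hexose (ambient treatment).
co2_uptake = 65.2 * 10          # g CO2 vine-1 over the 10 day period
sugar_gain = 223.0              # g hexose accumulated per vine over 10 days

c_assimilated = co2_uptake * 12.0 / 44.0   # g C in assimilated CO2
c_in_sugar = sugar_gain * 72.0 / 180.0     # g C in C6H12O6
print(round(c_in_sugar / c_assimilated, 2))  # ~0.50, matching the text
```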
The combination of custom built soda lime-filled CO2 scrubbers and whole canopy gas exchange chambers was found to be an effective method for reducing the net carbon assimilation of grapevine canopies. Even at the early stage of berry ripening, which was just commencing when the chambers were installed, the reduction in canopy photosynthesis resulted in a significant reduction in berry sugar accumulation. At the concentrations used in the present study, where CO2 was reduced to an average of 235 ppm compared to ambient at 412 ppm, there was an average 52% reduction in day time carbon assimilation, which was consistent with the near-linear Rubisco response to CO2 across this range. While the CO2 scrubbers developed in this study were designed for the purpose of slowing berry sugar accumulation and could maintain sufficient air flow across a range of approximately 180 to 300 ppm, any desired range could be achieved with a combination of two scrubbers connected in parallel and additional CO2 injected back into the air supply. This would extend the potential application of the system to increasing as well as decreasing photosynthetic carbon supply, and extend the scope to source- as well as sink-limited studies.
Location and grapevine growing conditions
All work described in this paper was undertaken at the National Wine and Grape Industry Centre, Charles Sturt University in Wagga Wagga, New South Wales, Australia (35.06 °S, 147.36 °E), and utilized an existing population of large potted Shiraz grapevines (Vitis vinifera L., clone PT23) that were located in a bird-proof wire mesh enclosure. The vines were planted on own roots in 2008, and trained to a single spur-pruned bilateral cordon on a fixed fruiting wire 80 cm from the ground. A second fixed wire at 120 cm above the ground provided support for the canopy. The pots were 52 L in volume (500 mm wide × 380 mm high), and arranged with 1 m spacing in rows of 5 vines in an E-W alignment with 3 m between rows. Pots were filled with a commercial bulk-composted potting mix, and fertilized manually with diluted liquid fertilizer (Megamix Plus®, RUTEC, Tamworth) and additional applications of magnesium sulphate and gypsum. Irrigation was provided via a 4 L h⁻¹ pressure compensated emitter installed on each side of the trunk to provide water to two sides of the root system. Irrigation was controlled by an automated timer, with the schedule and duration of three to four daily irrigations adjusted manually according to seasonal water requirements. Shoot numbers ranged from approximately 30 to 38 per vine, providing a sprawl canopy with similar structure and density to field-grown grapevines.
Gas exchange chambers
Six gas exchange chambers were constructed with transparent acrylic tops that incorporated a lightweight plastic and aluminum frame (Cubelok, Capral Limited, Parramatta, Australia), and a base made from polyethylene plastic panels and a galvanized steel frame. The chamber top and base were both designed in two halves that could be installed from each side of the trellis with minimal impact on the shape of the canopy (Fig. 1). The chamber top enclosed a volume of 1.2 m³ and was constructed from flat sheets of 3 mm thick acrylic on the sides parallel to the vine row, and 4 mm thick on the two sides perpendicular to the vine row. Under clear sky, light transmission between 400 and 700 nm was 95%.
An 85 mm gap between the tops of the 3 mm sheets allowed outgoing air to exit the chamber, with an overlapping ridge cap supported 24 mm above the chamber sides to minimize ambient air intrusion during windy conditions. When installed, the maximum outside dimensions of the assembled chambers were 1620 mm perpendicular to the row, 1000 mm parallel to the row and 2000 mm high. The external panels of the chamber base were constructed from 19 mm plastic boards made from a combination of high density polyethylene and polypropylene (UniboardECO, Dotmar Plastic Solutions, Sydney, Australia). On the top of each half of the chamber base, three 24 VDC blower fans (San Ace B97, Model 9BMB24P2G01, Sanyo Denki Co. Ltd., Tokyo, Japan) were mounted against a 64 mm diameter hole, and angled to mix and distribute air in a horizontal plane across the base of the chamber. An air-tight rectangular box made from a combination of 12 mm and 6 mm sheets of the same plastic was fixed to the underside of each chamber base half to enclose the air intake of the three fans. A port on the underside of each of the boxes was then connected to a common air intake made of 105 mm internal diameter (nominal size 100 mm) polyvinyl chloride (PVC) pipe and compatible fittings. This air intake could either be attached to a 3 m high chimney for ambient air, or to the outlet of the CO2 scrubber described in the next section. The air inside the chamber was then displaced vertically through the canopy and exited through the vent at the top of the chambers. Smoke tests (DATAX, BJÖRNAX AB, Nora, Sweden) were used to assess the uniformity of air flow through the chambers. Removable panels on the lower face of the acrylic chamber tops allowed access to fruit on both sides of the vine for berry sampling. Adhesive tape was used to seal the gaps created by the trellis wires between the chamber halves, and adhesive foam weather strips were used between the panels of the chamber top and base. The speed of the 6 fans was controlled by a pulse width modulation (PWM) signal from an Arduino® UNO compatible micro-controller mounted inside the intake box in one side of the chamber base. During the day, the PWM signal was programmed to a duty cycle of 25%, and an input signal from a manually operated switch was used to slow the PWM to a duty cycle of 10% at night. For the day speed, this equated to a flow rate of approximately 2.6 m³ per minute, or 2.2 chamber volumes per minute, and at night, 1.6 m³ per minute or 1.3 chamber volumes per minute.
CO2 scrubbers
A portable CO2 scrubber was designed so that it could be attached to a chamber in place of the ambient air intake chimney, and reduce the CO2 concentration of supply air by approximately 200 ppm compared to ambient air (Fig. 3). Three scrubbers were constructed from a combination of 15 mm plywood and 100 mm nominal diameter (105 mm ID) PVC pipe and fittings that completely scrubbed all CO2 from incoming air and then mixed this back in with ambient air to obtain the desired CO2 concentration. Each scrubber contained a total of 27 kg of soda-lime (Sofnolime®, Molecular Products Limited, Harlow, UK), divided equally across eight tubes of 100 mm diameter and 480 mm depth that were connected in parallel. Air was pulled through the soda-lime beds, and then an open-cell foam filter under vacuum, using a single 24 VDC centrifugal fan (SanAce C175, Model 9TG24P0G01, Sanyo Denki Co. Ltd).
A manually operated valve was used to allow some air from an ambient inlet chimney at 3 m to by-pass the scrubbing beds on the upstream side of the fan. Fan speed was controlled by a PWM signal in the same manner as those in the chamber bases, but a potentiometer was used instead of a switch to provide variable control over the fan speed. Through a combination of ambient air bypass and fan speed control, the design allowed variable control over the airflow rate and CO2 concentration range. To minimize temperature differences between the ambient and low CO2 treatments, scrubbers were covered with reflective foil to reduce solar heating, and ice was placed on top of the scrubbers and replaced during the day as required. To test the full operating range of the scrubbers, a series of measurements were made between the minimum and maximum fan speeds, with the bypass valve closed progressively until the maximum amount of air possible was forced through the scrubbing beds. A portable infrared gas analyser (Li400XT, LI-COR Biosciences, Lincoln, Nebraska, USA) was used to measure the concentration of outgoing CO2, and the process was repeated with a second scrubber.
Gas exchange measurements
When assembled and running as a complete system with six chambers, the CO2 and H2O concentrations of incoming and outgoing air were recorded on a 30 min cycle, providing 48 measurement points in a 24 h period. During the measurement period for each chamber, air was sampled simultaneously from the chamber inlet and outlet at a rate of approximately 2 L min⁻¹ via 10 m of 6 mm external diameter polyethylene tubing using a pair of diaphragm pumps (SP550 EC-BLa, Schwarzer Precision GmbH + Co. KG, Essen, Germany) connected in parallel on each line. As shown in Fig. 1b, the inlet air sample was drawn from the PVC air intake tube on the downstream side of the intake fans. The outlet sample was taken from two points within the chamber, 20 cm below the air outlet and 30 cm apart, to provide an average air sample. For both inlet and outlet lines, the two pumps were connected in parallel across the outlets of six individually switchable solenoid valves (V2 miniature pneumatic solenoid valve, Parker Hannifin Corp., Cleveland, Ohio, USA) to avoid dead-air volumes and maximize flushing between cycles. These pumps then delivered the air to a 50 mL buffer container that was then sub-sampled via a switchable solenoid valve by a single pump (SP550 EC-BLa) at a rate of approximately 800 mL min⁻¹ for subsequent CO2 and H2O measurements. Excess air from the 50 mL buffer volumes vented to atmosphere. Carbon dioxide and H2O vapour concentrations were measured at 5 s intervals using an infrared gas analyser (LI-840A, LI-COR Biosciences, Lincoln, Nebraska, USA) set to 50 °C, and the average value recorded at 30 s intervals with an external data logger (CR1000, Campbell Scientific, Logan, Utah, USA). For the inlet air, these measurements were made for 2 min, and for the outlet air sample, 3 min. A relay controller (SDM-CD16AC, Campbell Scientific) was used to drive the progressive switching of the six pairs of solenoid valves at 5 min intervals, and within that period, to switch the single sub-sampling solenoid from the chamber inlet to outlet after 2 min. Data points recorded during transition periods were removed during subsequent processing, leaving two inlet measurements and four outlet measurements to average for subsequent gas exchange calculations.
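The averaged inlet and outlet concentrations feed the flux equations given under 'Gas exchange calculations' below (Eqs. 1–3). A minimal Python sketch of that computation, with variable names following the equations and inputs assumed to be in the units stated there (this mirrors, but is not, the R scripts used in the study):

```python
def molar_flow(fv, t, p):
    """Eq. 1: molar air flow ue (mol s-1) from volumetric flow fv
    (cm3 min-1), air temperature t (degC) and pressure p (kPa)."""
    return (fv / 1000.0) * (1.0 / 22.4) * (273.15 / (273.15 + t)) * (p / 101.3) / 60.0

def transpiration(ue, s, we, wo):
    """Eq. 2: canopy transpiration E (mol H2O m-2 s-1); we and wo are inlet
    and outlet water vapour concentrations (mol mol-1), s is leaf area (m2)."""
    return (ue / s) * (wo - we) / (1.0 - wo)

def photosynthesis(ue, s, we, wo, ce, co):
    """Eq. 3: canopy net photosynthesis A (mol CO2 m-2 s-1); ce and co are
    inlet and outlet CO2 concentrations (mol mol-1)."""
    e = transpiration(ue, s, we, wo)
    delta = ce - co                      # inlet minus outlet CO2
    return (ue / s) * ((1.0 - we) / (1.0 - wo)) * delta - e * ce
```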
Chamber and external environmental parameters were recorded with a second data logger and multiplexer (CR1000 and AM25T, Campbell Scientific, Logan, Utah). Air inlet temperature was measured with T-type thermocouples (24 AWG) at a single point inside the inlet tube under the chamber base, and at two outlet points on each side of the canopy (wired in parallel for an average value) from under white plastic radiation shields attached to the fixed foliage wire. Air velocity was recorded in the centre of the inlet tube with a hot-film element sensor (EE576 or EE671, E + E Elektronik, Engerwitzdorf, Austria). Photosynthetically active radiation outside the chamber was recorded with a quantum sensor (LI-190R, LI-COR Biosciences), and temperature and relative humidity (HMP-50, Vaisala, Helsinki, Finland) inside the bird-proof enclosure were recorded at the same intervals as the chamber measurements.
Sub-ambient CO2 experiment
At the first visual indication of berry colour change in late December 2013, the chambers were installed on the canopies of six grapevines selected from the larger population in the bird-proof enclosure. For a period of 10 days from December 26, three chambers were supplied with ambient air, and three chambers connected to the CO2 scrubbers and supplied with air at a target of 200 ppm below ambient during daylight hours. While no pre-treatment comparison was able to be made due to an earlier than expected onset of véraison, from January 5 all chambers were run at ambient CO2 for an additional 4 days to compare them in the post-treatment period. When the chambers were removed, the length of every leaf from each vine was recorded. Using a regression between leaf length and leaf area (measured with an LI-3100C, LI-COR Biosciences), established with leaves sampled destructively from vines that had not been used in the chambers, the total leaf area of each vine was then calculated.
Berry sampling and analysis
Berry samples collected on the first day of the 10 day scrubbing period, and then on the morning of the eleventh day, were used to assess the effect of the sub-ambient treatment on berry sugar accumulation. Accessed via removable panels on the side of the chambers, 20 random berries were collected from each side of the vine to provide a 40 berry sample. The samples were immediately weighed, separated into skin and seeds over ice and then frozen in liquid nitrogen. The pulp was manually homogenised while cooled over ice, and the juice separated and 1 mL sub-samples frozen in liquid nitrogen. Remaining juice was used to determine soluble solids. All samples were stored at −80 °C prior to subsequent analysis. Juice samples were thawed, vortexed to mix and filtered to 0.22 μm, and fructose and glucose concentrations were determined by HPLC-RI on a 300 mm × 7.8 mm Aminex HPX-87H ion exclusion column (Bio-Rad Laboratories, Berkeley, USA) using the method of Frayne [42]. To estimate the total amount of sugar accumulated by each treatment in the 10 days between the first and last berry sampling, sugar content per berry was calculated using the weight to volume relationship for Shiraz described by Gray and Coombe [43]. Fruit from all vines was harvested and weighed on February 12, and using harvest sampling berry weights, the total number of berries per vine was calculated. Allowing for the removal of berries at each sampling date, an estimate of total sugar content per vine could be made based on total berry volume and juice sugar concentration.
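A sketch of the per-vine sugar estimate described above. The Shiraz weight-to-volume relationship of Gray and Coombe [43] is not reproduced here, so a constant juice density stands in for it (the 1.05 g mL-1 value, berry weight and berry count below are illustrative assumptions, not data from the paper):

```python
def sugar_per_vine(berry_weight_g, berries_per_vine, sugar_g_per_l,
                   g_per_ml=1.05):
    """Estimate total berry sugar per vine (g) from mean berry weight (g),
    berry count per vine, and juice sugar concentration (g L-1).
    g_per_ml is a stand-in for the weight-to-volume relationship of
    Gray and Coombe [43]; substitute that relationship for real use."""
    berry_volume_l = berry_weight_g / g_per_ml / 1000.0
    return berry_volume_l * berries_per_vine * sugar_g_per_l

# e.g. 1 g berries, 2500 berries per vine and 95 g L-1 juice -> ~226 g,
# the same order as the 223 g reported for the ambient treatment
print(round(sugar_per_vine(1.0, 2500, 95)))
```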
Gas exchange calculations
Volumetric air flow was calculated from the inlet air velocity measurements for the 5 min period of gas exchange for each chamber and converted to molar air flow (Eq. 1) as per Long and Hallgren [44], where ue = molar flow of air (mol s⁻¹), fv = volumetric air flow (cm³ min⁻¹), 22.4 = volume (dm³) of one mole of air at standard temperature and pressure of 273.15 K and 101.3 kPa respectively, T = air temperature (°C), and P = atmospheric pressure (kPa). The inlet thermocouple for each chamber was used for the air temperature, while atmospheric pressure was obtained from the Australian Bureau of Meteorology Wagga Wagga airport weather station (35.16 °S, 147.46 °E), situated approximately 15 km south east of the experiment location.
$$ u_e=\frac{f_v}{1000}\cdot\frac{1}{22.4}\cdot\frac{273.15}{273.15+T}\cdot\frac{P}{101.3}\cdot\frac{1}{60} \qquad (1) $$
Average canopy transpiration (E; Eq. 2) and photosynthesis (A; Eq. 3) rates were calculated using the following equations [45], where s = leaf area (m²), we and wo = inlet and outlet water vapour concentrations (mol mol⁻¹) respectively, Δ = difference between inlet and outlet CO2 concentrations (mol mol⁻¹) and ce = inlet CO2 concentration (mol mol⁻¹).
$$ E=\frac{u_e}{s}\cdot\frac{w_o-w_e}{1-w_o} \qquad (2) $$
$$ A=\frac{u_e}{s}\cdot\frac{1-w_e}{1-w_o}\cdot\Delta - E\,c_e \qquad (3) $$
Outlier values were screened based on the ratio of the inlet and outlet CO2 concentrations and, to allow daily sums to be calculated, were replaced with the average of the two adjacent readings for that chamber. An outlier or potentially incorrect reading was defined as an outlet concentration more than 10% lower than the inlet during the day, or an outlet concentration more than 2% higher at night. Only 25 points from 4032 measurement points over 14 days required replacement with this method, and in most cases these could be explained by berry sampling, or the connection or disconnection of the CO2 scrubbers inadvertently coinciding with the 5 min monitoring period for each chamber.
For the gas exchange data, the three chambers in each treatment were blocked according to the leaf area of each vine and analysed as a split plot experiment with treatment as the main plot and time as the subplot factor using Genstat v18 (VSNI, Hemel Hempstead, England, UK). Mean night air temperature inside the chamber was used as a covariate for comparison of night respiration rates. The 10 day period with the ambient and sub-ambient treatments was analysed separately from the 4 day period. Treatment means for single measurements were compared using Student's t-test.
The datasets used and/or analysed during the current study, as well as design files for the chambers and CO2 scrubbers, are available from the corresponding author on reasonable request.
CHO: carbohydrates
CO2: carbon dioxide
PPFD: photosynthetic photon flux density
PWM: pulse width modulation
VPD: vapour pressure deficit
Smart RE, Dick JK, Gravett IM, Fisher BM. Canopy management to improve grape yield and wine quality-principles and practices. S Afr J Enol Vitic. 1980;11:3–17. Jackson DI, Lombard PB. Environmental and management practices affecting grape composition and wine quality-a review. Am J Enol Viticult. 1993;44:409–30. Poni S, Gatti M, Palliotti A, Dai Z, Duchêne E, Truong TT, Ferrara G, Matarrese AM, Gallotta A, Bellincontro A, Mencarelli F. Grapevine quality: a multiple choice issue. Sci Hort. 2018;234:445–62. González-Sanjosé ML, Diez CJ.
Relationship between anthocyanins and sugars during the ripening of grape berries. Food Chem. 1992;43:193–7. Bobeica N, Poni S, Hilbert G, Renaud C, Gomès E, Delrot S, Dai Z. Differential responses of sugar, organic acids and anthocyanins to source-sink modulation in cabernet sauvignon and Sangiovese grapevines. Front Plant Sci. 2015;6:382. Tarara JM, Lee J, Spayd SE, Scagel CF. Berry temperature and solar radiation alter acylation, proportion, and concentration of anthocyanin in merlot grapes. Am J Enol Viticult. 2008;59:235–47. Downey MO, Dokoozlian NK, Krstic MP. Cultural practice and environmental impacts on the flavonoid composition of grapes and wine: a review of recent research. Am J Enol Viticult. 2006;57:257–68. Young PR, Eyeghe-Bickong HA, du Plessis K, Alexandersson E, Jacobson DA, Coetzee Z, Deloire A, Vivier MA. Grapevine plasticity in response to an altered microclimate: sauvignon Blanc modulates specific metabolites in response to increased berry exposure. Plant Physiol. 2016;170:1235–54. Gouot JC, Smith JP, Holzapfel BP, Walker AR, Barril C. Grape berry flavonoids: a review of their biochemical responses to high and extreme high temperatures. J Exp Bot. 2019;70:397–423. Reynolds AG. 'Riesling' grapes respond to cluster thinning and shoot density manipulation. J Am Soc Hortic Sci. 1989;114:246–68. Petrie PR, Clingeleffer PR. Crop thinning (hand versus mechanical), grape maturity and anthocyanin concentration: outcomes from irrigated cabernet sauvignon (Vitis vinifera L.) in a warm climate. Aust J Grape Wine Res. 2006;12:21–9. Stoll M, Bischoff-Schaefer M, Lafontaine M, Tittmann S, Henschke J. Impact of various leaf area modifications on berry maturation in Vitis vinifera L.'Riesling'. Acta Hortic. 2013;978:293–9. Palliotti A, Tombesi S, Silvestroni O, Lanari V, Gatti M, Poni S. Changes in vineyard establishment and canopy management urged by earlier climate-related grape ripening: a review. Sci Hort. 2014;178:43–54. Parker AK, Hofmann RW, van Leeuwen C, McLachlan AR, Trought MC. Manipulating the leaf area to fruit mass ratio alters the synchrony of total soluble solids accumulation and titratable acidity of grape berries. Aust J Grape Wine R. 2015;21:266–76. Candolfi-Vasconcelos MC, Koblet W. Influence of partial defoliation on gas exchange parameters and chlorophyll content of field-grown grapevines: mechanisms and limitations of the compensation capacity. Vitis. 1991;30:129–41. Poni S, Casalini L, Bernizzoni F, Civardi S, Intrieri C. Effects of early defoliation on shoot photosynthesis, yield components, and grape composition. Am J Enol Viticult. 2006;57:397–407. Keller M, Mills LJ, Wample RL, Spayd SE. Cluster thinning effects on three deficit-irrigated Vitis vinifera cultivars. Am J Enol Viticult. 2005;56:91–103. Holzapfel BP, Smith JP. Developmental stage and climatic factors impact more on carbohydrate reserve dynamics of Shiraz than cultural practice. Am J Enol Viticult. 2012;63:333–42. Lloyd J, Wong SC, Styles JM, Batten D, Priddle R, Turnbull C, McConchie CA. Measuring and modelling whole-tree gas exchange. Funct Plant Biol. 1995;22:987–1000. Tarara JM, Peña JEP, Keller M, Schreiner RP, Smithyman RP. Net carbon exchange in grapevine canopies responds rapidly to timing and extent of regulated deficit irrigation. Funct Plant Biol. 2011;38:386–400. Poni S, Merli MC, Magnanini E, Galbignani M, Bernizzoni F, Vercesi A, Gatti M. An improved multichamber gas exchange system for determining whole canopy water use efficiency in the grapevine. Am J Enol Viticult. 
2014;65:268–76. Tarara JM, Peña JEP. Moderate water stress from regulated deficit irrigation decreases transpiration similarly to net carbon exchange in grapevine canopies. J Am Soc Hortic Sci. 2015;140:413–26. Pagay V. Effects of irrigation regime on canopy water use and dry matter production of 'Tempranillo' grapevines in the semi-arid climate of southern Oregon, USA. Agric Water Manag. 2016;178:271–80. Dragoni D, Lakso AN, Piccioni RM, Tarara JM. Transpiration of grapevines in the humid northeastern United States. Am J Enol Viticult. 2006;57:460–7. Poni S, Magnanini E, Rebucci B. An automated chamber system for measurements of whole-vine gas exchange. HortScience. 1997;32:64–7. Perez Peña JP, Tarara J. A portable whole canopy gas exchange system for several mature field-grown grapevines. Vitis. 2004;43:7–14. Petrie PR, Trought MC, Howell GS, Buchan GD, Palmer JW. Whole-canopy gas exchange and light interception of vertically trained Vitis vinifera L. under direct and diffuse light. Am J Enol Viticult. 2009;60:173–82. Greer DH, Weedon MM. Photosynthetic light responses in relation to leaf temperature in sun and shade leaves of grapevines. Acta Hortic. 2012;956:149–56. Edwards EJ, Unwin D, Kilmister R, Treeby M. Multi-seasonal effects of warming and elevated CO2 on the physiology, growth and production of mature, field grown, Shiraz grapevines. OENO One. 2017;51:127–32. Azcón-Bieto J, Osmond CB. Relationship between photosynthesis and respiration: the effect of carbohydrate status on the rate of CO2 production by respiration in darkened and illuminated wheat leaves. Plant Physiol. 1983;71:574–81. Leakey AD, Ainsworth EA, Bernacchi CJ, Rogers A, Long SP, Ort DR. Elevated CO2 effects on plant carbon, nitrogen, and water relations: six important lessons from FACE. J Exp Bot. 2009;60:2859–76. Ainsworth EA, Rogers A. The response of photosynthesis and stomatal conductance to rising [CO2]: mechanisms and environmental interactions. Plant Cell Environ. 2007;30:258–70. Moutinho-Pereira JM, Bacelar EA, Gonçalves B, Ferreira HF, Coutinho JF, Correia CM. Effects of open-top chambers on physiological and yield attributes of field grown grapevines. Acta Physiol Plant. 2010;32:395–403. Farquhar GD, Wong SC. An empirical model of stomatal conductance. Funct Plant Biol. 1984;11:191–210. Anderson LJ, Maherali H, Johnson HB, Polley HW, Jackson RB. Gas exchange and photosynthetic acclimation over subambient to elevated CO2 in a C3–C4 grassland. Glob Chang Biol. 2001;7:693–707. Barton CV, Ellsworth DS, Medlyn BE, Duursma RA, Tissue DT, Adams MA, Eamus D, Conroy JP, McMurtrie RE, Parsby J, Linder S. Whole-tree chambers for elevated atmospheric CO2 experimentation and tree scale flux measurements in South-Eastern Australia: the Hawkesbury Forest experiment. Agric For Meteorol. 2010;150:941–51. Zhang XY, Wang XL, Wang XF, Xia GH, Pan QH, Fan RC, Wu FQ, Yu XC, Zhang DP. A shift of phloem unloading from symplasmic to apoplasmic pathway is involved in developmental onset of ripening in grape berry. Plant Physiol. 2006;142:220–32. Liu HF, Wu BH, Fan PG, Li SH, Li LS. Sugar and acid concentrations in 98 grape cultivars analyzed by principal component analysis. J Sci Food Agric. 2006;86:1526–36. Candolfi-Vasconcelos MC, Candolfi MP, Kohlet W. Retranslocation of carbon reserves from the woody storage tissues into the fruit as a response to defoliation stress during the ripening period in Vitis vinifera L. Planta. 1994;192:567–73. Rossouw GC, Smith JP, Barril C, Deloire A, Holzapfel BP. 
Carbohydrate distribution during berry ripening of potted grapevines: impact of water availability and leaf-to-fruit ratio. Sci Hort. 2017;216:215–25. Morinaga K, Imai S, Yakushiji H, Koshita Y. Effects of fruit load on partitioning of 15N and 13C, respiration, and growth of grapevine roots at different fruit stages. Sci Hort. 2003;97:239–53. Frayne RF. Direct analysis of the major organic components in grape must and wine using high performance liquid chromatography. Am J Enol Viticult. 1986;37:281–7. Gray JD, Coombe BG. Variation in shiraz berry size originates before fruitset but harvest is a point of resynchronisation for berry development after flowering. Aust J Grape Wine R. 2009;15:156–65. Long SP, Hallgren JE. Measurement of CO2 assimilation by plants in the field and the laboratory. In: Coombs J, et al., editors. Techniques in bioproductivity and photosynthesis. Oxford: Pergamon Press; 1985. p. 62–93. Von Caemmerer SV, Farquhar GD. Some relationships between the biochemistry of photosynthesis and the gas exchange of leaves. Planta. 1981;153:376–87.
The authors would like to thank Sebastian Holzapfel and Gerhard Rossouw for their contribution to the construction of the system, as well as Dr. Inigo Auzmendi for advice and helpful early discussions around the measurement of whole plant gas exchange. Ginger Korosi, David Foster, and Robert Lamont are thanked for assistance with the system installation and other measurements during the study. This research was supported by funding from Wine Australia, who invests in, and manages, research, development and extension on behalf of Australia's grape growers and winemakers and the Australian Government. Wine Australia set the broader industry research priorities to which this project was aligned, but did not have a direct role in design of the study and collection, analysis, and interpretation of data or in writing the manuscript. Financial support for open-access publication was provided by Charles Sturt University's Faculty of Science.
Jason P. Smith
Present address: Faculty of Science, Charles Sturt University, Leeds Parade, Orange, New South Wales, 2800, Australia
National Wine and Grape Industry Centre, Wagga Wagga, New South Wales, 2678, Australia
Jason P. Smith, Julia C. Gouot, Celia Barril & Bruno P. Holzapfel
Department of General and Organic Viticulture, Hochschule Geisenheim University, Von-Lade-Strasse 1, D-65366, Geisenheim, Germany
CSIRO Agriculture & Food, Locked Bag 2, Glen Osmond, South Australia, 5064, Australia
Everard J. Edwards & Amanda R. Walker
School of Agricultural and Wine Sciences, Faculty of Science, Charles Sturt University, Wagga Wagga, New South Wales, 2678, Australia
Julia C. Gouot & Celia Barril
New South Wales Department of Primary Industries, Wagga Wagga, New South Wales, 2678, Australia
Bruno P. Holzapfel
JS, EE, AW, CB, BH devised the experimental approach, and JS designed and constructed the chambers and CO2 scrubbing system. JS ran the experiment and analysed berry samples with the assistance of BH and CB. JS and JG wrote the R scripts for processing the chamber gas exchange data. JS wrote the first draft of the manuscript, and all authors contributed to revisions and editing. All authors have read and approved the final manuscript.
Correspondence to Jason P. Smith.
Smith, J.P., Edwards, E.J., Walker, A.R. et al. A whole canopy gas exchange system for the targeted manipulation of grapevine source-sink relations using sub-ambient CO2.
BMC Plant Biol 19, 535 (2019). https://doi.org/10.1186/s12870-019-2152-9 Keywords: Source-sink relations; Grape berry
CommonCrawl
ASTR 101 Exam 2 (Ch 4 & 5) alyssa_leggio1 As a giant cloud of gas collapses due to gravity, you would expect its rate of rotation to increase Which of the following represents a change of potential energy to kinetic energy? A rock starting from rest on a high cliff, then moving faster and faster as it falls Suppose two objects are attracting each other gravitationally. If you double the distance between them, the strength of the gravitational attraction ____? Decreases by a factor of 4 Which of the following are allowed orbital paths under gravity? parabolic, hyperbolic, elliptical Suppose a parachutist is falling toward the ground, and the downward force of gravity is exactly equal to the upward force of air resistance. Which statement is true? The velocity of the parachutist is not changing with time. A kilogram is a measure of an object's ___? mass Suppose object A has three times the mass of object B. Identical forces are exerted on the two objects. Which statement is true? The acceleration of object B is three times that of object A A bowling ball and a small marble will fall downward to the surface of the Moon at the same rate because ____________. the ratio of the force of gravity exerted on an object to the object's mass is the same. If you stood on a planet with four times the mass of Earth, and twice Earth's radius, how much would you weigh? same as on Earth Imagine Earth's identical twin planet "Farth" is twice as far away from the Sun as Earth is. Compared to the force of gravity the Sun experiences due to Earth, how strong is the force of gravity the Sun experiences due to Farth? one-fourth as strong Suppose the Sun suddenly shrunk, reducing its radius by a half (but keeping its mass the same). The force of gravity exerted on Earth by the Sun would ____________. remain the same When the Sun dies it will become a white dwarf, which will be roughly the same size as the Earth. Assuming the Sun doesn't lose any mass as it becomes a white dwarf, the force of gravity exerted on Earth due to the Sun will ________ Not change as the Sun turns into a white dwarf A person has a choice of sliding down three slides. All slides start at the same height above the ground. Ignoring friction, for which slide will the final speed of the person at the bottom be the highest? The final speed will be the same, regardless of which slide is used. A 2 kg ball is moving with a speed of 4 m/s, and a 4 kg ball is moving with a speed of 2 m/s. What can you conclude about the kinetic energies of the two balls? The 2 kg ball has more kinetic energy A ball is dropped from a distance 5 m above the ground, and it hits the ground with a certain speed. If the same ball is dropped from a distance 10 m above the ground, its final speed will be 1.4 times higher A satellite is orbiting Earth at a distance R=2Rearth from Earth's center. If the satellite is moved to a distance of R=4Rearth (twice as far), its potential energy would be ________ what it was before Greater than but less than twice If an object were at a temperature of absolute zero, its thermal energy would be zero A star forms by a gas cloud collapsing due to gravity. As the size of the cloud decreases, the temperature of the cloud increases A positron is a particle similar to an electron, but with the opposite charge of an electron. If a positron and an electron collide, sometimes both the electron and positron disappear, and two photons are created. This is an example of rest-mass energy being converted into radiative energy.
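As a quick check on the positron card above, the energy released when an electron and positron annihilate at rest follows directly from E = mc^2. The constants below are standard values; this worked step is an editorial aside rather than part of the original card set:

```latex
E_{\text{per particle}} = m_e c^2
  = (9.11\times10^{-31}\,\mathrm{kg})\,(3.00\times10^{8}\,\mathrm{m/s})^2
  \approx 8.2\times10^{-14}\,\mathrm{J} \approx 0.511\,\mathrm{MeV}
```

So the two photons carry about 0.511 MeV each, roughly 1.02 MeV of radiative energy in total.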
Three helium nuclei can fuse together, forming one carbon nucleus. The mass of carbon is less than three times the mass of helium. When helium fuses into carbon, which of the following occurs? Rest-mass energy is converted into some other form of energy. Consider Earth and the Moon. As you should now realize, the gravitational force that Earth exerts on the Moon is equal and opposite to that which the Moon exerts on Earth. Therefore, according to Newton's second law of motion __________. the Moon has a larger acceleration than Earth, because it has a smaller mass The video shows a collapsing cloud of interstellar gas, which is held together by the mutual gravitational attraction of all the atoms and molecules that make up the cloud. As the cloud collapses, the overall force of gravity that draws the cloud inward __________ because the _________. gradually becomes stronger, the strength of gravity follows an inverse square law with distance As the cloud shrinks in size, its rate of rotation speeds up because its total angular momentum is conserved As the cloud shrinks in size, its central temperature increases as a result of its gravitational potential energy being converted to thermal energy Suppose that the Sun were to collapse from its current radius of about 700,000 km to a radius of only about 6000 km (about the radius of Earth). What would you expect to happen as a result? A tremendous amount of gravitational potential energy would be converted into other forms of energy, and the Sun would spin much more rapidly. Suppose that two asteroids are orbiting the Sun on nearly identical orbits, and they happen to pass close enough to each other to have their orbits altered by this gravitational encounter. If one of the asteroids ends up moving to an orbit that is closer to the Sun, what happens to the other asteroid? It will end up on an orbit that is farther from the Sun. Suppose you are in an elevator. As the elevator starts upward, its speed will increase. During this time when the elevator is moving upward with increasing speed, your weight will be __________. greater than your normal weight at rest Suppose you are in an elevator that is moving upward. As the elevator nears the floor at which you will get off, its speed slows down. During this time when the elevator is moving upward with decreasing speed, your weight will be __________. less than your normal weight at rest In Part A, you found that your weight will be greater than normal when the elevator is moving upward with increasing speed. For which of the following other motions would your weight also be greater than your normal weight? The elevator moves downward while slowing in speed. When you are standing on a scale in an elevator, what exactly does the scale measure? the force you exert on the scale Suppose you are in an elevator car when the elevator cable breaks. Which of the following correctly describes what happens and why. You float weightlessly within the elevator car because you and the elevator both begin to accelerate downward at the same rate. Assume you have completed the two trials chosen in Part A. Which of the following possible outcomes from the trials would support Newton's theory of gravity? Neglect effects of air resistance. Both balls fall to the ground in the same amount of time. 
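The elevator cards above all follow from one line of Newton's second law applied to the scale force N. This worked example uses made-up numbers (a 70 kg rider and a 2.0 m/s^2 upward acceleration) purely for illustration:

```latex
N - mg = ma \;\Rightarrow\;
N = m(g + a) = 70\,\mathrm{kg}\times(9.8 + 2.0)\,\mathrm{m/s^2}
  = 826\,\mathrm{N} \;>\; mg \approx 686\,\mathrm{N}
```

With a < 0 (slowing while moving up, or speeding up while moving down) the scale reads less than mg, and in free fall a = -g gives N = 0: weightlessness.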
If you actually performed and compared the two trials chosen in Part C, you would find that, while the basketball and marble would hit the ground at almost the same time, it would not quite be exact: The basketball would take slightly longer to fall to the ground than the marble. Why? Because air resistance has a greater effect on the larger ball. Einstein's theory, like Newton's, predicts that, in the absence of air resistance, all objects should fall at the same rate regardless of their masses. Consider the following hypothetical experimental results. Which one would indicate a failure of Einstein's theory? Scientists dropping balls on the Moon find that balls of different mass fall at slightly different rates. What do the three black arrows represent? the Moon's gravitational force at different points on Earth A penny weighs about 3 grams My water bottle has a volume of 1.5 liters I can run 60 meters in 10 seconds my thumb is 2 centimeters wide I bought 3 _____ of apples at the store kilograms The average distance from Earth to the Moon is 384,400 kilometers. What is this distance in miles, rounded to the nearest 100 miles? (384,400 km×0.6214 mile/km=238,900 miles) The Moon's average density is about 3.34 grams per cubic centimeter. What is this density in units of pounds per cubic inch? 0.121 pounds per cubic inch As shown in the video, Earth has two tidal bulges at all times. Approximately where are these bulges located? One faces the Moon and one faces opposite the Moon. Most people are familiar with the rise and fall of ocean tides. Do tides also affect land? Yes, though land rises and falls by a much smaller amount than the oceans. Any particular location on Earth experiences __________. two high tides and two low tides each day One tidal bulge faces toward the Moon because that is where the gravitational attraction between Earth and the Moon is strongest. Which of the following best explains why there is also a second tidal bulge? The second tidal bulge arises because gravity weakens with distance, essentially stretching Earth along the Earth-Moon line. As you watch the video, notice that the size of the tidal bulges varies with the Moon's phase, which depends on its orbital position relative to the Sun. Which of the following statement(s) accurately describe(s) this variation? High tides are highest at both full moon and new moon, and Low tides are lowest at both full moon and new moon. You have found that tides on Earth are determined primarily by the position of the Moon, with the Sun playing only a secondary role. Why does the Moon play a greater role in causing tides than the Sun? because the gravitational attraction between Earth and the Moon varies more across Earth than does the gravitational attraction between Earth and the Sun A car is accelerating when it is going around a circular track at a steady 100 miles per hour. Compared to their values on Earth, on another planet your mass would be the same but your weight would be different. Which person is weightless? A child in the air as she plays on a trampoline. To make a rocket turn left, you need to: fire an engine that shoots out gas to the right. Compared to its angular momentum when it is farthest from the Sun, Earth's angular momentum when it is nearest to the Sun is the same The gravitational potential energy of a contracting interstellar cloud gradually transforms into other forms of energy. If Earth were twice as far from the Sun, the force of gravity attracting Earth to the Sun would be one-quarter as strong.
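The two Moon conversion cards above are easy to verify numerically. A small Python sketch using the same conversion factors (0.6214 mile/km as on the card, plus the standard 453.592 g/lb and 16.387 cm^3/in^3; the variable names are mine):

```python
# Checking the two conversion flashcards with plain arithmetic.
KM_TO_MILE = 0.6214          # miles per kilometer (value used in the card)
G_TO_LB    = 1 / 453.592     # pounds per gram
CM3_TO_IN3 = 1 / 16.387      # cubic inches per cubic centimeter

moon_distance_mi = 384400 * KM_TO_MILE
print(round(moon_distance_mi, -2))        # 238900.0 miles

# 3.34 g/cm^3 -> pounds per cubic inch
density_lb_per_in3 = 3.34 * G_TO_LB / CM3_TO_IN3
print(round(density_lb_per_in3, 3))       # 0.121 pounds per cubic inch
```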
According to the law of universal gravitation, what would happen to Earth if the Sun were somehow replaced by a black hole of the same mass? Earth's orbit would not change. If the Moon were closer to Earth, high tides would be higher than they are now. Consider the statement "There's no gravity in space." This statement is true or false? completely false A planet with twice Earth's mass orbiting at a distance of 1 AU from a star with the same mass as the Sun. A planet with the same mass as Earth orbiting at a distance of 1 AU from a star with four times the Sun's mass. Each chemical element has a unique atomic number The number of protons in an atom is called the atom's atomic number An atom with more electrons than protons has a negative charge The sum of the number of protons and neutrons in an atom is called the atomic mass number Most hydrogen atoms have only a single proton in their nucleus, so a hydrogen atom that also has one neutron is an isotope of hydrogen What is the correct composition of a neutral atom of helium-4? 2 protons, 2 neutrons, 2 electrons Which of the following types of spectrum would you expect if you view starlight that has passed through a cool cloud of interstellar gas on its way to Earth? an absorption line spectrum When you listen to the radio, you are hearing neither visible light, nor radio waves, nor x-rays Suppose you go outside and look at three stars. Star A is blue, Star B is white, and Star C is red. Which star is the hottest and which star is the coldest? Star A is the hottest and Star C is the coldest. Which object emits more infrared radiation? A star that is the same size as the Sun but five times hotter If a star is moving away from you at constant speed, the absorption lines in its spectrum will be redshifted (have wavelengths longer than those of an identical stationary star). If a star is moving away from you at a constant speed, how do the wavelengths of the absorption lines change as the star gets farther and farther? The wavelengths remain the same If the emission lines in the spectrum of one object are more strongly blue shifted than those from a second object, then the first object is moving toward us faster than the second object. By only measuring an object's Doppler shift, astronomers tend to underestimate the object's total speed. For the radial speed of an astronomical object to be determined, what must the object's spectrum contain? Either absorption or emission lines If a hot gas cloud is moving across the sky (neither towards nor away from us), the emission lines would be neither blue shifted nor redshifted. If a hot gas cloud is moving toward us, the frequency of the emission lines will be higher than those of a stationary gas cloud. What are the two most important properties of a telescope? angular resolution and light-collecting area How much more light does an 8-meter telescope gather than a 2-meter telescope? 16 times as much Suppose that two stars are separated in the sky by 0.01 arcsecond, and you observe them with a telescope that has an angular resolution of 1 arcsecond. What will you see? The light from the two stars will be blended together so that they look like one star. A camera is an example of an instrument used for imaging observations Spectroscopy refers to telescopic observations in which we separate an object's light so we can measure its intensity at different wavelengths. If we want to confirm that a star's brightness alternately dims and brightens, we need time monitoring observations of the star.
The familiar twinkling of the stars is caused by __________, which also blurs telescopic images. atmospheric turbulence Human civilization is responsible for what astronomers call ________ light pollution. What is the primary reason that we launch X-ray telescopes into space rather than building them on the ground? X-rays from space do not reach the ground. If the angular separation of two stars is larger than the angular resolution of your eyes, you will be able to see both stars. How much greater is the light-collecting area of a 4-m telescope than that of a 1-m telescope? 16 times greater Consider the total amount of light collected by a 4-m telescope observing a star for 10 minutes. If you wanted to collect the same amount of light with a 2-m telescope, how long would you have to observe? 40 minutes Which of the following statements is not a valid advantage for having a telescope in orbit above our atmosphere? The telescope is closer to the astronomical objects. Which telescope has a better (smaller) angular resolution: a 2-m telescope observing visible light (wavelength 5.0×10^-7 m) or a 10-m radio telescope observing radio waves (wavelength 2.1×10^-2 m)? The optical telescope. Which of the following types of light cannot be studied with telescopes on the ground? What types of observation would you use for: Are stars in the Orion Nebula surrounded by dusty disks of gas? How large is the Andromeda Galaxy? What are the major surface features of Mars? imaging What types of observation would you use for: What is the temperature of Jupiter's atmosphere? Is the star Vega moving toward us or away from us? What is the chemical composition of the Crab Nebula? spectroscopy Is the X-ray emission from the galactic center steady or changing? Does the star Mira vary in brightness? time monitoring If you assume that there are exactly 365 days in a year, how many seconds are there in one year? Give your answer to the nearest 1000 seconds. 31,536,000 seconds Convert a mass of 10^2 micrograms to kilograms. 10^-7 kilograms The acceleration of gravity on Mars is about 3.7 meters per second squared. Suppose a rock falls from a tall cliff on Mars. Which of the following equations indicates how fast the rock will be falling after 8 seconds? 3.7 (m/s^2) x 8 s At a supermarket in France, the price of apples is 2.50 euros per kilogram. Suppose the exchange rate is 1 euro = $1.35. What is the price of the apples in dollars per pound? Recall that 1 kilogram = 2.205 pounds $1.53/pound Listed following are distinguishing characteristics and examples of reflecting telescopes The Hubble Space Telescope, Most commonly used by professional astronomers today, world's largest telescope Listed following are distinguishing characteristics and examples of refracting telescopes Galileo's telescopes, The world's largest is 1-meter in diameter, incoming light passes through glass, very large telescopes become "top-heavy" Which of the following forms of light can be observed with telescopes at sea level? visible light and radio waves If our eyes were sensitive only to X rays, the world would appear __________. dark because X-ray light does not reach Earth's surface If you had only one telescope and wanted to take both visible-light and ultraviolet pictures of stars, where should you locate your telescope? in space, above Earth's atmosphere The James Webb Space Telescope is designed primarily to observe __________. infrared light Study the graph of the intensity of light versus wavelength for continuous spectra, observing how it changes with the temperature of the light bulb.
Recall that one of the laws of thermal radiation states that a higher-temperature object emits photons with higher average energy (Wien's law). This law is illustrated by the fact that for a higher temperature object, the graph peaks at __________. a shorter wavelength Click "show" for the emission line spectrum, then click "choose gases" and study the emission line spectrum for neon. The neon "OPEN" sign appears reddish-orange because __________. neon atoms emit many more yellow and red photons than blue and violet photons The absorption line spectrum shows what we see when we look at a hot light source (such as a star or light bulb) directly behind a cooler cloud of gas. Suppose instead that we are looking at the gas cloud but the light source is off to the side instead of directly behind it. In that case, the spectrum would __________. be an emission line spectrum What type of visible light spectrum does the Sun produce? an absorption line spectrum Compared to red light, blue light has higher frequency and higher energy and shorter wavelength. Why is a sunflower yellow? it reflects yellow light Radio waves are: a form of light Compared to an atom as a whole, an atomic nucleus: is very tiny but has most of the mass. Some nitrogen atoms have seven neutrons and some have eight neutrons; these two forms of nitrogen are: isotopes of each other The set of spectral lines that we see in a star's spectrum depends on the star's: chemical composition. A star whose spectrum peaks in the infrared is: cooler than our Sun. A spectral line that appears at a wavelength of 321 nm in the laboratory appears at a wavelength of 328 nm in the spectrum of a distant object. We say that the object's spectrum is: redshifted. How much greater is the light-collecting area of a 6-meter telescope than a 3-meter telescope? four times The Hubble Space Telescope obtains higher-resolution images than most ground-based telescopes because it is: above Earth's atmosphere
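Several of the telescope and thermal-radiation cards above reduce to two scalings: collecting area goes as diameter squared, and diffraction-limited resolution goes as wavelength over diameter, while Wien's law fixes the peak wavelength. A minimal Python sketch checking the numbers quoted in the cards (the function names and the 1.22 Rayleigh factor are my additions; the card answers only depend on the ratios):

```python
import math

# Light-collecting area scales with diameter squared.
def area_ratio(d1, d2):
    return (d1 / d2) ** 2

print(area_ratio(8, 2))        # 16.0 (the 8-m vs 2-m card)
print(area_ratio(6, 3))        # 4.0  (the 6-m vs 3-m card)

# Equal collected light: exposure time scales inversely with area.
print(10 * area_ratio(4, 2))   # 40.0 minutes on a 2-m to match 10 min on a 4-m

# Diffraction-limited angular resolution ~ 1.22 * wavelength / diameter.
def resolution_arcsec(wavelength_m, diameter_m):
    return math.degrees(1.22 * wavelength_m / diameter_m) * 3600

print(resolution_arcsec(5.0e-7, 2))    # ~0.06" for the 2-m optical telescope
print(resolution_arcsec(2.1e-2, 10))   # ~530" for the 10-m radio telescope

# Wien's law: peak wavelength (m) of thermal radiation at temperature T (K).
def wien_peak(T):
    return 2.9e-3 / T

print(wien_peak(5800))   # ~5e-7 m: the Sun peaks in visible light
```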
CommonCrawl
Quantifying the impact of early-stage contact tracing on controlling Ebola diffusion Narges Montazeri Shahtori, Tanvir Ferdousi, Caterina Scoglio, and Faryad Darabi Sahneh Department of Electrical and Computer Engineering, Kansas State University, Manhattan, KS 66506, USA * Corresponding author: Caterina Scoglio Received September 15, 2017 Revised January 27, 2018 Published May 2018 Fund Project: This article is based on work supported by the National Science Foundation grants SCH-1513639 and CIF-1423411. Recent experience of the Ebola outbreak in 2014 highlighted the importance of immediate response measures to impede transmission in the early stage. To this aim, efficient and effective allocation of limited resources is crucial. Among the standard interventions is the practice of following up with the recent physical contacts of the infected individuals, known as contact tracing. In an effort to understand the effects of contact tracing protocols objectively, we explicitly develop a model of Ebola transmission incorporating contact tracing. Our modeling framework is individual-based, patient-centric, stochastic and parameterizable to suit early-stage Ebola transmission. Notably, we propose an activity driven network approach to contact tracing, and estimate the basic reproductive ratio of the epidemic growth in different scenarios. Exhaustive simulation experiments suggest that early contact tracing paired with rapid hospitalization can effectively impede the epidemic growth. Resource allocation needs to be carefully planned to enable early detection of the contacts and rapid hospitalization of the infected people. Keywords: Contact tracing, Ebola, compartmental model, activity-driven network, epidemic model. Mathematics Subject Classification: Primary: 92B05, 68U20; Secondary: 65C20. Citation: Narges Montazeri Shahtori, Tanvir Ferdousi, Caterina Scoglio, Faryad Darabi Sahneh. Quantifying the impact of early-stage contact tracing on controlling Ebola diffusion. Mathematical Biosciences & Engineering, 2018, 15 (5) : 1165-1180. doi: 10.3934/mbe.2018053
Figure 1. Schematic of the transition processes in the Ebola progression with contact tracing model. Figure 2. The epidemic attack ratio as a function of $\alpha^{-1}$. The results are the averages of $10,000$ simulations. Figure 3. The epidemic attack ratio as a function of $\gamma^{-1}$. The results are the averages of $10,000$ simulations. Figure 4. 3-D $ROC$ curve of contact tracing with a $5$-day identification delay implemented in three different scenarios. The results are the averages of $10,000$ simulations. Figure 5. 2-D $ROC$ curve of contact tracing implementation in three different scenarios. The area under the curve (AUC) values for contact tracing starting on day 1, day 9 and day 22 are respectively 0.6550, 0.4060 and 0.1207. The results are the averages of $10,000$ simulations. Figure 6. $R_0$ as a function of the identification delay, $\alpha^{-1}$, in three scenarios. The results are the averages of $10,000$ simulations. Figure 7. $R_0$ as a function of the hospitalization delay, $\gamma^{-1}$. The results are the averages of $10,000$ simulations.
Table 1. Time-invariant parameters of the Ebola contagion process
Parameter | Value
Transmission probability ($\beta$) | $0.11$
Incubation rate ($\lambda$) | $0.095$
Recovery/removal probability ($\delta$) | $0.1$
Hospitalization probability in the presence of contact tracing ($\gamma_T$) | $0.9$
Hospitalization probability ($\gamma$) | $0.33$
Table 2. Parameters of the activity-driven network generator
Parameter | Value
Density function exponent ($c$) | $2.2$
Links per active node ($m$) | $7$
Scaling factor for susceptible ($\eta_S$) | $2.2$
Scaling factor for infected ($\eta_I$) | $1.1$
Scaling factor for hospitalized ($\eta_H$) | $0.005$
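Table 2 parameterizes an activity-driven temporal network. As a reading aid, here is a minimal Python sketch of one time step of the standard activity-driven mechanism in the spirit of Perra et al., using the Table 2 values where available. The population size N, the activity cutoff eps, the uniform partner choice and all function names are illustrative assumptions, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

N     = 1000   # number of nodes (assumed; not stated in this excerpt)
c     = 2.2    # density function exponent (Table 2)
m     = 7      # links per active node (Table 2)
eta_S = 2.2    # activity scaling factor for susceptible nodes (Table 2)
eps   = 1e-3   # lower cutoff of the activity distribution (assumed)

# Draw activity potentials x_i from F(x) ~ x^(-c) on [eps, 1]
# via inverse-transform sampling.
u = rng.random(N)
x = (eps**(1 - c) + u * (1 - eps**(1 - c)))**(1 / (1 - c))

def snapshot(activity, eta, m, rng):
    """One time step of the activity-driven model: each node fires with
    probability eta * x_i (capped at 1) and, if active, draws m partners
    uniformly at random (self-loops are simply discarded here)."""
    edges = []
    active = rng.random(len(activity)) < np.minimum(eta * activity, 1.0)
    for i in np.flatnonzero(active):
        partners = rng.choice(len(activity), size=m, replace=False)
        edges.extend((int(i), int(j)) for j in partners if j != i)
    return edges

edges_t = snapshot(x, eta_S, m, rng)
print(len(edges_t), "contacts generated in this time step")
```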
Ellina Grigorieva, Evgenii Khailov. Determination of the optimal controls for an Ebola epidemic model. Discrete & Continuous Dynamical Systems - S, 2018, 11 (6) : 1071-1101. doi: 10.3934/dcdss.2018062 Jia-Bing Wang, Shao-Xia Qiao, Chufen Wu. Wave phenomena in a compartmental epidemic model with nonlocal dispersal and relapse. Discrete & Continuous Dynamical Systems - B, 2021 doi: 10.3934/dcdsb.2021152 Xi Huo. Modeling of contact tracing in epidemic populations structured by disease age. Discrete & Continuous Dynamical Systems - B, 2015, 20 (6) : 1685-1713. doi: 10.3934/dcdsb.2015.20.1685 Iván Area, Faïçal Ndaïrou, Juan J. Nieto, Cristiana J. Silva, Delfim F. M. Torres. Ebola model and optimal control with vaccination constraints. Journal of Industrial & Management Optimization, 2018, 14 (2) : 427-446. doi: 10.3934/jimo.2017054 Arti Mishra, Benjamin Ambrosio, Sunita Gakkhar, M. A. Aziz-Alaoui. A network model for control of dengue epidemic using sterile insect technique. Mathematical Biosciences & Engineering, 2018, 15 (2) : 441-460. doi: 10.3934/mbe.2018020 Marie Levakova. Effect of spontaneous activity on stimulus detection in a simple neuronal model. Mathematical Biosciences & Engineering, 2016, 13 (3) : 551-568. doi: 10.3934/mbe.2016007 S. M. Crook, M. Dur-e-Ahmad, S. M. Baer. A model of activity-dependent changes in dendritic spine density and spine structure. Mathematical Biosciences & Engineering, 2007, 4 (4) : 617-631. doi: 10.3934/mbe.2007.4.617 Antonio DeSimone, Natalie Grunewald, Felix Otto. A new model for contact angle hysteresis. Networks & Heterogeneous Media, 2007, 2 (2) : 211-225. doi: 10.3934/nhm.2007.2.211 Shouying Huang, Jifa Jiang. Global stability of a network-based SIS epidemic model with a general nonlinear incidence rate. Mathematical Biosciences & Engineering, 2016, 13 (4) : 723-739. doi: 10.3934/mbe.2016016 Mark G. Burch, Karly A. Jacobsen, Joseph H. Tien, Grzegorz A. Rempała. Network-based analysis of a small Ebola outbreak. Mathematical Biosciences & Engineering, 2017, 14 (1) : 67-77. doi: 10.3934/mbe.2017005 Xiao-Qiang Zhao, Wendi Wang. Fisher waves in an epidemic model. Discrete & Continuous Dynamical Systems - B, 2004, 4 (4) : 1117-1128. doi: 10.3934/dcdsb.2004.4.1117 Haitao Song, Fang Liu, Feng Li, Xiaochun Cao, Hao Wang, Zhongwei Jia, Huaiping Zhu, Michael Y. Li, Wei Lin, Hong Yang, Jianghong Hu, Zhen Jin. Modeling the second outbreak of COVID-19 with isolation and contact tracing. Discrete & Continuous Dynamical Systems - B, 2021 doi: 10.3934/dcdsb.2021294 David J. Aldous. A stochastic complex network model. Electronic Research Announcements, 2003, 9: 152-161. H. Thomas Banks, W. Clayton Thompson, Cristina Peligero, Sandra Giest, Jordi Argilaguet, Andreas Meyerhans. A division-dependent compartmental model for computing cell numbers in CFSE-based lymphocyte proliferation assays. Mathematical Biosciences & Engineering, 2012, 9 (4) : 699-736. doi: 10.3934/mbe.2012.9.699 Oanh Chau, R. Oujja, Mohamed Rochdi. A mathematical analysis of a dynamical frictional contact model in thermoviscoelasticity. Discrete & Continuous Dynamical Systems - S, 2008, 1 (1) : 61-70. doi: 10.3934/dcdss.2008.1.61 Jiangtao Mo, Liqun Qi, Zengxin Wei. A network simplex algorithm for simple manufacturing network model. Journal of Industrial & Management Optimization, 2005, 1 (2) : 251-273. doi: 10.3934/jimo.2005.1.251 Hongying Shu, Xiang-Sheng Wang. Global dynamics of a coupled epidemic model. Discrete & Continuous Dynamical Systems - B, 2017, 22 (4) : 1575-1585. doi: 10.3934/dcdsb.2017076 F. Berezovskaya, G. Karev, Baojun Song, Carlos Castillo-Chavez. A Simple Epidemic Model with Surprising Dynamics. Mathematical Biosciences & Engineering, 2005, 2 (1) : 133-152. doi: 10.3934/mbe.2005.2.133 Elisabeth Logak, Isabelle Passat. An epidemic model with nonlocal diffusion on networks. Networks & Heterogeneous Media, 2016, 11 (4) : 693-719. doi: 10.3934/nhm.2016014 Philippe Michel, Suman Kumar Tumuluri. A note on a neuron network model with diffusion. Discrete & Continuous Dynamical Systems - B, 2020, 25 (9) : 3659-3676. doi: 10.3934/dcdsb.2020085
CommonCrawl
Bond Yield: What It Is, Why It Matters, and How It's Calculated Adam Hayes, Ph.D., CFA, is a financial writer with 15+ years Wall Street experience as a derivatives trader. Besides his extensive derivative trading expertise, Adam is an expert in economics and behavioral finance. Adam received his master's in economics from The New School for Social Research and his Ph.D. from the University of Wisconsin-Madison in sociology. He is a CFA charterholder as well as holding FINRA Series 7, 55 & 63 licenses. He currently researches and teaches economic sociology and the social studies of finance at the Hebrew University in Jerusalem. Reviewed by Cierra Murry. Cierra Murry is an expert in banking, credit cards, investing, loans, mortgages, and real estate. She is a banking consultant, loan signing agent, and arbitrator with more than 15 years of experience in financial analysis, underwriting, loan documentation, loan review, banking compliance, and credit risk management. Fact checked by Katrina Munichiello. Katrina Ávila Munichiello is an experienced editor, writer, fact-checker, and proofreader with more than fourteen years of experience working with print and online publications. Bond yield is the return an investor realizes on a bond and can be derived in different ways. The coupon rate is the annual interest rate established when the bond is issued. The current yield depends on the bond's price and its coupon, or interest payment. Additional calculations of a bond's yield include yield to maturity (YTM), bond equivalent yield (BEY), and effective annual yield (EAY). Bond yield is the return an investor realizes on an investment in a bond. A bond can be purchased for more than its face value, at a premium, or less than its face value, at a discount. The current yield is the bond's annual coupon payment divided by its market price. Price and yield are inversely related and as the price of a bond goes up, its yield goes down. Bond Yields: Current Yield And YTM Bonds are essentially loans from investors to bond issuers. Investors earn interest on a bond throughout the life of the bond and receive the face value of the bond upon maturity. A bond can be purchased for more than its face value, at a premium, or less than its face value, at a discount, which will change the yield an investor earns on the bond. Bonds are rated by services approved by the U.S. Securities and Exchange Commission and ratings range from "AAA" as investment grade with the lowest risk to "D," which are bonds in default, or junk bonds, with the highest risk. The simplest way to calculate a bond yield is to divide its coupon payment by the face value of the bond. This is called the coupon rate.
$\text{Coupon Rate}=\frac{\text{Annual Coupon Payment}}{\text{Bond Face Value}}$
If a bond has a face value of $1,000 and made interest or coupon payments of $100 per year, then its coupon rate is 10% ($100 / $1,000 = 10%). Price and yield are inversely related. As the price of a bond goes up, its yield goes down and as yield goes up, the price of the bond goes down.
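The two ratios above are one-liners. A small Python sketch with the article's example numbers (the function names are mine, and the $927.90 price anticipates the YTM example further down):

```python
def coupon_rate(annual_coupon, face_value):
    """Coupon rate = annual coupon payment / bond face value."""
    return annual_coupon / face_value

def current_yield(annual_coupon, price):
    """Current yield = annual coupon payment / current market price."""
    return annual_coupon / price

# The article's running example: $1,000 face value, $100 annual coupon.
print(coupon_rate(100, 1000))               # 0.1 -> a 10% coupon rate
# At a discounted price the current yield rises above the coupon rate:
print(round(current_yield(100, 927.90), 4)) # 0.1078 -> about 10.78%
```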
If an investor purchases a bond with a face value of $1000 that matures in five years with a 10% annual coupon rate, the bond pays 10%, or $100, in interest annually. If interest rates rise above 10%, the bond's price will fall if the investor decides to sell it. If the interest rate for similar investments rises to 12%, the original bond will still earn a coupon payment of $100, which would be unattractive to investors who can buy bonds that pay $120 as interest rates have risen. To sell the original $1000 bond, the price can be lowered so that the coupon payments and maturity value equal a yield of 12%. If interest rates fall, the bond's price would rise because its coupon payment is more attractive. The further rates fall, the higher the bond's price will rise. In either scenario, the coupon rate no longer has any meaning for a new investor. However, if the annual coupon payment is divided by the bond's price, the investor can calculate the current yield and get an estimate of the bond's true yield.
$\text{Current Yield}=\frac{\text{Annual Coupon Payment}}{\text{Bond Price}}$
The current yield and the coupon rate are incomplete calculations for a bond's yield because they do not account for the time value of money, maturity value, or payment frequency, and more complex calculations are required. A bond's yield to maturity (YTM) is equal to the interest rate that makes the present value of all a bond's future cash flows equal to its current price. These cash flows include all the coupon payments and maturity value. Solving for YTM is a trial and error process that can be done on a financial calculator, but the formula is as follows:
$\text{Price}=\sum_{t=1}^{T}\frac{\text{Cash Flows}_t}{(1+\text{YTM})^t}$, where $\text{YTM}$ is the yield to maturity.
In the previous example, a bond with a $1,000 face value, five years to maturity, and $100 annual coupon payments is worth $927.90 to match a new YTM of 12%. The five coupon payments plus the $1,000 maturity value are the bond's six cash flows. Finding the present value of each of those six cash flows with an interest rate of 12% will determine what the bond's current price should be. Bond yields are quoted as a bond equivalent yield (BEY), which adjusts for the bond coupon paid in two semi-annual payments. In the previous example, the bonds' cash flows were annual, so the YTM is equal to the BEY. However, if the coupon payments were made every six months, the semi-annual YTM would be 5.979%. The BEY is a simple annualized version of the semi-annual YTM and is calculated by multiplying the YTM by two. In this example, the BEY of a bond that pays semi-annual coupon payments of $50 would be 11.958% (5.979% x 2 = 11.958%). The BEY does not account for the time value of money for the adjustment from a semi-annual YTM to an annual rate. Investors can define a more precise annual yield given the BEY for a bond when considering the time value of money in the calculation.
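Because solving for YTM is trial and error, it is naturally done numerically. A minimal bisection sketch reproducing the article's numbers (the 12% annual YTM, the 5.979% semi-annual YTM and the doubled BEY), with the EAY of the next section included for completeness; the function names, tolerance and bracketing interval are my own choices, not a standard library API:

```python
def bond_price(face, coupon, periods, rate):
    """Present value of all coupons plus the maturity value at a per-period rate."""
    pv_coupons = sum(coupon / (1 + rate) ** t for t in range(1, periods + 1))
    return pv_coupons + face / (1 + rate) ** periods

def ytm(price, face, coupon, periods, lo=0.0, hi=1.0, tol=1e-8):
    """Solve price = sum of discounted cash flows for the rate by bisection."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if bond_price(face, coupon, periods, mid) > price:
            lo = mid   # computed price too high -> the rate must be higher
        else:
            hi = mid
    return (lo + hi) / 2

# Annual example from the article: $927.90 price, 5 years, $100 coupons.
print(round(ytm(927.90, 1000, 100, 5), 4))   # ~0.12 (12%)

# Semi-annual version: ten $50 coupons; per-period YTM ~5.979%.
semi = ytm(927.90, 1000, 50, 10)
print(round(2 * semi, 5))              # BEY = 2 * semi-annual YTM, ~11.96%
print(round((1 + semi) ** 2 - 1, 5))   # EAY = (1 + semi-annual rate)^2 - 1, ~12.32%
```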
In the case of a semi-annual coupon payment, the effective annual yield (EAY) would be calculated as follows:
$\text{EAY}=\left(1+\frac{\text{YTM}}{2}\right)^2-1$, where $\text{EAY}$ is the effective annual yield.
If an investor knows that the semi-annual YTM was 5.979%, they could use the previous formula to find the EAY of 12.32%. Because the extra compounding period is included, the EAY will be higher than the BEY. A bond rating is a grade given to a bond and indicates its credit quality. The rating takes into consideration a bond issuer's financial strength or its ability to pay a bond's principal and interest in a timely fashion. There are three bond rating agencies in the United States that account for approximately 95% of all bond ratings and include Fitch Ratings, Standard & Poor's Global Ratings, and Moody's Investors Service. Some factors skew the calculations in determining a bond's yield. In the previous examples, it was assumed that the bond had exactly five years left to maturity when it was sold, which is rare. The fractional periods can be defined but the accrued interest is more difficult to calculate. Assume a bond has four years and eight months to maturity. The exponent in the yield calculations can be turned into a decimal to adjust for the partial year. However, this means that four months in the current coupon period have elapsed with two remaining, which requires an adjustment for accrued interest. A new bond buyer will be paid the full coupon, so the bond's price will be inflated slightly to compensate the seller for the four months in the current coupon period that have elapsed. Bonds can be quoted with a "clean price" that excludes the accrued interest or the "dirty price" that includes the amount owed to reconcile the accrued interest. When bonds are quoted in a system like a Bloomberg or Reuters terminal, the clean price is used. What Does a Bond's Yield Tell Investors? A bond's yield is the return to an investor from the bond's interest, or coupon, payments. It can be calculated as a simple coupon yield or using a more complex method like yield to maturity. Higher yields mean that bond investors are owed larger interest payments, but may also be a sign of greater risk. The riskier a borrower is, the more yield investors demand. Higher yields are often common with a longer maturity bond. Are High-Yield Bonds Better Investments Than Low-Yield Bonds? Bond investment depends on an investor's circumstances, goals, and risk tolerance. Low-yield bonds may be better for investors who want a virtually risk-free asset, or one who is hedging a mixed portfolio by keeping a portion of it in a low-risk asset. High-yield bonds may be better suited for investors who are willing to accept a degree of risk in return for a higher return. How Do Investors Utilize Bond Yields? In addition to evaluating the expected cash flows from individual bonds, yields are used for more sophisticated analyses. Traders may buy and sell bonds of different maturities to take advantage of the yield curve, which plots the interest rates of bonds having equal credit quality but differing maturity dates. The slope of the yield curve gives an idea of future interest rate changes and economic activity.
They may also look to the difference in interest rates between different categories of bonds, holding some characteristics constant. A yield spread is the difference between yields on debt instruments of varying maturities, credit ratings, issuers, or risk levels, calculated by deducting the yield of one instrument from that of the other, such as the spread between AAA corporate bonds and U.S. Treasuries. This difference is most often expressed in basis points (bps) or percentage points. Bond yield is the amount of return an investor will realize on a bond. The coupon rate and current yield are basic yield concepts and calculations. A bond rating is a grade given to a bond and indicates its credit quality and often the level of risk to the investor in purchasing the bond.
CommonCrawl
Mosc. Math. J., 2007, Volume 7, Number 2, Pages 219–242 (Mi mmj280) This article is cited in 34 scientific papers (total in 35 papers) Spaces of polytopes and cobordism of quasitoric manifolds V. M. Buchstaber (a), T. E. Panov (b), N. Ray (c) (a) Steklov Mathematical Institute, Russian Academy of Sciences (b) M. V. Lomonosov Moscow State University, Faculty of Mechanics and Mathematics (c) University of Manchester, Department of Mathematics Abstract: Our aim is to bring the theory of analogous polytopes to bear on the study of quasitoric manifolds, in the context of stably complex manifolds with compatible torus action. By way of application, we give an explicit construction of a quasitoric representative for every complex cobordism class as the quotient of a free torus action on a real quadratic complete intersection. We suggest a systematic description for omnioriented quasitoric manifolds in terms of combinatorial data, and explain the relationship with non-singular projective toric varieties (otherwise known as toric manifolds). By expressing the first and third authors' approach to the representability of cobordism classes in these terms, we simplify and correct two of their original proofs concerning quotient polytopes; the first relates to framed embeddings in the positive cone, and the second involves modifying the operation of connected sum to take account of orientations. Analogous polytopes provide an informative setting for several of the details. Key words and phrases: Analogous polytopes, complex cobordism, connected sum, framing, omniorientation, quasitoric manifold, stable tangent bundle. DOI: https://doi.org/10.17323/1609-4514-2007-7-2-219-242 Full text: http://www.ams.org/.../abst7-2-2007.html References: PDF file HTML file MSC: 55N22, 52B20, 14M25 Citation: V. M. Buchstaber, T. E. Panov, N. Ray, "Spaces of polytopes and cobordism of quasitoric manifolds", Mosc. Math. J., 7:2 (2007), 219–242 This publication is cited in the following articles: V. M. Buchstaber, N. Ray, "Universal equivariant genus and Krichever's formula", Russian Math. Surveys, 62:1 (2007), 178–180 M. Masuda, T. E. Panov, "Semifree circle actions, Bott towers and quasitoric manifolds", Sb. Math., 199:8 (2008), 1201–1223 T. E. Panov, "Toric Kempf–Ness Sets", Proc. Steklov Inst. Math., 263 (2008), 150–162 Masuda M., Suh D.Y., "Classification problems of toric manifolds via topology", Toric Topology, Contemporary Mathematics Series, 460, 2008, 273–286 A. A. Kustarev, "Equivariant almost complex structures on quasi-toric manifolds", Russian Math. Surveys, 64:1 (2009), 156–158 A. A. Kustarev, "Equivariant Almost Complex Structures on Quasitoric Manifolds", Proc. Steklov Inst. Math., 266 (2009), 133–141 Yu. M. Ustinovskii, "Doubling operation for polytopes and torus actions", Russian Math. Surveys, 64:5 (2009), 952–954 Bahri A., Bendersky M., Cohen F.R., Gitler S., "Decompositions of the polyhedral product functor with applications to moment-angle complexes and related spaces", Proc. Natl. Acad. Sci. USA, 106:30 (2009), 12241–12244 Buchstaber V., Panov T., Ray N., "Toric Genera", Int. Math. Res. Not. IMRN, 2010, no.
16, 3207–3262 Bahri A., Bendersky M., Cohen F.R., Gitler S., "The polyhedral product functor: A method of decomposition for moment-angle complexes, arrangements and related spaces", Adv. Math., 225:3 (2010), 1634–1668 Choi Suyoung, Masuda Mikiya, Suh Dong Youp, "Quasitoric manifolds over a product of simplices", Osaka J. Math., 47:1 (2010), 109–129 V. M. Buchstaber, E. Yu. Bun'kova, "Krichever Formal Groups", Funct. Anal. Appl., 45:2 (2011), 99–116 N. Yu. Erokhovets, "Moment-angle manifolds of simple $n$-polytopes with $n+3$ facets", Russian Math. Surveys, 66:5 (2011), 1006–1008 A. A. Aizenberg, V. M. Buchstaber, "Nerve complexes and moment–angle spaces of convex polytopes", Proc. Steklov Inst. Math., 275 (2011), 15–46 Taras Panov, Yuri Ustinovsky, "Complex-analytic structures on moment-angle manifolds", Mosc. Math. J., 12:1 (2012), 149–172 V. M. Buchstaber, "Complex cobordism and formal groups", Russian Math. Surveys, 67:5 (2012), 891–950 A. A. Aizenberg, "Topological applications of Stanley-Reisner rings of simplicial complexes", Trans. Moscow Math. Soc., 73 (2012), 37–65 T. E. Panov, "Geometric structures on moment-angle manifolds", Russian Math. Surveys, 68:3 (2013), 503–568 A. M. Vershik, A. P. Veselov, A. A. Gaifullin, B. A. Dubrovin, A. B. Zhizhchenko, I. M. Krichever, A. A. Mal'tsev, D. V. Millionshchikov, S. P. Novikov, T. E. Panov, A. G. Sergeev, I. A. Taimanov, "Viktor Matveevich Buchstaber (on his 70th birthday)", Russian Math. Surveys, 68:3 (2013), 581–590 Hiroaki Ishida, Yukiko Fukukawa, Mikiya Masuda, "Topological toric manifolds", Mosc. Math. J., 13:1 (2013), 57–98 A. A. Aizenberg, "Substitutions of polytopes and of simplicial complexes, and multigraded Betti numbers", Trans. Moscow Math. Soc., 74 (2013), 175–202 Trans. Moscow Math. Soc., 74 (2013), 203–216 N. Yu. Erokhovets, "Buchstaber invariant theory of simplicial complexes and convex polytopes", Proc. Steklov Inst. Math., 286 (2014), 128–187 A. A. Aizenberg, M. Masuda, Seonjeong Park, Haozhi Zeng, "Toric origami structures on quasitoric manifolds", Proc. Steklov Inst. Math., 288 (2015), 10–28 V. M. Buchstaber, A. A. Kustarev, "Embedding theorems for quasi-toric manifolds given by combinatorial data", Izv. Math., 79:6 (2015), 1157–1183 Kuroki Sh., Masuda M., Yu L., "Small Covers, Infra-Solvmanifolds and Curvature", Forum Math., 27:5 (2015), 2981–3004 Darby A., "Torus Manifolds in Equivariant Complex Bordism", Topology Appl., 189 (2015), 31–64 G. D. Solomadin, Yu. M. Ustinovskiy, "Projective toric polynomial generators in the unitary cobordism ring", Sb. Math., 207:11 (2016), 1601–1624 Lu Zh., Wang W., "Examples of Quasitoric Manifolds as Special Unitary Manifolds", Math. Res. Lett., 23:5 (2016), 1453–1468 Lu Zh., Panov T., "On Toric Generators in the Unitary and Special Unitary Bordism Rings", Algebr. Geom. Topol., 16:5 (2016), 2865–2893 V. M. Buchstaber, N. Yu. Erokhovets, M. Masuda, T. E. Panov, S. Park, "Cohomological rigidity of manifolds defined by 3-dimensional polytopes", Russian Math. Surveys, 72:2 (2017), 199–256 Buchstaber V.M., Erokhovets N.Yu., "Fullerenes, Polytopes and Toric Topology", Combinatorial and Toric Homotopy: Introductory Lectures, Lecture Notes Series Institute For Mathematical Sciences National University of Singapore, 35, eds. Darby A., Grbic J., Lu Z., Wu J., World Scientific Publ Co Pte Ltd, 2018, 67–178 G. D. Solomadin, "Quasitoric Totally Normally Split Representatives in the Unitary Cobordism Ring", Math. Notes, 105:5 (2019), 763–780 I. Yu. Limonchenko, T. E. Panov, G.
Chernykh, "$SU$-bordism: structure results and geometric representatives", Russian Math. Surveys, 74:3 (2019), 461–524 Li X., Wang G., "A Moment-Angle Manifold Whose Cohomology Has Torsion", Homol. Homotopy Appl., 21:2 (2019), 199–212
CommonCrawl
Recent questions tagged transfer-function The loop transfer function of a negative feedback system is $G\left ( s \right )H\left ( s \right )=\frac{K(s+11)}{s(s+2)(s+8)}.$ The value of $K$, for which the system is marginally stable, is ___________. jothee asked in Network Solution Methods Feb 13, 2020 by jothee network-solution-methods transfer-function A system with transfer function $G\left ( s \right )=\dfrac{1}{\left ( s+1 \right )\left ( s+a \right )},\:\:a> 0$ is subjected to an input $5\cos 3t$. The steady state output of the system is $\dfrac{1}{\sqrt{10}}\cos\left ( 3t-1.892 \right )$. The value of $a$ is _______. The transfer function of a stable discrete-time $\text{LTI}$ system is $H\left ( z \right )=\dfrac{K\left ( z-\alpha \right )}{z+0.5}$, where $K$ and $\alpha$ are real numbers. The value of $\alpha$ (rounded off to one decimal place) with $|\alpha| > 1$, for which the magnitude response of the system is constant over all frequencies, is ___________. GATE ECE 2019 | Question: 5 Let $Y(s)$ be the unit-step response of a causal system having a transfer function $G(s)= \dfrac{3-s}{(s+1)(s+3)}$, that is, $Y(s)=\dfrac{G(s)}{s}.$ The forced response of the system is $u(t)-2e^{-t}u(t)+e^{-3t}u(t)$ $2u(t)-2e^{-t}u(t)+e^{-3t}u(t)$ $2u(t)$ $u(t)$ Arjun asked in Network Solution Methods Feb 12, 2019 by Arjun signals-and-systems Consider a causal second-order system with the transfer function $G(s)=\dfrac{1}{1+2s+s^{2}}$ with a unit-step $R(s)=\dfrac{1}{s}$ as an input. Let $C(s)$ be the corresponding output. The time taken by the system output $c(t)$ to reach $94\%$ of its ... value $\underset{t\rightarrow \infty}{\lim}\:c(t),$ rounded off to two decimal places, is $5.25$ $4.50$ $3.89$ $2.81$ The block diagram of a system is illustrated in the figure shown, where $X(s)$ is the input and $Y(s)$ is the output. The transfer function $H(s)=\dfrac{Y(s)}{X(s)}$ is $H(s)=\frac{s^{2}+1}{s^{3}+s^{2}+s+1}$ $H(s)=\frac{s^{2}+1}{s^{3}+2s^{2}+s+1}$ $H(s)=\frac{s+1}{s^{2}+s+1}$ $H(s)=\frac{s^{2}+1}{2s^{2}+1}$ For the unity feedback control system shown in the figure, the open-loop transfer function $G(s)$ is given as $G(s) = \frac{2}{s(s+1)}$ The steady state error $e_{ss}$ due to a unit step input is $0$ $0.5$ $1.0$ $\infty$ Milicevic3306 asked in Control Systems Mar 28, 2018 control-systems A signal $2 \cos(\frac{2\pi}{3}t)-\cos(\pi t)$ is the input to an LTI system with the transfer function $H(s)=e^s+e^{-s}.$ If $C_k$ denotes the $k^{th}$ coefficient in the exponential Fourier series of the output signal, then $C_3$ is equal to $0$ $1$ $2$ $3$ Milicevic3306 asked in Continuous-time Signals Mar 28, 2018 continuous-time-signals linear-time-invariant-systems The forward-path transfer function and the feedback-path transfer function of a single loop negative feedback control system are given as $G(s)=\frac{K(s+2)}{s^2+2s+2}\;\text{and}\hspace{0.3cm}H(s)=1,$ respectively. If the variable parameter $K$ is real positive, then the location of the breakaway point on the root locus diagram of the system is _________ Milicevic3306 asked in Network Solution Methods Mar 28, 2018 bode-and-root-locus-plots A continuous-time filter with transfer function $H\left ( s \right )= \frac{2s+6}{s^{2}+6s+8}$ ... sampled at $2$ Hz, is identical at the sampling instants to the impulse response of the discrete-time filter. The value of $k$ is _________ The open-loop transfer function of a unity-feedback control system is $G(s)= \frac{K}{s^2+5s+5}$.
The value of $K$ at the breakaway point of the feedback control system's root-locus plot is _________.

The open-loop transfer function of a unity feedback control system is given by $G(s)= \frac{K}{s(s+2)}$. For the peak overshoot of the closed-loop system to a unit step input to be $10 \%$, the value of $K$ is _________.

The transfer function of a linear time invariant system is given by $H(s) = 2s^4 - 5s^3 + 5s - 2$. The number of zeroes in the right half of the $s$-plane is _________.

The transfer function of a first-order controller is given as $G_{C}(s) = \dfrac{K(s+a)}{s+b}$, where $K$, $a$ and $b$ are positive real numbers. The condition for this controller to act as a phase lead compensator is $a<b$; $a>b$; $K<ab$; $K>ab$.

A network is described by the state model as $\dot{x_{1}}=2x_{1}-x_{2}+3u$, $\dot{x_{2}}=-4x_{2}-u$, $y=3x_{1}-2x_{2}$. The transfer function $H(s)\left(=\dfrac{Y(s)}{U(s)}\right)$ is $\dfrac{11s+35}{(s-2)(s+4)}$; $\dfrac{11s-35}{(s-2)(s+4)}$; $\dfrac{11s+38}{(s-2)(s+4)}$; $\dfrac{11s-38}{(s-2)(s+4)}$.

By performing cascading and/or summing/differencing operations using transfer function blocks $G_{1}(s)$ and $G_{2}(s),$ one CANNOT realize a transfer function of the form $G_{1}(s)G_{2}(s)$; $\dfrac{G_{1}(s)}{G_{2}(s)}$; $G_{1}(s)\left(\dfrac{1}{G_{1}(s)} + G_{2}(s)\right)$; $G_{1}(s)\left(\dfrac{1}{G_{1}(s)} - G_{2}(s)\right)$.

A unity negative feedback system has an open-loop transfer function $G(s) = \dfrac{K}{s(s+10)}$. The gain $K$ for the system to have a damping ratio of $0.25$ is ________.

The output of a standard second-order system for a unit step input is given as $y(t) = 1-\dfrac{2}{\sqrt{3}}e^{-t}\cos \left(\sqrt{3}t-\dfrac{\pi}{6}\right)$. The transfer function of the system is $\dfrac{2}{(s+2)(s+\sqrt{3})}$; $\dfrac{1}{s^{2}+2s+1}$; $\dfrac{3}{s^{2}+2s+3}$; $\dfrac{3}{s^{2}+2s+4}$.

The transfer function of a mass-spring-damper system is given by $G(s) = \dfrac{1}{Ms^{2}+Bs+K}$ ... The unit step response of the system approaches a steady state value of ________.

For the discrete-time system shown in the figure, the poles of the system transfer function are located at $2,3$; $\frac{1}{2},3$; $\frac{1}{2}, \frac{1}{3}$; $2, \frac{1}{3}$.

The open-loop transfer function of a plant in a unity feedback configuration is given as $G(s) = \frac{K(s+4)}{(s+8)(s^2-9)}$. The value of the gain $K(>0)$ for which $-1+j2$ lies on the root locus is _________.

A lead compensator network includes a parallel combination of $R$ and $C$ in the feed-forward path. If the transfer function of the compensator is $G_c(s)=\frac{s+2}{s+4}$, the value of $RC$ is ___________.

A plant transfer function is given as $G(s)= \bigg( K_p+ \frac{K_1}{s} \bigg) \frac{1}{s(s+2)}$. When the plant operates in a unity feedback configuration, the condition for the stability of the closed loop system is $K_p>\frac{K_1}{2}>0$; $2K_1>K_p>0$; $2K_1<K_p$; $2K_1>K_p$.

Consider a transfer function $G_p(s) = \frac{ps^2+3ps-2}{s^2+(3+p)s+(2-p)}$ with $p$ a positive real parameter. The maximum value of $p$ until which $G_p$ remains stable is ___________.

The characteristic equation of a unity negative feedback system is $1+KG(s)=0$. The open loop transfer function $G(s)$ has one pole at $0$ and two poles at $-1$. The root locus of the system for varying $K$ is shown in the figure. The constant damping ... point A. The distance from the origin to point A is given as $0.5$. The value of $K$ at point A is ________.

Consider the following block diagram in the figure.
The transfer function $\frac{C(s)}{R(s)}$ is $\frac{G_{1}G_{2}}{1+G_{1}G_{2}}$; $G_{1}G_{2}+G_{1}+1$; $G_{1}G_{2}+G_{2}+1$; $\frac{G_{1}}{1+G_{1}G_{2}}$.

The input $-3e^{2t}u(t),$ where $u(t)$ is the unit step function, is applied to a system with transfer function $\frac{s-2}{s+3}.$ If the initial value of the output is $-2$, then the value of the output at steady state is _______.

Consider the building block called 'Network N' shown in the figure. Let $C= 100\mu F$ and $R= 10 k \Omega.$ Two such blocks are connected in cascade, as shown in the figure. The transfer function $\frac{V_{3}(s)}{V_{1}(s)}$ of the cascaded network is $\frac{s}{1+s}$; $\frac{s^{2}}{1+3s+s^{2}}$; $\left ( \frac{s}{1+s} \right )^{2}$; $\frac{s}{2+s}$.

Let $h(t)$ denote the impulse response of a causal system with transfer function $\frac{1}{s+1}.$ Consider the following three statements. $S1$: The system is stable. $S2$: $\frac{h(t+1)}{h(t)}$ is independent of $t$ for $t > 0$. $S3$: A non-causal ... $S1$ and $S2$ are true only $S2$ and $S3$ are true only $S1$ and $S3$ are true $S1$, $S2$ and $S3$ are true.

For the following system, when $X_{1} (s) = 0$, the transfer function $\frac{Y(s)}{X_{2}(s)}$ is $\frac{s+1}{s^{2}}$; $\frac{1}{s+1}$; $\frac{s+2}{s(s+1)}$; $\frac{s+1}{s(s+2)}$.

The forward path transfer function of a unity negative feedback system is given by $G(s) = \frac{K}{(s+2)(s-1)}$. The value of $K$ which will place both the poles of the closed-loop system at the same location, is _______.

The signal flow graph for a system is given below. The transfer function $\dfrac{Y(s)}{U(s)}$ for this system is $\frac{s+1}{5s^{2}+6s+2}$; $\frac{s+1}{s^{2}+6s+2}$; $\frac{s+1}{s^{2}+4s+2}$; $\frac{1}{5s^{2}+6s+2}$. Tags: signal-flow-graph.

The open-loop transfer function of a dc motor is given as $\dfrac{\omega(s)}{V_{a}(s)} = \dfrac{10}{1+10s}.$ When connected in feedback as shown below, the approximate value of $K_{a}$ that will reduce the time constant of the closed loop system by one hundred times as compared to that of the open-loop system is $1$; $5$; $10$; $100$.

The transfer function of a compensator is given as $G_c(s)=\frac{s+a}{s+b}$. $G_c(s)$ is a lead compensator if $a=1,b=2$; $a=3,b=2$; $a=-3,b=-1$; $a=3,b=1$.

The transfer function of a compensator is given as $G_c(s)=\frac{s+a}{s+b}$. The phase of the above lead compensator is maximum at $\sqrt{2}$ rad/s; $\sqrt{3}$ rad/s; $\sqrt{6}$ rad/s; $\frac{1}{\sqrt{3}}$ rad/s.

A system with transfer function $G(s)=\frac{(s^2+9)(s+2)}{(s+1)(s+3)(s+4)}$ is excited by $\sin(\omega t)$. The steady-state output of the system is zero at $\omega=1\:rad/s$; $\omega=2\:rad/s$; $\omega=3\:rad/s$; $\omega=4\:rad/s$.

The figure below shows the Bode magnitude and phase plots of a stable transfer function $G\left ( s \right )=\dfrac{n_{0}}{s^{3}+d_{2}s^{2}+d_{1}s+d_{0}}.$ Consider the negative unity feedback configuration with gain $k$ in the feedforward path. The closed loop is stable for $k < k_{0}.$ The maximum value of $k_{0}$ is _________. asked in Network Solution Methods Feb 19, 2018 by gatecse.

For a unity feedback control system with the forward path transfer function $G\left ( s \right )=\dfrac{K}{s\left ( s+2 \right )}$, the peak resonant magnitude $M_{r}$ of the closed-loop frequency response is $2$. The corresponding value of the gain $K$ (correct to two decimal places) is _________.

The transfer function of a causal LTI system is $H(s)=1/s$.
If the input to the system is $x(t)=[\sin(t)/\pi t] u(t)$, where $u(t)$ is a unit step function, the system output $y(t)$ as $t\to \infty$ is ____________. asked in Control Systems Nov 25, 2017 by admin.

A linear time invariant (LTI) system with the transfer function $G(s)=\frac{K(s^{2}+2s+2)}{s^{2}-3s+2}$ is connected in unity feedback configuration as shown in the figure. For the closed loop system shown, the root locus for $0< K < \infty$ ... $K>1.5$; $1<K<1.5$; $0<K<1$; no positive value of $K$.
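As a worked illustration of how the first question in this list is typically solved (a sketch of the standard Routh-Hurwitz approach, not an official answer key): the closed-loop characteristic equation for $G(s)H(s)=\frac{K(s+11)}{s(s+2)(s+8)}$ is

$s(s+2)(s+8)+K(s+11)=0 \;\Longrightarrow\; s^{3}+10s^{2}+(16+K)s+11K=0.$

The $s^{1}$ row of the Routh array is $\frac{10(16+K)-11K}{10}=\frac{160-K}{10}$, which vanishes at $K=160$; the auxiliary equation $10s^{2}+11K=0$ then places a conjugate pole pair on the $j\omega$ axis, so the system is marginally stable at $K=160$.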
BCIT Astronomy 7000: A Survey of Astronomy

By the end of this section, you will be able to:
Give a brief history of how gamma-ray bursts were discovered and what instruments made the discovery possible
Explain why astronomers think that gamma-ray bursts beam their energy rather than radiating it uniformly in all directions
Describe how the radiation from a gamma-ray burst and its afterglow is produced
Explain how short-duration gamma-ray bursts differ from longer ones, and describe the process that makes short-duration gamma-ray bursts
Explain why gamma-ray bursts may help us understand the early universe

Everybody loves a good mystery, and astronomers are no exception. The mystery we will discuss in this section first arose in the mid-1960s, not through astronomical research, but as a result of a search for the tell-tale signs of nuclear weapon explosions. The US Defense Department launched a series of Vela satellites to make sure that no country was violating a treaty that banned the detonation of nuclear weapons in space. Since nuclear explosions produce gamma rays, the most energetic form of electromagnetic radiation (see Radiation and Spectra), the Vela satellites contained detectors to search for this type of radiation.
The satellites did not detect any confirmed events from human activities, but they did—to everyone's surprise—detect short bursts of gamma rays coming from random directions in the sky. News of the discovery was first published in 1973; however, the origin of the bursts remained a mystery. No one knew what produced the brief flashes of gamma rays or how far away the sources were.

From a Few Bursts to Thousands

With the launch of the Compton Gamma-Ray Observatory by NASA in 1991, astronomers began to identify many more bursts and to learn more about them ([link]). Approximately once per day, the NASA satellite detected a flash of gamma rays somewhere in the sky that lasted from a fraction of a second to several hundred seconds. Before the Compton measurements, astronomers had expected that the most likely place for the bursts to come from was the main disk of our own (pancake-shaped) Galaxy. If this had been the case, however, more bursts would have been seen in the crowded plane of the Milky Way than above or below it. Instead, the sources of the bursts were distributed isotropically; that is, they could appear anywhere in the sky with no preference for one region over another. Almost never did a second burst come from the same location.

Compton Detects Gamma-Ray Bursts. Figure 1. (a) In 1991, the Compton Gamma-Ray Observatory was deployed by the Space Shuttle Atlantis. Weighing more than 16 tons, it was one of the largest scientific payloads ever launched into space. (b) This map of gamma-ray burst positions measured by the Compton Gamma-Ray Observatory shows the isotropic (same in all directions), uniform distribution of bursts on the sky. The map is oriented so that the disk of the Milky Way would stretch across the center line (or equator) of the oval. Note that the bursts show no preference at all for the plane of the Milky Way, as many other types of objects in the sky do. Colors indicate the total energy in the burst: red dots indicate long-duration, bright bursts; blue and purple dots show short, weaker bursts. (credit a: modification of work by NASA; credit b: modification of work by NASA/GSFC)

To get a good visual sense of the degree to which the bursts come from all over the sky, watch this short animated NASA video showing the location of the first 500 bursts found by the later Swift satellite.

For several years, astronomers actively debated whether the burst sources were relatively nearby or very far away—the two possibilities for bursts that are isotropically distributed. Nearby locations might include the cloud of comets that surrounds the solar system or the halo of our Galaxy, which is large and spherical, and also surrounds us in all directions. If, on the other hand, the bursts occurred at very large distances, they could come from faraway galaxies, which are also distributed uniformly in all directions. Both the very local and the very distant hypotheses required something strange to be going on. If the bursts were coming from the cold outer reaches of our own solar system or from the halo of our Galaxy, then astronomers had to hypothesize some new kind of physical process that could produce unpredictable flashes of high-energy gamma rays in these otherwise-quiet regions of space. And if the bursts came from galaxies millions or billions of light-years away, then they must be extremely powerful to be observable at such large distances; indeed they had to be among the biggest explosions in the universe.
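To put a number on this argument (a rough, illustrative estimate; the flux value below is an assumed round figure, not a measurement quoted in this chapter): a source radiating uniformly in all directions that produces a flux $F$ at Earth must have luminosity $L=4\pi d^{2}F$. Taking $F\approx10^{-7}\:\mathrm{W/m^{2}}$ and $d=1$ billion light-years $\approx9.5\times10^{24}\:\mathrm{m}$ gives $L\approx4\pi\left(9.5\times10^{24}\right)^{2}\times10^{-7}\approx10^{44}\:\mathrm{W}$. Sustained for even ten seconds, that is about $10^{45}\:\mathrm{J}$, more than the roughly $10^{44}\:\mathrm{J}$ the Sun will radiate over its entire ten-billion-year lifetime. This inverse-square-law reasoning is what lies behind the claim that distant bursts must be among the biggest explosions in the universe.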
The First Afterglows

The problem with trying to figure out the source of the gamma-ray bursts was that our instruments for detecting gamma rays could not pinpoint the exact place in the sky where the burst was happening. Early gamma-ray telescopes did not have sufficient resolution. This was frustrating because astronomers suspected that if they could pinpoint the exact position of one of these rapid bursts, then they would be able to identify a counterpart (such as a star or galaxy) at other wavelengths and learn much more about the burst, including where it came from. This would, however, require either major improvements in gamma-ray detector technology to provide better resolution or detection of the burst at some other wavelength. In the end, both techniques played a role.

The breakthrough came with the launch of the Italian-Dutch BeppoSAX satellite in 1996. BeppoSAX included a new type of gamma-ray telescope capable of identifying the position of a source much more accurately than previous instruments, to within a few minutes of arc on the sky. By itself, however, it was still not sophisticated enough to determine the exact source of the gamma-ray burst. After all, a box a few minutes of arc on a side could still contain many stars or other celestial objects. However, the angular resolution of BeppoSAX was good enough to tell astronomers where to point other, more precise telescopes in the hopes of detecting longer-lived electromagnetic emission from the bursts at other wavelengths. Detection of a burst at visible-light or radio wavelengths could provide a position accurate to a few seconds of arc and allow the position to be pinpointed to an individual star or galaxy. BeppoSAX carried its own X-ray telescope onboard the spacecraft to look for such a counterpart, and astronomers using visible-light and radio facilities on the ground were eager to search those wavelengths as well.

Two crucial BeppoSAX burst observations in 1997 helped to resolve the mystery of the gamma-ray bursts. The first burst came in February from the direction of the constellation Orion. Within 8 hours, astronomers working with the satellite had identified the position of the burst, and reoriented the spacecraft to focus BeppoSAX's X-ray detector on the source. To their excitement, they detected a slowly fading X-ray source 8 hours after the event—the first successful detection of an afterglow from a gamma-ray burst. This provided an even better location of the burst (accurate to about 40 seconds of arc), which was then distributed to astronomers across the world to try to detect it at even longer wavelengths. That very night, the 4.2-meter William Herschel Telescope on the Canary Islands found a fading visible-light source at the same position as the X-ray afterglow, confirming that such an afterglow could be detected in visible light as well. Eventually, the afterglow faded away, but left behind at the location of the original gamma-ray burst was a faint, fuzzy source right where the fading point of light had been—a distant galaxy ([link]). This was the first piece of evidence that gamma-ray bursts were indeed very energetic objects from very far away. However, it also remained possible that the burst source was much closer to us and just happened to align with a more distant galaxy, so this one observation alone was not a conclusive demonstration of the extragalactic origin of gamma-ray bursts.

Gamma-Ray Burst. Figure 2.
This false-color Hubble Space Telescope image, taken in September 1997, shows the fading afterglow of the gamma-ray burst of February 28, 1997 and the host galaxy in which the burst originated. The left view shows the region of the burst. The enlargement shows the burst source and what appears to be its host galaxy. Note that the gamma-ray source is not in the center of the galaxy. (credit: modification of work by Andrew Fruchter (STScI), Elena Pian (ITSRE-CNR), and NASA, ESA)

On May 8 of the same year, a burst came from the direction of the constellation Camelopardalis. In a coordinated international effort, BeppoSAX again fixed a reasonably precise position, and almost immediately a telescope on Kitt Peak in Arizona was able to catch the visible-light afterglow. Within 2 days, the largest telescope in the world (the Keck in Hawaii) collected enough light to record a spectrum of the burst. The May gamma-ray burst afterglow spectrum showed absorption features from a fuzzy object that was 4 billion light-years from the Sun, meaning that the location of the burst had to be at least this far away—and possibly even farther. (How astronomers can get the distance of such an object from the Doppler shift in the spectrum is something we will discuss in Galaxies.) What that spectrum showed was clear evidence that the gamma-ray burst had taken place in a distant galaxy.

Networking to Catch More Bursts

After initial observations showed that the precise locations and afterglows of gamma-ray bursts could be found, astronomers set up a system to catch and pinpoint bursts on a regular basis. But to respond as quickly as needed to obtain usable results, astronomers realized that they needed to rely on automated systems rather than human observers happening to be in the right place at the right time. Now, when an orbiting high-energy telescope discovers a burst, its rough location is immediately transmitted to a Gamma-Ray Coordinates Network based at NASA's Goddard Space Flight Center, alerting observers on the ground within a few seconds to look for the visible-light afterglow.

The first major success with this system was achieved by a team of astronomers from the University of Michigan, Lawrence Livermore National Laboratory, and Los Alamos National Laboratory, who designed an automated device they called the Robotic Optical Transient Search Experiment (ROTSE), which detected a very bright visible-light counterpart in 1999. At peak, the burst was almost as bright as Neptune—despite a distance (measured later by spectra from larger telescopes) of 9 billion light-years.

More recently, astronomers have been able to take this a step further, using wide-field-of-view telescopes to stare at large fractions of the sky in the hope that a gamma-ray burst will occur at the right place and time, and be recorded by the telescope's camera. These wide-field telescopes are not sensitive to faint sources, but ROTSE showed that gamma-ray burst afterglows could sometimes be very bright. Astronomers' hopes were vindicated in March 2008, when an extremely bright gamma-ray burst occurred and its light was captured by two wide-field camera systems in Chile: the Polish "Pi of the Sky" and the Russian-Italian TORTORA [Telescopio Ottimizzato per la Ricerca dei Transienti Ottici Rapidi (Italian for Telescope Optimized for the Research of Rapid Optical Transients)] (see [link]).
According to the data taken by these telescopes, for a period of about 30 seconds, the light from the gamma-ray burst was bright enough that it could have been seen by the unaided eye had a person been looking in the right place at the right time. Adding to our amazement, later observations by larger telescopes demonstrated that the burst occurred at a distance of 8 billion light-years from Earth!

Gamma-Ray Burst Observed in March 2008. Figure 3. The extremely luminous afterglow of GRB 080319B was imaged by the Swift Observatory in X-rays (left) and visible light/ultraviolet (right). (credit: modification of work by NASA/Swift/Stefan Immler, et al.)

To Beam or Not to Beam

The enormous distances to these events meant they had to have been astoundingly energetic to appear as bright as they did from so far away. In fact, they required so much energy that it posed a problem for gamma-ray burst models: if the source was radiating energy in all directions, then the energy released in gamma rays alone during a bright burst (such as the 1999 or 2008 events) would have been equivalent to the energy produced if the entire mass of a Sun-like star were suddenly converted into pure radiation. For a source to produce this much energy this quickly (in a burst) is a real challenge. Even if the star producing the gamma-ray burst was much more massive than the Sun (as is probably the case), there is no known means of converting so much mass into radiation within a matter of seconds.

However, there is one way to reduce the power required of the "mechanism" that makes gamma-ray bursts. So far, our discussion has assumed that the source of the gamma rays gives off the same amount of energy in all directions, like an incandescent light bulb. But as we discuss in Pulsars and the Discovery of Neutron Stars, not all sources of radiation in the universe are like this. Some produce thin beams of radiation that are concentrated into only one or two directions. A laser pointer and a lighthouse on the ocean are examples of such beamed sources on Earth ([link]). If, when a burst occurs, the gamma rays come out in only one or two narrow beams, then our estimates of the luminosity of the source can be reduced, and the bursts may be easier to explain. In that case, however, the beam has to point toward Earth for us to be able to see the burst. This, in turn, would imply that for every burst we see from Earth, there are probably many others that we never detect because their beams point in other directions.

Burst That Is Beamed. Figure 4. This artist's conception shows an illustration of one kind of gamma-ray burst. The collapse of the core of a massive star into a black hole has produced two bright beams of light originating from the star's poles, which an observer pointed along one of these axes would see as a gamma-ray burst. The hot blue stars and gas clouds in the vicinity are meant to show that the event happened in an active star-forming region. (credit: NASA/Swift/Mary Pat Hrybyk-Keith and John Jones)

Long-Duration Gamma-Ray Bursts: Exploding Stars

After identifying and following large numbers of gamma-ray bursts, astronomers began to piece together clues about what kind of event produces a gamma-ray burst. Or, rather, what kinds of events, because there are at least two distinct types of gamma-ray bursts. The two—like the different types of supernovae—are produced in completely different ways. Observationally, the crucial distinction is how long the burst lasts.
Astronomers now divide gamma-ray bursts into two categories: short-duration ones (defined as lasting less than 2 seconds, but typically a fraction of a second) and long-duration ones (defined as lasting more than 2 seconds, but typically about a minute). All of the examples we have discussed so far concern the long-duration gamma-ray bursts. These constitute most of the gamma-ray bursts that our satellites detect, and they are also brighter and easier to pinpoint. Many hundreds of long-duration gamma-ray bursts, and the properties of the galaxies in which they occurred, have now been studied in detail.

Long-duration gamma-ray bursts are universally observed to come from distant galaxies that are still actively making stars. They are usually found to be located in regions of the galaxy with strong star-formation activity (such as spiral arms). Recall that the more massive a star is, the less time it spends in each stage of its life. This suggests that the bursts come from a young, short-lived, and therefore massive type of star. Furthermore, in several cases when a burst has occurred in a galaxy relatively close to Earth (within a few billion light-years), it has been possible to search for a supernova at the same position—and in nearly all of these cases, astronomers have found evidence of a supernova of type Ic going off. A type Ic is a particular type of supernova, which we did not discuss in the earlier parts of this chapter; these are produced by a massive star that has been stripped of its outer hydrogen layer. However, only a tiny fraction of type Ic supernovae produce gamma-ray bursts.

Why would a massive star with its outer layers missing sometimes produce a gamma-ray burst at the same time that it explodes as a supernova? The explanation astronomers have in mind for the extra energy is the collapse of the star's core to form a spinning, magnetic black hole or neutron star. Because the star corpse is both magnetic and spinning rapidly, its sudden collapse is complex and can produce swirling jets of particles and powerful beams of radiation—just like in a quasar or active galactic nucleus (objects you will learn about in Active Galaxies, Quasars, and Supermassive Black Holes), but on a much faster timescale. A small amount of the infalling mass is ejected in a narrow beam, moving at speeds close to that of light. Collisions among the particles in the beam can produce intense bursts of energy that we see as a gamma-ray burst.

Within a few minutes, the expanding blast from the fireball plows into the interstellar matter in the dying star's neighborhood. This matter might have been ejected from the star itself at earlier stages in its evolution. Alternatively, it could be the gas out of which the massive star and its neighbors formed. As the high-speed particles from the blast are slowed, they transfer their energy to the surrounding matter in the form of a shock wave. That shocked material emits radiation at longer wavelengths. This accounts for the afterglow of X-rays, visible light, and radio waves—the glow comes at longer and longer wavelengths as the blast continues to lose energy.

Short-Duration Gamma-Ray Bursts: Colliding Stellar Corpses

What about the shorter gamma-ray bursts? The gamma-ray emission from these events lasts less than 2 seconds, and in some cases may last only milliseconds—an amazingly short time.
Such a timescale is difficult to achieve if they are produced in the same way as long-duration gamma-ray bursts, since the collapse of the stellar interior onto the black hole should take at least a few seconds. Astronomers looked fruitlessly for afterglows from short-duration gamma-ray bursts found by BeppoSAX and other satellites. Evidently, the afterglows fade away too quickly. Fast-responding visible-light telescopes like ROTSE were not helpful either: no matter how fast these telescopes responded, the bursts were not bright enough at visible wavelengths to be detected by these small telescopes.

Once again, it took a new satellite to clear up the mystery. In this case, it was the Swift Gamma-Ray Burst Satellite, launched in 2004 by a collaboration between NASA and the Italian and UK space agencies ([link]). The design of Swift is similar to that of BeppoSAX. However, Swift is much more agile and flexible: after a gamma-ray burst occurs, the X-ray and UV telescopes can be repointed automatically within a few minutes (rather than a few hours). Thus, astronomers can observe the afterglow much earlier, when it is expected to be much brighter. Furthermore, the X-ray telescope is far more sensitive and can provide positions that are 30 times more precise than those provided by BeppoSAX, allowing bursts to be identified even without visible-light or radio observations.

Artist's Illustration of Swift. Figure 5. The US/UK/Italian spacecraft Swift contains on-board gamma-ray, X-ray, and ultraviolet detectors, and has the ability to automatically reorient itself to a gamma-ray burst detected by the gamma-ray instrument. Since its launch in 2004, Swift has detected and observed over a thousand bursts, including dozens of short-duration bursts. (credit: NASA, Spectrum Astro)

On May 9, 2005, Swift detected a flash of gamma rays lasting 0.13 seconds, originating from the constellation Coma Berenices. Remarkably, the galaxy at the X-ray position looked completely different from any galaxy in which a long-duration burst had been seen to occur. The afterglow originated from the halo of a giant elliptical galaxy 2.7 billion light-years away, with no signs of any young, massive stars in its spectrum. Furthermore, no supernova was ever detected after the burst, despite extensive searching.

What could produce a burst less than a second long, originating from a region with no star formation? The leading model involves the merger of two compact stellar corpses: two neutron stars, or perhaps a neutron star and a black hole. Since many stars come in binary or multiple systems, it's possible to have systems where two such star corpses orbit one another. According to general relativity (which will be discussed in Black Holes and Curved Spacetime), the orbits of a binary star system composed of such objects should slowly decay with time, eventually (after millions or billions of years) causing the two objects to slam together in a violent but brief explosion. Because the decay of the binary orbit is so slow, we would expect more of these mergers to occur in old galaxies in which star formation has long since stopped.

To learn more about the merger of two neutron stars and how they can produce a burst that lasts less than a second, check out this computer simulation by NASA.
While it was impossible to be sure of this model based on only a single event (it is possible this burst actually came from a background galaxy and lined up with the giant elliptical only by chance), several dozen more short-duration gamma-ray bursts have since been located by Swift, many of which also originate from galaxies with very low star-formation rates. This has given astronomers greater confidence that this model is the correct one. Still, to be fully convinced, astronomers are searching for a "smoking gun" signature for the merger of two ultra-dense stellar remnants.

There are two examples we can think of that would provide more direct evidence. One is a very special kind of explosion, produced when neutrons stripped from the neutron stars during the violent final phase of the merger fuse together into heavy elements and then release heat due to radioactivity, producing a short-lived red explosion sometimes called a kilonova. (The term is used because it is about a thousand times brighter than an ordinary nova, but not quite as "super" as a traditional supernova.) Hubble observations of one short-duration gamma-ray burst in 2013 show suggestive evidence of such a signature, but this needs to be confirmed by future observations.

The second "smoking gun" has been even more exciting to see: the detection of gravitational waves. As will be discussed in Black Holes and Curved Spacetime, gravitational waves are ripples in the fabric of spacetime that general relativity predicts should be produced by the acceleration of extremely massive and dense objects—such as two neutron stars or black holes spiraling toward each other and colliding. The first gravitational waves were recently observed, coming from the merger of two large black holes. If a gravitational wave is observed one day to be coincident in time and space with a gamma-ray burst, it would not only confirm our theories of the origin of short gamma-ray bursts but would also be among the most spectacular demonstrations yet of Einstein's theory of general relativity.

Probing the Universe with Gamma-Ray Bursts

The story of how astronomers came to explain the origin of the different kinds of bursts is a good example of how the scientific process sometimes resembles good detective work. While the mystery of short-duration gamma-ray bursts is still being unraveled, the focus of studies for long-duration gamma-ray bursts has begun to change from understanding the origin of the bursts themselves (which is now fairly well-established) to using them as tools to understand the broader universe.

The reason that long-duration gamma-ray bursts are useful has to do with their extreme luminosities, if only for a short time. In fact, long-duration gamma-ray bursts are so bright that they could easily be seen at distances that correspond to a few hundred million years after the expansion of the universe began, which is when theorists think that the first generation of stars formed. Some theories predict that the first stars are likely to be massive and complete their evolution in only a million years or so. If this turns out to be the case, then gamma-ray bursts (which signal the death of some of these stars) may provide us with the best way of probing the universe when stars and galaxies first began to form. So far, the most distant gamma-ray burst found (on April 29, 2009) originated a remarkable 13.2 billion light-years away—meaning it happened only 600 million years after the Big Bang itself.
This is comparable to the earliest and most distant galaxies found by the Hubble Space Telescope. It is not quite old enough to expect that it formed from the first generation of stars, but its appearance at this distance still gives us useful information about the production of stars in the early universe. Astronomers continue to scan the skies, looking for even more distant events signaling the deaths of stars from even further back in time.

Key Concepts and Summary

Gamma-ray bursts last from a fraction of a second to a few minutes. They come from all directions and are now known to be associated with very distant objects. The energy is most likely beamed, and, for the ones we can detect, Earth lies in the direction of the beam. Long-duration bursts (lasting more than a few seconds) come from massive stars with their outer hydrogen layers missing that explode as supernovae. Short-duration bursts are believed to be mergers of stellar corpses (neutron stars or black holes).

For Further Exploration

Death of Stars

Hillebrandt, W., et al. "How To Blow Up a Star." Scientific American (October 2006): 42. On supernova mechanisms.
Irion, R. "Pursuing the Most Extreme Stars." Astronomy (January 1999): 48. On pulsars.
Kalirai, J. "New Light on Our Sun's Fate." Astronomy (February 2014): 44. What will happen to stars like our Sun between the main sequence and the white dwarf stages.
Kirshner, R. "Supernova 1987A: The First Ten Years." Sky & Telescope (February 1997): 35.
Maurer, S. "Taking the Pulse of Neutron Stars." Sky & Telescope (August 2001): 32. Review of recent ideas and observations of pulsars.
Zimmerman, R. "Into the Maelstrom." Astronomy (November 1998): 44. About the Crab Nebula.

Gamma-Ray Bursts

Fox, D. & Racusin, J. "The Brightest Burst." Sky & Telescope (January 2009): 34. Nice summary of the brightest burst observed so far, and what we have learned from it.
Nadis, S. "Do Cosmic Flashes Reveal Secrets of the Infant Universe?" Astronomy (June 2008): 34. On different types of gamma-ray bursts and what we can learn from them.
Naeye, R. "Dissecting the Bursts of Doom." Sky & Telescope (August 2006): 30. Excellent review of gamma-ray bursts—how we discovered them, what they might be, and what they can be used for in probing the universe.
Zimmerman, R. "Speed Matters." Astronomy (May 2000): 36. On the quick-alert networks for finding afterglows.
Zimmerman, R. "Witness to Cosmic Collisions." Astronomy (July 2006): 44. On the Swift mission and what it is teaching astronomers about gamma-ray bursts.

Crab Nebula: http://chandra.harvard.edu/xray_sources/crab/crab.html. A short, colorfully written introduction to the history and science involving the best-known supernova remnant.
Introduction to Neutron Stars: https://www.astro.umd.edu/~miller/nstar.html. Coleman Miller of the University of Maryland maintains this site, which goes from easy to hard as you get into it, but it has lots of good information about corpses of massive stars.
Introduction to Pulsars (by Maryam Hobbs at the Australia National Telescope Facility): http://www.atnf.csiro.au/outreach/education/everyone/pulsars/index.html.
Magnetars, Soft Gamma Repeaters, and Very Strong Magnetic Fields: http://solomon.as.utexas.edu/magnetar.html. Robert Duncan, one of the originators of the idea of magnetars, assembled this site some years ago.
Brief Intro to Gamma-Ray Bursts (from PBS' Seeing in the Dark): http://www.pbs.org/seeinginthedark/astronomy-topics/gamma-ray-bursts.html.
Discovery of Gamma-ray Bursts: http://science.nasa.gov/science-news/science-at-nasa/1997/ast19sep97_2/.
Gamma-Ray Bursts: Introduction to a Mystery (at NASA's Imagine the Universe site): http://imagine.gsfc.nasa.gov/docs/science/know_l1/bursts.html.
Introduction from the Swift Satellite Site: http://swift.sonoma.edu/about_swift/grbs.html.
Missions to Detect and Learn More about Gamma-ray Bursts:
Fermi Space Telescope: http://fermi.gsfc.nasa.gov/public/.
INTEGRAL Spacecraft: http://www.esa.int/science/integral.
SWIFT Spacecraft: http://swift.sonoma.edu/.
BBC interview with Antony Hewish: http://www.bbc.co.uk/archive/scientists/10608.shtml (40:54).
Black Widow Pulsars: The Vengeful Corpses of Stars: https://www.youtube.com/watch?v=Fn-3G_N0hy4. A public talk in the Silicon Valley Astronomy Lecture Series by Dr. Roger Romani (Stanford University) (1:01:47).
Hubblecast 64: It all ends with a bang!: http://www.spacetelescope.org/videos/hubblecast64a/. HubbleCast Program introducing Supernovae with Dr. Joe Liske (9:48).
Space Movie Reveals Shocking Secrets of the Crab Pulsar: http://hubblesite.org/newscenter/archive/releases/2002/24/video/c/. A sequence of Hubble and Chandra Space Telescope images of the central regions of the Crab Nebula have been assembled into a very brief movie accompanied by animation showing how the pulsar affects its environment; it comes with some useful background material (40:06).
Gamma-Ray Bursts: The Biggest Explosions Since the Big Bang!: https://www.youtube.com/watch?v=ePo_EdgV764. Edo Berger in a popular-level lecture at Harvard (58:50).
Gamma-Ray Bursts: Flashes in the Sky: https://www.youtube.com/watch?v=23EhcAP3O8Q. American Museum of Natural History Science Bulletin on the Swift satellite (5:59).
Overview Animation of Gamma-Ray Burst: http://news.psu.edu/video/296729/2013/11/27/overview-animation-gamma-ray-burst. Brief animation of what causes a long-duration gamma-ray burst (0:55).

Collaborative Group Activities

Someone in your group uses a large telescope to observe an expanding shell of gas. Discuss what measurements you could make to determine whether you have discovered a planetary nebula or the remnant of a supernova explosion.
The star Sirius (the brightest star in our northern skies) has a white-dwarf companion. Sirius has a mass of about $2\,M_{\text{Sun}}$ and is still on the main sequence, while its companion is already a star corpse. Remember that a white dwarf can't have a mass greater than $1.4\,M_{\text{Sun}}$. Assuming that the two stars formed at the same time, your group should discuss how Sirius could have a white-dwarf companion. Hint: Was the initial mass of the white-dwarf star larger or smaller than that of Sirius?
Discuss with your group what people today would do if a brilliant star suddenly became visible during the daytime. What kind of fear and superstition might result from a supernova that was really bright in our skies? Have your group invent some headlines that the tabloid newspapers and the less responsible web news outlets would feature.
Suppose a supernova exploded only 40 light-years from Earth. Have your group discuss what effects there may be on Earth when the radiation reaches us and later when the particles reach us. Would there be any way to protect people from the supernova effects?
When pulsars were discovered, the astronomers involved with the discovery talked about finding "little green men."
If you had been in their shoes, what tests would you have performed to see whether such a pulsating source of radio waves was natural or the result of an alien intelligence? Today, several groups around the world are actively searching for possible radio signals from intelligent civilizations. How might you expect such signals to differ from pulsar signals?
Your little brother, who has not had the benefit of an astronomy course, reads about white dwarfs and neutron stars in a magazine and decides it would be fun to go near them or even try to land on them. Is this a good idea for future tourism? Have your group make a list of reasons it would not be safe for children (or adults) to go near a white dwarf and a neutron star.
A lot of astronomers' time and many instruments have been devoted to figuring out the nature of gamma-ray bursts. Does your group share the excitement that astronomers feel about these mysterious high-energy events? What are some reasons that people outside of astronomy might care about learning about gamma-ray bursts?

Review Questions

1: How does a white dwarf differ from a neutron star? How does each form? What keeps each from collapsing under its own weight?
2: Describe the evolution of a star with a mass like that of the Sun, from the main-sequence phase of its evolution until it becomes a white dwarf.
3: Describe the evolution of a massive star (say, 20 times the mass of the Sun) up to the point at which it becomes a supernova. How does the evolution of a massive star differ from that of the Sun? Why?
4: How do the two types of supernovae discussed in this chapter differ? What kind of star gives rise to each type?
5: A star begins its life with a mass of $5\,M_{\text{Sun}}$ but ends its life as a white dwarf with a mass of $0.8\,M_{\text{Sun}}$. List the stages in the star's life during which it most likely lost some of the mass it started with. How did mass loss occur in each stage?
6: If the formation of a neutron star leads to a supernova explosion, explain why only three of the hundreds of known pulsars are found in supernova remnants.
7: How can the Crab Nebula shine with the energy of something like 100,000 Suns when the star that formed the nebula exploded almost 1000 years ago? Who "pays the bills" for much of the radiation we see coming from the nebula?
8: How is a nova different from a type Ia supernova? How does it differ from a type II supernova?
9: Apart from the masses, how are binary systems with a neutron star different from binary systems with a white dwarf?
10: What observations from SN 1987A helped confirm theories about supernovae?
11: Describe the evolution of a white dwarf over time, in particular how the luminosity, temperature, and radius change.
12: Describe the evolution of a pulsar over time, in particular how the rotation and pulse signal changes over time.
13: How would a white dwarf that formed from a star that had an initial mass of $1\,M_{\text{Sun}}$ be different from a white dwarf that formed from a star that had an initial mass of $9\,M_{\text{Sun}}$?
14: What do astronomers think are the causes of longer-duration gamma-ray bursts and shorter-duration gamma-ray bursts?
15: How did astronomers finally solve the mystery of what gamma-ray bursts were? What instruments were required to find the solution?

Thought Questions

16: Arrange the following stars in order of their evolution:
A star with no nuclear reactions going on in the core, which is made primarily of carbon and oxygen.
A star of uniform composition from center to surface; it contains hydrogen but has no nuclear reactions going on in the core.
A star that is fusing hydrogen to form helium in its core.
A star that is fusing helium to carbon in the core and hydrogen to helium in a shell around the core.
A star that has no nuclear reactions going on in the core but is fusing hydrogen to form helium in a shell around the core.
17: Would you expect to find any white dwarfs in the Orion Nebula? (See The Birth of Stars and the Discovery of Planets outside the Solar System to remind yourself of its characteristics.) Why or why not?
18: Suppose no stars more massive than about $2\,M_{\text{Sun}}$ had ever formed. Would life as we know it have been able to develop? Why or why not?
19: Would you be more likely to observe a type II supernova (the explosion of a massive star) in a globular cluster or in an open cluster? Why?
20: Astronomers believe there are something like 100 million neutron stars in the Galaxy, yet we have only found about 2000 pulsars in the Milky Way. Give several reasons these numbers are so different. Explain each reason.
21: Would you expect to observe every supernova in our own Galaxy? Why or why not?
22: The Large Magellanic Cloud has about one-tenth the number of stars found in our own Galaxy. Suppose the mix of high- and low-mass stars is exactly the same in both galaxies. Approximately how often does a supernova occur in the Large Magellanic Cloud?
23: Look at the list of the nearest stars in Appendix I. Would you expect any of these to become supernovae? Why or why not?
24: If most stars become white dwarfs at the ends of their lives and the formation of white dwarfs is accompanied by the production of a planetary nebula, why are there more white dwarfs than planetary nebulae in the Galaxy?
25: If a 3 and an $8\,M_{\text{Sun}}$ star formed together in a binary system, which star would: Evolve off the main sequence first? Form a carbon- and oxygen-rich white dwarf? Be the location for a nova explosion?
26: You have discovered two star clusters. The first cluster contains mainly main-sequence stars, along with some red giant stars and a few white dwarfs. The second cluster also contains mainly main-sequence stars, along with some red giant stars, and a few neutron stars—but no white dwarf stars. What are the relative ages of the clusters? How did you determine your answer?
27: A supernova remnant was recently discovered and found to be approximately 150 years old. Provide possible reasons that this supernova explosion escaped detection.
28: Based upon the evolution of stars, place the following elements in order of least to most common in the Galaxy: gold, carbon, neon. What aspects of stellar evolution formed the basis for how you ordered the elements?
29: What observations or types of telescopes would you use to distinguish a binary system that includes a main-sequence star and a white dwarf star from one containing a main-sequence star and a neutron star?
30: How would the spectra of a type II supernova be different from a type Ia supernova? Hint: Consider the characteristics of the objects that are their source.

Figuring for Yourself

31: The ring around SN 1987A ([link]) initially became illuminated when energetic photons from the supernova interacted with the material in the ring. The radius of the ring is approximately 0.75 light-year from the supernova location. How long after the supernova did the ring become illuminated?
32: What is the acceleration of gravity (g) at the surface of the Sun?
(See Appendix E for the Sun's key characteristics.) How much greater is this than g at the surface of Earth? Calculate what you would weigh on the surface of the Sun. Your weight would be your Earth weight multiplied by the ratio of the acceleration of gravity on the Sun to the acceleration of gravity on Earth. (Okay, we know that the Sun does not have a solid surface to stand on and that you would be vaporized if you were at the Sun's photosphere. Humor us for the sake of doing these calculations.)
33: What is the escape velocity from the Sun? How much greater is it than the escape velocity from Earth?
34: What is the average density of the Sun? How does it compare to the average density of Earth?
35: Say that a particular white dwarf has the mass of the Sun but the radius of Earth. What is the acceleration of gravity at the surface of the white dwarf? How much greater is this than g at the surface of Earth? What would you weigh at the surface of the white dwarf (again granting us the dubious notion that you could survive there)?
36: What is the escape velocity from the white dwarf in [link]? How much greater is it than the escape velocity from Earth?
37: What is the average density of the white dwarf in [link]? How does it compare to the average density of Earth?
38: Now take a neutron star that has twice the mass of the Sun but a radius of 10 km. What is the acceleration of gravity at the surface of the neutron star? How much greater is this than g at the surface of Earth? What would you weigh at the surface of the neutron star (provided you could somehow not become a puddle of protoplasm)?
39: What is the escape velocity from the neutron star in [link]? How much greater is it than the escape velocity from Earth?
40: What is the average density of the neutron star in [link]? How does it compare to the average density of Earth?
41: One way to calculate the radius of a star is to use its luminosity and temperature and assume that the star radiates approximately like a blackbody. Astronomers have measured the characteristics of central stars of planetary nebulae and have found that a typical central star is 16 times as luminous and 20 times as hot (about 110,000 K) as the Sun. Find the radius in terms of the Sun's. How does this radius compare with that of a typical white dwarf?
42: According to a model described in the text, a neutron star has a radius of about 10 km. Assume that the pulses occur once per rotation. According to Einstein's theory of relativity, nothing can move faster than the speed of light. Check to make sure that this pulsar model does not violate relativity. Calculate the rotation speed of the Crab Nebula pulsar at its equator, given its period of 0.033 s. (Remember that distance equals velocity × time and that the circumference of a circle is given by 2πR.)
43: Do the same calculations as in [link] but for a pulsar that rotates 1000 times per second.
44: If the Sun were replaced by a white dwarf with a surface temperature of 10,000 K and a radius equal to Earth's, how would its luminosity compare to that of the Sun?
45: A supernova can eject material at a velocity of 10,000 km/s. How long would it take a supernova remnant to expand to a radius of 1 AU? How long would it take to expand to a radius of 1 light-year?
Assume that the expansion velocity remains constant and use the relationship: $\text{expansion time}=\frac{\text{distance}}{\text{expansion velocity}}.$
46: A supernova remnant was observed in 2007 to be expanding at a velocity of 14,000 km/s and had a radius of 6.5 light-years. Assuming a constant expansion velocity, in what year did this supernova occur?
47: The ring around SN 1987A ([link]) started interacting with material propelled by the shockwave from the supernova beginning in 1997 (10 years after the explosion). The radius of the ring is approximately 0.75 light-year from the supernova location. How fast is the supernova material moving, assuming a constant rate of motion, in km/s?
48: Before the star that became SN 1987A exploded, it evolved from a red supergiant to a blue supergiant while remaining at the same luminosity. As a red supergiant, its surface temperature would have been approximately 4000 K, while as a blue supergiant, its surface temperature was 16,000 K. How much did the radius change as it evolved from a red to a blue supergiant?
49: What is the radius of the progenitor star that became SN 1987A? Its luminosity was 100,000 times that of the Sun, and it had a surface temperature of 16,000 K.
50: What is the acceleration of gravity at the surface of the star that became SN 1987A? How does this g compare to that at the surface of Earth? The mass was 20 times that of the Sun and the radius was 41 times that of the Sun.
51: What was the escape velocity from the surface of the SN 1987A progenitor star? How much greater is it than the escape velocity from Earth? The mass was 20 times that of the Sun and the radius was 41 times that of the Sun.
52: What was the average density of the star that became SN 1987A? How does it compare to the average density of Earth? The mass was 20 times that of the Sun and the radius was 41 times that of the Sun.
53: If the pulsar shown in [link] is rotating 100 times per second, how many pulses would be detected in one minute? The two beams are located along the pulsar's equator, which is aligned with Earth.
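Exercises 42 and 43 come down to a one-line calculation; the following small Python sketch works it out (the 10-km radius and the two periods are the values given in the exercises; the code itself is illustrative and not part of the original text).

import math

c = 299_792.458   # speed of light in km/s
R = 10.0          # neutron-star radius in km, from exercise 42

# One rotation carries an equatorial point around the circumference
# 2*pi*R in one period P, so its speed is v = 2*pi*R / P.
for P in (0.033, 0.001):   # Crab pulsar; a 1000-rotations-per-second pulsar
    v = 2 * math.pi * R / P
    print(f"P = {P} s: v = {v:,.0f} km/s = {v / c:.4f} c")

Both speeds come out well below the speed of light (about 0.006c for the Crab pulsar and about 0.21c for the faster one), so the rotating 10-km neutron-star model does not conflict with relativity.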
How does an NTM reject an input? / How does a TM simulate a particular NTM?

I understand that an NTM can only say "yes": with a single execution it cannot tell whether the input is a NO instance. Furthermore, an NTM can diverge and, at the same time, decide a language (source: Papadimitriou, Computational Complexity, Proposition 7.1), while a TM must converge to a "yes" or "no" answer to decide the same language. Now, we know that for an NTM that decides a language L we can create a TM that decides L (Theorem 2.6). But suppose that the NTM diverges on some computation. How can the TM handle this case?

[EDIT] Definition (source: Papadimitriou, p. 45): A nondeterministic Turing machine decides a language L if, for any input x, the following is true: x belongs to L iff there is an accepting path. This definition admits machines that, instead of rejecting an input, diverge, and that at the same time decide a language.

Theorem 2.6 (p. 47): Suppose that language L is decided by a nondeterministic Turing machine N. Then it is decided by a deterministic Turing machine M.

Proposition 7.1 (p. 141): Suppose that a (deterministic or nondeterministic) Turing machine M decides a language L within time (or space) f(n), where f is a proper function. Then there is a precise Turing machine M', which decides the same language in time (or space, respectively) O(f(n)).

This last proposition confirms that there is a Turing machine that decides a language and that diverges on some path.

complexity-theory
edited Sep 2 '18 at 8:04
Daniele Cuomo

If we're only interested in computability, and not in the resources used by the computation (i.e., complexity), then we can simulate an NTM as follows. First, simulate every possible sequence of nondeterministic choices for one step. Then, start again and do them all for two steps. Then, start again and do them all for three steps, and so on. There are three possible results of this simulation. If any of the simulated computations accepts (so, in particular, it terminates), then we accept. In this case, it doesn't matter if some other computations would diverge. If no computation accepts but, after some number of steps, all computations have rejected, we reject. If no computation accepts and at least one computation diverges, then the simulation will also diverge.

This is a faithful simulation of the nondeterministic machine. If some computation path accepts, we accept, and this is the definition of acceptance for nondeterministic machines. If every computation path halts and rejects, we reject, as we should. Otherwise, we diverge, as does the nondeterministic machine.

However, the case you're looking at does include resource bounds. Papadimitriou is essentially comparing two different notions of the resources used by an NTM and showing that they're both the same. I'll talk about time for concreteness, but the same arguments apply to space. One way of defining $\mathrm{NTIME}(f(n))$ would be to say that every possible computation path must run within $f(n)$ steps, and the other would be to say that only the accepting paths need to. However, because we're only interested in constructible functions $f$, we can use the following idea to make the two notions identical. First, the machine deterministically computes $f(n)$ and then it does the nondeterministic computation, counting off the steps. If the step counter reaches $f(n)$, and the machine hasn't accepted, we must be either on a rejecting path or a diverging one.
In either case, the path won't accept, so we can reject. Therefore, any language that's accepted by an NTM where every accepting path uses at most f(n) steps is also decided by an NTM where every path uses at most f(n) steps and no path diverges. Technically, I've only shown that we can do it in something like 3f(n) steps, but the linear speed-up theorem says we can get rid of the constant 3.

David Richerby

Thank you for the answer! I agree with this reasoning, but let's focus on the third case: an NTM that diverges for some input can still decide L. But when we simulate it, the TM accepts the language instead of deciding it, contradicting the statement (Theorem 2.6). – Daniele Cuomo Sep 2 '18 at 7:41

I dispute that a machine that diverges in this sense can be said to decide any language. I don't have my copy of Papadimitriou's book with me at the moment but I believe it's generally accurate, so I suspect you've misunderstood something. Could you include the relevant theorem statements, definitions, etc. in your question? – David Richerby Sep 2 '18 at 7:43

I suspect so too; anyway, I have included the statements. – Daniele Cuomo Sep 2 '18 at 8:09

@Daniel-san Does my edit clear things up? – David Richerby Sep 2 '18 at 8:25

Yes, partially. But in the proof of Theorem 2.6 the book doesn't use a counter (it also states that it has no knowledge of the bound f(n)). Towards the end of the proof, it explains how it handles the rejection case, but it assumes that all computations halt in time f(n), which is not always true. If instead we assume that this is always true, we could use a DFS simulation instead of BFS; but, again, the book says it must necessarily be done with a BFS approach. – Daniele Cuomo Sep 2 '18 at 9:58

A nondeterministic Turing machine accepts an input iff there is at least one accepting path, that is, one set of guesses for which the machine accepts. A path on which the machine diverges is not an accepting path. We are often interested in time-constrained nondeterministic Turing machines. Such Turing machines always halt, on every computation path. This is the case for nondeterministic polytime Turing machines, the machine model corresponding to the famous complexity class $\mathsf{NP}$.

Yuval Filmus
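To make the simulation strategy in the first answer concrete, here is a minimal Python sketch of the iterative-deepening idea (run every choice sequence for one step, then start again for two steps, and so on). The step function is an assumed stand-in for a single NTM transition, not part of any standard library: given a configuration, it returns "accept", "reject", or the list of successor configurations.

    def simulate_ntm(step, initial_config):
        """Deterministic simulation of an NTM via iterative deepening.

        Accepts iff some path accepts; rejects iff every path halts and
        rejects; loops forever iff the NTM itself diverges on this input.
        """
        depth = 1
        while True:
            frontier = [initial_config]
            for _ in range(depth):            # run every choice sequence `depth` steps
                successors = []
                for config in frontier:
                    outcome = step(config)    # assumed: one transition of the NTM
                    if outcome == "accept":
                        return True           # some path accepts: we accept
                    if outcome != "reject":   # rejecting paths are simply dropped
                        successors.extend(outcome)  # nondeterministic branches
                frontier = successors
                if not frontier:
                    return False              # every path halted and rejected
            depth += 1                        # some path still running: start again, deeper

If some path diverges and no path accepts, the while loop never terminates — which is exactly the faithful-simulation point made above: the deterministic simulator diverges precisely when the NTM neither accepts nor rejects.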
For the sake of organizing the review, we have divided the literature according to the general type of cognitive process being studied, with sections devoted to learning and to various kinds of executive function. Executive function is a broad and, some might say, vague concept that encompasses the processes by which individual perceptual, motoric, and mnemonic abilities are coordinated to enable appropriate, flexible task performance, especially in the face of distracting stimuli or alternative competing responses. Two major aspects of executive function are working memory and cognitive control: the former maintains information in a short-term active state to guide task performance, while the latter inhibits irrelevant information or responses. A large enough literature exists on the effects of stimulants on these two executive abilities that separate sections are devoted to each. In addition, a final section includes studies of miscellaneous executive abilities including planning, fluency, and reasoning that have also been the subjects of published studies.

l-theanine (Examine.com) is occasionally mentioned on Reddit or Imminst or LessWrong but is rarely a top-level post or article; this is probably because theanine was discovered a very long time ago (>61 years ago), and it's a pretty straightforward substance. It's a weak relaxant/anxiolytic (Google Scholar) which is possibly responsible for a few of the health benefits of tea, and which works synergistically with caffeine (and is probably why caffeine delivered through coffee feels different from the same amount consumed in tea - in one study, separate caffeine and theanine were a mixed bag, but the combination beat placebo on all measurements). The half-life in humans seems to be pretty short, with van der Pijl 2010 putting it at ~60 minutes. This suggests to me that regular tea consumption over a day is best, or at least that one should lower caffeine use - combining caffeine and theanine into a single-dose pill has the problem of caffeine's half-life being much longer, so the caffeine will be acting after the theanine has been largely eliminated. The problem with getting it via tea is that teas can vary widely in their theanine levels and the variations don't seem to be consistent either, nor is it clear how to estimate them. (If you take a large dose of theanine like 400mg in water, you can taste the sweetness, but it's subtle enough I doubt anyone can actually distinguish the theanine levels of tea; incidentally, r-theanine - the useless racemic other version - anecdotally tastes weaker and less sweet than l-theanine.)

Among the focus supplements currently available on the nootropics market, Modafinil is probably the most widely used, and it is often praised as the best nootropic available today. It is a powerful cognitive enhancer, good for boosting overall alertness with comparatively few side effects. However, to get your hands on this drug, you need a prescription.

Stayed up with the purpose of finishing my work for a contest. This time, instead of taking the pill as a single large dose (I feel that after 3 times, I understand what it's like), I will take 4 doses over the new day. I took the first quarter at 1 AM, when I was starting to feel a little foggy but not majorly impaired. Second dose, 5:30 AM; feeling a little impaired.
8:20 AM, third dose; as usual, I feel physically a bit off and mentally tired - but still mentally sharp when I actually do something. Early on, my heart rate seemed a bit high and my limbs trembling, but it's pretty clear now that that was the caffeine or piracetam. It may be that the other day, it was the caffeine's fault as I suspected. The final dose was around noon. The afternoon crash wasn't so pronounced this time, although motivation remains a problem. I put everything into finishing up the spaced repetition literature review, and didn't do any n-backing until 11:30 PM: 32/34/31/54/40%.

Serotonin, or 5-hydroxytryptamine (5-HT), is another primary neurotransmitter and controls major features of the mental landscape, including mood, sleep, and appetite. Serotonin production within the body is boosted by exposure to sunlight, which is one reason that the folk remedy of "getting some sun" to fight depression is scientifically credible. Many foods contain natural serotonergic (serotonin-promoting or -releasing) compounds, including the well-known chemical L-Tryptophan found in turkey, which can promote sleep after big Thanksgiving dinners.

Another common working memory task is the n-back task, which requires the subject to view a series of items (usually letters) and decide whether the current item is identical to the one presented n items back. This task taxes working memory because the previous items must be held in working memory to be compared with the current item. The easiest version of this is a 1-back task, which is also called a double continuous performance task (CPT) because the subject is continuously monitoring for a repeat or double. Three studies examined the effects of MPH on working memory ability as measured by the 1-back task, and all found enhancement of performance in the form of reduced errors of omission (Cooper et al., 2005; Klorman et al., 1984; Strauss et al., 1984). Fleming et al. (1995) tested the effects of d-AMP on a 5-min CPT and found a decrease in reaction time, but did not specify which version of the CPT was used.

The title question, whether prescription stimulants are smart pills, does not find a unanimous answer in the literature. The preponderance of evidence is consistent with enhanced consolidation of long-term declarative memory. For executive function, the overall pattern of evidence is much less clear. Over a third of the findings show no effect on the cognitive processes of healthy nonelderly adults. Of the rest, most show enhancement, although impairment has been reported (e.g., Rogers et al., 1999), and certain subsets of participants may experience impairment (e.g., higher performing participants and/or those homozygous for the met allele of the COMT gene performed worse on drug than placebo; Mattay et al., 2000, 2003). Whereas the overall trend is toward enhancement of executive function, the literature contains many exceptions to this trend. Furthermore, publication bias may lead to underreporting of these exceptions.

This calculation - reaping only 7/9 of the naive expectation - gives one pause. How serious is the sleep rebound? In another article, I point to a mouse study showing that sleep deficits can take 28 days to repay. What if the gain from modafinil is entirely wiped out by repayment and all it did was defer sleep? Would that render modafinil a waste of money? Perhaps. Thinking on it, I believe deferring sleep is of some value, but I cannot decide whether it is a net profit.
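As a concrete illustration of the n-back task described above, here is a minimal Python sketch (the stimuli and responses are hypothetical, not taken from any cited study) that finds the targets in a stimulus stream and scores hits and errors of omission; with n = 1 it is the 1-back/double-CPT case used in the studies just discussed.

    def nback_targets(stimuli, n):
        """Indices where the current item matches the one presented n back."""
        return [i for i in range(n, len(stimuli)) if stimuli[i] == stimuli[i - n]]

    def score_nback(stimuli, responses, n):
        """Count hits and errors of omission (missed targets)."""
        targets = set(nback_targets(stimuli, n))
        hits = sum(1 for i in targets if i in responses)
        omissions = len(targets) - hits
        return hits, omissions

    # Example: a 1-back (double CPT) run over a letter stream
    letters = list("ABBCDDDE")
    pressed = {2, 5}                           # subject responded at indices 2 and 5
    print(score_nback(letters, pressed, n=1))  # -> (2, 1): caught B-B and the first D-D, missed the second D-D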
As with any thesis, there are exceptions to this general practice. For example, theanine for dogs, sold under the brand Anxitane, costs almost a dollar a pill, and apparently a month's supply costs $50+ vs $13 for human-branded theanine; on the other hand, this thesis predicts downgrading if the market priced pet versions higher than human versions, and that Reddit poster appears to be doing just that with her dog.↩

Analyzing the results is a little tricky because I was simultaneously running the first magnesium citrate self-experiment, which turned out to cause a quite complex result which looks like a gradually-accumulating overdose negating an initial benefit for net harm, and also toying with LLLT, which turned out to have a strong correlation with benefits. So for the potential small Noopept effect to not be swamped, I need to include those in the analysis. I designed the experiment to try to find the best dose level, so I want to look at an average Noopept effect but also the estimated effect at each dose size in case some are negative (especially in the case of 5 pills/60mg); I included the pilot-experiment data as 10mg doses since they were also blind & randomized. Finally, missingness affects the analysis: because not every variable is recorded for each date (what was the value of the variable for the blind randomized magnesium citrate before and after I finished that experiment? what value do you assign the Magtein variable before I bought it and after I used it all up?), just running a linear regression may not work exactly as one expects, as various days get omitted because part of the data was missing.

Adderall is a mix of 4 amphetamine salts (FDA adverse events), and not much better than the others (but perhaps less addictive); as such, like caffeine or methamphetamine, it is not strictly a nootropic but a cognitive enhancer and can be tricky to use right (for how one should use stimulants, see How To Take Ritalin Correctly). I ordered 10x10mg Adderall IR off Silk Road (Wikipedia). On the 4th day after confirmation from the seller, the package arrived. It was a harmless-looking little padded mailer. Adderall as promised: 10 blue pills with markings, in a double ziplock baggy (reasonable; it's not cocaine or anything). They matched pretty much exactly the descriptions of the generic I had found online. (Surprisingly, apparently both the brand name and the generic are manufactured by the same pharmacorp.)

I have also tried to get in contact with senior executives who have experience with these drugs (either themselves or in their firms), but without success. I have to wonder: Are they completely unaware of the drugs' existence? Or are they actively suppressing the issue? For now, companies can ignore the use of smart drugs. And executives can pretend as if these drugs don't exist in their workplaces. But they can't do it forever.

Too much caffeine may be bad for bone health because it can deplete calcium. Overdoing the caffeine also may affect the vitamin D in your body, which plays a critical role in your body's bone metabolism. However, the roles of vitamin D as well as caffeine in the development of osteoporosis continue to be a source of debate.
Significance: Caffeine may interfere with your body's metabolism of vitamin D, according to a 2007 Journal of Steroid Biochemistry & Molecular Biology study. You have vitamin D receptors, or VDRs, in your osteoblast cells. These large cells are responsible for the mineralization and synthesis of bone in your body. They create a sheet on the surface of your bones. The D receptors are nuclear hormone receptors that control the action of vitamin D-3 by controlling hormone-sensitive gene expression. These receptors are critical to good bone health. For example, a vitamin D metabolism disorder in which these receptors don't work properly causes rickets.

No. There are mission-essential jobs that require you to live on base sometimes. Or a first-term person that is required to live on base. Or if you have proven to not be as responsible with rent off base as you should be, so your commander requires you to live on base. Or you're at an installation that requires you to live on base during your stay. Or the only affordable housing off base puts you an hour away from where you work. It isn't simple. The fact that you think it is tells me you are one of the "dumb@$$es" you are referring to above.

More recently, the drug modafinil (brand name: Provigil) has become the brain-booster of choice for a growing number of Americans. According to the FDA, modafinil is intended to bolster "wakefulness" in people with narcolepsy, obstructive sleep apnea or shift work disorder. But when people without those conditions take it, it has been linked with improvements in alertness, energy, focus and decision-making. A 2017 study found evidence that modafinil may enhance some aspects of brain connectivity, which could explain these benefits.

It is not because of the few thousand francs which would have to be spent to put a roof [!] over the third-class carriages or to upholster the third-class seats that some company or other has open carriages with wooden benches. What the company is trying to do is to prevent the passengers who can pay the second-class fare from traveling third class; it hits the poor, not because it wants to hurt them, but to frighten the rich. And it is again for the same reason that the companies, having proved almost cruel to the third-class passengers and mean to the second-class ones, become lavish in dealing with first-class passengers. Having refused the poor what is necessary, they give the rich what is superfluous.

One often-cited study published in the British Journal of Pharmacology looked at cognitive function in the elderly and showed that racetam helped to improve their brain function. Another study, which was published in Psychopharmacology, looked at adult volunteers (including those who are generally healthy) and found that piracetam helped improve their memory. One symptom of Alzheimer's disease is a reduced brain level of the neurotransmitter called acetylcholine. It is thought that an effective treatment for Alzheimer's disease might be to increase brain levels of acetylcholine. Another possible treatment would be to slow the death of neurons that contain acetylcholine. Two drugs, Tacrine and Donepezil, are both inhibitors of the enzyme (acetylcholinesterase) that breaks down acetylcholine. These drugs are approved in the US for treatment of Alzheimer's disease.

And as before, around 9 AM I began to feel the peculiar feeling that I was mentally able and apathetic (in a sort of aboulia way); so I decided to try what helped last time, a short nap.
But this time, though I took a full hour, I slept not a wink and my Zeo recorded only 2 transient episodes of light sleep! A back-handed sort of proof of alertness, I suppose. I didn't bother trying again. The rest of the day was mediocre, and I wound up spending much of it on chores and whatnot out of my control. Mentally, I felt better past 3 PM.

Intrigued by old scientific results & many positive anecdotes since, I experimented with microdosing LSD - taking doses ~10μg, far below the level at which it causes its famous effects. At this level, the anecdotes claim the usual broad spectrum of positive effects on mood, depression, ability to do work, etc. After researching the matter a bit, I discovered that as far as I could tell, since the original experiment in the 1960s, no one had ever done a blind or even a randomized self-experiment on it.

Looking at the prices, the overwhelming expense is for modafinil. It's a powerful stimulant - possibly the single most effective ingredient in the list - but dang expensive. Worse, there's anecdotal evidence that one can develop tolerance to modafinil, so we might be wasting a great deal of money on it. (And for me, modafinil isn't even very useful in the daytime: I can't even notice it.) If we drop it, the cost drops by a full $800 from $1761 to $961 (almost halving) and to $0.96 per day. A remarkable difference, and if one were genetically insensitive to modafinil, one would definitely want to remove it.

Sounds too good to be true? Welcome to the world of 'Nootropics', popularly known as 'Smart Drugs', which can help boost your brain's power. Do you recall the scene from the movie Limitless, where Bradley Cooper's character uses a smart drug that makes him brilliant? No real nootropic is that dramatic, but the appeal of boosting your brain is, so to speak, a no-brainer.

The price is not as good as multivitamins or melatonin. The studies showing effects generally use pretty high dosages, 1-4g daily. I took 4 capsules a day for roughly 4g of omega acids. The jar of 400 is 100 days' worth, and costs ~$17, or around 17¢ a day. The general health benefits push me over the edge of favoring its indefinite use, but looking to economize. Usually, small amounts of packaged substances are more expensive than bulk unprocessed, so I looked at fish oil fluid products; and unsurprisingly, liquid is more cost-effective than pills (but like with the powders, straight fish oil isn't very appetizing) in lieu of membership somewhere or some other price-break. I bought 4 bottles (16 fluid ounces each) for $53.31 total (thanks to coupons & sales), and each bottle lasts around a month and a half for perhaps half a year, or ~$100 for a year's supply. (As it turned out, the 4 bottles lasted from 4 December 2010 to 17 June 2011, or 195 days.) My next batch lasted 19 August 2011-20 February 2012, and cost $58.27. Since I needed to buy empty 00 capsules (for my lithium experiment) and a book (Stanovich 2010, for SIAI work) from Amazon, I bought 4 more bottles of 16fl oz Nature's Answer (lemon-lime) at $48.44, which I began using 27 February 2012. So call it ~$70 a year.

But there are some potential side effects, including headaches, anxiety and insomnia.
Part of the way modafinil works is by shifting the brain's levels of norepinephrine, dopamine, serotonin and other neurotransmitters; it's not clear what effects these shifts may have on a person's health in the long run, and some research on young people who use modafinil has found changes in brain plasticity that are associated with poorer cognitive function.

Following up on the promising but unrandomized pilot, I began randomizing my LLLT usage since I worried that more productive days were causing use rather than vice-versa. I began on 2 August 2014, and the last day was 3 March 2015 (n=167); this was twice the sample size I thought I needed, and I stopped, as before, as part of cleaning up (I wanted to know whether to get rid of it or not). The procedure was simple: by noon, I flipped a bit and either did or did not use my LED device; if I was distracted or didn't get around to randomization by noon, I skipped the day. This was an unblinded experiment because finding a randomized on/off switch is tricky/expensive and it was easier to just start the experiment already. The question is simple too: controlling for the simultaneous blind magnesium experiment & my rare nicotine use (I did not use modafinil during this period or anything else I expect to have major influence), is the pilot correlation of d=0.455 on my daily self-ratings borne out by the experiment?

An entirely different set of questions concerns cognitive enhancement in younger students, including elementary school and even preschool children. Some children can function adequately in school without stimulants but perform better with them; medicating such children could be considered a form of cognitive enhancement. How often does this occur? What are the roles and motives of parents, teachers, and pediatricians in these cases? These questions have been discussed elsewhere and deserve continued attention (Diller, 1996; Singh & Keller, 2010).

If you could take a drug to boost your brainpower, would you? This question, faced by Bradley Cooper's character in the big-budget movie Limitless, is now facing students who are frantically revising for exams. Although they are nowhere near the strength of the drug shown in the film, mind-enhancing drugs are already on the pharmacy shelves, and many people are finding the promise of sharper thinking through chemistry highly seductive.

L-Alpha glycerylphosphorylcholine or choline alfoscerate, also known as Alpha GPC, is a natural nootropic that works both on its own and in combination with other nootropics. It can be found in the human body naturally in small amounts. It's also present in some dairy products, wheat germ, and organic meats. However, these dietary sources contain small quantities of GPC, which is why people prefer taking it through supplements.

Kratom (Erowid, Reddit) is a tree leaf from Southeast Asia; it's addictive to some degree (like caffeine and nicotine), and so it is regulated/banned in Thailand, Malaysia, Myanmar, and Bhutan among others - but not the USA. (One might think that kratom's common use there indicates how very addictive it must be, except it literally grows on trees so it can't be too hard to get.)
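As a sketch of the randomization-and-analysis loop described a few paragraphs above — the daily coin flip assigning LED use, and the effect-size check against the pilot's d=0.455 — here is a minimal Python version. The ratings are simulated and the function names are mine; this is an illustration of the procedure under stated assumptions, not the original analysis code.

    import random
    import statistics

    def todays_assignment():
        """Flip a bit by noon: True = use the LED device today."""
        return random.random() < 0.5

    def cohens_d(on_ratings, off_ratings):
        """Standardized mean difference (pooled-SD, equal-group approximation)."""
        pooled_var = (statistics.variance(on_ratings) + statistics.variance(off_ratings)) / 2
        return (statistics.mean(on_ratings) - statistics.mean(off_ratings)) / pooled_var**0.5

    # simulated daily self-ratings for 167 days (the experiment's actual length)
    random.seed(1)
    on, off = [], []
    for _ in range(167):
        if todays_assignment():
            on.append(random.gauss(3.2, 1.0))    # hypothetical "on" days
        else:
            off.append(random.gauss(3.0, 1.0))   # hypothetical "off" days
    print(f"{len(on)} on-days, {len(off)} off-days, d = {cohens_d(on, off):.2f}")

The real analysis would additionally control for the concurrent magnesium experiment and nicotine use, as the text notes; a plain two-group comparison like this is the simplest version of the question being asked.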
Kratom is not particularly well-studied (and what has been studied is not necessarily relevant - I'm not addicted to any opiates!), and it suffers the usual herbal problem of being an endlessly variable food product and not a specific chemical with the fun risks of perhaps being poisonous, but in my reading it doesn't seem to be particularly dangerous or have serious side-effects.

Scientists found that the drug can disrupt the way memories are stored. This ability could be invaluable in treating trauma victims to prevent associated stress disorders. The research has also triggered suggestions that licensing these memory-blocking drugs may lead to healthy people using them to erase memories of awkward conversations, embarrassing blunders and any feelings for that devious ex-girlfriend.

With something like creatine, you'd know if it helps you pump out another rep at the gym on a sustainable basis. With nootropics, you can easily trick yourself into believing they help your mindset. The ideal is to do a trial on yourself. Take identical-looking nootropic pills and placebo pills for a couple weeks each, then see what the difference is. With only a third party knowing the difference, of course.

…The Fate of Nicotine in the Body also describes Battelle's animal work on nicotine absorption. Using C14-labeled nicotine in rabbits, the Battelle scientists compared gastric absorption with pulmonary absorption. Gastric absorption was slow, and first-pass removal of nicotine by the liver (which transforms nicotine into inactive metabolites) was demonstrated following gastric administration, with consequently low systemic nicotine levels. In contrast, absorption from the lungs was rapid and led to widespread distribution. These results show that nicotine absorbed from the stomach is largely metabolized by the liver before it has a chance to get to the brain. That is why tobacco products have to be puffed, smoked or sucked on, or absorbed directly into the bloodstream (i.e., via a nicotine patch). A nicotine pill would not work because the nicotine would be inactivated before it reached the brain.

There is no shortage of nootropics available for purchase online that can be shipped to you nearly anywhere in the world. Yet, many of these supplements and drugs have very few studies, particularly human studies, confirming their results. While this lack of research may not scare away more adventurous neurohackers, many people would prefer to […]

Nature magazine conducted a poll asking its readers about their cognitive-enhancement practices and their attitudes toward cognitive enhancement. Hundreds of college faculty and other professionals responded, and approximately one fifth reported using drugs for cognitive enhancement, with Ritalin being the most frequently named (Maher, 2008). However, the nature of the sample—readers choosing to answer a poll on cognitive enhancement—is not representative of the academic or general population, making the results of the poll difficult to interpret. By analogy, a poll on Vermont vacations, asking whether people vacation in Vermont, what they think about Vermont, and what they do if and when they visit, would undoubtedly not yield an accurate estimate of the fraction of the population that takes its vacations in Vermont.

A television advertisement goes: "It's time to let Focus Factor be your memory-fog lifter." But is this supplement up to the task?
Focus Factor wastes no time, whether paid airtime or free online presence: it claims to be America's #1 selling brain health supplement with more than 4 million bottles sold and millions across the country actively caring for their brain health. It deems itself instrumental in helping anyone stay focused and on top of his game at home, work, or school.

In most cases, cognitive enhancers have been used to treat people with neurological or mental disorders, but there is a growing number of healthy, "normal" people who use these substances in hopes of getting smarter. Although there are many companies that make "smart" drinks, smart power bars and diet supplements containing certain "smart" chemicals, there is little evidence to suggest that these products really work. Results from different laboratories show mixed results; some labs show positive effects on memory and learning; other labs show no effects. There are very few well-designed studies using normal healthy people.

Smart pills have huge potential and several important applications, particularly in diagnosis. Smart pills are growing as a highly effective method of endoscopy, particularly for gastrointestinal diseases. Urbanization and rapid lifestyle changes leaning toward unhealthy diets and poor eating habits have led to a distinct increase in lifestyle disorders such as gastroesophageal reflux disease (GERD), obesity, and gastric ulcers.

Fish oil (Examine.com, buyer's guide) provides benefits relating to general mood (e.g., inflammation & anxiety; see later on anxiety) and anti-schizophrenia; it is one of the better supplements one can take. (The known risks are a higher rate of prostate cancer and internal bleeding, but are outweighed by the cardiac benefits - assuming those benefits exist, anyway, which may not be true.) The benefits of omega acids are well-researched.

But there would also be significant downsides. Amphetamines are structurally similar to crystal meth – a potent, highly addictive recreational drug which has ruined countless lives and can be fatal. Both Adderall and Ritalin are known to be addictive, and there are already numerous reports of workers who struggled to give them up. There are also side effects, such as nervousness, anxiety, insomnia, stomach pains, and even hair loss, among others.

Despite decades of study, a full picture has yet to emerge of the cognitive effects of the classic psychostimulants and modafinil. Part of the problem is that getting rats, or indeed students, to do puzzles in laboratories may not be a reliable guide to the drugs' effects in the wider world. Drugs have complicated effects on individuals living complicated lives. Determining that methylphenidate enhances cognition in rats by acting on their prefrontal cortex doesn't tell you the potential impact that its effects on mood or motivation may have on human cognition.

However, when I didn't stack it with Choline, I would get what users call "racetam headaches." Choline, as Patel explains, is not a true nootropic, but it's still a pro-cognitive compound that many take with other nootropics in a stack. It's an essential nutrient that humans need for functions like memory and muscle control, but we can't produce it, and many Americans don't get enough of it. The headaches I got weren't terribly painful, but they were uncomfortable enough that I stopped taking Piracetam on its own. Even without the headache, though, I didn't really like the level of focus Piracetam gave me.
I didn't feel present when I used it, even when I tried to mix in caffeine and L-theanine. And while it seemed like I could focus and do my work faster, I was making more small mistakes in my writing, like skipping words. Essentially, it felt like my brain was moving faster than I could.

Another important epidemiological question about the use of prescription stimulants for cognitive enhancement concerns the risk of dependence. MPH and d-AMP both have high potential for abuse and addiction related to their effects on brain systems involved in motivation. On the basis of their reanalysis of NSDUH data sets from 2000 to 2002, Kroutil and colleagues (2006) estimated that almost one in 20 nonmedical users of prescription ADHD medications meets criteria for dependence or abuse. This sobering estimate is based on a survey of all nonmedical users. The immediate and long-term risks to individuals seeking cognitive enhancement remain unknown.

A provisional conclusion about the effects of stimulants on learning is that they do help with the consolidation of declarative learning, with effect sizes varying widely from small to large depending on the task and individual study. Indeed, as a practical matter, stimulants may be more helpful than many of the laboratory tasks indicate, given the apparent dependence of enhancement on length of delay before testing. Although, as a matter of convenience, experimenters tend to test memory for learned material soon after the learning, this method has not generally demonstrated stimulant-enhanced learning. However, when longer periods intervene between learning and test, a more robust enhancement effect can be seen. Note that the persistence of the enhancement effect well past the time of drug action implies that state-dependent learning is not responsible. In general, long-term effects on learning are of greater practical value to people. Even students cramming for exams need to retain information for more than an hour or two. We therefore conclude that stimulant medication does enhance learning in ways that may be useful in the real world.
Thursday, February 28, 2019

Sponsor: Lawrence Hall of Science (LHS) This winter, visit the Hall for interactive exhibits, special hands-on activities, intriguing Planetarium shows, and more!

Through the Learner's Lens: A Student Learning Center Photo Contest Miscellaneous | February 25 – March 22, 2019 every day | César E. Chávez Student Center Sponsor: Student Learning Center The UC Berkeley Student Learning Center is excited to invite submissions for our first ever photo contest! We're calling on the creativity of our campus community to build a collection of images that showcase the diverse ways learning takes place in and through the Student Learning Center. We invite you to share moments in the learning process that excite you, challenge you, and encapsulate the...

Study Abroad Office Hours at Educational Opportunity Program Information Session | February 14 – May 9, 2019 every Thursday with exceptions | 10 a.m.-12 p.m. | César E. Chávez Student Center, Educational Opportunity Program- MLK BNorth CE3 Interested in studying abroad? A Berkeley Study Abroad Peer Adviser will hold office hours at Educational Opportunity Program on Thursdays from 10 am-noon this spring to answer questions about program options, scholarships, how to apply and additional services offered! Sign-up on http://tinyurl.com/eopstudyabroad.

Power and Energy Seminar and IEEE SSCS Distinguished Lecture Series Lecture | February 28 | 10-11 a.m. | Cory Hall, 540 AB Speaker/Performer: Yogesh Ramadass, Director, Power Management, Kilby Labs at Texas Instruments Sponsor: Electrical Engineering and Computer Sciences (EECS) Power electronics can be found in everything from cellphones and laptops to gasoline/electric vehicles, industrial motors and inverters that connect solar panels to the electric grid. With close to 80% of electrical energy consumption in the US expected to flow through a power converter by 2030, innovative circuits, devices and systems solutions are required to tackle key issues related to...

Mechanics of Behavior in Non-Neuronal Systems and Other Puzzles from the Depths of the Ocean Seminar | February 28 | 11 a.m.-12 p.m. | 3110 Etcheverry Hall Speaker/Performer: Manu Prakash, Department of Bioengineering, Stanford University Sponsor: QB3 - California Institute for Quantitative Biosciences Remarkable behavioral complexity in metazoans can be attributed to the evolution of the neuron and the nervous system. But for such a system to arise in the pre-Cambrian era, precursors of animals without neurons must have roamed the earth at some point of time. Soft-bodied animals make for a very poor representation in our fossil records, shutting the door to inferring origins of behavioral...

Econ 235, Financial Economics Seminar: Markets, Banks, and Shadow Banks Seminar | February 28 | 11:10 a.m.-12:30 p.m. | C330 Haas School of Business Speaker/Performer: David Martinez-Miera, Carlos III Sponsor: Department of Economics Joint with Haas Finance Seminar

Performing the Unimaginable: Theater of War with Peter Glazer and Guests Presentation | February 28 | 12 p.m. | Berkeley Art Museum and Pacific Film Archive Theater director and playwright Peter Glazer will discuss attempts to theatricalize war and military conflict, engaging with Akram Khan's XENOS, which will be presented by Cal Performances at Zellerbach Hall on March 2 and 3. Cocurator of this semester's Arts + Design Thursdays series, Glazer teaches in the Department of Theater, Dance, and Performance Studies at UC Berkeley.
His plays,...

Mindful Awareness Guided Meditation Miscellaneous | February 21 – May 2, 2019 every Thursday | 12-1 p.m. | 5400 Berkeley Way West Moderator: Jeffrey Oxendine Sponsor: Institute of Personality and Social Research Focus the mind. Foster creativity, resilience, and well-being. These meetings are free and open to faculty, staff, and students. You are invited to drop in at noon any Thursday.

Oliver E. Williamson Seminar: "Big Push Policies and Firm-Level Barriers to Employing Women: Evidence from Saudi Arabia" Seminar | February 28 | 12-1:30 p.m. | C325 Haas School of Business Speaker: Conrad Miller, UC Berkeley The Oliver E. Williamson Seminar on Institutional Analysis, named after our esteemed colleague who founded the seminar, features current research by faculty, from UCB and elsewhere, and by advanced doctoral students. The research investigates governance and its links with economic and political forces. Markets, hierarchies, hybrids, and the supporting institutions of law and politics all come...

Performing the Unimaginable: Theater of War with Peter Glazer, Rob Bailis, and Akram Khan Lecture | February 28 | 12-1:30 p.m. | Berkeley Art Museum and Pacific Film Archive, Osher Theater Speaker/Performer: Peter Glazer Peter Glazer, the co-curator of this series, teaches in the Department of Theater, Dance, and Performance Studies at UC Berkeley. He is a professional director and playwright whose plays, adaptations, collaborations and directing projects include Woody Guthrie's American Song (Bay Area Drama Critics award, with Drama Desk and Outer Critics Circle nominations Off-Broadway and Joseph Jefferson...

Berkeley Summer Abroad and Berkeley Global Internships: Earn credit and see the world this summer. Information Session | February 28 | 12-1 p.m. | 223 Moses Hall Earn Berkeley credit while seeing the world this summer! Learn more about Berkeley Global Internships and Berkeley Summer Abroad at this info session. Berkeley Global Internships offer students valuable internship experience across the globe. Scholarships are available for Boston, Dublin, Israel, Hong Kong, London, Madrid, Mumbai, Prague, Paris, Singapore, Sydney, and Tokyo. Learn more at...

IB Seminar: Identification with Societies in Humans and Other Animals Seminar | February 28 | 12:30-1:30 p.m. | 2040 Valley Life Sciences Building Featured Speaker: Mark Moffett, Smithsonian Institution Sponsor: Department of Integrative Biology

Librarian Office Hours at the SPH DREAM Office Miscellaneous | February 7 – May 2, 2019 every Thursday | 1-3 p.m. | Berkeley Way West, 2220 (DREAM Office) Speaker/Performer: Michael Sholinbeck Sponsor: Library Drop by during office hours if you need help with your literature reviews; setting up searches in PubMed, Embase, and other databases; using EndNote, RefWorks, or other citation management software; finding statistics or data; and answering any other questions you may have. NOTE: On Feb. 14: 2:15-3pm only

Econ 235, Financial Economics Student Seminar: No Meeting Seminar | February 28 | 1-2 p.m. | 597 Evans Hall

Join us for a free, docent-led tour of the Garden as we explore interesting plants from around the world, learn about the vast diversity in the collection, and see what is currently in bloom. Meet at the Entry Plaza. Free with Garden admission. Advanced registration not required

Seminar 251, Labor Seminar: "Place-Based Redistribution" Seminar | February 28 | 2-3:30 p.m.
| 648 Evans Hall Speaker: Danny Yagan, UC Berkeley Sponsor: Center for Labor Economics (with Cecile Gaubert and Patrick Kline)

7th Annual Environmental Health Sciences Symposium: Children's Environmental Health: Cutting-Edge Solutions for the 21st Century Conference/Symposium | February 28 | 3-6 p.m. | Berkeley Way West, Colloquia Room Sponsor: Environmental Health Sciences, Berkeley Public Health

Seminar 242, Econometrics: Identification and estimation of spillover effects in randomized experiments Speaker: Gonzalo Vazquez-Bare, UC Santa Barbara

Dr. Ambika Kamath: "Three windows into behavioral ecology: Re-examining mating systems in Anolis lizards" Seminar | February 28 | 3:30-4:30 p.m. | 100 Genetics & Plant Biology Building Speaker/Performer: Dr. Ambika Kamath Sponsor: Dept. of Environmental Science, Policy, and Mgmt. (ESPM) Miller postdoctoral fellow at UC Berkeley, Dr. Ambika Kamath, will present a research seminar titled, "Three windows into behavioral ecology: Re-examining mating systems in Anolis lizards." Open to the public.

Bio-Tech Connect: Networking with industry Career Fair | February 28 | 3:30-6 p.m. | Stanley Hall, Atrium Sponsor: Bioengineering (BioE) The top talent of UC Berkeley and local biotech employers at the largest biomedical industry event on campus. Meet representatives from local biotech companies, large and small! All majors and levels welcome - early undergrad to PhD. Not a career fair - some will be hiring, some just want to meet you. Bring your resume if you're job searching and meet some awesome companies.

Optimizing the Automated Programming Stack Seminar | February 28 | 4-5 p.m. | 306 Soda Hall Speaker: James Bornholt, Ph.D. Candidate, University of Washington In this talk, I present a new application-driven approach to optimizing the automated programming stack underpinning modern domain-specific tools.

F-1 Optional Practical Training (OPT) Question and Answer Session Information Session | February 28 | 4-5 p.m. | Online Webinar Sponsor: Berkeley International Office (BIO) If you are graduating soon and have questions about applying for F-1 employment eligibility after you graduate, then sign up for this ONLINE webinar. We'll do a brief overview of the OPT application process and timelines, followed by a Question and Answer session. Prior to attending this webinar, you need to review our...

Evidence, Replication, and Ethics in Ethnographic Research Colloquium | February 28 | 4-5:30 p.m. | Simon Hall, 297, Goldberg Room Panelist/Discussants: Steven Lubet, Williams Memorial Professor of Law, Northwestern University, Pritzker School of Law; Martín Sánchez-Jankowski, Professor of Sociology, UC Berkeley Moderator: Calvin Morrill, Stefan A. Riesenfeld Professor of Law and Professor of Sociology and Associate Dean for Jurisprudence and Social Policy, UC Berkeley Sponsor: Center for Ethnographic Research Steven Lubet and Martín Sánchez-Jankowski will discuss what counts as evidence in ethnographic research and whether replication is possible or desired, as well as considering the ethics of ethnographic research and writing.

Multiple Forces Shape Microbial Community Structure in the Phyllosphere Seminar | February 28 | 4-5 p.m. | 1011 Evans Hall Speaker/Performer: Norma Morella, Department of Plant and Microbial Biology, UC Berkeley Sponsor: Department of Statistics As our knowledge of host-associated microbial communities (microbiomes) continues to deepen, there remain key unresolved questions across multiple systems.
Among these is an understanding of the forces underlying the assembly of, selection within, and co-evolution among microbiota, all of which depend in part on microbiome transmission mode. My PhD thesis research has focused on characterizing...

Border Surveillance and the Black Mediterranean: Alternative Imaginaries of Refugees, Race and Rights Lecture | February 28 | 4-5:30 p.m. | 691 Barrows Hall Sponsors: Center for Race and Gender, The Program in Critical Theory Camilla Hawthorne, Assistant Professor of Sociology, UC Santa Cruz Debarati Sanyal, French, Designated Emphasis in Critical Theory, the Institute of European Studies, and the Center for Race & Gender, UC Berkeley

Mathematics Department Colloquium: Discrete subgroups of Lie groups and geometric structures Colloquium | February 28 | 4:10-5 p.m. | 60 Evans Hall Speaker: Fanny Kassel, IHES Sponsor: Department of Mathematics Discrete subgroups of Lie groups play a fundamental role in several areas of mathematics. Discrete subgroups of $SL(2,\mathbb R)$ are well understood, and classified by the geometry of the corresponding hyperbolic surfaces. On the other hand, discrete subgroups of $SL(n,\mathbb R)$ for $n >2$, beyond lattices, remain quite mysterious. While lattices in this setting are rigid, there also exist...

Diversifying Syllabi: Some Considerations Colloquium | February 28 | 4:10-7 p.m. | Moses Hall, Howison Library Speaker/Performer: Luvell Anderson, Syracuse University Sponsor: Department of Philosophy While there seems to be agreement that we should aim to make syllabi more diverse and inclusive, not much by way of instruction on how to do so has been offered. Given the perceived importance of syllabus diversification to greater inclusion and diversity within philosophy, it would be wise to pay some more attention to how we go about doing so. In this talk, I aim to do just that. I will first...

Grain by Grain Book Launch Special Event | February 28 | 5-7 p.m. | North Gate Hall, Library Sponsors: Center for Diversified Farming Systems, Berkeley Food Institute, Graduate School of Journalism, Regrained Join Berkeley alum and Lentil Underground author Liz Carlisle and organic farmer Bob Quinn for a launch of their new book, Grain by Grain. When Bob Quinn was a kid, a stranger at a county fair gave him a few kernels of an unusual...

Public University, Public Values Lecture | February 28 | 5-7 p.m. | Stephens Hall, Geballe Room, 220 Speaker/Performer: Maggie Nelson, Professor of English, University of Southern California Sponsor: Berkeley Center for the Study of Religion Public University, Public Values is a new series of talks and conversations co-organized by BCSR and the Townsend Center for the Humanities. The series is prompted by the recognition that the current moment of crisis in the liberal democracies of Europe and North America is, among other things, a crisis of value. The "political" focus that has shaped the humanities and much of the social...

Maggie Nelson in Conversation with Nadia Ellis: Writing Freedom — and Its Constraints Special Event | February 28 | 5 p.m. | Stephens Hall, Geballe Room, 220 Stephens Sponsor: Townsend Center for the Humanities Maggie Nelson, the 2018-19 Una's Lecturer, is joined in conversation by UC Berkeley professor of English Nadia Ellis.

Project Europe: A New History of the European Union Lecture | February 28 | 5-6:30 p.m.
| 240 Mulford Hall Speaker/Performer: Kiran Klaus Patel, Maastricht University Sponsors: Institute of European Studies, GHI West, Pacific Regional Office of the German Historical Institute Washington DC, Department of History, Center for German and European Studies Today, the EU seems to be in an existential crisis. Against this backdrop, the early history of European integration since the 1950s shines all the brighter. But is this an appropriate assessment? Kiran Patel analyzes the concrete effects and results of European integration and what we can learn from the past for our present day, summarizing some of the key findings of his monograph on the topic...

Four Physics Stories and their Effect on China's Future Mobility: CEE Faculty Distinguished Lecture Lecture | February 28 | 5-6 p.m. | Sutardja Dai Hall, Banatao Auditorium Speaker: Carlos Daganzo, Professor of the Graduate School, Civil & Environmental Engineering, UC Berkeley Sponsor: Civil and Environmental Engineering (CEE) Four short stories will be used to illustrate how physics and imagination can be used to diagnose and remedy some of the critical urban mobility problems that are faced today by some of the world's megacities. The stories have in common that they address important problems in new ways.

Anti-Jewish Violence in Poland, 1914-1920 and 1945-1946: New Social-Psychological Perspectives Lecture | February 28 | 5:30-7 p.m. | 270 Stephens Hall Speaker: William Hagen, Professor Emeritus of History, UC Davis Sponsor: Institute of Slavic, East European, and Eurasian Studies (ISEEES) This talk will summarize the speaker's arguments in his book, Anti-Jewish Violence in Poland, 1914-1920 (Cambridge UP, 2018), contrasting them with major recent works on the post-World War II years by Polish scholars Joanna Tokarska-Bakir and Marcin Zaremba. It will highlight interpretation focused on popular mentalities, societal traumas, and enactment of routinized, unreflected-upon...

Preserving and Conserving Nestor: Sather Lecture Series: A Bronze Age Greek State in Formation Lecture | February 28 | 5:30 p.m. | 370 Dwinelle Hall Speaker/Performer: Jack L. Davis, Blegen Professor of Greek Archaeology, University of Cincinnati Sponsor: Department of Classics Internationally recognized scholar of Bronze Age Greece offers a series of lectures showing how the archaeological record sheds light on culture and communal life of early Greece.

Career Connections: Human Resources Social Event | February 28 | 6-8 p.m. | Career Center (2440 Bancroft Way)

Addressing Racism and Sexism in Wikipedia: A Panel Discussion Presentation | February 28 | 6-7:30 p.m. | Free Speech Movement Café (Moffitt Library) Speakers/Performers: Victoria Robinson, American Cultures Center, Ethnic Studies, Gender and Women's Studies; Juana Maria Rodriguez, Ethnic Studies, Gender and Women's Studies, and Performance Studies; Merrilee Proffitt, OCLC Research Sponsor: Free Speech Movement Café Educational Programs The Art + Feminism + Race + Justice Wikipedia edit-a-thon at UC Berkeley is part of a national effort that invites participants to become a Wikipedia editor and contribute to addressing this problem. In this panel, speakers will address the importance of moving marginalized voices to the center in repositories like Wikipedia.

Salesforce Info-Session Information Session | February 28 | 6-7 p.m.
| Sibley Auditorium, Bechtel Engineering Center

APM Information Session Learn about APM opportunities at one of the most innovative companies in the world. Please join us for an Information Session specific to our Associate Product Management Program at Salesforce! Ben James, Director, Product Management, will share insight into our program, and members of the inaugural APM intern class will also be presenting on their APM projects and...

Curator's Circle Event: Reception and Screening with Arthur Jafa in Conversation Special Event | February 28 | 6 p.m. | Berkeley Art Museum and Pacific Film Archive In conjunction with his exhibition Arthur Jafa/ MATRIX 272, artist, director, editor, and award-winning cinematographer Arthur Jafa joins writer, cultural producer, and musician Greg Tate for a public conversation before the program Affective Proximity: Films by Arthur Jafa and Others. Jafa's poignant, critically acclaimed work expands the concept of black cinema while exploring African American...

Affective Proximity: Films by Arthur Jafa and Others Film - Series | February 28 | 7 p.m. | Berkeley Art Museum and Pacific Film Archive Fundamental to Arthur Jafa's artistic process is the compilation, editing, and remixing of appropriated images through what John Akomfrah has called "affective proximity." For this program Jafa has selected films that explore the effects of violence on black individuals and communities while also depicting their beauty, power, and resilience. Dawn Suggs's Chasing the Moon is a short meditation on...

Performing Arts - Dance | February 28 – March 2, 2019 every day | 8-10 p.m. | Zellerbach Playhouse Director: Joe Goode, Professor of Theater, Dance, and Performance Studies Performer Group: Joe Goode, TDPS, Joe Goode Performance Group Cherie Hill, IrieDance, UC Berkeley Alumna Rulan Tangen, DANCING EARTH Latanya Tigner, TDPS, Dimensions Dance Theater Katie O'Connor, UC Berkeley Alumna Sponsor: Department of Theater, Dance & Performance Studies Celebrating the 50th anniversary of UC Berkeley's dance program, Berkeley Dance Project 2019: the body remembers will feature an all-student cast performing pieces by professional choreographers Joe Goode, Rulan Tangen, Latanya Tigner, and Cherie Hill, as well as recent dance program alumna Katie O'Connor. Tickets required: $13 online in advance or $15 at the door for students/seniors and UC Berkeley faculty/staff, $18 online in advance or $20 at the door for general admission Ticket info: Buy tickets online

Sponsor: Bancroft Library Let there be laughter! This exhibition features Cal students' cartoons, jokes, and satire from throughout the years, selected from their humor magazines and other publications.

Art for the Asking: 60 Years of the Graphic Arts Loan Collection at the Morrison Library will be up in Doe Library's Brown Gallery until March 1st, 2019. This exhibition celebrates 60 years of the Graphic Arts Loan Collection, and includes prints in the collection that have not been seen in 20 years, as well as prints that are now owned by the Berkeley Art Museum. There are also cases dedicated...

Featuring works by internationally renowned contemporary Tibetan artists alongside rare historical pieces, this exhibition highlights the ways these artists explore the infinite possibilities of visual forms to reflect their transcultural, multilingual, and translocal lives.
Though living and working in different geographical areas—Lhasa, Dharamsala, Kathmandu, New York, and the Bay Area—the... More > In the early twentieth century, inspired by modern science such as Albert Einstein's theory of relativity, an emerging avant-garde movement sought to expand the "dimensionality" of modern art, engaging with theoretical concepts of time and space to advance bold new forms of creative expression. Dimensionism: Modern Art in the Age of Einstein illuminates the remarkable connections between the... More > The 1960s architectural phenomenon Supergraphics—a mix of Swiss Modernism and West Coast Pop—was pioneered by San Francisco–based artist, graphic and landscape designer, and writer Barbara Stauffacher Solomon. Stauffacher Solomon, a UC Berkeley alumna, is creating new Supergraphics for BAMPFA's Art Wall. Land(e)scape 2018 is the fifth in a series of temporary, site-specific works commissioned for... More > BAMPFA's Japanese art collection began in 1919 with a remarkable donation of more than a thousand woodblock prints from the estate of UC Berkeley Professor of English William Dallam Armes. This exhibition features a selection of these exceptional prints, as well as hanging scroll paintings, screens, lacquerware, and ceramics that have entered the collection over the century since this... More > Arthur Jafa is an artist, director, editor, and award-winning cinematographer whose poignant work expands the concept of black cinema while exploring African American experience and race relations in everyday life. He has stated, "I have a very simple mantra and it's this: I want to make black cinema with the power, beauty, and alienation of black music." In his renowned work Love Is The Message,... More > The exuberance, romance, and beauty of dance are central themes in this exhibition of historical and contemporary works from BAMPFA's collection. The selection is wide ranging, including prints, drawings, and photographs from the United States, Europe, and Asia. Among the highlights are two classic photographs of José Limón and Martha Graham by Barbara Morgan; George Bellows's raucous image Dance... More > Masako Miki was born in Japan but has made the Bay Area, and Berkeley in particular, her home for more than twenty years. In her work she remains close to her ancestral traditions, especially those that arise from her association with Buddhist and Shinto beliefs and practices, as well as traditional Japanese folklore. Her current work, she says, is "inspired by the idea of animism from the Shinto... More > Bringing together nearly seventy works spanning the entirety of the artist's career, this exhibition presents a fresh and eye-opening examination of Hans Hofmann's prolific and innovative artistic practice. Featuring paintings and works on paper from 1930 through the end of Hofmann's life in 1966, the exhibition includes numerous masterworks from BAMPFA's distinguished collection as well as many... More > You don't have to be a pro to know that math and science can help improve your game. In our exhibit, Well Played!, you can experiment with force, angles, and trajectory to get the highest scores you can with classic arcade games such as Skeeball, Pinball, and Basketball. Want to improve your score? Try our interactive exhibits on the math and science behind force and trajectory, and then head... 
This exhibition of artists' books centers on ideas about the built environment and has been curated by Berkeley-based book artist Julie Chen for UC Berkeley's Environmental Design Library. Featuring works by 25 artists including Robbin Ami Silverberg, Clifton Meador, Inge Bruggeman, Karen Kunc, Sarah Bryant and Barbara Tetenbaum, the exhibition explores the built environment through text, image,…
Sponsor: Environmental Design, College of This exhibition of artists' books centers on ideas about the built environment, curated by Berkeley-based book artist Julie Chen for CED's Environmental Design Library.
In the pantheon of the late nineteenth- and early twentieth-century artists who represent Mexico and Mexican art, the artwork of José Guadalupe Posada stands out as a bright constellation that continues to shine a light on important stories through woodcuts, imprints, and engravings. This exhibition was created using the books from the collections of the Doe Library. The exhibition is envisioned…
Educated in physics, mathematics, and philosophy at Princeton University and trained in graphic design at Yale, Berkeley-based Aaron Marcus explores new possibilities for expression. He created his first "computer-assisted poem-drawings" in the spring of 1972, when he served as a research associate at Yale University's School of Art and Architecture. Using standard typographical symbols, Marcus…
Bearing Light: Berkeley at 150 Exhibit - Artifacts | April 16, 2018 – February 28, 2019 every Monday, Tuesday, Wednesday, Thursday & Friday | 8 a.m.-5 p.m. | Bancroft Library, 2nd Floor Corridor This exhibition celebrates the University of California's sesquicentennial anniversary with photographs, correspondence, publications, and other documentation drawn from the University Archives and The Bancroft Library collections. It features an array of golden bears, including Oski, and explores the illustrious history of UC Berkeley.
Facing West 1: Camera Portraits from the Bancroft Collection Exhibit - Photography | November 9, 2018 – March 15, 2019 every Monday, Tuesday, Wednesday, Thursday & Friday | 10 a.m.-4 p.m. | Bancroft Library The first part of a double exhibition celebrating the tenth anniversary of the renewed Bancroft Library and its gallery, Facing West 1 presents a cavalcade of individuals who made, and continue to make, California and the American West. These camera portraits highlight the communities and peoples of Hubert Howe Bancroft's original collecting region, which extended from the Rockies to the Pacific…
Facing West: Camera Portraits from the Bancroft Collection Exhibit - Photography | November 29, 2018 – March 15, 2019 every Monday, Tuesday, Wednesday, Thursday & Friday | 10 a.m.-4 p.m. | Bancroft Library, Bancroft Gallery
Sponsor: Magnes Collection of Jewish Art and Life This exhibition will be continuing in Spring 2019. Notions of resistance, alongside fears and realities of oppression, resound throughout Jewish history. As a minority, Jews express their political aspirations, ideals of heroism, and yearnings of retaliation and redemption in their rituals, art, and everyday life. Centering on coins in The Magnes Collection, this exhibition explores how…
For nearly two decades, Yaakov (Jacob) Benor-Kalter (1897-1969) traversed the Old City of Jerusalem, documenting renowned historical monuments, ambiguous subjects in familiar alleyways, and scores of "new Jews" building a new homeland.
Benor-Kalter's photographs smoothly oscillate between two worlds, and two Holy Lands, with one lens. After immigrating from Poland to the British Mandate of... More > Auditorium installation of high-resolution images of select collection items. Acquired by The Magnes Collection of Jewish Art and Life in 2017 thanks to an unprecedented gift from Taube Philanthropies, the most significant collection of works by Arthur Szyk (Łódź, Poland, 1894 – New Canaan, Connecticut, 1951) is now available to the world in a public institution for the first time as... More > The First World War (1914-1918) uprooted millions across Europe, and beyond. Many Jews left Eastern and Southern Europe, bringing with them prized personal and communal belongings. In an attempt to rescue precious heritage from imminent destruction, these "memory objects" often ended up with museums, collectors, and art dealers in the West. Vision+Light: Processing Perception Exhibit - Multimedia | February 20 – March 14, 2019 every day with exceptions | 12-5 p.m. | Kroeber Hall, Worth Ryder Art Gallery This acclaimed program returns for its sixth, and to date most varied and provocative exhibition of the convergence of science and art. Presenting artwork and installations across a range of media and technologies, Vision + Light: Processing Perception offers stunning visual, mind-bending virtual, and intriguing tactile experiences to spark thought and inspiration. Arthur Jafa Notebook Viewing Hour Exhibit - Multimedia | February 28 | 5-6 p.m. | Berkeley Art Museum and Pacific Film Archive Throughout the duration of the exhibition Arthur Jafa / MATRIX 272, a series of notebooks that Jafa began assembling in the 1990s are on view in the Florence Helzel Works on Paper Study Center on the museum's lower level. Reflecting the artist's interest in found images and affective collage, the notebooks offer unique insight into his singular approach. Viewing hours are Wednesdays 12–1,... More >
CommonCrawl
Pattern and correlates of out-of-pocket payment (OOP) on female sterilization in India, 1990–2014
Sanjay K. Mohanty (ORCID: 0000-0001-9041-5952)1, Suyash Mishra2, Sayantani Chatterjee2 & Niranjan Saggurti3
BMC Women's Health volume 20, Article number: 13 (2020)
Background: Large scale public investment in the family welfare programme has made female sterilization a free service in public health centers in India. The programme also provides financial compensation to acceptors. Despite these interventions, the use of contraception from private health centers has increased over time, across states and socio-economic groups in India. Though many studies have examined trends, patterns, and determinants of female sterilization services, studies on out-of-pocket payment (OOP) and compensation for sterilization are limited in India. This paper examines the trends and variations in OOP and compensation associated with female sterilization in India.
Methods: Data from the National Family Health Survey - 4, 2015–16 were used for the analyses. A composite variable based on compensation received and amount paid by users was computed and categorized into four distinct groups. Multivariate analyses were used to understand the significant predictors of OOP on female sterilization.
Results: Public health centers continued to be the major providers of female sterilization services; nearly 77.8% had availed themselves of free sterilization and 61.6% had received compensation for female sterilization. About two-fifths of the women in an economically well-off state like Kerala and one-third of the women in a poor state like Bihar had paid but did not receive any compensation for female sterilization. OOP accounts for 70 to 79% of the total expenditure on female sterilization in India. The OOP on female sterilization was significantly higher among the educated and women belonging to the higher wealth quintiles, linking OOP to the ability to pay for better quality of care.
Conclusions: Public sector investment in family planning is required to provide free or subsidized provision of family welfare services, especially to women from poor households. Improving the quality of female sterilization services in public health centers and rationalizing the compensation may extend the reach of family planning services in India.
Background
Investment in family planning has both short and long run returns at the societal and individual levels. At the societal level, increase in the use of family planning reduces the fertility level, stabilizes population growth in the long run and increases the level of socioeconomic development [1]. The pathways of family planning, economic growth and poverty reduction have drawn considerable attention among leading economists and demographers [2,3,4,5,6,7]. A number of studies from Asia and Africa have established the positive effect of increasing family planning use on economic growth, per capita income and reduction in poverty [8,9,10,11,12,13,14,15]. At the individual level, access to contraception increases spacing between births, reduces unintended pregnancies and pregnancy complications, reduces unmet need, helps to realise the desired family size and improves the overall health of mothers and children [3, 16,17,18,19,20,21,22,23,24,25,26,27]. Research also suggests that the use of contraception is associated with increasing household income, savings and women's participation in paid employment [2, 6, 28,29,30,31].
Children of small families tend to have higher educational attainment, better cognitive development and better health [32,33,34,35,36,37,38,39,40]. Given the multiple benefits, many national and state governments, international donors and developmental agencies continue to support family welfare programmes worldwide [20, 41, 42]. The theoretical rationale on investment in family planning and empirical evidence on cost and benefits of family planning at both the macro and micro levels have been well established [41,42,43,44]. In 1952, India became the first country in the world to launch the centrally funded official family planning programme with the aim of reducing population growth. Since its inception, family planning services are provided free at public health centers throughout the country. Over the last six decades, the family planning programme has undergone several changes in its design, approach and execution. During the first two decades, the family planning programme had adopted a clinical approach, that is, couples who needed family planning services had to visit clinics to avail themselves of the services. To strengthen the acceptance of family planning, the 'extension education approach' was introduced in the 1960s, by reaching the community and educating them about the utility of small family norms [45]. In the 1970s, for a brief period of 2 years, the family planning services adopted a coercive approach and suffered severe criticism. Later, a 'cafeteria approach' was adopted that aimed at providing approved family planning methods in keeping with the choice of the acceptors [45]. In 1977, the family planning programme was renamed Family Welfare Programme. In the 1990s, the programme was integrated into the Maternal and Child Health Programme (MCH) and later into the Reproductive and Child Health Programme (RCH). Since 2005, the family welfare programme has been under the umbrella of the National Health Mission (NHM) that aims to address health vulnerabilities persisting in India, holistically through the life cycle approach – from infancy to adolescent to adulthood, with special focus on mothers and children [46]. A large body of literature has focused on the factors affecting female sterilization in India. The general inference from the studies suggests that female sterilization is the most used method of contraception due to convenience, free provisioning at public health centers, and provision of compensation [47,48,49]. The acceptance of female sterilization was higher among the poor, less educated, working women, and those who had at least one son in their family [50, 51]. Several myths about temporary methods of modern contraception restricts many women from accepting short-term reversible methods of contraception and opting for sterilization [52, 53]. A number of studies have documented the overemphasis of female sterilization in family welfare programmes, poor quality of care and limited choice of methods in both the high and low fertility states [48, 53,54,55,56,57]. In India, sterilization is not only provided at no cost in public health centers, but compensation is also paid to acceptors towards wage loss, transportation to and from the facility, expenses of food, child care during hospital stay and laboratory fees for related tests. The compensation amount has been revised periodically and varies in high focus statesFootnote 1 and non-high focus states and by type of provider [58]. 
A sum of ₹10 was provided to vasectomy acceptors in 1952 in Madras as the first case of compensation. It increased to ₹20 by 1964 and increased to ₹170 for vasectomy in 1983 [59]. In 2007, the compensation for tubectomy in public health centers was ₹600 each in high and non-high focus states, which increased to ₹1400 in high focus states and remained at ₹600 in non-high focus states during 2015–16. During this period, the amount of compensation for vasectomy was ₹1100 in both the high and non-high focus states. In 2015–16, the compensation in high focus states increased to ₹2000. Many private providers and Non-Governmental Organization (NGO) trusts under the public-private partnership programmes (PPP) provide compensation to acceptors of sterilization in India [58]. Well-designed service delivery strategies are effective in increasing the level of acceptance of contraception [59,60,61,62,63,64]. Studies suggest reconsidering the provision of compensation particularly to institutions, doctors and individual providers given India's remarkable gains in reducing overall fertility [55]. Despite the programmatic emphasis and provision of compensation, the use of modern method of contraception has remained same or slightly declined from 48.5% in 2005–06 to 47.8% in 2015. Three–fourths of all modern contraceptive use is in the form of female sterilization in India. In terms of demographic output, the family welfare programmes in India have been successful in reducing the fertility level — a reduction in total fertility rate (TFR) from 5.2 in 1971 to 2.3 in 2016 [65]. However, regional variations in fertility levels and contraceptive use remain a concern. Modern contraceptive use was lowest in Manipur followed by Bihar and Lakshadweep and highest in Andhra Pradesh and Punjab [66]. The provisioning of free female sterilization and compensations for acceptors of female sterilization is perhaps one of the largest public investments by the national and state governments in India. While a large number of studies have examined the trends, patterns and determinants of contraceptive use with specific reference to female sterilization, there is no nationally representative study on OOP and compensation for female sterilization in India. Besides, the use of female sterilization from private health centers is on the rise across states and socio-economic groups. Findings from the fourth round of National Family Health Survey (NFHS 4) reveal that about 17% of the female sterilization users had undergone the procedure in private facilities [66]. In this context, this paper examines the inter-state variations of OOP and compensation received for voluntary female sterilization in India. We hypothesize that an increasing proportion of population is paying for female sterilization services and the inter-state variation in OOP is large in India. We have used unit data from the individual file of the fourth round of the National Family Health Survey (NFHS 4) conducted during 2015–16. The NFHS 4 survey had interviewed 601,509 households covering 699,686 women aged 15–49 years and 112,122 men aged 15–54 years. It has the distinction of providing district level estimates (640 districts) while the earlier rounds provided state level estimates of demographic and health variables. The results of the survey along with methodology and sampling design are available elsewhere [66]. The NFHS 4 survey data collection was conducted in two phases during January 2015 and December 2016. 
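The compensation schedule described above is essentially a small policy lookup table, and it can be made concrete as below. This is an illustrative sketch only: the function and its structure are hypothetical, and the rupee amounts are simply those quoted in the text (the start year attached to the ₹1,100 vasectomy figure is an assumption, since the text dates that figure only loosely).

```python
# Hypothetical lookup of the compensation schedule described above.
# Keys: (year the rate took effect, state group, method) -> amount in rupees.
COMPENSATION = {
    (2007, "high_focus", "tubectomy"): 600,
    (2015, "high_focus", "tubectomy"): 1400,
    (2007, "non_high_focus", "tubectomy"): 600,
    (2015, "non_high_focus", "tubectomy"): 600,
    (2007, "high_focus", "vasectomy"): 1100,      # assumed start year
    (2015, "high_focus", "vasectomy"): 2000,
    (2007, "non_high_focus", "vasectomy"): 1100,  # assumed start year
}

def scheduled_compensation(year, state_group, method):
    """Return the compensation in force in `year` for a public-facility procedure."""
    revisions = [y for (y, g, m) in COMPENSATION
                 if g == state_group and m == method and y <= year]
    if not revisions:
        raise KeyError("no schedule entry on or before this year")
    # The most recent revision on or before `year` applies
    return COMPENSATION[(max(revisions), state_group, method)]

print(scheduled_compensation(2016, "high_focus", "tubectomy"))      # 1400
print(scheduled_compensation(2010, "non_high_focus", "vasectomy"))  # 1100
```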
Data on female sterilization, such as the year of sterilization, expenditure and compensation received, were collected from the respondents who had undergone female sterilization. For the all-India analyses, we limit the cases to those who availed themselves of sterilization up to 2014. Out of the 699,686 women interviewed, 526,966 were ever-married and 172,720 were never-married women in the 15–49 age group. Those who were never in union, those with missing information and those who had not accepted sterilization were excluded from the analyses. The effective sample size was 165,489 women who had ever undergone female sterilization. Among them, a total of 38,561 women had paid for the sterilization services; 17,512 women were sterilized at public health centers and 20,840 availed themselves of the service from private centers (Fig. 1).
Fig. 1: Schematic presentation of women who had accepted sterilization by type of facility in India, 2015–16
For the first time, the NFHS 4 collected information on the total amount paid for sterilization and compensation received by those sterilized. The questions pertaining to the amount paid and compensation received on sterilization were: "How much did you pay in total for the sterilization, including any consultation you may have had?", "Did you receive compensation for the sterilization?" and "How much compensation did you receive?". Errors related to 'Do Not Know' (DK) responses, missing values and the amount paid for sterilization were corrected prior to the analyses. The details and procedure of data cleaning are available elsewhere [67].
Outcome variables
A composite variable based on compensation received and amount paid for female sterilization was used as the outcome variable. The composite variable was computed and categorized into four distinct groups: (i) those who neither paid nor received compensation, (ii) those who paid and received compensation, (iii) those who paid and did not receive compensation, and (iv) those who did not pay but received compensation. The OOP is defined as the total amount spent on sterilization less the compensation received.
Independent variables
Independent variables used in the analyses include year (time), type of facility of sterilization, quality of care and background characteristics. The year of sterilization is used to examine the trends in compensation received and amount paid for female sterilization. The type of facility of sterilization was broadly categorized into three: public, private and others (Footnote 2). The respondents' perception of quality of care was categorized as 'good' ('very good' or 'all right') and 'not good' ('not so good' or 'bad'). The other explanatory variables included in the analyses were: women's age (less than 25, 25–34, 35–49), women's education (no education, up to primary school, up to secondary school, high school and above), and number of surviving children. The variables relating to the household included wealth quintile (poorest, poorer, middle, richer, richest), caste (scheduled caste, scheduled tribe, other backward class and others), religion (Hindu, Muslim and others), and place of residence (rural and urban). Caste is a social variable, and the population of India is conventionally classified into four caste groups, namely, Scheduled Caste (SC), Scheduled Tribe (ST), Other Backward Class (OBC) and others. The ST, SC and OBC are considered socially disadvantaged caste groups, and reservations in education and employment, along with many other benefits from national, state and local governments, are made available to them.
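To make the outcome construction concrete, here is a minimal pandas sketch of the four-category composite variable and the OOP measure defined above. The column names (`amount_paid`, `compensation`) are hypothetical stand-ins for the recoded NFHS 4 variables, and the four rows are invented for illustration.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "amount_paid":  [0, 1500, 800, 0],
    "compensation": [0, 600,  0,   1400],
})

paid = df["amount_paid"] > 0
comp = df["compensation"] > 0

# The four mutually exclusive groups defined in the text
conditions = [~paid & ~comp, paid & comp, paid & ~comp, ~paid & comp]
labels = [
    "neither paid nor received compensation",
    "paid and received compensation",
    "paid, no compensation",
    "did not pay, received compensation",
]
df["payment_group"] = np.select(conditions, labels, default="unclassified")

# OOP = total amount spent on sterilization less compensation received
df["oop"] = df["amount_paid"] - df["compensation"]
print(df)
```

Note that `oop` can be negative when the compensation received exceeds the amount paid, which is exactly how the negative state-level means reported in the Results arise.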
Descriptive analysis, OOP adjusted to constant prices, bivariate analysis and a two-part regression model were used in the analyses. The variations in OOP and the amount of compensation received were analyzed by wealth quintile, place of residence, and educational attainment of mothers across the states of India. The compensation received and the amount paid for female sterilization were truncated at the 99.5th percentile. We present the amount paid for female sterilization and the compensation received between 2011 and 2016 at constant prices, based on the Consumer Price Index (CPI) available from the Reserve Bank of India (RBI) for all India and for each state. Since 2011, the CPI has been computed for each state on an annual basis at 2016 prices.
Multivariate analysis
The two-part regression method was used to identify the significant predictors of OOP and to obtain the predicted mean OOP of female sterilization in India. The outcome variable, OOP on female sterilization, is skewed and contains a large number of zero values. In such cases, the two-part model is one of the preferred methods for analysis. The first part uses a logit model to describe the likelihood of an individual incurring OOP on female sterilization by selected socio-demographic and economic variables. The model takes the following form:
$$ P\left(y_i>0\right)=\frac{\exp \left(\beta x\right)}{1+\exp \left(\beta x\right)} $$
where $y_i = 0$ indicates that the individual has no OOP on sterilization. The second part of the model estimates the level of OOP among women who incurred any OOP, using ordinary least squares (OLS) regression. In the OLS regression, the logarithm of the OOP incurred was used as the dependent variable. The model predicts the OOP on female sterilization after adjusting for selected socio-demographic and economic variables.
Results
Female sterilization by type of facility in states of India
Figure 2 presents the trends in female sterilization by type of facility in India. During 1990–2014, the share of female sterilizations conducted in public health centers declined from 88% to 78%, and that from private health centers increased from 12% to 22%. The share of female sterilization from other types of facilities remained at a low level over time (less than 0.4%). Public health centers thus remained the most preferred type of facility for female sterilization in India.
Trends in share of female sterilization (percentage) by type of facility in India, 1990–2014
Figure 3 presents the inter-state variations in female sterilization by type of facility in India. More than 80% of the acceptors of female sterilization in India received services from public health centers. The state variations of female sterilization in public and private health centers were large. In Chandigarh, Haryana, Andaman and Nicobar Islands, Lakshadweep, Odisha, Uttarakhand, Tripura, Sikkim, Chhattisgarh, Rajasthan and Pondicherry, over 90% of the female sterilizations were carried out in public health centers. On the other hand, in Kerala, Karnataka, Telangana, Bihar, Manipur, Mizoram, Jammu and Kashmir, Delhi and Daman and Diu, about 25% of female sterilizations were carried out in private health centers. In general, most of the sterilizations were conducted in public health centers across the states of India.
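The two-part model described under Multivariate analysis above can be sketched with statsmodels as follows. This is a schematic re-implementation on synthetic data, not the authors' code: the covariate names and the data-generating step are invented, and only the overall structure (a logit for any OOP, then OLS on log OOP among positive values) follows the text.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "age_group": rng.choice(["<25", "25-34", "35-49"], n),
    "education": rng.choice(["none", "primary", "secondary", "higher"], n),
    "wealth":    rng.choice(["poorest", "poorer", "middle", "richer", "richest"], n),
    "facility":  rng.choice(["public", "private"], n, p=[0.8, 0.2]),
})
# Synthetic OOP: mostly zero for public-facility users, larger and mostly
# positive for private-facility users (toy data only)
base = np.where(df["facility"].eq("private"), 6000, 300)
p_pos = np.where(df["facility"].eq("private"), 0.9, 0.15)
df["oop"] = np.where(rng.random(n) < p_pos, rng.gamma(2.0, base / 2.0), 0.0)

# Part 1: logit for whether any OOP was incurred
df["any_oop"] = (df["oop"] > 0).astype(int)
part1 = smf.logit(
    "any_oop ~ C(age_group) + C(education) + C(wealth) + C(facility)", data=df
).fit()

# Part 2: OLS on log(OOP), restricted to women with positive OOP
positive = df[df["oop"] > 0].copy()
positive["log_oop"] = np.log(positive["oop"])
part2 = smf.ols(
    "log_oop ~ C(age_group) + C(education) + C(wealth) + C(facility)", data=positive
).fit()

print(part1.summary())
print(part2.summary())
```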
Percent distribution of female sterilization by type of facility in states of India, 2010–14 Amount paid, compensation received and OOP for female sterilization in India Table S1 presents the percent distribution of the amount paid and compensation received for female sterilization, total expenditure (at current prices) and OOP as a share of total expenditure during 1990–2014. The majority of the women had not paid but received compensation for female sterilization (about 60%). The proportion of women who had paid for sterilisation and did not receive any compensation has almost doubled over time; from 10.2% in 1990 to 20.9% by 2014. Similar increase was also noticed for those who paid and received compensation for sterilisation. The OOP as a share of the total expenditure on female sterilization in India showed an increase from 70% to 79% with some variations over time. Table S2 presents the trends in percent distribution of the amount paid and compensation received for female sterilization, total expenditure at current price (in ₹) and OOP as a share of total expenditure by type of facility (public and private health centers) in India. Among those women who utilized female sterilization services from public health centers, the majority did not pay but received compensation. In general, over 72% women reported receiving some form of compensation from public health centers in 1990 and 85% in 2014. The majority of women availing services from private health centers had paid for the services and did not receive any compensation. It has increased from 81% in 1990 to 91% by 2014. In 1990, the mean total expenditure on sterilization was ₹97 in public health centers and ₹5406 in private health centers and ₹306 in public and ₹10,304 in private health centers in 2014 (at current price). A gradual decline in the ratio of total expenditure in private and public health centers incurred for female sterilization was observed from 2005. Table 1 presents the distribution of composite variables based on amount paid and compensation received for female sterilization, along with the level of TFR, the OOP and its share in the total expenditure for female sterilization in the states of India during 2010–2014 at constant prices with base year 2016. At the national level, about 12% of the women had neither paid nor received compensation, 8% had paid and received compensation, 20% had paid for female sterilization but did not receive any compensation, while around 60% of the women did not pay for the service and received compensation. Among the major states, about 42% of the women in Kerala had paid but did not receive any compensation followed by Manipur and Nagaland (39%) in contrast to around 4% women in Madhya Pradesh and Himachal Pradesh respectively. In the state of Himachal Pradesh, 88% of the women did not pay but received compensation followed by Madhya Pradesh (84%); the least was found in Nagaland (23%) and Manipur (30%). The percentage of women who neither paid nor received any compensation varied from 36% in Arunachal Pradesh to 5% in Madhya Pradesh and Bihar. Nearly 46% of the women in Mizoram and 28% of the women in Bihar had paid and received compensation. Table 1 Percent distribution of women who paid and received compensation, percent sterilized, TFR, mean OOP and OOP as a share to total expenditure on female sterilization by states of India at constant prices (in ₹), 2010–14 The state differentials in mean OOP and its share to the total expenditure on female sterilization were high. 
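As a small illustration of how the Table 1 quantities are obtained, the sketch below deflates rupee amounts to 2016 prices with a CPI factor and then computes state-level mean OOP and the OOP share of total expenditure. All numbers and the `cpi_factor` column are invented; the factor stands for the ratio of the 2016 price level to the price level in the year of sterilization.

```python
import pandas as pd

df = pd.DataFrame({
    "state":        ["Madhya Pradesh", "Madhya Pradesh", "Kerala", "Kerala"],
    "amount_paid":  [0, 200, 7000, 5000],
    "compensation": [1400, 600, 0, 200],
    "cpi_factor":   [1.10, 1.05, 1.10, 1.05],  # 2016 prices / prices in year of sterilization
})

# Deflate to constant 2016 prices, then take OOP = amount paid - compensation
df["oop_2016"] = (df["amount_paid"] - df["compensation"]) * df["cpi_factor"]
df["paid_2016"] = df["amount_paid"] * df["cpi_factor"]

by_state = df.groupby("state").agg(
    mean_oop=("oop_2016", "mean"),
    oop_total=("oop_2016", "sum"),
    paid_total=("paid_2016", "sum"),
)
# OOP as a share of total expenditure; a negative mean OOP (as reported for
# Madhya Pradesh and Rajasthan) simply means compensation exceeded payments
by_state["oop_share_pct"] = 100 * by_state["oop_total"] / by_state["paid_total"]
print(by_state)
```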
The OOP in states such as Madhya Pradesh (−₹173) and Rajasthan (−₹143) with high utilization of public health facilities for female sterilization had negative OOP. The negative OOP is due to the fact that the compensation received is higher than the money spent. Besides, in these two states, over 90% of the acceptors availed themselves of sterilization at public health centers. The OOP for female sterilization was highest in Nagaland (₹8423) followed by Manipur (₹7390) and Kerala (₹6742). The share of OOP was more than 90% in Daman and Diu, Manipur, Kerala, Nagaland, Meghalaya, Mizoram and Telangana. It was comparatively less in states where the utilization of female sterilization from the public facility was more than 90%. Socioeconomic and demographic variations of OOP on female sterilization Table 2 presents the differentials in mean OOP and its share in the total expenditure for female sterilization by selected socio-economic and demographic characteristics in India, Uttar Pradesh and Odisha at constant prices during 2010–14. The states of Uttar Pradesh and Odisha are selected as illustrations to reflect the variations in OOP across the states, as the use of sterilization from public health centers is high in Odisha and from private health centers is high in Uttar Pradesh. The OOP for female sterilization was positively associated with educational attainment. It was about eight times higher for females with higher education compared to uneducated women in India. The share of OOP in the total expenditure was more than double for those women with above secondary level education. The OOP was highest among 'others' in the caste category, higher in urban than rural areas and also varied with the respondent's perception on the quality of care of female sterilization services. Those women who stated that the quality of care was good in India and Uttar Pradesh had incurred a higher OOP. The mean OOP in Odisha varied from ₹640 to ₹275 among those stating that the quality of care was 'not good' and those stating that it was 'good' respectively. The OOP was positively associated with the wealth quintile suggesting that the burden was higher among those who had the ability to pay more. The mean OOP varied between ₹5181 and ₹201among women belonging to the richest wealth quintile and the poorest wealth quintile respectively in India. Table 2 Mean OOP and its share to the total expenditure on female sterilization by background characteristics at constant prices (₹), India, 2010–2014 Correlates of OOP and predicted OOP on female sterilization in India Table 3 shows the result of a two-part regression model and predicted expenditure on sterilization in India. The results of the logit regression show that the likelihood of incurring OOP on sterilization was positively associated with age, education level, economic status of the woman and sterilization by type of facility. For instance, women aged 35–49 were 44% more likely to incur OOP on sterilization compared to woman aged less than 25 years. The likelihood of incurring OOP on sterilization was 51% higher among women with higher education compared to women with no education. Similarly, the likelihood of incurring OOP was higher among women belonging to the richest wealth quintile compared to women from the poorest quintile. Further, a woman utilizing sterilization from a private health facility was significantly more likely to incur OOP compared to those utilizing sterilization from a public health center. 
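The percentage effects quoted above can be read back from the fitted logit coefficients: exponentiating a coefficient gives an odds ratio, and exp(β) − 1 is the percentage change in the odds. The snippet below back-derives illustrative coefficients from the reported percentages; treating "44% more likely" as an odds ratio of 1.44 is our reading, not a statement from Table 3.

```python
import math

# Back-derived, illustrative values (not taken from Table 3)
reported = {"age 35-49 vs under 25": 1.44, "higher education vs none": 1.51}

for label, odds_ratio in reported.items():
    beta = math.log(odds_ratio)
    print(f"{label}: beta = {beta:.3f}, exp(beta) - 1 = {math.exp(beta) - 1:.0%}")
```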
Table 3 Results of the two-part regression model and predicted OOP on female sterilization in India, 2015–16 For the second part, the log transformation of woman who incurred any OOP was used as a dependent variable. The probability of incurring any OOP on sterilization was higher among older women, belonging to the richest wealth quintile and availing themselves of sterilization from a private health center. The probability of incurring any OOP on sterilization was 23.6% higher among women aged 35–49 years compared to women aged less than 25 years. Further, the probability of incurring any OOP was 74.8% higher among women belonging to the richest wealth quintile compared to women from the poorest quintile. Similarly, the probability of incurring any OOP among women using a private health center for sterilization was about twice higher compared to women using a public health center. The predicted mean expenditure on sterilization was 62% higher for women aged 35–49 (₹3737) compared to women aged less than 25 years (₹2307). Similarly, the predicted mean was four times for women with higher education (₹6984) compared to women with no education (₹1671). On an average, the predicted mean expenditure was eleven times higher for a woman belonging to the richest wealth quintile (₹7670) compared to a woman from the poorest quintile (₹676). The OOP was almost ten times higher for a woman utilizing sterilization from a private health center (₹6428) compared to a public health center (₹669). For illustrating the state patterns, we have estimated the coefficients and OOP for Uttar Pradesh and Odisha. Table S3 presents the results of the two-part regression model to identify the correlates of incurred OOP and present the adjusted predicted mean OOP. In general, similar patterns were observed for Odisha and Uttar Pradesh. The predicted OOP for women aged 35+ years in Odisha was three times higher than that for women below 25 years. The OOP on sterilization increases with the educational level of the women in both the states. On an average, the mean OOP on female sterilization was about three times (₹6112) and ten times (₹5248) more for women with higher secondary education and above, compared to women with no education in Uttar Pradesh and Odisha respectively. Fertility reduction in India is largely attributed to increased use of female sterilization and increase in female age at marriage. Female sterilization continued to be the most preferred and dominant method of limiting family size across socioeconomic groups in India. It is popular among the poor and the less educated. India's family welfare programme provides not only free family planning services but compensation is also paid to acceptors towards wage loss, transportation to and from the facility, expenses of food, child care during hospital stay and laboratory fees for any tests. Besides, the central and state governments, the private sector and non-governmental organizations have been working towards promoting voluntary family planning services including female sterilization. Despite this, the regional patterns in the use of modern contraceptive methods are strikingly low in the poorer states of Bihar, Odisha, Jharkhand and Uttar Pradesh and the unmet need for limiting and spacing is higher in these states. Besides, the quality of family planning services remains a concern. 
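As a technical aside on the predicted means reported above: reusing `part1`, `part2` and `df` from the two-part sketch earlier, the predicted mean OOP can be recovered by combining the two parts, E[OOP | x] = Pr(OOP > 0 | x) · E[OOP | OOP > 0, x]. Retransforming from the log scale here uses Duan's smearing estimator; whether the authors used this particular retransformation is an assumption on our part.

```python
import numpy as np

p_any = part1.predict(df)                       # Pr(OOP > 0 | x) from the logit part
smear = np.exp(part2.resid).mean()              # Duan (1983) smearing factor
cond_level = np.exp(part2.predict(df)) * smear  # E[OOP | OOP > 0, x]
df["predicted_oop"] = p_any * cond_level

print(df.groupby("wealth")["predicted_oop"].mean().round(0))
```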
The national, state and local governments and international donors continue to invest heavily in family planning programmes in India to achieve the desired family size, meet the unmet need and provide family planning services at no cost at public health centers, but the household does not necessarily receive free services. Though the demographic, health and social benefits of family planning programmes in India have been examined periodically, few studies focus on the economic benefits of family planning investments. Periodic assessments of economic benefits are necessary for evidence based policy and programmes. In this context, this is the first comprehensive study that examines the expenditure, compensation received and OOP for female sterilization in India. We have used the unit data of the recently released NFHS 4, which provided information on the amount spent and compensation received on sterilization. The salient findings are: First, public health centers are the most preferred type of facility for undergoing female sterilization in India. Over 85% of acceptors of female sterilization did not pay for availing themselves of the services, largely from public health centers. Similarly, a large proportion of sterilization acceptors from public health centers received compensation, and the trend remains the same over time. Over 90% of acceptors from private health centers paid for the services. Despite this, the utilization of female sterilization in private health centers has increased over time. Second, the state pattern in the use of sterilization (among major states) by type of facility suggests that the use of sterilization in private health centers was highest in Kerala (49%) followed by Karnataka and Telangana. On the other hand, the use of female sterilization in public health centers was highest in Chandigarh followed by Haryana and Odisha. Third, the amount spent, the compensation received and the OOP for female sterilization varied across states in India. At the national level, about 12% of the women neither paid nor received compensation, 8% paid and received compensation, 20% paid for female sterilization but did not receive any compensation, while about 60% of the women did not pay for the service but received compensation. Among the major states, in Kerala, about 42% of the women paid but did not receive any compensation, followed by Manipur and Nagaland (39%), in contrast to Madhya Pradesh and Himachal Pradesh where it was around 4%. In Himachal Pradesh, 88% of the women did not pay but received compensation, followed by Madhya Pradesh (84%); the least was in Nagaland (23%) and Manipur (30%). Fourth, the OOP as a share of the total amount spent on female sterilization in India varies between 70 and 79%, lower in public health centers and higher in private health centers. The state variations in OOP on female sterilization were prominent. A higher proportion of women from the economically developed states of Kerala, Karnataka, and Delhi and from the economically poorer states of Meghalaya and Bihar incurred OOP for undergoing female sterilization. Fifth, the amount of OOP for female sterilization varied widely across the public and private centers across the states of India. The total expenditure at the private health centers was many times higher than that in public health centers.
The OOP on female sterilization was positively and significantly associated with educational attainment, share of urban population, economic well-being of households and better quality of care. This suggests that women belonging to households with better socioeconomic conditions are paying for services from private health centers and for better quality of care. However, the multivariate analyses suggested that OOP was also higher among women from scheduled castes. We provide some explanations in support of the results. Public health centers continue to be the major providers of female sterilization services in most states in India. The use of female sterilization from private health centers has been increasing over time, possibly due to convenience, efficiency, quality of care and the improved standard of living of the population. Previous studies suggest that about 10% of the sterilization acceptors from public health centers paid for services in 2015. The mean expenditure for carrying out female sterilization in private facilities was around ₹3400 [68]. About half of the women in Kerala are opting for sterilization in private health centers, which is suggestive of preferences and ability to pay for the services. The higher proportion of female sterilization acceptors using private facilities in Bihar suggests a lack of facilities in public health centers. Similarly, lack of proper infrastructure and lack of access to public health facilities may hinder the utilization of sterilization among women in the marginalized sections of society. Reasons reported for the low use of services in public health centers include "the waiting time" and "disrespectful behaviour" [69]. Further, under the Public Private Partnerships (PPPs), more private facilities have been accredited by the government to increase the provider base for family planning services. However, our findings suggest that the proportion of women using private health centers and receiving compensation is very low (less than 5%). The OOP, in general, is associated with the ability to pay for female sterilization services, with the exception of Bihar, Nagaland, Meghalaya and Mizoram. The proportion of women who paid but did not receive compensation was higher in many economically backward states such as Nagaland, Meghalaya, Mizoram and Bihar. In these states, the share of women who opted for female sterilization was much lower, while high OOP was more prevalent and the share of female sterilizations accepted from private facilities was higher compared to other states. While female sterilization was less than 10% for Nagaland, Manipur and Meghalaya, it was around 20% for Bihar, nearly 50% of the estimate at the national level. Also, the share of women who had not paid but received compensation was higher in states such as Haryana, where there was higher utilization of services from public health centers. The reduction in the ratio of total expenditure on female sterilization between private and public health centers could be attributed to the public private partnership (PPP) programmes, through which accredited private health centers also provide compensation to sterilization acceptors. These findings have implications for equity-driven interventions on sterilization in India. First, the public and private differentials in OOP on sterilization are large across states and socio-economic groups. Given the increasing use of services from private health centers, state interventions to regulate the price in private health centers are called for.
This, in turn, would reduce catastrophic health spending and distress financing on sterilization. For example, the high OOP even in many of the poorer states such as Mizoram, Meghalaya and Nagaland warrants larger investment in family planning services. Second, the high OOP among socially disadvantaged groups such as Scheduled Castes and Scheduled Tribes needs programmatic attention. While provisioning of such services in public health centers is free or subsidised, private health centers do not provide subsidised services to any group of the population. In such cases, the public-private partnership may be strengthened to reduce the OOP burden for disadvantaged groups. Third, public health centers continue to be the largest providers of sterilization services in India. Thus, improving the quality of female sterilization services in and across states is important for improving women's health. Fourth, the compensation provided for female sterilization should exclude women from higher socioeconomic households. Although the compensation is set low enough so that people do not access sterilization because they need the compensation, and it implies no coercion in sterilization, the compensation received for female sterilization should be revised for certain lagging sections of the population. Although the findings offer important insights into the economics of female sterilization within family planning programmes, the results must be interpreted in the light of certain study limitations. These include underestimation of costs, as the study does not include indirect costs associated with female sterilization such as loss of wages, transport to and from the facility, expenses for food and child care during the hospital stay, and laboratory fees for related tests. The questions on quality of care were not exhaustive, as they were based on the perception of the receptors and did not capture all the essential dimensions of quality of care. Based on the results, this study concludes that investment in family planning in public health centers should continue, as these services are largely availed by the poor, less educated and marginalized population. Family planning programmes could benefit from an equity-driven focus, and there is an urgent need to regulate the private sector on cost and quality of services. Public and private sector investments under public private partnerships (PPP) should increase coverage and accessibility of voluntary family planning services and help achieve the desired family size. Women in the poorer and high fertility states such as Bihar and north-eastern states such as Manipur and Meghalaya need further attention, as OOP is high in those states. The dataset used for the current study is available in the DHS repository. Available from: [https://dhsprogram.com/data/dataset/India_Standard-DHS_2015.cfm?flag=0]. Footnote 1: High Focus States: UP, Bihar, Chhattisgarh, Jharkhand, Odisha, MP, Rajasthan, Gujarat, Uttarakhand, Assam, Haryana. Footnote 2: The public centers include Government/municipal hospitals, sub-centers, public health centers, community health centers, district hospitals and camps. The private centers include private hospitals, private clinics, and private mobile clinics. Non-governmental organisations (NGOs), trust hospitals or clinics are classified as 'others'.
Abbreviations
CPI: Consumer Price Index
DK: Do Not Know
IUD: Intrauterine Device
MCH: Maternal and Child Health
NFHS: National Family Health Survey
NGO: Non-Governmental Organisation
NHM: National Health Mission
OBC: Other Backward Classes
OOP: Out-of-Pocket Payment
PPP: Public-Private Partnership
RBI: Reserve Bank of India
RCH: Reproductive and Child Health
SC: Scheduled Caste
ST: Scheduled Tribe
TFR: Total Fertility Rate
References
Coale AJ, Hoover EM. Population growth and economic development. Princeton: Princeton University Press; 2015.
Maloney CB. The economic benefits of birth control and access to family planning. 2015. Available at: https://www.jec.senate.gov/public/index.cfm/democrats/fact-sheets?ID=B73D9027-1539-44D1-93C8-C6A43ED7FB1E.
Bongaarts J, Sinding S. Population policy in transition in the developing world. Science. 2011;333(6042):574–6.
Coleman I, Lemmon GT. Family planning and US foreign policy: ensuring US leadership for healthy families and communities and prosperous stable societies. 2011. Available at: http://www.universalaccessproject.org/wp/wpcontent/uploads/2015/06/CFR-Report-Family-Planning-and-US-Foreign-Policy.pdf
Lee R, Mason A. What is the demographic dividend? Finance Dev. 2006;43(3):16.
Bloom DE, Canning D. Global demographic change: dimensions and economic significance. Cambridge: National Bureau of Economic Research; 2004.
Campbell AA. The role of family planning in the reduction of poverty. J Marriage Fam. 1968;1:236–45.
Karra M, Canning D, Wilde J. The effect of fertility decline on economic growth in Africa: a macrosimulation model. Popul Dev Rev. 2017;43(S1):237–63.
Li Y. The relationship between fertility rate and economic growth in developing countries. 2016. Available at: http://lup.lub.lu.se/luur/download?func=downloadFile&recordOId=8727479&fileOId=8768892
Ashraf QH, Weil DN, Wilde J. The effect of fertility reduction on economic growth. Popul Dev Rev. 2013;39(1):97–130.
Union A. Family Planning and the Demographic Dividend in Africa. Policy Brief. 2013. Available at: http://www.who.int/pmnch/media/events/2013/au_policy_brief_family.pdf. Accessed 30 Mar 2019.
Fürnkranz-Prskawetz A, Lee RD, Lee SH, Mason A, Miller T, Mwabu G, Ogawa N, Soyibo A. Population change and economic growth in Africa; 2013.
Sinding SW. Population, poverty and economic development. Philos Trans Royal Soc B Biol Sci. 2009;364(1532):3023–30.
Gribble J, Voss ML. Family planning and economic well-being: new evidence from Bangladesh. 2009. Available at: https://assets.prb.org/pdf09/fp-econ-bangladesh.pdf.
Bloom DE, Finlay JE. Demographic change and economic growth in Asia. Asian Econ Policy Rev. 2009;4(1):45–64.
Singh S, Shekhar C, Acharya R, Moore AM, Stillman M, Pradhan MR, Frost JJ, Sahoo H, Alagarajan M, Hussain R, Sundaram A. The incidence of abortion and unintended pregnancy in India, 2015. Lancet Glob Health. 2018;6(1):e111–20.
Kallner HK, Danielsson KG. Prevention of unintended pregnancy and use of contraception—important factors for preconception care. Ups J Med Sci. 2016;121(4):252–5.
Chola L, McGee S, Tugendhaft A, Buchmann E, Hofman K. Scaling up family planning to reduce maternal and child mortality: the potential costs and benefits of modern contraceptive use in South Africa. PLoS One. 2015;10(6):e0130077.
Canning D, Schultz TP. The economic consequences of reproductive health and family planning. Lancet. 2012;380(9837):165–71.
Cleland J, Conde-Agudelo A, Peterson H, Ross J, Tsui A. Contraception and health. Lancet. 2012;380(9837):149–56.
Conde-Agudelo A, Rosas-Bermudez A, Castaño F, Norton MH.
Effects of birth spacing on maternal, perinatal, infant, and child health: a systematic review of causal mechanisms. Stud Fam Plan. 2012;43(2):93–114. Butler AS, Clayton EW. Overview of family planning in the United States. In: A review of the HHS family planning program: Mission, management, and measurement of results. Washington DC: National Academies Press (US); 2009. Joshi S, Schultz TP. Family planning as an investment in development: evaluation of a program's consequences in Matlab, Bangladesh. Yale University Economic Growth Center Discussion Paper, vol. 951; 2007. World Health Organization. Health benefits of family planning. 1995. Available at: http://apps.who.int/iris/bitstream/handle/10665/62091/WHO_FHE_FPP_95.11.pdf?sequence=1 Jones EF, Forrest JD, Henshaw SK, Silverman J, Torres A. Unintended pregnancy, contraceptive practice and family planning services in developed countries. Fam Plan Perspect. 1988;20(2):53–67. Bhargava A, Jamison DT, Lau LJ, Murray CJ. Modeling the effects of health on economic growth. J Health Econ. 2001;20(3):423–40. Jayaraman A, Mishra V, Arnold F. The relationship of family size and composition to fertility desires, contraceptive adoption and method choice in South Asia. Int Perspect Sex Reprod Health. 2009;1:29–38. Erfani A. The impact of family planning on women's educational advancement in Tehran, Iran. International Center for Research on Women Fertility and Empowerment Working Paper Series; 2012. p. 008–2012. Miller G. Contraception as development? New evidence from family planning in Colombia. Econ J. 2009;120(545):709–36. Cleland J, Bernstein S, Ezeh A, Faundes A, Glasier A, Innis J. Family planning: the unfinished agenda. Lancet. 2006;368(9549):1810–27. Mammen K, Paxson C. Women's work and economic development. J Econ Perspect. 2000;14(4):141–64. Kugler AD, Kumar S. Preference for boys, family size, and educational attainment in India. Demography. 2017;54(3):835–59. Cáceres-Delpiano J. The impacts of family size on investment in child quality. J Hum Resour. 2006;41(4):738–54. Goux D, Maurin E. The effect of overcrowded housing on children's performance at school. J Public Econ. 2005;89(5–6):797–819. Downey DB. When bigger is not better: family size, parental resources, and children's educational performance. Am Sociol Rev. 1995;1:746–61. Knodel J, Havanon N, Sittitrai W. Family size and the education of children in the context of rapid fertility decline. Popul Dev Rev. 1990;1:31–62. Blake J. Family size and achievement. Berkeley: Univ of California Press; 1989. Blake J. Family size and the quality of children. Demography. 1981;18(4):421–42. Rosenzweig MR, Wolpin KI. Life-cycle labor supply and fertility: causal inferences from household models. J Polit Econ. 1980;88(2):328–48. Balderama-Guzman V. Child health, nutrition and family size: a comparative study of rural and urban children. Popul Data Inf Serv. 1978;4:32. Starbird E, Norton M, Marcus R. Investing in family planning: key to achieving the sustainable development goals. Glob Health Sci Pract. 2016;4(2):191–210. Singh S, Darroch JE, Ashford LS, Vlassoff M. Adding it up: the costs and benefits of investing in family planning and maternal and new born health. New York: Guttmacher Institute; 2009. Miller G, Babiarz KS. Family planning program effects: evidence from microdata. Popul Dev Rev. 2016;1:7–26. Bongaarts J, Cleland J, Townsend JW, Bertrand JT, Gupta MD. Family planning programs for the 21st century. New York: Population Council; 2012. Bhende AA, Kanitkar T. 
Principles of population studies. Bombay: Himalaya Publishing House; 1978. Ministry of Health and Family Welfare, Government of India. 2005. https://nhm.gov.in/index1.php?lang=1&level=1&sublinkid=969&lid=49. Accessed 28 Oct 2019. Asari VG. Determinants of contraceptive use in Kerala: the case of son/daughter preference. J Fam Welf. 1994;40(3):19–25. De Oliveira IT, Dias JG, Padmadas SS. Dominance of sterilization and alternative choices of contraception in India: an appraisal of the socioeconomic impact. PLoS One. 2014;9(1):e86654. Dwivedi LK, Ram F, Reshmi RS. An approach to understanding change in contraceptive behaviour in India. Genus. 2007;1:19–54. Sulaja S. Son preference in Thbsquiuern stated of India: an analysis. Health Popul Perspect Issues. 2005;28(3):122–31. Rajaretnam T. Family size desire sex preference socio-economic condition and contraceptive use in rural Karnataka India. Demography India. 1995;24(2):275–90. Santhya KG. Changing family planning scenario in India: an overview of recent evidence; 2013. Koenig MA, Foo GH, Joshi K. Quality of care within the Indian family welfare programme: a review of recent evidence. Stud Fam Plan. 2000;31(1):1–18. Pachauri S. Priority strategies for India's family planning programme. Indian J Med Res. 2014;140(Suppl 1):S137. Srinivasan K. Population policies and family planning programmes in India: a review and recommendations. IIPS Newsletter. 2006;47(1–2):6–44. Visaria L, Jejeebhoy S, Merrick T. From family planning to reproductive health: challenges facing India. Int Fam Plan Perspect. 1999;1:S44–9. Srinivasan K. Population policies and programmes since independence (a saga of great expectations and poor performance). Demography India. 1998;27(1):1–22. Ministry of Health and Family Welfare (MoHFW). Annual Report 2015–16: Government of India. New Delhi; 2016. Available at: https://mohfw.gov.in/sites/default/files/56324455632156323214.pdf. Accessed 26 Oct 2019. Satia JK, Maru RM. Incentives and disincentives in the Indian family welfare program. Stud Fam Plan. 1986;17(3):136–45. Heil SH, Gaalema DE, Herrmann ES. Incentives to promote family planning. Prev Med. 2012;55:S106–12. Stevens JR, Stevens CM. Introductory small cash incentives to promote child spacing in India. Stud Fam Plan. 1992;23(3):171–86. Weeden D, Bennett A, Lauro D, Viravaidya M. An incentives program to increase contraceptive prevalence in rural Thailand. Int Fam Plan Perspect. 1986;1:11–6. Perkin GW. Nonmonetary commodity incentives in family planning programs: a preliminary trial. Stud Fam Plan. 1970;1(57):12–5. Sunil TS, Pillai VK, Pandey A. Do incentives matter?–evaluation of a family planning program in India. Popul Res Policy Rev. 1999;18(6):563–77. Registrar General of India (RGI). Sample registration system statistical report 2016: Ministry of Home Affairs, Government of India. New Delhi; 2017. International Institute for Population Sciences (IIPS) and ICF. National Family Health Survey (NFHS-4), 2015–16. Mumbai: IIPS; 2017. Mohanty SK, Panda BK, Khan PK, Behera P. Out-of-pocket expenditure and correlates of caesarean births in public and private health centers in India. Soc Sci Med. 2019;224:45–57. Stover J, Chandler, R, Expenditures on family planning in PF2020 focus countries in 2015. Project for the International Family Planning Expenditure Tracking Advisory Group; 2017. Available at: http://www.track20.org/download/pdf/Expenditures_Assessment_12.5.17.pdf. Keesara SR, Juma PA, Harper CC. 
Acknowledgements
This paper was prepared as part of the RASTA initiative of the Population Council. The authors express their gratitude to the reviewers and the editorial board of the journal BMC Women's Health. Support for the publication of this paper was received from the RASTA initiative of the Population Council, New Delhi.

Author affiliations
Sanjay K. Mohanty: Department of Fertility Studies, International Institute for Population Sciences, Mumbai, India
Suyash Mishra and Sayantani Chatterjee: International Institute for Population Sciences, Mumbai, India
Niranjan Saggurti: Population Council, New Delhi, India

Authors' contributions
Conception and design of study: SKM; analysis and interpretation of data: SKM, SC and SM; drafting the manuscript: SKM, SM, NS; critical revision of the manuscript for important intellectual content: SKM, SC, SM, NS. All the authors have read and approved the final manuscript.

Correspondence to Sanjay K. Mohanty.

Ethics
The analysis is based on secondary data available in the public domain, hence ethical approval was not required.

Supplementary tables
Table S1. Percent distribution of women who paid for female sterilization and received compensation, by year, in India at current prices (in INR), 1990–2014.
Table S2. Percent distribution of women who paid for female sterilization and received compensation, by year and type of facility, in India at current prices (in INR), 1990–2014.
Table S3. Results of two-part model and predicted OOP payment on female sterilization in Uttar Pradesh and Odisha, 2015–16.

Cite this article
Mohanty, S.K., Mishra, S., Chatterjee, S. et al. Pattern and correlates of out-of-pocket payment (OOP) on female sterilization in India, 1990–2014. BMC Women's Health 20, 13 (2020). https://doi.org/10.1186/s12905-020-0884-1
How many solar masses is the central black hole in the Whirlpool Galaxy?

I am having trouble finding the mass of the central black hole in the Whirlpool Galaxy, M51a/NGC 5194. Its companion galaxy NGC 5195 is stated to have a supermassive black hole at its centre of about 19 million solar masses, but I can't find details on the Whirlpool Galaxy's own central black hole. Is its mass known?

Tags: black-hole, galaxy

Comment (PM 2Ring, Jul 26 '20): Maybe it's hard to make a good mass estimate with all the dust obscuring the centre of NGC 5194.

Comment (Peter Erwin, Dec 28 '20): Using the HyperLEDA mean central stellar velocity dispersion for M51a ($\sigma = 88$ km/s) and the $M_{\rm BH}$–$\sigma$ relation from Saglia et al. (2016), one would predict $M_{\rm BH} \sim 3 \times 10^{6} M_{\odot}$. That is not, of course, an actual measurement.

Answer (James K):
It is not well known, but a paper by M. Brightman et al. gives a value of $10^{6.3\pm0.4}$ solar masses, i.e. between about 800,000 and 5 million, while noting that this estimate is lower than previous estimates of $10^{6.95}$, or about 9 million solar masses.

It seems that, although the galaxy is face-on to us, its black hole is viewed from the edge. This means that the black hole itself is hidden behind a torus of relatively cool dust and gas [image: ESA], which is visible as a dark streak in the image of the central part of the galaxy.

There is much that we don't understand about the M51a black hole. It is active, but not as active as would be expected, given the interaction between the two galaxies in M51. There is a second dark streak making an "X", the cause of which is unknown, and, as suggested by the wide error margins above, the mass of the black hole is not well constrained.
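As a rough illustration of the back-of-the-envelope estimate in the last comment, here is a minimal Python sketch of an $M_{\rm BH}$–$\sigma$ prediction. The slope and intercept below are illustrative placeholders for a relation of the usual form $\log_{10}(M_{\rm BH}/M_\odot) = \alpha + \beta \log_{10}(\sigma / 200\,\mathrm{km\,s^{-1}})$; they are not the actual Saglia et al. (2016) coefficients:

```python
import math

def mbh_from_sigma(sigma_kms, alpha=8.3, beta=5.2):
    """Predict a black-hole mass (in solar masses) from the stellar
    velocity dispersion sigma using an M-sigma power law.

    alpha and beta are illustrative values only; published fits
    (e.g. Saglia et al. 2016) differ with sample and galaxy type.
    """
    log_m = alpha + beta * math.log10(sigma_kms / 200.0)
    return 10.0 ** log_m

# HyperLEDA central dispersion for M51a quoted in the comment above
print(f"M_BH ~ {mbh_from_sigma(88.0):.2e} solar masses")  # roughly 3e6
```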
Spaceability and algebrability of sets of nowhere integrable functions
by Szymon Gła̧b, Pedro L. Kaufmann and Leonardo Pellegrini

Abstract. We show that the set of Lebesgue integrable functions in $[0,1]$ which are nowhere essentially bounded is spaceable, improving a result of García-Pacheco, Martín, and Seoane-Sepúlveda, and that it is strongly $\mathfrak {c}$-algebrable. We prove strong $\mathfrak {c}$-algebrability and nonseparable spaceability of the set of functions of bounded variation which have a dense set of jump discontinuities. Applications to sets of Lebesgue-nowhere-Riemann integrable and Riemann-nowhere-Newton integrable functions are presented as corollaries. In addition, we prove that the set of Kurzweil integrable functions which are not Lebesgue integrable is spaceable (in the Alexiewicz norm) but not $1$-algebrable. We also show that there exists an infinite-dimensional vector space $S$ of differentiable functions such that each element of the $C([0,1])$-closure of $S$ is a primitive of a Kurzweil integrable function, in connection with a classic spaceability result of Gurariy.

References
[1] Richard Aron, V. I. Gurariy, and J. B. Seoane, Lineability and spaceability of sets of functions on $\mathbb{R}$, Proc. Amer. Math. Soc. 133 (2005), no. 3, 795–803. MR 2113929, DOI 10.1090/S0002-9939-04-07533-1
[2] Richard M. Aron and Juan B. Seoane-Sepúlveda, Algebrability of the set of everywhere surjective functions on $\mathbb{C}$, Bull. Belg. Math. Soc. Simon Stevin 14 (2007), no. 1, 25–31. MR 2327324
[3] A. Bartoszewicz and Sz. Gła̧b, Strong algebrability of sets of sequences and functions, Proc. Amer. Math. Soc. (to appear).
[4] B. Bongiorno, L. Di Piazza, and D. Preiss, A constructive minimal integral which includes Lebesgue integrable functions and derivatives, J. London Math. Soc. (2) 62 (2000), no. 1, 117–126. MR 1771855, DOI 10.1112/S0024610700008905
[5] P. Enflo and V. I. Gurariy, On lineability and spaceability of sets in function spaces, unpublished notes.
[6] F. J. García-Pacheco, M. Martín, and J. B. Seoane-Sepúlveda, Lineability, spaceability, and algebrability of certain subsets of function spaces, Taiwanese J. Math. 13 (2009), no. 4, 1257–1269. MR 2543741, DOI 10.11650/twjm/1500405506
[7] F. J. García-Pacheco, N. Palmberg, and J. B. Seoane-Sepúlveda, Lineability and algebrability of pathological phenomena in analysis, J. Math. Anal. Appl. 326 (2007), no. 2, 929–939. MR 2280953, DOI 10.1016/j.jmaa.2006.03.025
[8] Russell A. Gordon, The integrals of Lebesgue, Denjoy, Perron, and Henstock, Graduate Studies in Mathematics, vol. 4, American Mathematical Society, Providence, RI, 1994. MR 1288751, DOI 10.1090/gsm/004
[9] V. I. Gurariĭ, Subspaces and bases in spaces of continuous functions, Dokl. Akad. Nauk SSSR 167 (1966), 971–973 (Russian). MR 0199674
[10] V. I. Gurariĭ, Linear spaces composed of everywhere nondifferentiable functions, C. R. Acad. Bulgare Sci. 44 (1991), no. 5, 13–16 (Russian). MR 1127022

Author information
Szymon Gła̧b: Institute of Mathematics, Technical University of Łódź, Wólczańska 215, 93-005 Łódź, Poland. Email: [email protected]
Pedro L. Kaufmann: Instituto de Matemática e Estatística, Universidade de São Paulo, Rua do Matão, 1010, CEP 05508-900, São Paulo, Brazil. Email: [email protected]
Leonardo Pellegrini: Email: [email protected]

Received by editor(s): September 23, 2011
Published electronically: December 28, 2012
Additional Notes: The second author was supported by CAPES, Research Grant PNPD 2256-2009.
Communicated by: Thomas Schlumprecht
The copyright for this article reverts to the public domain 28 years after publication.
MSC (2010): Primary 26A30; Secondary 26A42, 26A39, 26A45
DOI: https://doi.org/10.1090/S0002-9939-2012-11574-6
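For context, the notions used in the abstract can be stated as follows. These are paraphrased from their standard usage in the lineability literature (cf. the terminology in [1] and [3]), not quoted from this paper:

```latex
% Standard definitions from the lineability/spaceability literature
% (paraphrased; see e.g. Aron, Gurariy and Seoane [1]).
Let $X$ be a topological vector space and $M \subseteq X$.
\begin{itemize}
  \item $M$ is \emph{lineable} if $M \cup \{0\}$ contains an
        infinite-dimensional vector subspace of $X$.
  \item $M$ is \emph{spaceable} if $M \cup \{0\}$ contains a
        \emph{closed} infinite-dimensional vector subspace of $X$.
  \item If $X$ is a linear algebra, $M$ is \emph{strongly
        $\kappa$-algebrable} if $M \cup \{0\}$ contains a
        $\kappa$-generated \emph{free} subalgebra of $X$.
\end{itemize}
```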
An Introduction to Antenna Basics
August 24, 2016, by Mark Hughes

Antennas are used to transmit and receive information through changes in the electromagnetic fields that surround them. This article is a primer on antenna theory.

An Abridged History of Electromagnetism
Over 2600 years ago (and likely well before that), the ancient Greeks discovered that a piece of amber rubbed on a piece of fur would attract lightweight objects like feathers. Around the same time, the ancients discovered lodestones, pieces of naturally magnetised rock. It took a few hundred years more to determine that there are two different properties of attraction and repulsion (magnetic and electric): likes repel and opposites attract. Another 2000 years passed before scientists first discovered that these two entirely different novelties of nature were inextricably linked.

In the early nineteenth century, Hans Christian Oersted placed a wire perpendicular to a compass needle and saw nothing. But when he rotated the wire parallel to the compass needle and passed a current through it, the needle deflected in one direction. When he passed the current through the wire in the opposite direction, the compass needle deflected in the opposite direction.

A current-carrying wire perpendicular to a compass needle causes no movement. A compass needle placed parallel to a current-carrying wire will rotate. When the direction of current is reversed, the direction of rotation is reversed.

This wire was the first antenna transmitter and the compass needle the first receiver; the scientists just did not know it at the time. While not terribly elegant, it provided a clue about the way the universe worked: charges moving through a wire create a magnetic field that is perpendicular to the wire. (Scientists soon learned the field surrounding a wire is circular, not perpendicular.) With this information, scientists were able to describe the ways in which electric and magnetic fields interact with electric charges, and formed the basis of an understanding of electromagnetism.

Video: an alternating-current lamp filament being flexed between support points in the presence of a strong magnetic field.

Shortly after, Nikola Tesla wirelessly lit lamps in his workshop, demonstrated the first remote-control toy boat, and established the alternating-current system we use to transfer electricity throughout the world today. Less than a full century after Oersted's experiment, Guglielmo Marconi devised a way to send the first wireless telegraph signals across the Atlantic. And here we stand, a full two centuries after that first compass experiment, able to capture images from distant planets and send them through the vastness of space to a device we can hold in the palm of our hand, all with antennas.

Image of Pluto, courtesy of NASA.

Our universe came to us with certain rules. We discovered this thousands of years ago when we graduated from noticing the merely attractive force of gravity and first separated objects based on their ability to attract or repel other objects. Then we discovered another set of rules of attraction and repulsion that were completely separate from the first. Humans categorized objects and, through intense experimentation, determined that positive and negative are opposite manifestations of a property called "charge", just as north pole and south pole are opposite manifestations of something called magnetism, and just as left and right are two types of hands.
Image showing mirror-symmetry between electric charges, magnetic poles, and hands.

Something was happening in Oersted's wire whether he had a compass needle oriented beneath it or not. This leads to the idea of imperceptible electromagnetic fields that permeate the universe, through the densest matter and nature's best vacuums. Every one of our categorized objects (+/-/N/S) influences the space around it and is influenced if a field changes.

Magnetic field surrounding a current-carrying solenoid. A Mathematica-generated image based on code by Paul Nylander.

By moving charges in predictable ways, we can change the electromagnetic fields in predictable ways and use those changes to transmit information via electromagnetic waves: regular oscillations in the electromagnetic field.

Wave Superposition
Waves transfer energy from one location to another. Left alone over a long period of time, a pool of water will appear flat and still. Disturb the water in one location and the water molecules will disturb neighboring water molecules, which will disturb their neighbors, and so on until the disturbance makes it to the edge of the pool. The molecules that started the chain of events remain close to their initial locations, but the disturbance will reach the pool's edge in seconds. Waves transfer energy without transferring matter.

Single disturbance in a pool.

Waves are how we describe the movement of a disturbance through a medium. Whether through one initial disturbance or one million, the chain reaction of molecular collisions in the pool is what drives the disturbance to propagate outwards.

Graphic of two waves in a pool.

When two waves are disturbing the same region of space, their amplitudes will add or subtract to create either constructive or destructive interference. This transient additive or subtractive process is referred to as superposition.

Graphic of wave-pulse constructive interference, courtesy of the Penn State College of Engineering.

After the waves interfere in a particular location, they continue on in the same direction and at the same speed as before, as long as they remain in the same medium. The speed and direction can change when a wave enters a new medium. Sound waves travel through air, and water waves travel through water; the substance that waves travel through is termed the "medium". Electromagnetic waves can travel through mediums such as air and water, or through the emptiness of space; they do not require a medium to propagate energy from one location to another.

Wave Reflection
When waves transition from one medium to another, part of their energy is transmitted, part of their energy is reflected, and part of their energy is dissipated into the environment. The material properties of the two mediums determine the ratios of transmission to reflection and dissipation. Material properties also determine whether a wave will invert when it reflects or remain upright.

A single wave-pulse transmitting and reflecting energy. Graphic courtesy of Wikimedia.
A continuous incident wave (orange) hits an interface where some energy is reflected (light orange) and some energy is transmitted (dark orange).

Reflection and Inversion
When waves travel from one medium to another, some of the incident energy is reflected. Depending on the material properties of the mediums, waves may invert when they reflect. Imagine a long spring tied to a pole.
If you were to flick the spring to the left of center, the disturbance would travel the length of the spring until it hit the pole, at which point it would reverse direction and start travelling back towards you on the opposite side, right of center. This is an inversion. (GIF courtesy of the Penn State College of Engineering.)

Take that same spring and tie it to a rope. If you were to flick the spring to the left of center, the disturbance would travel the length of the spring until it hit the rope, at which point it would reverse direction and start travelling back towards you on the same side, left of center.

Understanding rope reflection helps us to understand what is happening inside an antenna. Four situations, illustrated in the original article's figures, help show the concepts of reflection and inversion. Whether or not a wave inverts when it reflects is determined by the properties of the mediums on either side of the interface. If the wave does invert when it reflects, and we want constructive interference in the rope, we must have a length of rope that is half of a wavelength long, a full wavelength long, one and a half wavelengths long, and so on:
$$L=n\times \frac{\lambda}{2}$$
where n is a positive integer.

Antenna resonance is based on the same basic principles of reflection and interference: choose a length of wire that allows reflected energy to constructively interfere, creating a larger signal rather than a smaller one.

When two waves of the same wavelength travel in opposite directions in the same medium (depicted in blue and orange in the examples below), they can interact to form a standing wave (depicted in green). Standing waves are so named because, while the blue waves travel to the left and the orange waves travel to the right, the green standing waves have no apparent side-to-side motion.

An incident wave (orange) and a reflected wave (blue) combine to form a standing wave (green).

Standing waves only occur at specific lengths in the medium, determined by the reflection behavior and the wavelength of the incident wave.

Standing Wave Ratio (SWR)
Standing waves of maximum amplitude occur at very precise combinations of frequency (or wavelength) and antenna length. Unfortunately, it is impractical, and actually impossible, to have antennas that are the precise length to form a perfect standing wave across the entire desired range of operating frequencies. Fortunately, this is not necessary. A single-length antenna can function over a small range of frequencies with a small, acceptable level of imbalance.

Standing waves with net voltage shown over a period of oscillation. Image: Interferometrist (own work) [CC BY-SA 4.0], via Wikimedia Commons.

Antenna length must be adjusted to create as close to a perfect standing wave as possible at the center of the operating frequency range. SWR (standing wave ratio) meters measure the ratio of transmitted to reflected energy, with the goal of keeping the ratio as close to 1:1 as possible. Small adjustments can be made by introducing passive circuit components between the final circuit amplification stage and the antenna. Small imperfections in antenna tuning will cause a potential difference to exist in the final amplification circuitry, heating the final part of the transmission circuit. Large imbalances can feed high potential differences back into the transmitter circuitry, causing dielectric breakdown and arcing.
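To make the resonance and SWR ideas concrete, here is a minimal Python sketch (not from the original article; the 100 MHz operating frequency and 0.25 reflection coefficient are illustrative assumptions). It computes the shortest resonant length from L = n * lambda / 2 and the voltage standing wave ratio produced by a given reflection coefficient:

```python
import numpy as np

c = 3.0e8                 # speed of light, m/s
f = 100e6                 # assumed operating frequency, Hz
wavelength = c / f        # 3 m at 100 MHz

# Shortest resonant length from L = n * lambda / 2, with n = 1
L_half_wave = wavelength / 2
print(f"half-wave resonant length at {f/1e6:.0f} MHz: {L_half_wave:.2f} m")

# Standing-wave envelope for a partially reflected wave on the line:
# |V(x)| varies between (1 - gamma) and (1 + gamma) along the line.
gamma = 0.25                              # assumed reflection coefficient
k = 2 * np.pi / wavelength
x = np.linspace(0, 2 * wavelength, 1000)
envelope = np.abs(1 + gamma * np.exp(-2j * k * x))

vswr = (1 + gamma) / (1 - gamma)          # standard VSWR definition
print(f"VSWR = {vswr:.2f}:1 (1:1 would be a perfect match)")
print(f"envelope max/min = {envelope.max() / envelope.min():.2f}")
```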
Transmitting Information
The two types of information transfer that you are likely most familiar with are FM (frequency modulation) and AM (amplitude modulation). With frequency modulation, information is transmitted by modifying the frequency of a carrier wave. With amplitude modulation, the frequency of the carrier wave is constant; information is transmitted by modifying the amplitude of the carrier. (A minimal code sketch of AM and FM appears after the comments section below.)

The Dipole Antenna
A simple antenna that uses two identical elements is called a dipole. The shortest dipole antennas operate at one-half wavelength and establish standing waves along their length.

Standing waves in a dipole antenna, courtesy of wikimedia.org.

The changing electric fields along the length of the antenna create radio waves that propagate outwards.

An antenna radiating energy, courtesy of wikimedia.org.

Antennas allow us to transmit and receive information by influencing, and being influenced by, the electromagnetic fields that permeate the universe. The next article will explain different types of antennas and how they allow information to travel over large distances.

Comments

MattWCarp 2016-08-27
This is a great intro article. I ask the author to consider more information on two points mentioned in the sequel:
1. "Whether or not a wave inverts when it reflects is determined by the properties of the mediums on either side of the interface." Elaborate in this area!
2. "Small adjustments can be made by introducing passive circuit components between the final circuit amplification stage and the antenna." More is needed here.
If these two areas were covered, I think we'd have almost everything we'd need to design our own dipoles. Beyond that, I'd think the other topics needed to round out the basics of antennas would be impedance matching and reflectors.

Mark Hughes 2016-08-27
Glad you liked the article. Sorry about not being able to cover it all in one article; I'll ask the editors if they wouldn't mind me extending the discussion to include those topics. Feel free to direct message me, or post more thoughts or questions here. I'll address what I can!

Morvan 2016-10-14
As a Brazilian, and also as a researcher, I cannot fail to mention, alongside what you have written about Marconi, Father Landell de Moura (a link for Portuguese readers is available), a contemporary of Guglielmo Marconi and his companion, or competitor, each in his own country, in electromagnetic research.
Landell de Moura lost the radio patent (and much scientific acknowledgment) through the Brazilian government's lack of awareness of its importance and the Church's virulent opposition, which threatened him with heresy and treated his studies as "demoniacal things". Father Landell is a co-inventor of radio, among many other important discoveries of his epoch.

Mark Hughes
Hello @Morvan, thank you for taking the time to bring Father Landell de Moura to my attention. He appears to have made substantial contributions to the field. Sadly, as I've learned today, there isn't much written about him in English compared to the volumes of work written about his contemporaries. The good news is that I spoke with one of the editors, who happens to know a bilingual writer who might be able to translate some of the information to allow more people to learn about his life and contributions.

Morvan
Good to know, and thanks for your kind reply.

bgitego 2016-11-30
Hey, great article, thanks for sharing your knowledge. By the way, I believe I saw some typos in your section on rope reflection. There's reference to a spring and a rope, but the illustrations are designed for a spring-tied-to-a-rope analogy. Maybe it's not a typo, but it's somewhat confusing to follow along. Thanks again, I learned a lot from this and am looking forward to part 2.

Mark Hughes
Hi @bgitego, glad you like the article. To answer your question about the rope/spring section: those graphics were originally a movie similar to this one: https://www.youtube.com/watch?v=ZxIlyptT1FY . We removed the movie and replaced it with graphics to reduce the overall data-download requirement and the page load time. Sorry for the confusion.

sjgallagher2 2018-07-29
This is a good article, clearly a lot of work went into it! I'd like to bring up some historical points, even if the article is a few years old. First, Oersted didn't create the first antenna just by showing that a current-carrying wire has a magnetic field. An antenna is typically designed to radiate electromagnetic waves into the far field, with the intent of having an electromagnetic field strength which falls off as 1/r instead of 1/r^2 like the magnetic field around the wire. Alternating current in some form is used to accelerate charges and create the far-field component of the field, and Oersted's current-carrying wire doesn't depend on changes in current. In essence, the non-contact nature of the electromagnetic force is not the only requirement for radiation. From a historical perspective, Maxwell developed the mathematical foundations for electromagnetic waves, and Hertz proved their existence experimentally using what we might today call an LC resonant tank and a spark gap, which could illuminate a bulb across a table, or by pointing at a mirror and reflecting to the bulb. Hertz's dipole, the Hertzian dipole, was very short relative to the wavelength of the wave (which he could determine by moving his apparatus around) and was thus quite inefficient, but it shows how you can radiate from even a very electrically short antenna. Marconi used a monopole antenna in his transmissions; Tesla's work in wireless power transmission was actually using near-field coupling, to my understanding. Anyway, great introduction to antennas!

Mark Hughes
Hi @sjgallagher2, glad you enjoyed the article. Did you find part 2?
https://www.allaboutcircuits.com/technical-articles/antenna-basics-field-radiation-patterns-permittivity-directivity-gain/
I enjoy learning about the history of scientific development, but I'll admit that I don't have very many resources at my fingertips. Do you have any books / websites / etc. that you could recommend? The American Association of Physics Teachers publishes some collections of articles on historic topics, but it appears to be behind a paywall. Have a great day, and take a look at part 2!
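As a footnote to the article's Transmitting Information section above, here is a minimal Python sketch (not from the original article; all frequencies are illustrative) of the same message tone modulating a carrier in AM and in FM:

```python
import numpy as np

fs = 48_000                      # sample rate, Hz (illustrative)
t = np.arange(0, 0.05, 1 / fs)   # 50 ms of signal
fc = 5_000                       # carrier frequency, Hz
fm = 200                         # message (tone) frequency, Hz
message = np.sin(2 * np.pi * fm * t)

# Amplitude modulation: carrier frequency fixed,
# amplitude follows the message.
am = (1 + 0.5 * message) * np.cos(2 * np.pi * fc * t)

# Frequency modulation: amplitude fixed; the instantaneous frequency
# swings between fc - freq_dev and fc + freq_dev at the message rate.
freq_dev = 1_000                 # peak frequency deviation, Hz
phase = 2 * np.pi * fc * t + (freq_dev / fm) * np.sin(2 * np.pi * fm * t)
fm_wave = np.cos(phase)
```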
Treatments used for malaria in young Ethiopian children: a retrospective study
Abyot Endale Gurmu, Teresa Kisi, Habteweld Shibru, Bertrand Graz and Merlin Willcox
Received: 17 July 2018; Published: 5 December 2018

Abstract
Background: In Ethiopia, medicinal plants have been used to treat different diseases, including malaria, for many centuries. People living in rural areas are especially noted for their use of medicinal plants as a major component of their health care. This study aimed to examine treatment-seeking and to prioritize plants/plant recipes as anti-malarials in Dembia district, one of the malarious districts in Northwest Ethiopia.
Methods: Parents of children aged under 5 years who had had a recent episode of fever were interviewed retrospectively about their child's treatment and self-reported outcome. Treatments and subsequent clinical outcomes were analysed using Fisher's exact test to elicit whether there were statistically significant correlations between them.
Results: Of 447 children with malaria-like symptoms, only 30% took the recommended first-line treatment (ACT), all of whom were cured, and 47% took chloroquine (85% cured). Ninety-nine (22.2%) had used medicinal plants as their first-choice treatment. Allium sativum (Liliaceae), Justicia schimperiana (Acanthaceae), Buddleja polystachya (Scrophulariaceae) and Phytolacca dodecandra (Phytolaccaceae) were the most frequently used. Justicia schimperiana was the one associated with the best clinical outcomes (69% self-reported cure rate). However, the difference in clinical outcomes between the plants was not statistically significant.
Conclusions: In this study, only 30% of children took the recommended first-line treatment, and 22% of children with presumed malaria were first treated with herbal medicines. The most commonly used herbal medicine was garlic, but J. schimperiana was associated with the highest reported cure rate of the plants. Further research is warranted to investigate its anti-malarial properties.

Background
Ethiopia has made tremendous progress in reducing incidence and mortality from malaria. There was a rapid scale-up of long-lasting insecticidal nets (LLINs) and artemisinin-based combination therapy (ACT) in 2007, associated with a 73% reduction in malaria admissions and a 62% reduction in malaria deaths in children aged under 5 years [1]. The Health Extension Programme increased coverage of primary health care in Ethiopia to 90% in 2010 [2]. Health extension workers are employed, each of whom is responsible for 500 households. Amongst other tasks, they can conduct rapid diagnostic tests for malaria and administer anti-malarial drugs. This contributed to Ethiopia's rapid reduction in under-5 mortality, the fastest in East Africa, from 205 deaths per 1000 live births in 1990 to 64 in 2013 [3].

Because of increasing chloroquine resistance, in 2004 Ethiopia adopted artemether–lumefantrine (AL) as the first-line treatment for Plasmodium falciparum infections. Chloroquine remains the first-line treatment for Plasmodium vivax. Although national treatment guidelines recommend that this should be followed by a 2-week course of primaquine, in practice primaquine is not routinely used because there are no widely available tests for G6PD deficiency. It is only used under the supervision of healthcare providers for patients with limited risk of future malaria infection, such as those who are not living in malaria-endemic areas. A recent review showed that in Ethiopia, 98.1% of patients with P.
falciparum were successfully treated with AL and 94.7% of patients with P. vivax were successfully treated with CQ [4]. However, AL is only effective in 75.1% of patients with P. vivax [4]. Although the prevalence of P. falciparum has fallen rapidly since the introduction of AL, the prevalence of P. vivax has risen slightly [5]. Therefore, malaria remains a leading public health problem in Ethiopia. It is estimated that about 75% of the total area of the country and 65% of the population are at risk of infection [6]. In 2016, an estimated 2.6 million malaria cases and 5000 deaths occurred in the country, an increase since 2014 [7]. Two-thirds of cases were due to P. falciparum, but Ethiopia is home to the second highest number of cases and deaths due to P. vivax in the world (after India). There is still room for progress: use of insecticide-treated bed nets in children under 5 ranges from 45 to 69% [8, 9]. The biggest gap between practice and policy is that only 41% of children under 5 with fever were taken to a health facility or provider, and only 34% sought care promptly [9]. Furthermore, AL made up less than 20% of the anti-malarials received by under-5s [7]. One possible explanation for this is the use of herbal medicines. Traditional medicinal plants are an integral part of the variety of cultures in Ethiopia [10]; up to 80% of the population uses traditional medicine because of the cultural acceptability of healers and local pharmacopoeias, the relatively low cost of traditional medicine, and difficult access to modern health facilities [11]. People living in rural areas are particularly noted for their use of medicinal plants as a major component of their health care [12–15]. The hypothesis of the current study was that many people who do not use formal care are using herbal medicines instead.

Plants which are traditionally used for the treatment of malaria are a potential source of active lead compounds with new mechanisms of action. The classical way of identifying medicinal plants for further research is through ethnobotanical studies. Yet conventional ethnobotanical studies rarely involve clinicians. They could and should provide much more clinical information if the ultimate goal is to know which one, among numerous treatments for a given ailment, has the best effects. Although identification of the plants is usually of a good standard, definition of the diseases which they treat is not. There is rarely sufficient questioning about the observed patient status and progress, the perceived efficacy and limitations of the remedies, and whether these are indeed the 'treatment of choice'. Many plants are 'supposed' to be good for one disease or another, but are not actually the preferred treatment used in everyday life. The 'Retrospective Treatment Outcome' (RTO) study was designed to circumvent these problems [16]. It adds two essential elements to the ethnobotanical method: clinical information and statistical analysis. Clinical information is collected retrospectively on the presentation and progress of a defined disease episode. This approach proved to work well in Mali [17] and has since been used in other places. This study aimed to measure the frequency of use of different treatments, and the associated outcomes, in children under 5 years of age with a recent episode of fever (identified by the parents as uncomplicated malaria) in a rural district of Ethiopia.

Methods
Study area
Dembia is a rural district in Northwest Ethiopia, covering a total area of 127,000 km² (Fig. 1).
The altitude of the district ranges from 1750 to 2100 m above sea level. It lies next to the largest lake in Ethiopia (Lake Tana), which contributes to the high and rising level of malaria in the area [18]. Following the rainy season, the incidence of malaria peaks from the end of September to the middle of December. There were 12,221 malaria cases in the district in 2012, rising to 22,166 in 2016 [18]. Within the district there are 5 urban kebeles and 40 rural kebeles (a kebele is the smallest administrative unit, equivalent to a village). According to demographic data of the district, Aberjeha, Chenker and Tezeba were the most malarious kebeles, with populations of 10,490, 6520 and 6213, respectively, and incidence rates of 7.7%, 7.0% and 6.6%, respectively (Dembia District Health Office Demographic Data, 2012). These were selected for the study. The proportion of children under five was estimated to be 14% (3251) of the total population.

Fig. 1. Map of the study area.

Sample size
The sample size for the study was determined using the formula for a single population proportion, estimating that 50% of the population would be using plants and requiring a 95% level of confidence:

$$ n = \frac{(Z_{\alpha/2})^2 \, P\,(1-P)}{d^2} = \frac{(1.96)^2 \times 0.5 \times (1-0.5)}{(0.05)^2} \approx 385 $$

To allow for an estimated 15–17% non-response, the final sample size was increased to 451. The selection of respondents was performed so that they were representative of the whole population, with a corresponding proportion of houses randomly visited until the desired sample size was reached.

Data collection process
During the peak malaria season, from November to December 2013, the parents of children aged under 5 years who had had symptoms of uncomplicated malaria (mainly fever) within the previous 2 months were interviewed. Data were collected using a structured and pre-tested questionnaire developed by Graz et al. [16] and subsequently modified, in order to elicit relevant symptoms, treatments and self-reported outcomes. Specimens of the reported anti-malarial plants were collected and identified by Mr. Abiyu Enyew (botanist), Department of Biology, College of Natural Sciences, University of Gondar, to match the local names to their scientific names, and were deposited at the herbarium unit of the Department of Biology. Data were coded, checked for completeness and consistency, entered using EPI-INFO™ 7 statistical software, and then exported to SPSS version 20 for further analysis. Descriptive statistics of the collected data (list of plants/recipes used, number of children using each, mode of preparation, treatment outcome, and form of the treatment) were computed. Statistical correlation with reported clinical recovery was also computed using Fisher's exact test.

Permissions and ethical clearance
The study was carried out after obtaining permission from the ethical review board of the University of Gondar (R/C/S/V/P/05/239/2013). A letter of permission was then obtained from the district health office and local administrator, and the individual household heads were invited to give written informed consent before the interview. Confidentiality was respected by keeping the privacy of the respondents while filling in the questionnaire. Seriously ill children were advised to visit a health facility.
Any personally identifiable information (such as names and addresses) was not entered into the database.

Results
Socio-demographic characteristics of respondents
Among the 451 patients who were approached for the study, 4 either did not complete the interview or were not willing to be included. Finally, 447 were included, giving a response rate of 99.1%. 52.1% were girls, and half of them (50.3%) were aged between 1 and 3 years. The mean age was 34 months, with a range from 7 months to 5 years.

Treatment providers and treatment choices
Over three-quarters of respondents (75.6%) had sought treatment from a nurse, health extension worker, doctor or pharmacist, mostly (50.1%) from a nurse (Table 1). For those who took herbal medicines, more were provided by a family member (13.5%) than by a traditional healer (8.5%).

Table 1. Treatment providers for children under 5 years of age with fever in Dembia district, Northwest Ethiopia. Columns in the original: provider and frequency (percentage). Providers listed were nurse, health extension worker, pharmacist/druggist and drug vendor (2; 0.4%); the remaining counts are missing from this copy.

As shown in Fig. 2 (first-line treatment for malaria and malaria-like symptoms in under-five children), of the total of 447 children with malaria, the commonest treatment was chloroquine (47%), followed by ACT (30%), usually artemether + lumefantrine. Ninety-nine children (22.2%) used medicinal plants alone as their first-choice treatment for the illness, while 12 and 3 used medicinal plants as a second and third alternative, respectively. Holy water was also reported by three respondents as the first choice. Twelve plant species were mentioned and identified as treatments for malaria. Allium sativum (Liliaceae) was the most frequently reported plant (Table 2).

Table 2. Traditional recipes and patient-reported clinical outcomes in children under 5 years of age with fever in Dembia district, Northwest Ethiopia. Columns in the original: plant species used, plant part, number of cases reporting use, number of cases reporting clinical recovery (%), and number of treatment failures; the counts are missing from this copy. The recipes listed were:
- Allium sativum (Liliaceae), decoction
- Justicia schimperiana (Hochst.) Dandy (Acanthaceae)
Respondents were also asked about the unit of measurement for medicinal preparations and the majority of them used teaspoons and coffee cups for liquid preparations such as decoctions. Water was the most widely used solvent and honey was commonly used as sweetening agent to mask unpleasant tastes. Summary of findings and comparison with the literature It was found that much higher rates of treatment-seeking from formal health workers than reported in other sources. In spite of this, only a minority of children with malaria were treated with ACT, confirming results of other studies [7]. Chloroquine was the commonest treatment, used in almost half of all cases. This is more frequent than would be expected, given that P. vivax is estimated to cause less than one-third of malaria cases in Ethiopia [7]. This may in part be because CQ can be obtained from drug shops at a relatively cheap price ACT are only available from official health centres. Although there are a few reports of CQ resistant P. vivax in Ethiopia [19–22], CQ is still recommended for the treatment of P. vivax malaria in Ethiopia [23] and is still effective in 94.7% of cases [4]. The lower reported "cure" rate in this study (reported by parents) could imply that chloroquine was being used for cases of falciparum malaria (some of which are resistant to chloroquine), and that some of the children may have had a disease other than malaria. The use of herbal medicine was lower than expected. This may partly be due to the expansion of the Health Extension Programme (HEP) at the household level which increased treatment-seeking for malaria in some areas [24] although in this study, only 12.5% of respondents received treatment from a Health extension worker. None of the plants were associated with a very high reported "cure" rate, unlike studies in some other African countries [17]. However almost all parents reported that their children had "improved". Previous ethnobotanical studies have shown that several plants reported here are used elsewhere for the management of malaria and malaria-like symptoms (Table 3). Previous ethnobotanical reports of anti-malarial use of the most frequently cited plants Part used Allium sativum Maceration in oil, Swallowing (eating the bulb) India, Nigeria, Ethiopia [12, 14, 29–36] Brassica nigra India, Ethiopia Buddleja polystachya Justicia schimperiana Phytolacca dodecandra Leaf, root Zingiber officinale Maceration, decoction, mixture Nigeria, India, Sri Lanka, Zambia, Nicaragua Ocimum lamiifolium Concoction Clerodendrum myricoides The anti-malarial activity of several of these plant extracts has been assessed in rodent models in vivo (Table 4). Ajoene, a compound isolated from Allium sativum, was found to prevent the development of parasitaemia in mice infected with Plasmodium berghei and substantially improved the anti-malarial activity of chloroquine [25]. However the most promising plant was J. schimperiana. Although not the most commonly used, it was associated with the highest reported cure rates in this study. Its root extract has been tested against P. falciparum in vitro and was not very active (IC50 = 71 mcg/ml for the methanolic extract) [26]. However, aqueous leaf extracts (which more closely resemble the traditional preparation) suppressed growth of P. berghei in mice in the 4-day suppressive test by 41%, and the methanol leaf extract suppressed parasitaemia by 65% [27]. 
Although Clerodendrum myricoides and Zehneria scabra also had good anti-malarial activity in mouse models, they were not widely used, possibly because they are widely regarded as poisonous [28].

Table 4. Previous studies on the in vivo anti-malarial activity of the most frequently cited plant extracts, given orally to mice infected with Plasmodium berghei. Columns in the original: plant, type of extract, dose (mg/kg) and % chemosuppression. Only fragments survive in this copy, including hydro-alcoholic extracts (chemosuppression of 8, 65 and 85%, and of 16, 26 and 28%, at increasing doses), the isolated compound ajoene (about 67%), and a hydroalcoholic extract of Zehneria scabra at 100, 200 and 400 mg/kg (72, 62 and 73%, respectively).

Strengths and limitations of the study
This was a community-based study conducted in one of the most malarious areas of Ethiopia. There was a very high response rate, so the results are likely to be representative of the population in this area, and we can be confident in the estimated prevalence of use of the different treatments. Voucher specimens of plants were collected and identified by a botanist.

The first limitation of this study is that patients were not asked whether they had been tested before receiving the treatment. With hindsight, it would have been useful to know the proportion of patients tested, and the proportion with P. falciparum or P. vivax. As the survey was retrospective, it is likely that some patients were not tested, and some may not actually have had malaria. Secondly, parents may not have been aware of the qualifications of the person they saw, and may have said she was a 'nurse' when in fact she may not have been qualified. Thirdly, the data on outcomes are based on self-reporting by parents of the children. It seems that the question which discriminated best between treatments was whether a patient was 'cured', since almost all patients claimed to have at least "improved". Therefore, the percentage of patients 'cured' on each treatment was compared. This approach seems to be valid, because 100% of patients claimed to have been 'cured' after taking an ACT (to which there is no documented resistance in Ethiopia), compared to 84.6% who took chloroquine, against which there is some resistance. Lastly, because herbal medicine was less frequently used than predicted, the sample size was too small to find statistically significant differences in outcomes between patients who had taken different herbal remedies. The sample size calculation was based on an estimate that 50% of patients would have taken herbal medicines. If that had been the case, the sample would have included twice as many patients who had taken herbals, which would have increased the statistical power to find differences in outcomes between subgroups, for example those who had taken J. schimperiana versus those who had taken Allium sativum.

Implications for policy, practice and research
Further investigations are needed to understand why the majority of children with malaria in Ethiopia receive CQ rather than AL, although the majority of malaria cases are reported to be caused by P. falciparum rather than P. vivax. Since most of the anti-malarial medicines were provided by health workers, a qualitative study of health workers would help to understand this. Factors to explore would include the availability of AL and the use of diagnostic tests or microscopy to distinguish between malaria species. Justicia schimperiana may have the potential to be a candidate for the development of efficacious and safe anti-malarial phytomedicines and/or compounds, using a 'reverse pharmacology' model [17].
It would be useful to further investigate the way in which it is prepared and used, and to isolate the active phytochemical(s).

Conclusions
In the most malarious villages of Dembia district, Ethiopia, only 30% of children with presumed malaria took the recommended first-line treatment (artemether–lumefantrine), while 47% took chloroquine and 22% were treated with herbal medicines as the first-line treatment. The most commonly used herbal medicine was garlic, but J. schimperiana was associated with the highest proportion of patients who said they were 'cured' (69%). Further research is warranted to understand the reasons for the low use of AL, and to investigate the anti-malarial properties of J. schimperiana.

Abbreviations
RTO: Retrospective Treatment Outcome; SPSS: Statistical Package for the Social Sciences

Authors' contributions
AEG coordinated the overall work and designed the protocol. TK and HS were involved in protocol development and manuscript writing. MW and BG designed the protocol and contributed to writing the manuscript. All authors read and approved the final manuscript.

Acknowledgements
We are very grateful to the University of Gondar for sponsoring this study and to Community Health Association-Geneva for financial support. Training on the RTO methodology was provided by the MUTHI project (EU-FP7 grant agreement no. 266005). We also want to extend our gratitude to the data collectors: Mr. Gashaw Sisay, Mr. Baraki Huluf, Ms. Leknesh Belay, Mr. Lakachew Molla, Ms. Elsa Abuhay and Ms. Bosena Lakew. Mr. Zemene Demelash is also highly acknowledged for his contribution to data entry and cleaning. We wish to thank the study participants and local leaders, without whose open and keen collaboration the study would not have been possible.

Availability of data and materials
Almost all the materials and data of our study are included in the manuscript; the remaining materials and data will be made available to other researchers upon request.

Ethics approval and consent
A letter of permission was obtained from the district health office and local administrator, and the individual household heads were invited to give written informed consent before the interview. Moreover, participants of the study consented to their photographs being taken for publication, if necessary. Ethical approval was secured from the Institutional Review Board of the University of Gondar (R/C/S/V/P/05/239/2013) prior to the start of the study.

Funding
This project was funded by Community Health Association-Geneva.

Author affiliations
1. Department of Pharmacognosy, School of Pharmacy, College of Medicine and Health Sciences, University of Gondar, Gondar, Ethiopia (AEG)
2. Department of Public Health, College of Health Sciences, Arsi University, Asella, Ethiopia (TK)
3. Department of Internal Medicine, College of Medicine and Health Sciences, University of Gondar, Gondar, Ethiopia (HS)
4. Antenna Foundation, Geneva, Switzerland (BG)
5. Department of Primary Care and Population Sciences, Aldermoor Health Centre, University of Southampton, Aldermoor Close, Southampton, SO16 5ST, UK (MW)

References
[1] Otten M, Aregawi M, Were W, Karema C, Medin A, Bekele W, et al. Initial evidence of reduction of malaria cases and deaths in Rwanda and Ethiopia due to rapid scale-up of malaria prevention and treatment. Malar J. 2009;8:14.
[2] Banteyerga H. Ethiopia's health extension program: improving health through community involvement. MEDICC Rev. 2011;13(3):46–9.
[3] Ruducha J, Mann C, Singh NS, Gemebo TD, Tessema NS, Baschieri A, et al. How Ethiopia achieved millennium development goal 4 through multisectoral interventions: a countdown to 2015 case study. Lancet Glob Health.
2017;5:e1142–51.
[4] Gebreyohannes EA, Bhagavathula AS, Seid MA, Tegegn HG. Anti-malarial treatment outcomes in Ethiopia: a systematic review and meta-analysis. Malar J. 2017;16:269.
[5] Tesfa H, Bayih AG, Zeleke AJ. A 17-year trend analysis of malaria at Adi Arkay, north Gondar zone, Northwest Ethiopia. Malar J. 2018;17:155.
[6] Alemu A, Abebe G, Tsegaye W, Golassa L. Climatic variables and malaria transmission dynamics in Jimma town, South West Ethiopia. Parasit Vectors. 2011;4:30.
[7] WHO. World malaria report 2017. Geneva: World Health Organization; 2018.
[8] UNICEF. The State of the World's Children 2017. New York: UNICEF; 2017.
[9] Birhanu Z, Yihdego YYE, Yewhalaw D. Caretakers' understanding of malaria, use of insecticide-treated nets and care-seeking behaviour for febrile illness of their children in Ethiopia. BMC Infect Dis. 2017;17:629.
[10] Pankhurst R. An historical examination of traditional Ethiopian medicine and surgery. Ethiopian Med J. 1965;3:157–72.
[11] Kassaye KD, Amberbir A, Getachew B, Mussema Y. A historical overview of traditional medicine practices and policy in Ethiopia. Ethiop J Health Dev. 2006;20:127–34.
[12] Endale A, Berhanu Z, Berhane A, Tsega B. Ethnobotanical study of antimalarial plants in Denbia District, North Gondar, Amhara Region, Northwest Ethiopia. In: 6th Multilateral Initiative on Malaria conference; Durban, South Africa. 2013. p. 305.
[13] Paulos B, Fenta TG, Bisrat D, Asres K. Health seeking behavior and use of medicinal plants among the Hamer ethnic group, South Omo zone, southwestern Ethiopia. J Ethnobiol Ethnomed. 2016;12:44.
[14] Suleman S, Beyene Tufa T, Kebebe D, Belew S, Mekonnen Y, Gashe F, et al. Treatment of malaria and related symptoms using traditional herbal medicine in Ethiopia. J Ethnopharmacol. 2018;213:262–79.
[15] Ragunathan M, Weldegerima B. Medico ethno botany: a study on the Amhara ethnic group of Gondar district of north Gondar zone, Ethiopia. J Natural Remedies. 2007;7:200–6.
[16] Graz B, Diallo D, Falquet J, Willcox M, Giani S. Screening of traditional herbal medicine: first, do a retrospective study, with correlation between diverse treatments used and reported patient outcome. J Ethnopharmacol. 2005;101:338–9.
[17] Willcox M, Graz B, Falquet J, Diakite C, Giani S, Diallo D. A "reverse pharmacology" approach for developing an anti-malarial phytomedicine. Malar J. 2011;10(Suppl 1):S8.
[18] Agegnehu F, Shimeka A, Berihun F, Tamir M. Determinants of malaria infection in Dembia district, Northwest Ethiopia: a case-control study. BMC Public Health. 2018;18:480.
[19] Yohannes AM, Teklehaimanot A, Bergqvist Y, Ringwald P. Confirmed vivax resistance to chloroquine and effectiveness of artemether-lumefantrine for the treatment of vivax malaria in Ethiopia. Am J Trop Med Hyg. 2011;84:137–40.
[20] Teka H, Petros B, Yamuah L, Tesfaye G, Elhassan I, Muchohi S, et al. Chloroquine-resistant Plasmodium vivax malaria in Debre Zeit, Ethiopia. Malar J. 2008;7:220.
[21] Schunk M, Kumma WP, Miranda IB, Osman ME, Roewer S, Alano A, et al. High prevalence of drug-resistance mutations in Plasmodium falciparum and Plasmodium vivax in southern Ethiopia. Malar J. 2006;5:54.
[22] Ketema T, Bacha K, Birhanu T, Petros B. Chloroquine-resistant Plasmodium vivax malaria in Serbo town, Jimma zone, south-west Ethiopia. Malar J. 2009;8:177.
[23] Drug Administration and Control Authority of Ethiopia. Standard treatment guideline for general hospitals. Addis Ababa: Drug Administration and Control Authority of Ethiopia; 2010.
[24] Bilal NK, Herbst CH, Zhao F, Soucat A, Lemiere C. Health extension workers in Ethiopia: improved access and coverage for the rural poor. In: Chuhan-Pole P, Angwafo M, editors. Yes Africa can: success stories from a dynamic continent. Washington, DC: World Bank; 2011.
[25] Perez HA, De la Rosa M, Apitz R. In vivo activity of ajoene against rodent malaria. Antimicrob Agents Chemother. 1994;38:337–9.
[26] Bogale M, Petros B. Evaluation of the antimalarial activity of some Ethiopian traditional medicinal plants against Plasmodium falciparum in vitro. Ethiop J Sci. 1996;2:233–43.
[27] Abdela J, Engidawork E, Shibeshi W. In vivo antimalarial activity of solvent fractions of the leaves of Justicia schimperiana Hochst. ex Nees against Plasmodium berghei in mice. Ethiop Pharm J. 2014;30:95–108.
[28] Neuwinger HD. African ethnobotany: poisons and drugs. London: Chapman & Hall; 1996.
[29] Aminuddin RDG, Subhan Khan A. Treatment of malaria through herbal drugs from Orissa, India. Fitoterapia. 1993;64:545–8.
[30] Shankar D, Venugopal S. Understanding of malaria in Ayurveda and strategies for local production of herbal anti-malarials. In: First international meeting of the research initiative on traditional antimalarials; 1999; Moshi, Tanzania.
[31] Berhanu A, Asfaw Z, Kelbessa E. Ethnobotany of plants used as insecticides, repellents and antimalarial agents in Jabitehnan district, West Gojjam. Ethiop J Sci. 2006;29:87–92.
[32] Odugbemi TO, Akinsulire RO, Aibinu IE, Fabeku PO. Medicinal plants useful for malaria therapy in Okeigbo, Ondo state, Southwest Nigeria. Afr J Trad Complement Altern Med. 2007;4:191–8.
[33] Abera B. Medicinal plants used in traditional medicine by Oromo people, Ghimbi District, Southwest Ethiopia. J Ethnobiol Ethnomed. 2014;10:40.
[34] Meragiaw MM, Asfaw Z. Review of antimalarial, pesticidal and repellent plants in the Ethiopian traditional herbal medicine. J Herbal Sci. 2014;3:21–45.
[35] Asnake S, Teklehaymanot T, Hymete A, Erko B, Giday M. Survey of medicinal plants used to treat malaria by Sidama people of Boricha District, Sidama Zone, South Region of Ethiopia. Evid Based Complement Alternat Med. 2016;2016:9.
[36] Kenea O, Tekie H. Ethnobotanical survey of plants traditionally used for malaria prevention and treatment in selected resettlement and indigenous villages in Sasiga District, Western Ethiopia. J Biol Agric Healthc. 2015;5:1–9.
[37] Lall Dev K. Indigenous drugs of India. Calcutta: Thacker, Spink & Co.; 1896.
[38] Suleman S, Mekonnen Z, Tilahun G, Chatterjee S. Utilization of traditional antimalarial ethnophytotherapeutic remedies among Assendabo inhabitants in (South-West) Ethiopia. Curr Drug Ther. 2009;4:78–91.
[39] Yirga G, Zeraburk S. Ethnobotanical study of traditional medicinal plants in Gindeberet District, Western Ethiopia. Mediterr J Soc Sci. 2011;2:49–54.
[40] Coe FG, Anderson GJ. Ethnobotany of the Garífuna of Eastern Nicaragua. Econ Bot. 1996;50(1):71–107.
[41] Lebbie AR, Guries RP. Ethnobotanical value and conservation of sacred groves of the Kpaa Mende in Sierra Leone. Econ Bot. 1995;49:297–308.
[42] Singh VK, Ali ZA. Folk medicines in primary health care: common plants used for the treatment of fevers in India. Fitoterapia. 1994;65:68–74.
[43] Vongo R. The role of traditional medicine on antimalarials in Zambia. In: First international meeting of the research initiative on traditional antimalarials; 1999; Moshi, Tanzania.
[44] Petros Z, Melaku D. In vivo anti-plasmodial activity of Adhatoda schimperiana leaf extract in mice. Pharmacol OnLine. 2012;3:95–103.
[45] Coppi A, Cabinian M, Mirelman D, Sinnis P. Antimalarial activity of allicin, a biologically active compound from garlic cloves. Antimicrob Agents Chemother. 2006;50:1731–7.
[46] Muluye AB, Melese E, Adinew GM. Antimalarial activity of 80% methanolic extract of Brassica nigra (L.) Koch. (Brassicaceae) seeds against Plasmodium berghei infection in mice. BMC Comp Alt Med. 2015;15:367.
[47] Adinew G. Antimalarial activity of methanolic extract of Phytolacca dodecandra leaves against Plasmodium berghei infected Swiss albino mice. Int J Pharmacol Clin Sci. 2014;3:39–43.
[48] Tesfaye W, Endalkachew A. In vivo antimalarial activity of the crude extract and solvent fractions of the leaves of Zehneria scabra (Cucurbitaceae) against Plasmodium berghei in mice. J Med Plant Res. 2014;8:1230–6.
[49] Deressa T, Mekonnen Y, Animut A. In vivo anti-malarial activities of Clerodendrum myricoides, Dodonea angustifolia and Aloe debrana against Plasmodium berghei. Ethiop J Health Dev. 2010;24:25–9.
CommonCrawl
What happens when a star spins too fast?

Some stars, such as neutron stars, can spin very fast, around 600 times a second. Wouldn't such a star be ripped apart? The gravitational pull is very strong, but gravity is the weakest of all known forces. Is there a good theory for why neutron stars spin that fast? Does it have anything to do with the magnetic field or the surrounding dust clouds?

A neutron star's structure (mass versus radius and density profile) is influenced by its rotation rate, but not by as much as you might think. Even in Newtonian physics you can think of a mass element $m$ at the surface of a star of mass $M$ and radius $R$, rotating with angular velocity $\omega$. A condition for stability would be that the surface gravity is strong enough to provide the centripetal acceleration of the test mass. $$ \frac{GMm}{R^2} > m R \omega^2$$ If this is not satisfied then the object might break up (it is more complicated than this because the object will not stay spherical and the radius at the equator will increase, etc., but these are small numerical factors). Thus $$ \omega < \left(\frac{GM}{R^3}\right)^{1/2}$$ or in terms of rotation period $P = 2\pi/\omega$ and so $$ P > 2\pi \left(\frac{GM}{R^3}\right)^{-1/2},$$ is the condition for stability. For a typical $1.4M_{\odot}$ neutron star with radius 10 km, then $P>0.46$ milliseconds. A proper general relativistic calculation of this limit would give a similar result, but depends to some extent on the equation of state of the neutron star. Happily, this is easily satisfied for all observed neutron stars - they can spin extremely fast because of their enormous surface gravities and all are well below the instability limit. I believe the fastest known rotating pulsar has a period of 1.4 milliseconds.

You also ask how pulsars can attain these speeds. There are two classes of explanation for the two classes of pulsars. Most pulsars are thought (at least initially) to be the product of a core-collapse supernova. The core collapses from something a little smaller than the radius of the Earth, to about 10 km radius in a fraction of a second. Conservation of angular momentum demands that the rotation rate increases as the inverse of the radius squared, i.e. the spin rate increases by factors of a million or so. Pulsars spin down with age because they turn their rotational kinetic energy into magnetic dipole radiation. However, the fastest rotating pulsars - the "millisecond pulsars" - are "born again" by accreting material from a binary companion. The accreted material has angular momentum and the accretion of this angular momentum is able to spin the neutron star up to very high rates because it has a relatively (for a stellar-mass object) small moment of inertia.
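A quick numerical check of the break-up bound above; a minimal sketch using the representative $1.4M_{\odot}$, 10 km values quoted in the answer:

```python
import math

G = 6.674e-11        # gravitational constant (m^3 kg^-1 s^-2)
M = 1.4 * 1.989e30   # representative neutron star mass (kg)
R = 10e3             # representative neutron star radius (m)

# Newtonian stability condition: P > 2*pi*(G*M/R^3)^(-1/2)
P_min = 2 * math.pi * math.sqrt(R**3 / (G * M))
print(f"minimum stable period ~ {P_min*1e3:.2f} ms")   # ~0.46 ms
print(f"maximum spin rate     ~ {1/P_min:.0f} rev/s")  # ~2200 rev/s
```

The observed 600 revolutions per second is comfortably below this limit, consistent with the conclusion above.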
Related: Is it possible to break apart a neutron star?

The rotation must be powerful enough to overpower the force of gravity in order for anything to happen. I'm not sure about the 'ripping apart,' but if the rotation does overpower gravity, surface material will be ejected from the body, a process known as mass shedding. As neutron stars are extremely dense they can have great amounts of angular velocity, like the pulsar class neutron star mentioned in your question. If you can quickly increase the angular velocity of the body without also increasing the mass and density, perhaps you could rip it apart, but that would be very difficult to do to a neutron star.

Mitch Goshorn

Comments:

user6760: Is it like an ice skater doing a spin on the ice rink: when she tucks her arms inward she goes faster? – Apr 4 '15 at 9:56

Mitch Goshorn: This is one of the ways they achieve such great rotational speeds. Generally neutron stars come from stars with about 10x the mass of our Sun. The resulting neutron star likely has a radius of less than 15 kilometers. Not all the mass becomes a part of the neutron star though - plenty is ejected (though this is generally credited to the supernova, not the rotational velocity). – Apr 4 '15 at 10:03

Mitch Goshorn: It would probably also be worth noting that since the strength of gravity increases as the distance from the center of gravity decreases, the escape velocity at the surface will increase as the body becomes more dense. – Apr 4 '15 at 10:08

userLTK: Not really related, but young stars spin faster. This makes sense if you think about it: the coalescing of a cloud of gas into a star greatly increases angular velocity, as angular momentum is conserved. Stars tend to lose rotational speed over time. space.com/28255-star-spin-age-kepler-spacecraft.html – Apr 4 '15 at 21:09
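To put a rough number on the ice-skater analogy from the comments, a minimal sketch of the angular-momentum spin-up described in the first answer; the radii below are the rough figures quoted there (a core "a little smaller than the radius of the Earth" collapsing to about 10 km), and treating the core as a uniform sphere is an assumption:

```python
# For a uniform sphere I ~ M R^2, so conservation of angular momentum gives
# omega_final / omega_initial = (R_initial / R_final)^2.

R_initial = 6.0e6   # pre-collapse core radius (m), roughly Earth-sized
R_final   = 1.0e4   # neutron star radius (m)

spin_up = (R_initial / R_final) ** 2
print(f"spin-up factor ~ {spin_up:.1e}")  # ~4e5, i.e. "factors of a million or so"
```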
====== Review Lecture II ======

We will continue the review starting with Faraday's Law.

===== Magnetic flux =====

$\Phi_{B}=\int\vec{B}\cdot d\vec{A}$ or when the magnetic field is uniform $\Phi_{B}=\vec{B}\cdot\vec{A}$

{{mangeticflux.png}}

In the example above the magnetic flux is $\Phi_{B}=\vec{B}\cdot\vec{A}=BA\cos\theta$ where $A=l^{2}$. The unit of magnetic flux is called a weber $\mathrm{Wb}$, where $1\mathrm{Wb}=1\mathrm{Tm^{2}}$

===== Faraday's Law of Induction =====

Faraday's law of induction states that the induced emf in a circuit is equal to the rate of change of magnetic flux through the circuit.

$\mathcal{E}=-\frac{d\Phi_{B}}{dt}$

If the circuit is made of a number of loops $N$

$\mathcal{E}=-N\frac{d\Phi_{B}}{dt}$

The negative sign in Faraday's law represents Lenz's law which states that "An induced emf is always in such a direction as to oppose the change in flux causing it"

===== Motional emf =====

A conductor moving in a magnetic field will experience an induced emf. We can consider the situation for a conducting rod moved backwards and forwards on a pair of rails which are connected to a galvanometer. The emf produced is given by the change of flux

$\mathcal{E}=-\frac{d\Phi_{B}}{dt}=-B\frac{dA}{dt}=-\frac{Blv\,dt}{dt}=-Blv$

But what about the case where the rails are not there? In this case the electrons still feel the force and will collect at one end of the rod, so there will be a potential difference across it.

===== General form of Faraday's Law =====

It is important to remember that an emf is not a force, but rather a measure of the work done in a circuit, so we can write the emf in terms of an integral over a closed path of the electric field

$\mathcal{E}=\oint\vec{E}\cdot d\vec{l}$

and then

$\oint\vec{E}\cdot d\vec{l}=-\frac{d\Phi_{B}}{dt}$

Here we are taking an integral around the path that encloses the area in which magnetic flux is changing. We should note that the implication of this is that in the presence of a time varying magnetic field the electric force is no longer a [[phy141:lectures:12&#conservative_forces|conservative force]].

===== emf produced by a generator =====

Consider the change in flux on a loop as it is rotated by some external torque:

$\mathcal{E}=-\frac{d\Phi_{B}}{dt}=-\frac{d}{dt}\int\vec{B}\cdot d\vec{A}=-\frac{d}{dt}BA\cos\theta$

If the loop rotates with a constant angular velocity $\omega=\frac{d\theta}{dt}$ then $\theta=\theta_{0}+\omega t$ and we can say (taking $\theta_{0}=0$) that

$\mathcal{E}=-BA\frac{d}{dt}(\cos\omega t)=BA\omega\sin\omega t$

of course if there are $N$ loops

$\mathcal{E}=-NBA\frac{d}{dt}(\cos\omega t)=NBA\omega\sin\omega t=\mathcal{E}_{0}\sin\omega t$
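As a quick numerical example of $\mathcal{E}_{0}=NBA\omega$, a minimal sketch; all coil values below are illustrative, not from the notes:

<code python>
import math

N = 100    # number of loops (illustrative)
B = 0.2    # magnetic field (T)
A = 0.01   # loop area (m^2)
f = 60.0   # rotation frequency (Hz)

omega = 2 * math.pi * f        # angular velocity (rad/s)
emf_peak = N * B * A * omega   # peak emf, E_0 = N B A omega
print(f"peak emf: {emf_peak:.1f} V")  # ~75.4 V
</code>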
===== Back emf and Counter Torque =====

As an electric motor accelerates under the action of the magnetic force the rate of change of flux $\frac{d\Phi_{B}}{dt}$ will get larger, producing an emf which opposes the motion. The total potential driving the current in the circuit is the applied potential and the back emf generated by the changing magnetic flux. When these two are equal the current flowing through the circuit is zero. If there is no other load on the motor the motor would then turn at that speed constantly. Of course any load will reduce the speed, and hence the back emf, so at the constant top speed of a real motor there will still be some current flowing.

In a generator the opposite occurs: if the emf generated is used to produce a current then this produces a counter torque that opposes the motion.

===== Transformers =====

{{transformer.png}}

In a transformer two coils are coupled by an iron core so that the flux through the two coils is the same. When an AC voltage is applied to the primary coil the magnetic flux passing through it is related to the applied voltage by

$V_{P}=N_{P}\frac{d\Phi_{B}}{dt}$

if we assume the coil has no resistance. The voltage induced in the secondary coil will have magnitude

$V_{S}=N_{S}\frac{d\Phi_{B}}{dt}$

We can thus see that

$\frac{V_{S}}{V_{P}}=\frac{N_{S}}{N_{P}}$

If we assume there is no power loss (which is fairly accurate) then $I_{P}V_{P}=I_{S}V_{S}$ and

$\frac{I_{S}}{I_{P}}=\frac{N_{P}}{N_{S}}$

===== Mutual Inductance =====

In general for two coils the relationship between the flux in one coil due to the current in another is described by a parameter called the mutual inductance.

{{mutualinductance.png}}

$\Phi_{21}$ is the magnetic flux in each loop of coil 2 created by the current in coil 1. The total flux in the second coil is then $N_{2}\Phi_{21}$ and is related to the current in coil 1, $I_{1}$ by

$N_{2}\Phi_{21}=M_{21}I_{1}$

As, from Faraday's Law, the emf induced in coil 2 is $\mathcal{E}_{2}=-N_{2}\frac{d\Phi_{21}}{dt}$, we have

$\mathcal{E}_{2}=-M_{21}\frac{dI_{1}}{dt}$

The mutual inductance of coil 2 with respect to coil 1, $M_{21}$, does not depend on $I_{1}$, but it does depend on factors such as the size, shape and number of turns in each coil, their position relative to each other and whether there is some ferromagnetic material in the vicinity. In the reverse situation where a current flows in coil 2

$\mathcal{E}_{1}=-M_{12}\frac{dI_{2}}{dt}$

but in fact $M_{12}=M_{21}=M$

The mutual inductance is measured in henrys ($\mathrm{H}$), $1\mathrm{H}=1\mathrm{\frac{Vs}{A}}=1\mathrm{\Omega s}$

===== Self-inductance =====

The magnetic flux $\Phi_{B}$ passing through the coil is proportional to the current, and as we did for mutual inductance we can define a constant of proportionality between the current and the flux, the self-inductance $L$

$N\Phi_{B}=LI$

The emf $\mathcal{E}=-N\frac{d\Phi_{B}}{dt}=-L\frac{dI}{dt}$

The self-inductance is also measured in henrys. A component in a circuit that has significant inductance is shown by the symbol

{{inductor.png}}
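A minimal numeric sketch of $\mathcal{E}=-L\frac{dI}{dt}$; the inductance and current ramp below are illustrative values, not from the notes:

<code python>
L = 0.5     # self-inductance (H), illustrative
dI = 2.0    # change in current (A)
dt = 0.01   # time interval over which it changes (s)

emf = -L * dI / dt   # induced emf opposes the change (Lenz's law)
print(f"induced emf: {emf:.0f} V")  # -100 V
</code>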
===== Self-inductance of a solenoid =====

We can calculate the self-inductance of a solenoid from its field

$B=\mu_{0}\frac{NI}{l}$

The flux in the solenoid is

$\Phi_{B}=BA=\mu_{0}\frac{NIA}{l}$

so

$L=\frac{N\Phi_{B}}{I}=\frac{\mu_{0}N^{2}A}{l}$

===== LR circuits =====

{{LRswitch.png?700}}

When we take a resistor and an inductor in series and connect them to a battery then Kirchhoff's loop rule tells us that

$V_{0}-IR-L\frac{dI}{dt}=0$

which we can rearrange and integrate

$\int_{I=0}^{I}\frac{dI}{V_{0}-IR}=\int_{0}^{t}\frac{dt}{L}$

$-\frac{1}{R}\ln(\frac{V_{0}-IR}{V_{0}})=\frac{t}{L}$

$I=\frac{V_{0}}{R}(1-e^{-t/\tau})=I_{0}(1-e^{-t/\tau})$ where $\tau=\frac{L}{R}$

If we then switch back to the closed loop that does not include the battery then Kirchhoff's loop rule gives us

$L\frac{dI}{dt}+RI=0$

$\int_{I_{0}}^{I}\frac{dI}{I}=-\int_{0}^{t}\frac{R}{L}dt$

$\ln\frac{I}{I_{0}}=-\frac{R}{L}t$

$I=I_{0}e^{-t/\tau}$

The current changes exponentially according to $e^{-t/\tau}$. For an RC circuit the time constant is $\tau=RC$; for an LR circuit the time constant is $\tau=\frac{L}{R}$.

===== Analysis of the LC circuit =====

{{lc.png}}

For a charged capacitor in series with an inductor, according to Kirchhoff's loop rule:

$-L\frac{dI}{dt}+\frac{Q}{C}=0$

The current comes from the capacitor so $I=-\frac{dQ}{dt}$, so

$\frac{d^{2}Q}{dt^{2}}+\frac{Q}{LC}=0$

This is like a [[phy141:lectures:25&#equations_of_motion_for_shm|simple harmonic oscillator]], but with charge in place of displacement! A solution to this differential equation is

$Q=Q_{0}\cos(\omega t+\phi)$

To find $\omega$ we substitute the solution into the equation

$-\omega^{2}Q_{0}\cos(\omega t + \phi)+\frac{Q_{0}}{LC}\cos(\omega t+\phi)=0$

$(-\omega^{2}+\frac{1}{LC})\cos(\omega t+ \phi)=0$

which requires $\omega=\sqrt{\frac{1}{LC}}$

===== LRC Circuit Analysis =====

As we saw previously an actual inductor has some resistance, and thus a real LC circuit is better represented by an LRC circuit in which we represent the resistance of the inductor, or other resistances in the circuit, by an in-series resistor $R$

{{rlc.png}}

Now Kirchhoff's loop rule when the circuit is closed is

$-L\frac{dI}{dt}-IR+\frac{Q}{C}=0$

which can be written in terms of the charge as before

$L\frac{d^{2}Q}{dt^{2}}+R\frac{dQ}{dt}+\frac{Q}{C}=0$

and we can compare this to the differential equation for a [[phy141:lectures:26&#damping|damped harmonic oscillator]]. When $R^{2}<\frac{4L}{C}$ the system is underdamped and the solution to this equation is

$Q=Q_{0}e^{-\frac{R}{2L}t}\cos(\omega' t+\phi)$ where $\omega'=\sqrt{\frac{1}{LC}-\frac{R^{2}}{4L^{2}}}$

When $R^{2}>\frac{4L}{C}$ the system is overdamped and the charge will decay slowly.

===== LRC series circuit =====

{{lrc.png?400}}

We can begin our analysis of this circuit by applying Kirchhoff's loop rule to find the potential at any given time:

$V=V_{R}+V_{L}+V_{C}$

However an important consequence of the voltages not being at the same phase in the different components is that the peak voltage is not experienced at the same time in each component, and so for the peak source voltage $V_{0}$

$V_{0}\neq V_{R0}+V_{L0}+V_{C0}$

The condition of continuity of current demands that the current throughout the circuit should always be in phase, so at any point in this circuit the current will be

$I=I_{0}\cos\omega t$
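Before moving on to phasors, a minimal sketch checking the underdamped condition $R^{2}<\frac{4L}{C}$ from the LRC analysis above and evaluating $\omega'$; component values are illustrative:

<code python>
import math

R = 10.0   # resistance (ohm), illustrative
L = 0.1    # inductance (H)
C = 1e-6   # capacitance (F)

if R**2 < 4 * L / C:
    # underdamped: oscillation at omega' = sqrt(1/(LC) - R^2/(4L^2))
    w = math.sqrt(1 / (L * C) - R**2 / (4 * L**2))
    print(f"underdamped, omega' = {w:.0f} rad/s")  # ~3162 rad/s
else:
    print("overdamped: the charge decays without oscillating")
</code>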
===== Phasors =====

To understand the Phasor approach to AC circuits we can follow a very nice set of animations from [[http://www.animations.physics.unsw.edu.au/jw/AC.html|Physclips]]. A phasor is a way of representing the voltage across a component taking into account the phase difference between the voltage and the current.

==== Phasor for a resistor ====

As we saw in our last lecture a resistor has a voltage in phase with the current flowing through it. So if we now represent the current as a vector moving in a plane we can also represent the voltage across the resistor as a vector of magnitude $V_{R}=IR$ which points in the same direction as the current flowing through it.

==== Phasor for a capacitor ====

We saw in our last lecture that in a capacitor the current leads the voltage by 90<sup>o</sup>. Also we saw that the reactance $X_{C}=\frac{1}{\omega C}=\frac{1}{2\pi f C}$ depends on frequency and so the size of the voltage phasor $V_{0}=I_{0}X_{C}$ also should.

==== Phasor for an inductor ====

We saw in our last lecture that in an inductor the current lags the voltage by 90<sup>o</sup>. Also we saw that the reactance $X_{L}=\omega L=2\pi f L$ depends on frequency and so the size of the voltage phasor $V_{0}=I_{0}X_{L}$ also should.

{{figure_30_20.jpg?200}}

===== Phasor approach to LRC series circuit =====

{{lrc.png?400}}

To find the total voltage in the circuit at time $t$ we add the phasors for the different components together as we would vectors. We can see that for a current $I=I_{0}\cos\omega t$ flowing through the circuit the voltage will be offset by a phase $\phi$

$V=V_{0}\cos(\omega t + \phi)$

The peak voltage $V_{0}$ is linked to the peak current $I_{0}$ through the impedance $Z$

$V_{0}=I_{0}Z$ and we can also say $V_{rms}=I_{rms}Z$

The value of $Z$ is found by considering the vector sum of the voltages

$V_{0}=\sqrt{V_{R0}^{2}+(V_{L0}-V_{C0})^{2}}=I_{0}\sqrt{R^{2}+(X_{L}-X_{C})^2}$

so

$Z=\sqrt{R^{2}+(X_{L}-X_{C})^{2}}=\sqrt{R^{2}+(\omega L-\frac{1}{\omega C})^{2}}$

The phase difference between the current and voltage $\phi$ is obtained from

$\tan \phi=\frac{V_{L0}-V_{C0}}{V_{R0}}=\frac{I_{0}(X_{L}-X_{C})}{I_{0}R}=\frac{X_{L}-X_{C}}{R}$

{{figure_30_21.jpg?600}}
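A minimal sketch of the impedance and phase formulas above; component and frequency values are illustrative:

<code python>
import math

R, L, C = 100.0, 0.1, 1e-6   # ohm, H, F (illustrative)
f = 400.0                    # source frequency (Hz)
w = 2 * math.pi * f

X_L = w * L              # inductive reactance
X_C = 1 / (w * C)        # capacitive reactance
Z = math.sqrt(R**2 + (X_L - X_C)**2)   # impedance
phi = math.atan2(X_L - X_C, R)         # phase of voltage relative to current

print(f"X_L = {X_L:.0f} ohm, X_C = {X_C:.0f} ohm")
print(f"Z = {Z:.0f} ohm, phi = {math.degrees(phi):.1f} deg")
</code>

Here $X_{C}>X_{L}$, so $\phi$ comes out negative: the circuit is capacitive at this frequency and the voltage lags the current.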
===== Power dissipated in a LRC circuit =====

Power in an LRC circuit is only dissipated in the resistor, and so the average power dissipated is given by

$\bar{P}=I_{RMS}^{2}R$

but we may want to express this in terms of the impedance of the circuit or the $V_{RMS}$ which is applied. To do this we write

$\cos \phi =\frac{V_{R0}}{V_{0}}=\frac{I_{0}R}{I_{0}Z}=\frac{R}{Z}$

which means that $R=Z\cos\phi$ and

$\bar{P}=I_{RMS}^{2}Z\cos\phi$

or, as $V_{RMS}=I_{RMS}Z$,

$\bar{P}=I_{RMS}V_{RMS}\cos\phi$

===== Resonance in an LRC circuit =====

The RMS current in the circuit we are considering is given by

$I_{RMS}=\frac{V_{RMS}}{Z}=\frac{V_{RMS}}{\sqrt{R^{2}+(\omega L-\frac{1}{\omega C})^{2}}}$

We can see that the current should be frequency dependent and have a maximum when

$(\omega L - \frac{1}{\omega C})=0$

which gives the resonant frequency

$\omega_{0}=\sqrt{\frac{1}{LC}}$

{{lrcresonance.png}}

One can get a feel for the circuit response [[http://demonstrations.wolfram.com/SeriesRLCCircuits/|here]] and [[http://demonstrations.wolfram.com/FrequencyResponseOfAnLCRCircuit/|here]]. Here is a [[https://www.youtube.com/watch?v=ZYgFuUl9_Vs|demo]] that shows how the resonant frequency is varied by the inductor and capacitor values.

===== Maxwell's equations =====

[[wp>Maxwell's_equations|Maxwell's equations]], named after [[wp>James_Clerk_Maxwell|James Clerk Maxwell]] who first expressed them together, are a set of four equations from which all electromagnetic theory can be derived. The integral form of Maxwell's equations in free space (i.e., in the absence of dielectric or magnetic materials) are

$\oint\vec{E}\cdot d\vec{A}=\frac{Q}{\varepsilon_{0}}$ (Gauss's Law)

$\oint \vec{B}\cdot d\vec{A}=0$ (Magnetic equivalent of Gauss's Law)

$\oint\vec{E}\cdot d\vec{l}=-\frac{d\Phi_{B}}{dt}$ (Faraday's Law)

$\oint\vec{B}\cdot d\vec{l}=\mu_{0}I+\mu_{0}\varepsilon_{0}\frac{d\Phi_{E}}{dt}$ (Modified form of Ampere's Law)

===== Wave equation =====

From Maxwell's equations

$\frac{\partial^{2} E_{y}}{\partial t^{2}}=\frac{1}{\mu_{0}\varepsilon_{0}}\frac{\partial^{2} E_{y}}{\partial x^{2}}$ and $\frac{\partial^{2} B_{z}}{\partial t^{2}}=\frac{1}{\mu_{0}\varepsilon_{0}}\frac{\partial^{2} B_{z}}{\partial x^{2}}$

tell us that electromagnetic waves can be described by an electric and magnetic field in phase with one another, but at right angles to one another

$E_{y}(x,t)=E_{0}\sin\frac{2\pi}{\lambda}(x-\sqrt{\frac{1}{\mu_{0}\varepsilon_{0}}}t)$

$B_{z}(x,t)=B_{0}\sin\frac{2\pi}{\lambda}(x-\sqrt{\frac{1}{\mu_{0}\varepsilon_{0}}}t)$

with velocity

$v=\sqrt{\frac{1}{\mu_{0}\varepsilon_{0}}}=3.00\times10^{8}\,\mathrm{m/s}=c$

===== Energy in an EM wave =====

The energy stored in an electric field is $u_{E}=\frac{1}{2}\varepsilon_{0}E^{2}$ and in a magnetic field $u_{B}=\frac{1}{2}\frac{B^{2}}{\mu_{0}}$. The total energy of an EM wave is

$u=u_{E}+u_{B}=\frac{1}{2}\varepsilon_{0}E^{2}+\frac{1}{2}\frac{B^{2}}{\mu_{0}}$

and using $\frac{E}{B}=c=\sqrt{\frac{1}{\mu_{0}\varepsilon_{0}}}$

$u=\varepsilon_{0}E^{2}=\frac{B^{2}}{\mu_{0}}=\sqrt{\frac{\varepsilon_{0}}{\mu_{0}}}EB$

In an electromagnetic wave the fields are moving with velocity $c$, so the amount of energy passing through a unit area at any given time is

$S=\varepsilon_{0}cE^{2}=\frac{cB^{2}}{\mu_{0}}=\frac{EB}{\mu_{0}}$

More generally, the [[wp>Poynting_vector|Poynting vector]] is a vector $\vec{S}$ which represents the flux of energy in an electromagnetic field

$\vec{S}=\frac{1}{\mu_{0}}\vec{E}\times\vec{B}$
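A minimal numeric sketch relating $E_{0}$, $B_{0}$ and the energy flux for a plane wave; the field amplitude is illustrative, and the factor of $\frac{1}{2}$ (from time-averaging $\sin^{2}$ over a cycle) is an addition to the instantaneous formulas above:

<code python>
eps0 = 8.854e-12   # permittivity of free space (F/m)
c = 3.0e8          # speed of light (m/s)

E0 = 100.0         # peak electric field (V/m), illustrative
B0 = E0 / c        # peak magnetic field, from E/B = c
S_avg = 0.5 * eps0 * c * E0**2   # time-averaged energy flux (W/m^2)

print(f"B0 = {B0:.1e} T")                   # ~3.3e-7 T
print(f"average flux = {S_avg:.1f} W/m^2")  # ~13.3 W/m^2
</code>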
===== Reflection =====

We define an angle of incidence $\theta_{i}$ relative to the surface normal and find that the angle of reflection $\theta_{r}$, also defined relative to the surface normal, is given by

$\theta_{r}=\theta_{i}$

{{reflection.png}}

===== Forming an image in a plane mirror =====

{{planemirrorimage.png}}

===== Mirror equation =====

{{mirrorequation.png}}

$\frac{1}{d_{o}}+\frac{1}{d_{i}}=\frac{1}{f}$

The magnification of a mirror $m$ is

$m=\frac{h_{i}}{h_{o}}=-\frac{d_{i}}{d_{o}}$

**Sign Conventions:**

* The image height $h_i$ is positive if the image is upright, and negative if inverted.
* $d_i$ or $d_o$ is positive if the image or object is in front of the mirror (real), and negative if it is behind the mirror (virtual).
* Magnification is positive for an upright image and negative for an inverted image.
* $r$ and $f$ are also negative when behind the mirror.

===== Refractive index =====

A useful way to describe a material is in terms of its [[wp>Refractive_index|refractive index]]. The velocity of light in a medium is related to its velocity in free space by the refractive index $n$ through the equation

$v=\frac{c}{n}$

[[wp>List_of_refractive_indices|Refractive indices of materials]] are typically somewhere between 1 (vacuum) and ~2.5 (diamond, strontium titanate). Glass will typically have a refractive index of about 1.5, though the exact value depends on the type of glass. We can recall that the velocity of a wave is given by $v=f\lambda$; when light is traveling in a medium, the frequency $f$ does not change, but the wavelength $\lambda$ changes according to

$\lambda=\frac{\lambda_{0}}{n}$

===== Snell's Law =====

$n_{1}\sin\theta_{1}=n_{2}\sin\theta_{2}$

The refractive index of the medium the light is traveling out of is $n_{1}$, the refractive index of the material the light is traveling in to is $n_{2}$, and both the angle of incidence $\theta_{1}$ and refraction $\theta_{2}$ are defined relative to the normal to the surface.

{{snellslaw.png}}

===== Total internal reflection =====

For light leaving a more optically dense medium and entering a less optically dense one there is a maximum incident angle, the critical angle $\theta_{C}$, above which light is completely reflected, which we refer to as [[wp>Total_internal_reflection|total internal reflection]]. The critical angle can be found from the condition that the refracted angle is $90^{o}$

$\sin\theta_{C}=\frac{n_{2}}{n_{1}}\sin 90^{o}=\frac{n_{2}}{n_{1}}$

{{tir.png}}

===== Converging lens =====

A lens which is thicker in the center than at the edges is a converging lens; an incoming parallel beam of light will be focused to a point $F$ at $x=f$ from the center of the lens. To obtain a parallel beam of light, the light should propagate radially outwards from the point $F'$ a distance $x=-f$ from the center of the lens.

{{converginglensnew.png}}

===== Diverging Lens =====

A lens which is thinner in the center than at the edges is a diverging lens; an incoming parallel beam of light will be focused to a point $F$ at $x=-f$ from the center of the lens, which means that the rays appear to diverge outwards from that point. To obtain a parallel beam of light, the incoming rays should have a path such that they would go to the point $F'$ a distance $x=f$ from the center of the lens if the lens were not there.

{{diverginglensnew.png}}
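A minimal sketch of Snell's law and the critical angle from the sections above, using the typical glass index of 1.5 quoted there:

<code python>
import math

n_air, n_glass = 1.0, 1.5   # refractive indices

# Refraction of a ray entering glass from air at 30 degrees:
theta_i = math.radians(30)
theta_r = math.asin(n_air * math.sin(theta_i) / n_glass)   # Snell's law
print(f"refracted angle in glass: {math.degrees(theta_r):.1f} deg")  # ~19.5 deg

# Critical angle for light trying to leave the glass:
theta_c = math.asin(n_air / n_glass)   # sin(theta_C) = n2/n1
print(f"critical angle: {math.degrees(theta_c):.1f} deg")  # ~41.8 deg
</code>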
===== Raytracing for a converging lens =====

To find the image position for a lens we can use a technique called raytracing. We only need to use 3 rays to find an image for a given object (provided we know the focal length of the lens).

- A ray that leaves the object parallel to the axis and then goes through $F$
- A ray that passes through the center of the lens and is not bent
- A ray that passes through $F'$ and exits parallel to the axis

{{raytraceconverge.png}}

===== Raytracing for a diverging lens =====

The same technique works for a diverging lens, with the roles of the focal points adjusted accordingly.

- A ray that leaves the object parallel to the axis and then exits as if it came from $F$
- A ray that passes through the center of the lens and is not bent
- A ray that heads toward $F'$ and exits parallel to the axis

{{raytracingdiverge.png}}

===== Lens equation =====

{{lensequation.png}}

$\frac{1}{d_{o}}+\frac{1}{d_{i}}=\frac{1}{f}$

This equation also works for a diverging lens (with $d_{i}$ and $f$ negative). As with mirrors, the magnification $m$ is

$m=\frac{h_{i}}{h_{o}}=-\frac{d_{i}}{d_{o}}$

===== Two converging lenses =====

{{twoconverginglens.png}}

Combinations of lenses can be treated sequentially, by first finding the image produced by the first lens and then using it as the object for the next lens. Applying the lens equation to the first lens

$\frac{1}{d_{iA}}=\frac{1}{f_{A}}-\frac{1}{d_{oA}}$

and to the second lens

$\frac{1}{d_{iB}}=\frac{1}{f_{B}}-\frac{1}{d_{oB}}$

and then using $d_{oB}=l-d_{iA}$ allows the determination of the final image position. The first lens produces an image that has height $-\frac{d_{iA}}{d_{oA}}h_{o}$, which will then be used as the object height for the next lens, so the final image has height $\frac{d_{iA}}{d_{oA}}\frac{d_{iB}}{d_{oB}}h_{o}$

When considering the multiplying power of lens combinations we can simply multiply the effects of the individual lenses.

===== Magnifying Glass =====

The biggest possible apparent size of an object to the eye without the aid of an optical instrument is obtained by placing it at the near point of the eye $N$. If we bring an object closer than the near point then it will occupy a larger angle, but we won't be able to focus on it. This problem can be addressed by a magnifying glass. If we place an object closer than the focal length of the lens, it will produce a virtual image at a distance $d_{i}$. The maximum magnification is achieved by bringing the lens right up to your eye and then arranging the lens, object and your head so that the image is at the near point. To find the magnification we need to know $d_{o}$, which can be found from the lens equation, taking $d_{i}=-N$

$\frac{1}{d_{o}}=\frac{1}{f}-\frac{1}{d_{i}}=\frac{1}{f}+\frac{1}{N}$

In the small angle approximation

$\theta=\frac{h}{N}$ and $\theta'=\frac{h}{d_{o}}$

The angular magnification of the lens, also called the magnifying power, is defined as

$M=\frac{\theta'}{\theta}$

which we can see here is

$M=\frac{N}{d_{o}}=N(\frac{1}{f}+\frac{1}{N})=\frac{N}{f}+1$

{{magglassatnearpoint.png}}
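A minimal numeric sketch of the near-point magnifying power $M=\frac{N}{f}+1$, taking the standard near point $N=25\,\mathrm{cm}$ and an illustrative focal length:

<code python>
N = 0.25   # near point of the eye (m), standard value
f = 0.05   # focal length of the magnifying glass (m), illustrative

d_o = 1 / (1/f + 1/N)   # object distance that puts the image at the near point
M = N / d_o             # angular magnification, equal to N/f + 1
print(f"d_o = {d_o*100:.2f} cm, M = {M:.1f}")  # ~4.17 cm, M = 6.0
</code>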
===== Magnifying glass with image at infinity =====

It is not very convenient to use a magnifying glass with the eye focused at the near point, firstly as we are required to constantly maintain the correct positioning of lens, object and head, but also because our eye muscles are at maximum exertion, which is not very comfortable over long periods of time. An alternative way to use a magnifying glass is to place the object at the focal point of the lens, producing an image at $\infty$. In this case

$\theta'=\frac{h}{f}$

and the magnifying power is

$M=\frac{\theta'}{\theta}=\frac{N}{f}$

{{magglassatinfinity.png}}

===== Telescopes =====

A magnifying glass is only useful for looking at objects nearby (the maximum object distance from the lens is the focal length). To view distant objects we need to use a [[wp>Telescope|telescope]]. A refracting telescope uses two lenses, an objective and an eyepiece. The objective produces an image which is then magnified by the eyepiece.

{{refractingtelescope.png}}

The original apparent object size is

$\theta\approx\frac{h}{f_{o}}$

If we consider an object at infinity and an eyepiece which is adjusted so that the focus of the eyepiece is at the focus of the objective (this produces a final image at infinity, which is why I chose not to draw it this way on the diagram), then the apparent size of the final image is

$\theta'\approx\frac{h}{f_{e}}$

giving the magnification power of the telescope as

$M=\frac{\theta'}{\theta}=-\frac{f_{o}}{f_{e}}$

with the minus sign signifying that the image is inverted.

===== Young's double slit experiment =====

If we treat each of the slits as a point source of circular wavefronts, the condition for constructive interference (bright fringes) is

$d\sin\theta=m\lambda$ <html>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</html> (m=0,1,2,..)

and for destructive interference (dark fringes)

$d\sin\theta=(m+\frac{1}{2})\lambda$ <html>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</html> (m=0,1,2,..)

{{youngsdoubleslit.png}}

===== Reflection from a transparent medium =====

As in the case of a wave on a rope that is incident on a heavier rope and is reflected with a 180<sup>o</sup> phase change, when a light wave is reflected from a more optically dense medium a 180<sup>o</sup> phase change occurs. This effect is important when we want to consider interference effects in thin films.

{{reflectionphasechange.png}}

===== Anti reflective coating =====

{{antreflectivecoating.png}}

As the two reflections both occur from more optically dense media they both experience a $\frac{\lambda}{2}$ phase change on reflection. For destructive interference to occur we therefore need the light that travels down and back through the coating to have traveled an extra $\frac{\lambda}{2}$ relative to the light reflected from the top surface. Critically, when destructive interference occurs the light is not lost, but is instead transmitted. As the wavelength of light in a medium is given by $\lambda=\frac{\lambda_{0}}{n}$, where $n$ is the refractive index of the medium and $\lambda_{0}$ is the wavelength of the light in free space, the thickness of the coating should be $\frac{\lambda_{0}}{4n_{2}}$. In practice the light incident will not all be the same wavelength, so the thickness of the coating is typically chosen to work optimally in the center of the visible band (~550nm).

===== Air wedge =====

{{airwedge.png}}

For a single wavelength dark rings will occur whenever

$2t=m\lambda$ <html>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</html> (m=0,1,2,..)

and bright rings will occur whenever

$2t=(m+\frac{1}{2})\lambda$ <html>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</html> (m=0,1,2,..)

For white light different colors will experience constructive interference at different thicknesses, leading to the colorful lines we see when an air gap is under normal light.
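A minimal sketch of the quarter-wave coating thickness $\frac{\lambda_{0}}{4n_{2}}$ from the anti-reflective coating section above; the coating index below is an assumed value (1.38 is typical of magnesium fluoride, a common coating material):

<code python>
lam0 = 550e-9   # design wavelength (m), center of the visible band (from the notes)
n2 = 1.38       # coating refractive index (assumed value, typical of MgF2)

t = lam0 / (4 * n2)   # quarter-wave coating thickness
print(f"coating thickness ~ {t*1e9:.0f} nm")  # ~100 nm
</code>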
===== Intensity pattern for a single slit =====

Rather than writing our equation in terms of $k$

$I_{\theta}=I_{0}\mathrm{sinc^{2}}(\frac{Dk}{2}\sin\theta)$

it is convenient to write it in terms of the wavelength, using $k=\frac{2\pi}{\lambda}$

$I_{\theta}=I_{0}\mathrm{sinc^{2}}(\frac{D\pi}{\lambda}\sin\theta)$

This function has a central maximum and then minima at

$D\sin\theta=m\lambda$ <html>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</html> $m=\pm 1,\pm 2,\pm 3,..$

{{slitintensity.png}}

===== Diffraction in the double slit =====

Now that we have an expression for the intensity from a slit of width $D$

$I_{\theta}=I_{0}\mathrm{sinc^{2}}(\frac{D\pi}{\lambda}\sin\theta)$

we can consider slits of finite width as sources instead of the point sources we considered earlier, which gave

$I_{\theta}=I_{0}\cos^{2}(\frac{d \pi}{\lambda}\sin \theta)$

The combination of these gives

$I_{\theta}=I_{0}\mathrm{sinc^{2}}(\frac{D\pi}{\lambda}\sin\theta)\cos^{2}(\frac{d\pi}{\lambda}\sin \theta)$

{{twoslitwithdiffraction.png}}

===== Diffraction grating =====

Irrespective of the number of slits $n$ the condition for maxima is the same as for the double slit

$d\sin\theta=m\lambda$ <html>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</html> (m=0,1,2,..)

but the larger the number of slits from which diffraction occurs, the sharper the maxima will be.

===== Polarizers =====

The polarization of the light after it has passed through the polarizer is the direction defined by the polarizer, so the way to approach the calculation of the magnitude of the intensity that passes through the polarizer is by finding the component of the electric field which is in that direction

$E=E_{0}\cos\theta$

The intensity, which is proportional to $E^{2}$, is then

$I=I_{0}\cos^{2}\theta$

This formula is known as [[wp>Polarizer#Malus.27_law_and_other_properties|Malus' Law]].

{{polarizers.png}}
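A minimal sketch of Malus' law over a few analyzer angles:

<code python>
import math

I0 = 1.0   # incident polarized intensity (arbitrary units)

for angle in (0, 30, 45, 60, 90):
    I = I0 * math.cos(math.radians(angle))**2   # I = I0 cos^2(theta)
    print(f"theta = {angle:2d} deg: I/I0 = {I:.3f}")
</code>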
15.1: Precipitation and Hail

Book: Practical Meteorology (Stull), Chapter 15: Thunderstorm Hazards
Contributed by Roland Stull, Professor & Director, Geophysical Disaster Computational Fluid Dynamics Center at University of British Columbia

15.1.1. Heavy Rain
15.1.2. Hail
15.1.2.1. Hail Damage
15.1.2.2. Hail Formation
15.1.2.3. Hail Forecasting
15.1.2.4. Hail Locations
15.1.2.5. Hail Mitigation

Thunderstorms are deep clouds that can create: large raindrops (2 - 8 mm diameter), in scattered showers (order of 5 to 10 km diameter rain shafts moving across the ground, resulting in brief-duration rain [1 - 20 min] over any point), of heavy rainfall rate (10 to over 1000 mm h–1 rainfall rates). The Precipitation Processes chapter lists world-record rainfall rates, some of which were caused by thunderstorms. Compare this to nimbostratus clouds, that create smaller-size drizzle drops (0.2 - 0.5 mm) and small rain drops (0.5 - 2 mm diameter) in widespread regions (namely, regions hundreds by thousands of kilometers in size, ahead of warm and occluded fronts) of light to moderate rainfall rate that can last for many hours over any point on the ground.

15.1.1. Heavy Rain

Why do thunderstorms have large-size drops? Thunderstorms are so tall that their tops are in very cold air in the upper troposphere, allowing cold-cloud microphysics even in mid summer. Once a spectrum of different hydrometeor sizes exists, the heavier ice particles fall faster than the smaller ones and collide with them. If the heavier ice particles are falling through regions of supercooled liquid cloud droplets, they can grow by riming (as the liquid water instantly freezes on contact to the outside of ice crystals) to form dense, conical-shaped snow pellets called graupel (< 5 mm diameter). Alternately, if smaller ice crystals fall below the 0°C level, their outer surface partially melts, causing them to stick to other partially-melted ice crystals and grow into miniature fluffy snowballs by a process called aggregation to sizes as large as 1 cm in diameter.

The snow aggregates and graupel can reach the ground still frozen or partially frozen, even in summer. This occurs if they are protected within the cool, saturated downdraft of air descending from thunderstorms (downbursts will be discussed later). At other times, these large ice particles falling through the warmer boundary layer will melt completely into large raindrops just before reaching the ground. These rain drops can make a big splat on your car windshield or in puddles on the ground.

Why do thunderstorms have scattered showers? Often large-size, cloud-free, rain-free subsidence regions form around and adjacent to thunderstorms due to air-mass continuity. Namely, more air mass is pumped into the upper troposphere by thunderstorm updrafts than can be removed by in-storm precipitation-laden downdrafts. Much of the remaining excess air descends more gently outside the storm. This subsidence (Fig. 15.1) tends to suppress other incipient thunderstorms, resulting in cumulonimbus clouds that are either isolated (surrounded by relatively cloud-free air), or are in a thunderstorm line with subsidence ahead and behind the line.

Why do thunderstorms often have heavy rain?
First, the upper portions of the cumulonimbus cloud are so high that the rising air parcels become so cold (due to the moist-adiabatic cooling rate) that virtually all of the water vapor carried by the air is forced to condense, deposit, or freeze out. Second, the vertical stacking of the deep cloud allows precipitation forming in the top of the storm to grow by collision and coalescence or accretion as it falls through the middle and lower parts of the cloud, as already mentioned, thus sweeping out a lot of water in a short time. Third, long-lasting storms such as supercells or orographic storms can have continual inflow of humid boundary-layer air to add moisture as fast as it rains out, thereby allowing the heavy rainfall to persist. As was discussed in the previous chapter, the heaviest precipitation often falls closest to the main updraft in supercells (see Fig. 15.5).

Rainbows are a by-product of having large numbers of large-diameter drops in a localized region surrounded by clear air (Fig. 15.1). Because thunderstorms are more likely to form in late afternoon and early evening when the sun angle is relatively low in the western sky, the sunlight can shine under cloud base and reach the falling raindrops. In North America, where thunderstorms generally move from the southwest toward the northeast, this means that rainbows are generally visible just after the thundershowers have passed, so you can find the rainbow looking toward the east (i.e., look toward your shadow). Rainbow optics are explained in more detail in the last chapter.

Any rain that reaches the ground is from water vapor that condensed and did not re-evaporate. Thus, rainfall rate (RR) can be a surrogate measure of the rate of latent heat release:

\(\ \begin{align} H_{RR}=\rho_{L} \cdot L_{v} \cdot RR \tag{15.1}\end{align}\)

where HRR = rate of energy release in the storm over unit area of the Earth's surface (J·s–1·m–2), ρL is the density of pure liquid water, Lv is the latent heat of vaporization (assuming for simplicity that all of the precipitation falls out in liquid form), and RR = rainfall rate. Ignoring variations in the values of water density and latent heat of vaporization, this equation reduces to:

\(\ \begin{align} H_{RR}=a \cdot RR \tag{15.2}\end{align}\)

where a = 694 (J·s–1·m–2) / (mm·h–1), for rainfall rates in mm h–1. The corresponding warming rate averaged over the tropospheric depth (assuming the thunderstorm fills the troposphere) was shown in the Heat chapter to be:

\(\ \begin{align} \Delta T / \Delta t=b \cdot RR \tag{15.3}\end{align}\)

where b = 0.33 K (mm of rain)–1.

From the Water Vapor chapter recall that precipitable water, dw, is the depth of water in a rain gauge if all of the moisture in a column of air were to precipitate out. As an extension of this concept, suppose that pre-storm boundary-layer air of mixing ratio 20 g kg–1 was drawn up into a column filling the troposphere by the action of convective updrafts (Fig. 15.2). If cloud base was at a pressure altitude of 90 kPa and cloud top was at 30 kPa, and if half of the water in the cloudy domain were to condense and precipitate out, then eq. (4.33) says that the depth of water in a rain gauge is expected to be dw = 61 mm.

Figure 15.2 The thunderstorm updraft draws in a larger area of warm, humid boundary-layer air, which is fuel for the storm.
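A minimal sketch of eqs. (15.2) and (15.3), using the constants a = 694 and b = 0.33 quoted above, applied to the Big Thompson storm values discussed below (76 mm h–1 for 4 hours):

```python
a = 694.0   # (J s^-1 m^-2) per (mm h^-1), eq. (15.2)
b = 0.33    # K per mm of rain, eq. (15.3)

RR = 76.0        # rainfall rate (mm/h)
duration = 4.0   # storm duration (h)

H_RR = a * RR            # latent-heat release rate (J s^-1 m^-2)
dT = b * RR * duration   # column-averaged warming over the event (K)
print(f"H_RR ~ {H_RR:.2e} J s^-1 m^-2")  # ~5.3e4
print(f"warming ~ {dT:.0f} K")           # ~100 K, before dilution by inflow
```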
The ratio of amount of rain falling out of a thunderstorm to the inflow of water vapor is called precipitation efficiency, and ranges from 5 to 25% for storms in an environment with strong wind shear to 80 to 100% in weakly-sheared environments. The average efficiency of thunderstorms is roughly 50%. Processes that account for the non-precipitating water include anvil outflow of ice crystals that evaporate, evaporation of hydrometeors with entrained air from outside the storm, and evaporation of some of the precipitation before reaching the ground (i.e., virga).

Extreme precipitation events producing rainfall rates over 100 mm h–1 are unofficially called cloudbursts. A few cloudbursts or rain gushes have been observed with rainfall rates of 1000 mm h–1, but they usually last for only a few minutes. The greater-intensity rainfall events occur less frequently, and have return periods (average time between occurrence) of order hundreds of years (see the Rainfall Rates subsection in the Precipitation chapter).

For example, a stationary orographic thunderstorm over the eastern Rocky Mountains in Colorado produced an average rainfall rate of 76 mm h–1 for 4 hours during 31 July 1976 over an area of about 11 x 11 km. A total of about 305 mm of rain fell into the catchment of the Big Thompson River, producing a flash flood that killed 139 people in the Big Thompson Canyon. This amount of rain is equivalent to a tropospheric warming rate of 25°C h–1, causing a total latent heat release of about 9.1x10^16 J. This thunderstorm energy (based only on latent heat release) was equivalent to the energy from 23 one-megaton nuclear bomb explosions (given about 4x10^15 J of heat per one-megaton nuclear bomb).

Sample Application
A thunderstorm near Holt, Missouri, dropped 305 mm of rain during 0.7 hour. How much net latent heat energy was released into the atmosphere over each square meter of Earth's surface, and how much did it warm the air in the troposphere?

Find the Answer
Given: RR = 305 mm / 0.7 h = 436 mm h–1. Duration ∆t = 0.7 h.
Find: HRR·∆t = ? (J·m–2) ; ∆T = ? (°C)

First, multiply HRR in eq. (15.2) by ∆t:
HRR·∆t = [694 (J·s–1·m–2)/(mm·h–1)]·[436 mm h–1]·[0.7 h]·[3600 s/h] = 762.5 MJ·m–2

Next, use eq. (15.3): ∆T = b·RR·∆t = (0.33 K mm–1)·(305 mm) = 101 °C

Check: Units OK, but values seem too large???

Exposition: After the thunderstorm has finished raining itself out and dissipating, why don't we observe air that is 101°C warmer where the storm used to be? One reason is that in order to get 305 mm of rain out of the storm, there had to be a continual inflow of humid air bringing in moisture. This same air then carries away the heat as the air is exhausted out of the anvil of the storm. Thus, the warming is spread over a much larger volume of air than just the air column containing the thunderstorm. Using the factor of 5 as estimated by the needed moisture supply, we get a much more reasonable estimate of (101°C)/5 ≈ 20°C of warming. This is still a bit too large, because we have neglected the mixing of the updraft air with additional environmental air as part of the cloud dynamics, and have neglected heat losses by radiation to space. Also, the Holt storm, like the Big Thompson Canyon storm, was an extreme event: many thunderstorms are smaller or shorter lived.

The net result of the latent heating is that the upper troposphere (anvil level) has warmed because of the storm, while the lower troposphere has cooled as a result of the rain-induced cold-air downburst.
Namely, the thunderstorm did its job of removing static instability from the atmosphere, and leaving the atmosphere in a more stable state. This is a third reason why the first thunderstorms reduce the likelihood of subsequent storms. In summary, the three reasons why a thunderstorm suppresses neighboring storms are: (1) the surrounding environment becomes stabilized (smaller CAPE, larger CIN), (2) sources of nearby boundary-layer fuel are exhausted, and (3) subsidence around the storm suppresses other incipient storm updrafts. But don't forget about other thunderstorm processes such as gust fronts that tend to trigger new storms. Thus, competing processes work in thunderstorms, making them difficult to forecast.

This amount of rain was possible for two reasons: (1) the continual inflow of humid air from the boundary layer into a well-organized (long lasting) orographic thunderstorm (Fig 14.11), and (2) the weakly sheared environment allowed a precipitation efficiency of about 85%. Comparing 305 mm observed with 61 mm expected from a single troposphere-tall column of humid air, we conclude that the equivalent of about 5 troposphere-thick columns of thunderstorm air were consumed by the storm. Since the thunderstorm is about 6 times as tall as the boundary layer is thick (in pressure coordinates, Fig. 15.2), conservation of air mass suggests that the Big Thompson Canyon storm drew boundary-layer air from an area about 5·6 = 30 times the cross-sectional area of the storm updraft (or about 5.5 times the updraft radius, since radius scales as the square root of area). Namely, a thunderstorm updraft core of 5 km radius would ingest the fuel supply of boundary-layer air from within a radius of roughly 27 km. This is another reason why subsequent storms are less likely in the neighborhood of the first thunderstorm. Namely, the "fuel tank" is empty after the first thunderstorm, until the fuel supply can be re-generated locally via solar heating and evaporation of surface water, or until fresh fuel of warm humid air is blown in by the wind. (See the Sample Application on the previous page for other reasons why a first thunderstorm can suppress later ones.)

15.1.2. Hail

Hailstones are irregularly shaped balls of ice larger than 0.5 cm diameter that fall from severe thunderstorms. The event or process of hailstones falling out of the sky is called hail. The damage path on the ground due to a moving hail storm is called a hail swath.

Figure 15.3 Large hailstones and damage to car windshield.

Most hailstones are in the 0.5 to 1.5 cm diameter range, with about 25% of the stones greater than 1.5 cm. Hailstones are called giant hail (or large or severe hail) if their diameters are between 1.9 and 5 cm. Hailstones with diameters ≥ 5 cm are called significant hail or enormous hail (Fig. 15.3). Giant and enormous hail are rare. One stone of diameter 17.8 cm was found in Nebraska, USA, in June 2003. The largest recorded hailstone had diameter 20.3 cm and weighed 878.8 g; it fell in Vivian, South Dakota, USA, on 23 July 2010.

Hailstone diameters are sometimes compared to standard size balls (ping-pong ball = 4 cm; tennis ball ≈ 6.5 cm). They are also compared to nonstandard sizes of fruit, nuts, and vegetables. One such classification is the TORRO Hailstone Diameter relationship (Table 15-1).

Table 15-1. TORRO Hailstone Size Classification.
Size Code | Max. Diameter (cm) | Description
0 | 0.5 - 0.9 | Pea
1 | 1.0 - 1.5 | Mothball
2 | 1.6 - 2.0 | Marble, grape
3 | 2.1 - 3.0 | Walnut
4 | 3.1 - 4.0 | Pigeon egg to golf ball
5 | 4.1 - 5.0 | Pullet egg
6 | 5.1 - 6.0 | Hen egg
7 | 6.1 - 7.5 | Tennis ball to cricket ball
8 | 7.6 - 9.0 | Large orange to soft ball
9 | 9.1 - 10.0 | Grapefruit
10 | > 10.0 | Melon

15.1.2.1. Hail Damage

Large diameter hailstones can cause severe damage to crops, tree foliage, cars, aircraft, and sometimes buildings (roofs and windows). Damage is often greater if strong winds cause the hailstones to move horizontally as they fall. Most humans are smart enough not to be outside during a hail storm, so deaths due to hail in North America are rare, but animals can be killed. Indoors is the safest place for people in a hail storm, although inside a metal-roofed vehicle is also relatively safe. Stay away from windows, which can break.

The terminal fall velocity of hail increases with hailstone size, and can reach magnitudes greater than 50 m s–1 for large hailstones. An equation for hailstone terminal velocity was given in the Precipitation chapter, and a graph of it is shown here in Fig. 15.4. Hailstones have different shapes (smooth and round vs. irregular shaped with protuberances) and densities (average ρice is 900 kg m–3, but this varies depending on the amount of air bubbles). This causes a range of air drags (0.4 to 0.8, with average 0.55) and a corresponding range of terminal fall speeds. Hailstones that form in the updraft vault region of a supercell thunderstorm are so heavy that most fall immediately adjacent to the vault (Fig. 15.5).

Figure 15.4 Hailstone fall-velocity magnitude relative to the air at pressure height of 50 kPa, assuming an air density of 0.69 kg m–3.

15.1.2.2. Hail Formation

Two stages of hail development are embryo formation, and then hailstone growth. A hail embryo is a large frozen raindrop or graupel particle (< 5 mm diameter) that is heavy enough to fall at a different speed than the surrounding smaller cloud droplets. It serves as the nucleus of hailstones. Like all normal (non-hail) precipitation, the embryo first rises in the updraft as a growing cloud droplet or ice crystal that eventually becomes large enough (via collision and accretion, as discussed in the Precipitation chapter) to begin falling back toward Earth.

While an embryo is being formed, it is still so small that it is easily carried up into the anvil and out of the thunderstorm, given typical severe thunderstorm updrafts of 10 to 50 m s–1. Most potential embryos are removed from the thunderstorm this way, and thus cannot then grow into hailstones.

The few embryos that do initiate hail growth are formed in regions where they are not ejected from the storm, such as: (1) outside of the main updraft in the flanking line of cumulus congestus clouds or in other smaller updrafts, called feeder cells; (2) in a side eddy of the main updraft; (3) in a portion of the main updraft that tilts upshear; or (4) earlier in the evolution of the thunderstorm while the main updraft is still weak. Regardless of how they are formed, it is believed that the embryos then move or fall into the main updraft of the severe thunderstorm a second time.

Figure 15.5 Plan view of classic (CL) supercell in the N. Hemisphere (copied from the Thunderstorm chapter). Low altitude winds are shown with light-colored arrows, high altitude with darker arrows, and ascending/descending air with dashed lines. T indicates tornado location. Precipitation is at the ground. Cross section A-B is used in Fig. 15.10.
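Returning to the fall speeds of Fig. 15.4: the terminal-velocity equation itself lives in the Precipitation chapter, but a minimal drag-balance sketch (weight = aerodynamic drag on a sphere, which is an assumed form here, not the book's exact equation) reproduces the magnitudes using the density, drag, and air-density values quoted above:

```python
import math

rho_ice = 900.0   # average hailstone density (kg/m^3), from the text
rho_air = 0.69    # air density at 50 kPa (kg/m^3), from Fig. 15.4
C_D = 0.55        # average drag coefficient, from the text
g = 9.8           # gravitational acceleration (m/s^2)

def terminal_velocity(d):
    """Sphere falling at terminal speed: weight equals drag,
    (pi/6) d^3 rho_ice g = 0.5 C_D rho_air v^2 (pi/4) d^2,
    so v = sqrt(4 g d rho_ice / (3 C_D rho_air))."""
    return math.sqrt(4 * g * d * rho_ice / (3 * C_D * rho_air))

for d_cm in (1, 2, 5, 10):
    print(f"d = {d_cm:2d} cm: v ~ {terminal_velocity(d_cm/100):.0f} m/s")
```

Consistent with the text, speeds exceed 50 m s–1 only for the largest stones.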
The hailstone grows during this second trip through the updraft. Even though the embryo is initially rising in the updraft, the smaller surrounding supercooled cloud droplets are rising faster (because their terminal fall velocity is slower), and collide with the embryo. Because of this requirement for abundant supercooled cloud droplets, hail forms at altitudes where the air temperature is between –10 and –30°C. Most growth occurs while the hailstones are floating in the updraft while drifting horizontally across the updraft in a narrow altitude range having temperatures of –15 to –20°C.

In pockets of the updraft happening to have relatively low liquid water content, the supercooled cloud droplets can freeze almost instantly when they hit the hailstone, trapping air in the interstices between the frozen droplets. This results in a porous, brittle, white layer around the hailstone. In other portions of the updraft having greater liquid water content, the water flows around the hail and freezes more slowly, resulting in a hard clear layer of ice. The result is a hailstone with 2 to 4 visible layers around the embryo (when the hailstone is sliced in half, as sketched in Fig. 15.6), although most hailstones are small and have only one layer. Giant hail can have more than 4 layers.

Figure 15.6 Illustration of slice through a hailstone, showing a graupel embryo surrounded by 4 layers of alternating clear ice (indicated with blue shading) and porous (white) ice.

As the hailstone grows and becomes heavier, its terminal velocity increases and eventually surpasses the updraft velocity in the thunderstorm. At this point, it begins falling relative to the ground, still growing on the way down through the supercooled cloud droplets. After it falls into the warmer air at low altitude, it begins to melt. Almost all strong thunderstorms have some small hailstones, but most melt into large rain drops before reaching the ground. Only the larger hailstones (with more frozen mass and quicker descent through the warm air) reach the ground still frozen as hail (with diameters > 5 mm).

Sample Application
If a supercooled cloud droplet of radius 50 µm and temperature –20°C hits a hailstone, will it freeze instantly? If not, how much heat must be conducted out of the droplet (to the hailstone and the air) for the droplet to freeze?

Find the Answer
Given: r = 50 µm = 5x10^-5 m, T = –20°C
Find: ∆QE = ? J , ∆QH = ? J, Is ∆QE < ∆QH ? If no, then find ∆QE – ∆QH .

Use latent heat and specific heat for liquid water, from Appendix B. Assume a spherical droplet of mass mliq = ρliq·Vol = ρliq·(4/3)·π·r^3 = (1000 kg m–3)·(4/3)·π·(5x10^-5 m)^3 = 5.2x10^-10 kg

Use eq. (3.3) to determine how much heat must be removed to freeze the whole droplet (∆m = mliq): ∆QE = Lf ·∆m = (3.34x10^5 J kg–1)·(5.2x10^-10 kg) = 1.75x10^-4 J .

Use eq. (3.1) to find how much can be taken up by allowing the droplet to warm from –20°C to 0°C: ∆QH = mliq·Cliq ·∆T = (5.2x10^-10 kg)·[4217.6 J (kg·K)–1]·[0°C–(–20°C)] = 0.44x10^-4 J .

Thus ∆QE > ∆QH , so the sensible-heat deficit associated with –20°C is not enough to compensate for the latent heat of fusion needed to freeze the drop. The droplet will NOT freeze instantly. The amount of heat remaining to be conducted away to the air or the hailstone to allow freezing is: ∆Q = ∆QE – ∆QH = (1.75x10^-4 J) – (0.44x10^-4 J) = 1.31x10^-4 J .

Check: Units OK. Physics OK.

Exposition: During the several minutes needed to conduct away this heat, the liquid can flow over the hailstone before freezing, and some air can escape. This creates a layer of clear ice.
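A minimal sketch reproducing the heat budget in the sample application above, with the constants quoted there:

```python
import math

r = 50e-6         # droplet radius (m)
rho_liq = 1000.0  # density of liquid water (kg/m^3)
L_f = 3.34e5      # latent heat of fusion (J/kg)
C_liq = 4217.6    # specific heat of liquid water (J kg^-1 K^-1)
dT = 20.0         # warming available, from -20 C up to 0 C (K)

m = rho_liq * (4/3) * math.pi * r**3   # droplet mass (kg)
Q_E = L_f * m                          # heat that freezing must release
Q_H = m * C_liq * dT                   # heat absorbed while warming to 0 C

print(f"m = {m:.2e} kg, Q_E = {Q_E:.2e} J, Q_H = {Q_H:.2e} J")
if Q_E > Q_H:
    print(f"no instant freeze: {Q_E - Q_H:.2e} J must be conducted away")
```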
Forecasting large-hail potential later in the day is directly tied to forecasting the maximum updraft velocity in thunderstorms, because only in the stronger updrafts can the heavier hailstones be kept aloft against their terminal fall velocities (Fig. 15.4). CAPE is an important parameter in forecasting updraft strength, as was given in eqs. (14.7) and (14.8) of the Thunderstorm chapter. Furthermore, since it takes about 40 to 60 minutes to create hail (including both embryo and hail formation), large hail would be possible only from long-lived thunderstorms, such as supercells that have relatively steady organized updrafts (which can exist only in an environment with appropriate wind shear). However, even if all these conditions are satisfied, hail is not guaranteed. So national forecast centers in North America do not issue specific hail watches, but include hail as a possibility in severe thunderstorm watches and warnings.

Figure 15.7 Shaded pink is the portion of CAPE area between altitudes where the environment is between –10 and –30°C. Greater areas indicate greater hail likelihood.

To aid in hail forecasting, meteorologists sometimes look at forecast maps of the portion of CAPE between altitudes where the environmental air temperature is –30 ≤ T ≤ –10°C, such as sketched in Fig. 15.7. Larger values (on the order of 400 J kg–1 or greater) of this portion of CAPE are associated with more rapid hail growth. Computers can easily calculate this portion of CAPE from soundings produced by numerical forecast models, such as for the case shown in Fig. 15.8. Within the shaded region of large CAPE on this figure, hail would be forecast at only those subsets of locations where thunderstorms actually form.

Figure 15.8 Portion of CAPE (J kg–1) between altitudes where the environment is between –10 and –30°C. Larger values indicate chance of greater hail growth rates. Case: 22 UTC on 24 May 2006 over the USA and Canada.

Weather maps of freezing-level altitude and wind shear between 0 to 6 km are also used by hail forecasters. More of the hail will reach the ground without melting if the freezing level is at a lower altitude. Environmental wind shear enables longer-duration supercell updrafts, which favor hail growth.

Research is being done to try to create a single forecast parameter that combines many of the factors favorable for hail. One example is the Significant Hail Parameter (SHIP):

\(\begin{align} \mathrm{SHIP} = \left\{ \mathrm{MUCAPE}\,(\mathrm{J\,kg^{-1}}) \cdot r_{\mathrm{MUP}}\,(\mathrm{g\,kg^{-1}}) \cdot \gamma_{70\text{--}50\,\mathrm{kPa}}\,({}^{\circ}\mathrm{C\,km^{-1}}) \cdot \left[-T_{50\,\mathrm{kPa}}\,({}^{\circ}\mathrm{C})\right] \cdot \mathrm{TSM}_{0\text{--}6\,\mathrm{km}}\,(\mathrm{m\,s^{-1}}) \right\} / a \tag{15.4}\end{align}\)

where rMUP is the water vapor mixing ratio for the most-unstable air parcel, γ70-50kPa is the average environmental lapse rate between pressure heights 70 and 50 kPa, T50kPa is the temperature at a pressure height of 50 kPa, TSM0-6km is the total shear magnitude between the surface and 6 km altitude, and empirical parameter a = 44x106 (with dimensions equal to those shown in the numerator of the equation above, so as to leave SHIP dimensionless). This parameter typically ranges from 0 to 4 or so. If SHIP > 1, then the prestorm environment is favorable for significant hail (i.e., hail diameters ≥ 5 cm). Significant hail is frequently observed when SHIP ≥ 1.5.
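Eq. (15.4) reduces to a one-line function; the variable names below are ours, not the chapter's:

```python
def ship(mucape, r_mup, lapse_70_50, t_50kpa, tsm_0_6km, a=44.0e6):
    """Significant Hail Parameter, eq. (15.4); dimensionless."""
    return mucape * r_mup * lapse_70_50 * (-t_50kpa) * tsm_0_6km / a

# Sample Application values: MUCAPE = 3000 J/kg, r_MUP = 14 g/kg,
# lapse rate = 5 C/km, T(50 kPa) = -10 C, 0-6 km total shear = 45 m/s
print(round(ship(3000, 14, 5, -10, 45), 2))   # -> 2.15, favorable for significant hail
```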
Fig. 15.9 shows a weather map of SHIP for the 22 UTC 24 May 2006 case study.

Figure 15.9 Values of significant hail parameter (SHIP) over the USA for the same case as the previous figure. This parameter is dimensionless.

Nowcasting (forecasting 1 to 30 minutes ahead) large hail is aided with weather radar: Large hailstones cause very large radar reflectivity (order of 60 to 70 dBZ) compared to the maximum possible from very heavy rain (up to 50 dBZ). Some radar algorithms diagnose hail when they find reflectivities ≥ 40 dBZ at altitudes where temperatures are below freezing, with greater chance of hail for ≥ 50 dBZ at altitudes above the –20°C level. Doppler velocities can show if a storm is organized as a supercell, which is statistically more likely to support hail. Polarimetric methods (see the Satellites & Radar chapter) allow radar echoes from hail to be distinguished from echoes from rain or smaller ice particles.

The updrafts in some supercell thunderstorms are so strong that only small cloud droplets exist, causing weak (<25 dBZ) radar reflectivity, and resulting in a weak-echo region (WER) on the radar display. Sometimes the WER is surrounded on the top and sides by strong precipitation echoes, causing a bounded weak-echo region (BWER), also known as an echo-free vault. This enables very large hail, because embryos falling from the upshear side of the bounding precipitation can re-enter the updraft, thereby efficiently creating hail (Fig. 15.10).

Sample Application. Suppose a pre-storm environmental sounding has the following characteristics over a corn field: MUCAPE = 3000 J kg–1, rMUP = 14 g kg–1, γ70-50kPa = 5 °C km–1, T50kPa = –10°C, TSM0-6km = 45 m s–1. If a thunderstorm forms in this environment, would significant hail (with diameters ≥ 5 cm) be likely?

Given: values listed above. Find: SHIP = ?

Use eq. (15.4): SHIP = [ (3000 J kg–1) · (14 g kg–1) · (5 °C km–1) · (10°C) · (45 m s–1) ] / (44x106) = 94.5x106 / (44x106) = 2.15

Check: Units are dimensionless. Value reasonable. Exposition: Because SHIP is much greater than 1.0, significant (tennis ball size or larger) hail is indeed likely. This would likely totally destroy the corn crop. Because hail forecasting has so many uncertainties and often short lead times, the farmers don't have time to take action to protect or harvest their crops. Thus, their only recourse is to purchase crop insurance.

The hail that does fall often falls closest to the main updraft (Figs. 15.5 & 15.10), and the resulting hail shaft (the column of falling hailstones below cloud base) often looks white or invisible to observers on the ground. Most hail falls are relatively short lived, causing small (10 to 20 km long, 0.5 to 3 km wide) damage tracks called hailstreaks. Sometimes long-lived supercell thunderstorms can create longer hailswaths of damage 8 to 24 km wide and 160 to 320 km long. Even though large hail can be extremely damaging, the mass of water in hail at the ground is typically only 2 to 3% of the mass of rain from the same thunderstorm.

In the USA, most giant hail reaching the ground is found in the central and southern plains, centered in Oklahoma (averaging 6 to 9 giant-hail days yr–1), and extending from Texas north through Kansas and Nebraska (3 or more giant-hail days yr–1). Hail is also observed less frequently (1 to 3 giant-hail days yr–1) eastward across the Mississippi valley and into the southern and mid-Atlantic states.
Although hail is less frequent in Canada than in the USA, significant hail falls are found in Alberta between the cities of Calgary and Edmonton, particularly near the town of Red Deer. Hail is also found in central British Columbia, and in the southern prairies of Saskatchewan and Manitoba. In the S. Hemisphere, hail falls often occur over eastern Australia. The 14 April 1999 hailstorm over Sydney caused an estimated AUS$ 2.2 billion in damage, the second largest weather-related damage total on record for Australia. Hailstorms have been observed over North and South America, Europe, Australia, Asia, and Africa.

Figure 15.10 Vertical cross section through a classic supercell thunderstorm along slice A-B from Fig. 15.5. Thin dashed line shows visible cloud boundary, and colors mimic the intensity of precipitation as viewed by radar. BWER = bounded weak echo region of supercooled cloud droplets. Grey triangle represents graupel on the upshear side of the storm, which can fall (dotted line) and re-enter the updraft to serve as a hail embryo. Thick dashed orange line is the tropopause. Isotherms are thin solid horizontal black lines. Curved thick black lines with arrows show air flow.

Attempts at hail suppression (mitigation) have generally been unsuccessful, but active hail-suppression efforts still continue in most continents to try to reduce crop damage. Five approaches have been suggested for suppressing hail, all of which involve cloud seeding (adding particles into clouds to serve as additional or specialized hydrometeor nuclei), which is difficult to do with precision:

• beneficial competition - to create larger numbers of embryos that compete for supercooled cloud water, thereby producing larger numbers of smaller hailstones (that melt before reaching the ground). The methods are cloud seeding with hygroscopic (attracts water; e.g., salt particles) cloud nuclei (to make larger rain drops that then freeze into embryos), or seeding with glaciogenic (makes ice; e.g., silver iodide particles) ice nuclei to make more graupel.

• early rainout - to cause precipitation in the cumulus congestus clouds of the flanking line, thereby reducing the amount of cloud water available before the updraft becomes strong enough to support large hail. The method is seeding with ice nuclei.

• trajectory altering - to cause the embryos to grow to greater size earlier, thereby following a lower trajectory through the updraft where the temperature or supercooled liquid water content is not optimum for large hail growth. This method attempts to increase rainfall (in drought regions) while reducing hail falls.

• dynamic effects - to consume more CAPE earlier in the life cycle of the updraft (i.e., in the cumulus congestus stage), thereby leaving less energy for the main updraft, causing it to be weaker (and supporting only smaller hail).

• glaciation of supercooled cloud water - to more quickly convert the small supercooled cloud droplets into small ice crystals that are less likely to stick to hail embryos and are more likely to be blown to the top of the storm and out via the anvil. This was the goal of most of the early attempts at hail suppression, but has lost favor as most hail suppression attempts have failed.

INFO • Hail Suppression

For many years there has been a very active cloud seeding effort near the town of Red Deer, Alberta, Canada, in the hopes of suppressing hail. These activities were funded by some of the crop-insurance companies, because their clients, the farmers, demanded that something be done.
Although the insurance companies knew that there is little solid evidence that hail suppression actually works, they funded the cloud seeding anyway as a public-relations effort. The farmers appreciated the efforts aimed at reducing their losses, and the insurance companies didn't mind because the cloud-seeding costs were ultimately borne by the farmers via increased insurance premiums.

Sample Application. During cloud seeding, how many silver iodide particles need to be introduced into a thunderstorm to double the number of ice nuclei? Assume the number density of natural ice nuclei is 10,000 per cubic meter.

Given: n_ice nuclei/Volume = 10,000 m–3. Find: Ntotal = ? total count of introduced nuclei.

To double ice nuclei, the count of introduced nuclei must equal the count of natural nuclei: Ntotal = (n_ice nuclei/Volume) · Volume. Estimate the volume of a thunderstorm above the freezing level. Assume the freezing level is at 3 km altitude, and the anvil top is at 12 km. Approximate the thunderstorm by a box of bottom surface area 12 x 12 km, and height 9 km (= 12 – 3). Volume ≈ 1300 km3 = 1.3x1012 m3. Thus Ntotal = (n_ice nuclei/Volume) · Volume = (10,000 m–3) · (1.3x1012 m3) = 1.3x1016.

Exposition: Cloud seeding is often done by an aircraft. For safety reasons, the aircraft doesn't usually fly into the violent heart of the thunderstorm. Instead, it flies under the rain-free portion of cloud base, releasing the silver iodide particles into the updraft in the hopes that the nuclei get to the right part of the storm at the right time. It is not easy to do this correctly, and even more difficult to confirm if the seeding caused the desired change. Seeding thunderstorms is an uncontrolled experiment, and one never knows how the thunderstorm would have changed without the seeding.
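The seeding estimate in the Sample Application above is a single multiplication; a sketch for arbitrary storm dimensions:

```python
def seeding_count(n_per_m3=1.0e4, width_km=12.0, depth_km=12.0, height_km=9.0):
    """Ice nuclei needed to double the natural count within a box-shaped storm volume."""
    volume_m3 = width_km * depth_km * height_km * 1.0e9   # km^3 -> m^3
    return n_per_m3 * volume_m3

print(f"{seeding_count():.1e}")   # -> ~1.3e+16 silver iodide particles
```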
PD-1 suppresses TCR-CD8 cooperativity during T-cell antigen recognition

Kaitao Li, Zhou Yuan, Jintian Lyu, Eunseon Ahn, Simon J. Davis, Rafi Ahmed & Cheng Zhu

Nature Communications volume 12, Article number: 2746 (2021)

Despite the clinical success of blocking its interactions, how PD-1 inhibits T-cell activation is incompletely understood, as exemplified by its potency far exceeding what might be predicted from its affinity for PD-1 ligand-1 (PD-L1). This may be partially attributed to PD-1's targeting the proximal signaling of the T-cell receptor (TCR) and co-stimulatory receptor CD28 via activating Src homology region 2 domain-containing phosphatases (SHPs). Here, we report PD-1 signaling regulates the initial TCR antigen recognition manifested in a smaller spreading area, fewer molecular bonds formed, and shorter bond lifetime of T cell interaction with peptide-major histocompatibility complex (pMHC) in the presence than absence of PD-L1 in a manner dependent on SHPs and Leukocyte C-terminal Src kinase. Our results identify a PD-1 inhibitory mechanism that disrupts the cooperative TCR–pMHC–CD8 trimolecular interaction, which prevents CD8 from augmenting antigen recognition, explaining PD-1's potent inhibitory function and its value as a target for clinical intervention.

Despite the great success of targeting programmed cell death 1 (PD-1) or PD-1 ligand 1 (PD-L1) to modulate T cell functions for immunotherapy1,2,3, the mechanisms of how PD-1 suppresses antigen-specific T cell responses are not fully understood. PD-1 signaling is initiated by the phosphorylation of its immunoreceptor tyrosine-based inhibitory motif (ITIM) and immunoreceptor tyrosine-based switch motif (ITSM) when PD-1 and the T-cell receptor (TCR) are co-engaged with their respective ligands.
This leads to the recruitment and activation of Src homology region 2 domain-containing phosphatases (SHPs), which attenuates the phosphorylation-dependent signaling cascades downstream of the TCR and the co-stimulatory receptor CD28, and thereby suppresses cellular functions such as activation, proliferation, metabolic regulation, cytotoxicity, and cytokine production4,5,6,7,8. Titration of PD-1 expression on T cells demonstrated its potent inhibitory signaling, such that even very low PD-1 expression is able to inhibit TNF-α and IL-2 production as well as T-cell proliferation9. The potency exceeds what might be predicted from its ligand-binding affinity, which is 1–2 log lower than that of B7-1 interacting with cytotoxic T-lymphocyte-associated protein 4 (CTLA-4), another inhibitory receptor of the same family10,11. Such high potency might be partially attributed to PD-1's inhibition of the early activating signals such as the phosphorylation of CD3ζ and ZAP706,8,12, the immediate steps following TCR triggering, and the phosphorylation of CD2813. Yet, it remains unclear whether PD-1 could function at even earlier stages to perturb the initial antigen recognition process. T-cell antigen recognition requires the specific interaction with the cognate peptide-major histocompatibility complex (pMHC) by the TCR, which leads to the phosphorylation of CD3 immunoreceptor tyrosine-based activation motifs (ITAMs) and a variety of proximal signaling molecules14. Far more complex than kinetic analysis in fluid phase using purified proteins, which reports primarily the effect of the physiochemical properties of the intermolecular interface, in situ measurement of interactions across the cell-cell junction reveals highly dynamic binding characteristics under force-free or force-loaded conditions15,16,17,18,19,20,21,22,23. Moreover, intercellular interactions are regulated by both cell-intrinsic and -extrinsic factors from the local environment that tune the antigen recognition process15,16,20. T-cell antigen recognition is greatly enhanced by the engagement of the co-receptor CD8 with pMHC, which stabilizes the TCR–pMHC complex and keeps the Leukocyte C-terminal Src kinase (Lck) in the proximity of TCR-CD3. Underlying this enhancement is the TCR-signaling-induced upregulation of CD8 binding and formation of cooperative TCR–pMHC–CD8 trimolecular bonds, most likely via the recruitment of CD8 to TCR-CD317,24,25,26 through a Lck bridge27,28,29,30,31. This "inside-out" signaling augmentation suggests an additional potential mechanism to regulate T-cell antigen recognition. In this study, we analyzed the effect of PD-1 on T-cell antigen recognition, utilizing P14 TCR-transgenic CD8+ T cells and the cognate antigenic peptide Lymphocytic Choriomeningitis Virus (LCMV) gp33-41 bound to H2-Db MHC. We observed reduced pMHC-mediated cell spreading when co-engaging PD-1 with PD-L1 despite more ligands being presented on the surface, suggesting PD-1 suppression of T-cell interacting with its antigen. In situ kinetic analysis of T cell interactions with pMHC, PD-L1, or both separately and concurrently further demonstrated fewer numbers and shorter lifetimes of bonds formed concurrently with both ligands than the sum of bonds formed separately with each, revealing "negative cooperativity", a phenomenon opposite to synergy. 
Furthermore, the negative cooperativity depended on SHP and Lck activities, extracellular pMHC–CD8 binding, and intracellular CD8–Lck association, suggesting that PD-1 primarily disrupts the positive cooperativity between TCR and CD8 during antigen recognition. Moreover, the negative cooperativity impaired CD8 augmentation of downstream Ca2+ signaling. These data reveal a mechanism in which PD-1 fine-tunes antigen recognition via "inside-out" negative feedback.

PD-1 suppresses T cell spreading on pMHC surface

We first examined T cell spreading on coverslips functionalized by pMHC with or without PD-L1 (Fig. 1). CD8+ T cells from PDCD1−/− P14 TCR-transgenic mice re-expressing PD-1 or vehicle spread on gp33:H2-Db (but not BSA) surface (Fig. 1a–c), indicating that the antigen recognition machinery alone is able to support spreading without adhesion molecules (e.g. integrins) used in most immunological synapse (IS) studies32. Spreading area on PD-L1 was much smaller, but above background as controlled using BSA surface or T cells expressing vehicle (Fig. 1a–c). Although each ligand (gp33:H2-Db or PD-L1) alone was able to mediate cell spreading individually, when both ligands were co-presented the spreading area was greatly reduced for PD-1-expressing cells but unchanged for vehicle-expressing cells (Fig. 1a–c). Although the total ligand density in the co-presentation case was the sum of those in the individual coatings, the cell spreading area was much smaller than that on gp33:H2-Db alone. This effect was further confirmed for activated CD8+ T cells from wild-type (WT) P14 mice, which express endogenous PD-1 (Fig. 1d, e). In contrast, spreading areas on surfaces co-presenting gp33:H2-Db with Intercellular Adhesion Molecule-1 (ICAM-1) were larger than those on surfaces presenting gp33:H2-Db or ICAM-1 alone (Fig. 1f, g), consistent with the previous reports of ICAM-1 enhancement of Jurkat cell spreading on anti-CD3 surface33 and of the positive cooperativity between TCR and LFA-1 in binding to their respective ligands34. Together these data suggest that PD-1 suppresses the antigen recognition process that relies on binding of the TCR-CD8 axis to the cognate pMHC, an opposite effect to LFA-1 enhancement of IS formation.

Fig. 1: PD-1 inhibits T-cell spreading on surface co-presenting pMHC and PD-L1. a Representative images by reflection interference contrast microscopy (RICM). PDCD1−/− P14 transgenic CD8+ T cells re-expressing PD-1 (left column) or vehicle (right column) were added onto glass coverslip coated with BSA (1st row), gp33:H2-Db (2nd row), gp33:H2-Db and PD-L1 (3rd row), or OVA:H2-Kb and PD-L1 (4th row). Cell spreading was imaged by RICM 20 min after their addition. b Quantification of a showing reduced spreading of PD-1-expressing cells on surfaces co-presenting gp33:H2-Db and PD-L1 (n = 31, 31, 32, and 31 cells). c Quantification of a showing no change in spreading area of vehicle-expressing cells on surfaces co-presenting gp33:H2-Db and PD-L1 (n = 28, 37, 68, and 27 cells). d Representative RICM images showing the spreading of in vitro activated wild-type P14 transgenic CD8+ T cells expressing endogenous PD-1 20 min after adding onto glass coverslip coated with gp33:H2-Db (top) or gp33:H2-Db and PD-L1 (bottom). e Quantification of d showing reduced spreading of endogenous PD-1-expressing cells on surfaces co-presenting gp33:H2-Db and PD-L1 (n = 56 and 56 cells).
f Representative RICM images showing the spreading of in vitro activated wild-type P14 transgenic CD8+ T cells 20 min after adding onto glass coverslip coated with BSA, gp33:H2-Db, gp33:H2-Db and ICAM-1, or OVA:H2-Kb and ICAM-1 (indicated). g Quantification of f showing larger spreading area on surfaces co-presenting gp33:H2-Db and ICAM-1 than on gp33:H2-Db or ICAM-1 surfaces alone (n = 62, 83, 61, and 56 cells). Data are presented by box-whisker plots with the center line labels median, the box contains the two middle quantiles, and the whiskers mark the min and the max. p values were calculated using Mann–Whitney test by comparing each group with BSA control or two groups as labeled.

PD-1 reduces T cell bond formation with pMHC

To define the mechanisms causing T cells to spread less on pMHC and PD-L1 than on pMHC alone, we analyzed two-dimensional (2D) interactions between a T cell and pMHC, PD-L1, or both presented by a human red blood cell (RBC) as a surrogate antigen-presenting cell (APC) using the adhesion frequency assay11,35,36. An activated P14 T cell was aspirated by a micropipette (Fig. 2a, right) to interact in random sequence with three RBCs coated with known densities of gp33:H2-Db, PD-L1, or both, aspirated by another micropipette (Fig. 2a, left). For each RBC, the T cell was driven by a piezoelectric motor to make 5-s contact cycles repeatedly. The outcome of each cycle is a binary score: 1 or 0 for adhesion or no adhesion (as detected by the presence or absence of RBC elongation when the two cells were separated). An adhesion frequency (Pa = average adhesion score) was obtained for each RBC after 30–50 cycles of T cell contacts, and three Pa values were obtained for each T cell (Fig. 2a, b). The readout is highly specific, as coating streptavidin (SA) alone or replacing the cognate peptide gp33 with gp276 on H2-Db abolished RBC binding to P14 T cells (Supplementary Fig. 1). We calculated the average number of bonds per contact <n> from each Pa (= 1 – e^–<n>) using a logarithmic transformation (Fig. 2a), because for adhesions mediated by dual species molecular interactions, the <n>, but not Pa, values of the two species are additive, provided that the two ligand species interact with their respective receptors independently rather than cooperatively37,38,39. As such, cooperativity of T cell binding to pMHC and PD-L1 could be detected by comparing the bond number measured from the dual-ligand RBC to the sum of bond numbers measured from the two single-ligand RBCs after matching the ligand site densities (Fig. 2a, c). Strikingly, the measured dual-species bond number was significantly smaller than the sum of single-species bond numbers, such that the "whole" was ~20% smaller than the sum of the "parts" (Δ<n>/<n>pred, Fig. 2c, f), resembling the observation in the cell spreading assay. We call this "negative cooperativity" because it is the inverse of the positive cooperativity between TCR and CD8, as well as between TCR and LFA-1, where dual-species bonds are more than the sum of single-species bonds17,24,34. Importantly, shortening the contact time to 0.5 s substantially reduced the adhesion frequency to PD-L1 and abolished the negative cooperativity (Fig. 2d–f), similar to the previously observed elimination of positive cooperativity between TCR and CD8 for pMHC binding17,24 and between TCR–pMHC binding and LFA-1–ICAM-1 binding by reducing the contact time34.
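The bond-number bookkeeping described above fits in a few lines. In this sketch the three adhesion frequencies are hypothetical values chosen only to illustrate a ~20% deficit; the transformation <n> = –ln(1 – Pa) and the additive prediction follow the text:

```python
import math

def bonds_per_contact(pa):
    """Average bond number <n> from adhesion frequency: Pa = 1 - exp(-<n>)."""
    return -math.log(1.0 - pa)

# Hypothetical adhesion frequencies for one T cell vs. three RBCs (illustrative only)
n_pmhc = bonds_per_contact(0.55)        # RBC bearing gp33:H2-Db
n_pdl1 = bonds_per_contact(0.30)        # RBC bearing PD-L1
n_mix  = bonds_per_contact(0.60)        # RBC bearing both ligands

n_pred = n_pmhc + n_pdl1                # additive prediction for independent binding
delta = (n_mix - n_pred) / n_pred       # normalized change, d<n>/<n>_pred
print(f"predicted {n_pred:.2f}, measured {n_mix:.2f}, cooperativity {delta:+.0%}")
```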
This suggests no direct physical interference due to ligand co-presentation, but instead a temporal requirement most likely involving the crosstalk of these two signaling axes. Moreover, PD-1-deficient P14 T cells showed negligible adhesion frequencies to PD-L1 RBCs (Fig. 2g) and formed indistinguishable numbers of bonds with RBCs coated with gp33:H2-Db alone and both gp33:H2-Db and PD-L1 (Fig. 2h), further confirming that the suppressed binding was mediated by PD-1 (Fig. 2i). We also found similar normalized bond numbers of CD8+ T cells from P14 mice infected with LCMV strain Armstrong (5 days post infection) or Clone 13 (5 or 8 days post infection) to RBCs bearing PD-L1 (Supplementary Fig. 2a) or gp33:H2-Db (Supplementary Fig. 2b), yet fewer bonds to dual-ligand RBCs than the sum of bonds to single-ligand RBCs for all conditions (Supplementary Fig. 2c), suggesting PD-1-mediated negative cooperativity for in vivo activated T cells during responses to antigen.

Fig. 2: 2D kinetic analysis of negative cooperativity. a Schematic of the micropipette assay (top) and the molecules to be analyzed (middle). A T cell expressing TCR, CD8 and PD-1 (right) was tested against three RBCs bearing pMHC, PD-L1, or both (mix) in random order (left) to generate three adhesion frequencies (Pa's), one for each RBC after 30-50 repeated touches with the T cell. The bottom row shows the workflow of using the three Pa's to determine the bond numbers (<n>'s) and the differential bond number (Δ<n>). b, d Representative Pa's of individual in vitro activated P14 CD8+ T cells binding to RBCs coated with gp33:H2-Db, PD-L1, or both at 5-s (b) or 0.5-s (d) contact time. Data points measured by testing the same T cell against three RBCs bearing different ligands are connected by a dashed line. c, e Comparisons of predicted and measured <n>'s from b (c) and d (e), respectively. f Normalized change of bond number (Δ<n>/<n>pred). The reduction of bond formation observed at 5-s contact tests (n = 17 cells) was abolished when the contact time was shortened to 0.5 s (n = 8 cells). g Representative Pa's of individual activated PDCD1−/− P14 CD8+ T cells to RBCs bearing gp33:H2-Db, PD-L1, or both at 5-s contacts. h Comparison of predicted and measured <n>'s from g. i Δ<n>/<n>pred showing that the reduction of bond formation observed when PD-1 expressing cells were used was abolished when PDCD1−/− cells were used (n = 16 cells). Data are presented by box-whisker plots with the center line labels median, the box contains the two middle quantiles, and the whiskers mark the min and the max. p values were calculated using paired Student's t test (c, e, h) or Mann–Whitney test (f, i).

The negative cooperativity depends on signaling of both PD-1 and TCR

To investigate the intracellular underpinnings of the negative cooperativity involving dual ligand binding to the extracellular domains of PD-1 and the TCR-CD8 axis, we perturbed PD-1 signaling by inhibiting SHP1 and SHP2 with NSC87877. No difference was observed in the number of bonds between NSC87877-treated and untreated T cells formed with RBCs bearing PD-L1 (Fig. 3a, normalized by the PD-1 and PD-L1 densities) or gp33:H2-Db (Fig. 3b, normalized by the TCR and pMHC densities), indicating that the separate binding of T cells to each ligand requires no SHP1 or SHP2 activity. However, NSC87877 treatment eliminated the negative cooperativity, as T cells formed as many bonds with RBCs bearing both ligands as the sum of bonds formed with RBCs bearing each ligand (Fig. 3c, d), indicating the dependence of negative cooperativity on SHP-mediated PD-1 signaling.
Fig. 3: Negative cooperativity depends on signaling of both PD-1 and TCR. a, b Comparisons of normalized numbers of single-ligand bonds formed between PD-L1 (a, n = 42 and 14 cells) or pMHC (b, n = 27 and 14 cells) coated RBCs and in vitro activated T cells that were not treated (NT) or treated with 20 µM SHP1 and SHP2 inhibitor NSC87877 (SHP In). c Representative results comparing predicted and measured <n>'s determined from binding of NSC87877-treated T cells to RBCs bearing pMHC, PD-L1, or both. d Comparison of normalized bond reduction (Δ<n>/<n>pred) between the not treated and NSC87877-treated groups (n = 17 and 14 cells). e–h Similar to a–d except that the not treated group was replaced by the DMSO group and NSC87877 was replaced by the Lck inhibitor (2 µM). n = 14 and 15 cells in e. n = 14 and 16 cells in f. n = 14 and 16 cells in h. Data are presented by box-whisker plots with the center line labels median, the box contains the two middle quantiles, and the whiskers mark the min and the max. p values were calculated using paired t-test (c, g) or Mann–Whitney test (the rest).

To examine the interplay between activating and inhibitory signals of TCR and PD-1 manifested at the level of ligand-binding, we analyzed cells treated with the Lck kinase inhibitor 7-Cyclopentyl-5-(4-phenoxyphenyl)-7H-pyrrolo[2,3‑d]pyrimidin-4-ylamine26,40,41. As expected, inhibition of Lck had no effect on PD-1–PD-L1 binding (Fig. 3e) but significantly reduced T cell binding to gp33:H2-Db-coated RBCs by ~47% (Fig. 3f), most likely due to the loss of Lck-dependent TCR-CD8 positive cooperativity as we previously reported17,24. Inhibiting Lck also abolished the negative cooperativity of T-cell binding to RBCs bearing dual ligand species (Fig. 3g). The requirement of both PD-1 inhibitory signals (Fig. 3d) and TCR activating signals (Fig. 3h) for the negative cooperativity rules out a simple linear model for the dual-ligand binding of T cells. Instead, these data suggest an "inside-out" signaling mechanism, by which concurrent ligand binding of PD-1 and the TCR-CD8 axis is regulated by the interplay of the signals they each trigger. It is also consistent with the previous observation that PD-1 phosphorylation and its subsequent SHP2 recruitment can be greatly enhanced by co-engagement of PD-1 and TCR with their respective ligands8.

The negative cooperativity requires CD8 binding to pMHC

Since the readouts of the negative cooperativity – cell spreading and 2D binding – are based on receptor–ligand interactions, it begs the question as to which molecular interaction is suppressed. It seems unlikely that the PD-1–PD-L1 interaction is suppressed, since PD-1 ligand-binding was not affected by inhibition of the key phosphatases SHP1 and SHP2 (Fig. 3a) or the kinase Lck (Fig. 3e). On the other hand, the synergistic binding of P14 TCR and CD8 to gp33:H2-Db is sensitive to Lck activity (Fig. 3f). Previous studies from our group and others have demonstrated that cooperative binding of TCR and CD8 to cognate pMHC is a highly dynamic process with positive feedback relying on Lck and the signal triggered by TCR–pMHC binding17,24,25,26. Therefore, the elimination of negative cooperativity by inhibiting Lck could also be explained, at least partially, by the elimination of the target of the negative regulation.
To test this possibility, we presented gp33 with H2-Dbα3A2, a mutated form of H2-Db where the mouse α3 domain was replaced with that from human HLA-A2 to abolish CD8 binding while preserving TCR binding42. As expected, preventing CD8 binding yielded significantly fewer (~57%) bonds from T cells interacting with RBCs bearing gp33:H2-Dbα3A2 than gp33:H2-Db (Fig. 4a). Supporting our hypothesis, no negative cooperativity was observed when co-presenting T cells with PD-L1 and gp33:H2-Dbα3A2 despite normal Lck activity (Fig. 4b, c). These data support our hypothesis that the TCR-CD8 cooperativity is a target of PD-1's inhibitory signal.

Fig. 4: Negative cooperativity can be explained by the PD-1 suppression of TCR-CD8 cooperativity. a Comparison of normalized numbers of bonds formed between P14 CD8+ T cells and RBCs bearing gp33:H2-Db (n = 15 cells) or gp33:H2-Dbα3A2 (n = 21 cells) at 2-s contact time. b Representative experiment comparing predicted and measured <n>'s determined from binding of T cells to RBCs bearing gp33:H2-Dbα3A2, PD-L1, or both. c Comparison of normalized bond reductions (Δ<n>/<n>pred) between the gp33:H2-Db (n = 17 cells) and gp33:H2-Dbα3A2 (n = 12 cells) groups. d Schematics of the BFP setup (top) and the molecules to be analyzed (bottom). As an advanced version of the micropipette system, the BFP uses a glass bead to present pMHC, PD-L1, or both (left) to interact with the TCR, CD8, PD-1, or all of them (right). e Representative force-time trace recorded in a test cycle of a force-clamp assay. The level of clamped force and the duration of bond lifetime are indicated. f Plots of mean ± s.e.m. lifetime vs. mean force of single bonds of PD-L1, gp33:H2-Db, or gp33:H2-Dbα3A2 interaction with untreated P14 CD8+ T cells and of gp33:H2-Db interaction with Lck inhibitor-treated P14 CD8+ T cells. g Bond lifetime distributions of the molecular interactions in f at 3 pN, shown as survival probability in semi-log plots. h–m Comparisons between predicted and measured mean ± s.e.m. bond lifetimes (h, j, l; 3- and 6-pN bins) and their distributions (i, k, m; 3-pN bin). Untreated (h–k) or Lck inhibitor-treated (l, m) CD8+ T cells expressing P14 TCR and PD-1 were analyzed by the force-clamp assay against BFP beads bearing PD-L1 plus gp33:H2-Db (h, i, l, m) or PD-L1 plus gp33:H2-Dbα3A2 (j, k) at 2-s contact time. The predictions were based on the percentages of single-bond events formed by each of the two ligands in each lifetime ensemble, as indicated for each pair of panels (h–j and k–m), which were determined from the ligand densities coated on each group of beads and the TCR and PD-1 expressed on each batch of T cells. The bond lifetime distributions of the two single-ligand bonds are also shown in i, k and m for comparison. The weighted sums of these single-species bond lifetimes (distributions) were used to calculate the predicted lifetime (distribution). For the box-whisker plots in a and c, the center line labels median, the box contains the two middle quantiles, and the whiskers mark the min and the max. p values were calculated using Mann–Whitney test (a, c), paired t-test (b), or standard t-test (h, j, l). The sample sizes of bond lifetime events are summarized in Supplementary Table 1.
Force-lifetime spectroscopy confirms PD-1 suppression of TCR-CD8 cooperativity

To further validate our hypotheses regarding the negative cooperativity, we performed force spectroscopic analysis using the biomembrane force probe (BFP) technique, by which force-dependent bond lifetimes were measured as interaction characteristics orthogonal to the kinetic parameters measured in the absence of force18,19,26,43. Gp33:H2-Db, PD-L1, or both were coated on a glass bead attached to the apex of a micropipette-aspirated RBC (Fig. 4d), which translates bead displacements into force data by multiplying by the calibrated spring constant of the BFP36,44, enabling force measurement with sub-piconewton precision over time. The force-clamp assay was performed in repetitive cycles sampling single-bond lifetimes, defined as the duration of the clamped phase of a cycle where force is constant (Fig. 4e). Approximately 2000 bond lifetimes were pooled and binned according to the force levels to generate four curves of force vs. lifetime: of PD-L1, gp33:H2-Db, and gp33:H2-Dbα3A2 with untreated T cells, and of gp33:H2-Db with T cells treated with the Lck inhibitor (Fig. 4f). The average bond lifetime measured by the gp33:H2-Db probes displays a monotonic decay from 1.5 to 0.2 s as force increased from 3 to 18 pN, a pattern known to reflect the formation of "slip-bonds". Focusing on the lifetime events in the first force bin (~3 pN), histogram analysis reveals the presence of long lifetimes above average, with >10% of bonds having >1 s lifetimes (Fig. 4g).

We note that the lifetime pool for gp33:H2-Db probes consists of three types of molecular interactions: TCR–pMHC and CD8–pMHC bimolecular bonds and TCR–pMHC–CD8 trimolecular bonds. Due to the low affinity and short bond lifetime of the CD8–pMHC interaction, the TCR–pMHC interaction dominates the bimolecular bonds and drives the formation of the more stable TCR–pMHC–CD8 trimolecular bonds in a signaling-dependent manner24,26. Consistently, eliminating TCR–pMHC–CD8 trimolecular interactions by inhibiting Lck significantly reduced the average bond lifetime (0.7 vs. 1.5 s at 3 pN; Fig. 4f). Comprising only TCR–pMHC and CD8–pMHC bimolecular bonds, the lifetime histogram also shifted toward the left, with fewer long-lived bonds (Fig. 4g). Similarly, when using gp33:H2-Dbα3A2 probes to eliminate CD8 binding, the pool contains only TCR–pMHC bonds, with lifetimes much shorter than those generated using gp33:H2-Db probes (Fig. 4f, g). Moreover, the force vs lifetime curve and the lifetime histogram of Lck inhibitor-treated T cells probed by gp33:H2-Db are indistinguishable from their respective counterparts of untreated T cells probed by gp33:H2-Dbα3A2 (Fig. 4f, g), further confirming the need for Lck activity for TCR–pMHC–CD8 trimolecular interactions and the minimal contribution from the CD8–pMHC bimolecular interaction.

In sharp contrast to the gp33:H2-Dbα3A2–TCR bonds, the PD-1–PD-L1 bonds have much shorter lifetimes across the force range tested, with the largest differences seen at low forces (0.08 vs. 0.70 s at 3 pN, Fig. 4f, g). The large difference in bond lifetime would allow us to resolve the altered occurrence of short vs long bond lifetime events when both pMHC and PD-L1 are co-presented to the T cells.
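The force-clamp bookkeeping described above (bead displacement times the calibrated spring constant gives force; single-bond lifetimes are then pooled into force bins) can be sketched as follows. The function and the parameter values in the example call are illustrative, not the authors' code:

```python
import numpy as np

def mean_lifetime_vs_force(displacements_um, lifetimes_s, k_pn_per_um, bin_pn=3.0):
    """Convert clamped bead displacements to forces (F = k * x), group single-bond
    lifetimes into bins centered at multiples of bin_pn, and average each bin."""
    forces = k_pn_per_um * np.asarray(displacements_um, dtype=float)   # pN
    lifetimes = np.asarray(lifetimes_s, dtype=float)
    centers = np.rint(forces / bin_pn) * bin_pn
    return {c: lifetimes[centers == c].mean() for c in np.unique(centers)}

# Illustrative: spring constant 300 pN/um; three events near the 3- and 6-pN bins
print(mean_lifetime_vs_force([0.011, 0.019, 0.012], [1.4, 0.9, 1.6], 300.0))
```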
Similar to the bond number cooperativity analysis, the predicted bond lifetime (distribution) for the dual-ligand probes is calculated as the weighted sum of bond lifetimes (distributions) from single-ligand probes, assuming that prior formation of one bond species does not affect the formation of the other bond species. When the two ligands were mixed with predicted fractions of forming 29% and 71% of gp33:H2-Db and PD-L1 bonds, respectively, the predicted average bond lifetimes were 0.45 s at 3 pN and 0.30 s at 6 pN. Yet, the measured dual-species bond lifetimes were significantly shorter: 0.22 s at 3 pN and 0.19 s at 6 pN (Fig. 4h), again showing negative cooperativity similar to that in the previous bond number analysis. The bond lifetime histogram of dual-ligand probes was also left-shifted away from the predicted distribution, revealing the suppression of longer-lived (i.e. TCR–pMHC and/or TCR–pMHC–CD8) bonds (Fig. 4i). To further elucidate the role of CD8, we tested probes co-presenting gp33:H2-Dbα3A2 and PD-L1. No significant difference between predicted and measured average bond lifetimes or their distributions was observed (Fig. 4j, k), implying that the reduced fraction of long-lived lifetimes from the gp33:H2-Db probes comes most likely from the TCR–pMHC–CD8 trimolecular bonds. This is further confirmed by analyzing Lck inhibitor-treated T cells with probes co-presenting gp33:H2-Db and PD-L1. Consistent with its effect of abolishing TCR–pMHC–CD8 trimolecular bond formation and the negative cooperativity in the bond number analysis, the negative cooperativity in terms of bond lifetime also diminished (Fig. 4l, m). These data further reveal that PD-1 suppresses the signaling-dependent cooperative binding of TCR and CD8 to pMHC, manifesting as negative cooperativity in their concurrent binding to dual ligands.

PD-1's suppression of TCR-CD8 cooperativity requires CD8–Lck association

Our proposed molecular mechanism of positive cooperative binding of pMHC by TCR and CD8 ectodomains involves a TCR proximal signaling-dependent "Lck bridge" that on one end associates with CD8 via its N-terminus and, on the other, docks on phosphorylated CD3 ITAMs24,25,26, similar to that in TCR-CD4 coupling29,30,45,46. As PD-1 inhibits the early activating signals including the phosphorylation of CD3ζ and ZAP706,8,12, we hypothesize that the negative cooperativity is due to the disruption of this Lck-mediated dynamic molecular assembly. To test this, we sorted CD8−CD4− (double negative, DN) thymocytes (Fig. 5a) from P14 mice and transduced the cells with retrovirus encoding CD8αβ WT or the C227SC229S mutant, where the Lck-binding C227KC229P motif of the α chain is replaced with S227KS229P to eliminate Lck association25,47 (Fig. 5b). Consistent with observations in other TCR-transgenic mice48,49, CD44 staining reveals that most of the P14 DN thymocytes are in the DN3 and DN4 stages (CD44−, Supplementary Fig. 3a), where Lck is expressed50. The expression of CD8, TCR, and PD-1 is similar on CD8WT and CD8SKSP cells (Fig. 5b and Supplementary Fig. 3b, c). When tested against the same RBCs bearing gp33:H2-Db, DN thymocytes re-expressing CD8WT formed more bonds than those re-expressing CD8SKSP, which abolished cytoplasmic Lck association (Fig. 5c), consistent with our previous results suggesting that the cooperative TCR–pMHC–CD8 trimolecular interaction requires CD8–Lck association25.
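The weighted-sum prediction used in the lifetime analysis above is, at a single force bin, a two-term average. Using the rounded 3-pN values quoted in the text (29% gp33:H2-Db bonds at 1.5 s, 71% PD-1–PD-L1 bonds at 0.08 s) gives ≈0.49 s, close to the stated prediction of 0.45 s, which was computed from the full distributions:

```python
f_pmhc, f_pdl1 = 0.29, 0.71    # predicted fractions of gp33:H2-Db vs. PD-L1 bonds
t_pmhc, t_pdl1 = 1.50, 0.08    # mean single-ligand lifetimes at ~3 pN, seconds

t_pred = f_pmhc * t_pmhc + f_pdl1 * t_pdl1
print(f"predicted dual-ligand mean lifetime ~ {t_pred:.2f} s (measured: 0.22 s)")
```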
When tested against the same DN thymocytes re-expressing CD8WT, RBCs bearing gp33:H2-Db formed a significantly higher number of bonds than those bearing gp33:H2-Dbα3A2, confirming the synergistic CD8 binding (Fig. 5d). However, when tested against the same DN thymocytes not expressing CD8 or re-expressing CD8SKSP, the differences in the number of bonds formed with RBCs bearing gp33:H2-Db and gp33:H2-Dbα3A2 were no longer observed, confirming the contribution of CD8 synergy (Fig. 5d). Most importantly, analysis of dual-ligand (gp33:H2-Db and PD-L1) binding shows a bond number reduction in CD8WT but not CD8SKSP cells (Fig. 5e). These data further indicate the adapter role of Lck in the "inside-out" regulation of TCR-CD8 positive cooperation and its suppression by PD-1.

Fig. 5: PD-1's suppression of TCR-CD8 cooperativity requires CD8–Lck association. Sorted P14 CD8−CD4− (DN) thymocytes were transduced to express CD8 WT or the SKSP mutant that abolishes Lck binding, followed by 2D kinetic analysis for negative cooperativity. a Representative CD4 vs CD8 plots of total P14 thymocytes showing the sorting strategy (left and middle) and purity (right) of DN thymocytes. b Representative histogram plot of retroviral-transduced P14 DN thymocytes expressing CD8 WT or SKSP mutant. c Comparison of normalized numbers of bonds formed between RBCs bearing gp33:H2-Db and P14 DN thymocytes transduced to express CD8WT or CD8SKSP (n = 17 and 15 cells) at 2-s contact time. d Comparison of normalized numbers of bonds formed between untransduced, CD8WT, or CD8SKSP P14 DN thymocytes and RBCs bearing gp33:H2-Db (n = 11, 11, and 11 cells) or gp33:H2-Dbα3A2 (n = 10, 10, and 12 cells) at 5-s contact time. e Comparison of normalized bond reductions (Δ<n>/<n>pred) between the CD8WT and CD8SKSP groups (n = 16 and 17 cells). Data are presented by box-whisker plots (the center line labels median, the box contains the two middle quantiles, and the whiskers mark the min and the max) with the n values indicating the numbers of cells measured per condition. p values were calculated using Mann–Whitney test.

PD-1 suppresses TCR-CD8-triggered Ca2+ signaling

To evaluate the functional consequence of the PD-1 suppression of antigen recognition by the TCR–pMHC–CD8 interaction, we analyzed the Ca2+ response to pMHC stimulation in the presence or absence of PD-L1. PDCD1−/− P14 T cells re-expressing PD-1 generated robust Ca2+ flux upon landing on a surface coated with gp33:H2-Db (Fig. 6a, b). About 90% of cells had an amplitude above 1.5-fold of baseline, with the earliest Ca2+ flux seen within 1 to 2 min upon landing (Fig. 6a, b, e). In contrast, additional PD-L1 coating on the surface significantly reduced the Ca2+ signal (Fig. 6c–e). Compared to the condition with gp33:H2-Db only, amplitude contours of the pseudo-image (Fig. 6d) and distribution analysis reveal a much smaller percentage of cells reaching the same level of Ca2+ flux when PD-L1 was present (Fig. 6e). The maximum and time-integrated signals were also significantly lower (Fig. 6f, g). These data indicate reduced downstream signaling when the initial antigen recognition was perturbed by PD-1.

Fig. 6: PD-1 suppressed antigen-triggered T cell Ca2+ signaling. a, c Heat maps constructed using Ca2+ images from individual cells to show the signal dynamics.
Activated PDCD1−/− P14 CD8+ T cells re-expressing PD-1 were loaded with the X-Rhod-1 calcium dye, placed on glass coverslips coated with gp33:H2-Db alone (a, n = 82 cells) or gp33:H2-Db plus PD-L1 (c, n = 47 cells), and imaged under a microscope with a 2-s frame interval for 20–30 min. Each row represents an individual cell, aligned by its landing time on the glass surface and sorted from top to bottom with increasing Ca2+ signal strength. The pseudo-color represents normalized Ca2+ signal against base fluorescence. b, d Contour plots of images in a and c at indicated fold-increase. e–g Quantification of a–d showing percentages of cells (mean ± s.e.m.) with calcium peak above indicated levels of fold-increase (e), and box-whisker plots of the peak values (f) and area under curves (AUC) (g) of calcium time courses of individual cells, with the center line labeling median, the box containing the two middle quantiles and the whiskers marking the min and the max. p values were calculated using standard t-test (e) or Mann–Whitney test (f, g).

Since the discovery of PD-1 in 199251, decades of effort have led us to an understanding of how PD-1 inhibits TCR and CD28 signaling, thereby controlling activation and regulating the function and metabolism of conventional T cells52. Its critical role in T cell exhaustion also has brought us the great opportunity of targeting PD-1 or its ligands to rescue lost T cell functions for immunotherapies of cancer or potentially chronic infectious diseases1,2,3,53. Despite the rapidly expanding clinical applications, many questions remain unclear regarding the mechanism of PD-1 triggering and signaling. In this study, we investigated the effect of PD-1 on T cell antigen recognition – the initiation of all T cell activation and subsequent responses. By combining imaging and in situ kinetic analysis, we observed a negative cooperativity between PD-1 and the TCR-CD8 axis in their concurrent ligand binding. This manifests as reduced cell spreading on pMHC, fewer bonds formed with pMHC, and shorter bond lifetimes, indicating the disruption of the molecular interactions for antigen recognition.

In conventional binding cooperativity, such as typically takes place with isolated molecules, binding of a second molecule is often physically modulated by binding of the first molecule. By comparison, in situ cooperative binding of cell surface molecules may occur by direct physical interplay of their ectodomains, and/or by inside-out regulation through signaling events that involve additional intracellular and/or membrane molecules. An example of the former case is the recently discovered cis hetero-dimerization between PD-L1 and B7-1 on the APC surface, which prevents trans-interaction with PD-1 but not CD28, the receptor for B7-154,55. This negative cooperativity is due to the masking of the PD-1 binding site on PD-L1, but not the CD28 binding site on B7-1, upon the PD-L1–B7-1 cis-interaction. Our results provide an example of the latter case, as the elimination of the negative cooperativity by inhibiting SHPs or Lck cannot be explained by ectodomain cis-interactions between PD-L1 and H2-Db or between PD-1 and TCR or CD8, none of which have been reported. Higher-level physical interference due to mismatched dimensions of the two trans-interactions is also unlikely, because the negative cooperativity was abolished by the H2-Dbα3A2 mutant that spans the same dimension as H2-Db.
Instead, the negative cooperativity indicates an "inside-out" signaling feedback, where the interplay of the intracellular signals triggered by concurrent ligand binding of PD-1 and TCR-CD8 negatively regulates the extracellular cooperation of TCR and CD8 in binding to pMHC. A well-studied example of this type of loop is the "inside-out" upregulation of integrin ligand-binding, where activating signals from other receptors induce conformational changes and/or cell surface clustering of integrin molecules from low- to intermediate- and high-affinity/avidity states to mediate enhanced cell adhesion on the ligand-presenting surface34,56,57,58.

In addition to changes in molecular structures, more complex in situ cooperative binding could stem from global cell behaviors that integrate functions of multiple receptors. Pielak et al. reported an upregulation of the CD28–B7-1 interaction by TCR–pMHC ligation during T-cell formation of IS with ligand-functionalized lipid bilayers59. This positive cooperativity seems to be a global effect on CD28 at the intercellular interface due to TCR-triggered integrin activity, possibly by creating a tighter cell–cell junction and therefore enhancing the on-rate of 2D binding59. Our data showing PD-1 suppression of T-cell spreading suggest the possibility of a similar global regulation of the intercellular junction during antigen recognition, as it may involve TCR-signaling-dependent cytoskeleton activity in the spreading process. Studies of T-cell motility in vivo or interactions with functionalized lipid bilayers or with APCs in vitro also demonstrated PD-1 negative regulation at a global level. Blockade of PD-1 or PD-L1 reduced T cell motility and promoted antigen-induced T cell arrest in an autoimmune diabetic model and a skin inflammation model, which was due to relieving PD-1 inhibition of the TCR-triggered "stop signal"60,61. PD-1 disruption of stable synapse formation in vitro was also reported for CD8+ T cells from chronic lymphocytic leukemia models62,63 or CD4+ T cells from AND TCR transgenic mice8. The impairment was possibly due to PD-1's suppression of a TCR-triggered upregulation of integrins binding to their ligands64. Yet, this kind of PD-1 regulation could be context-dependent, as PD-1 was also found to stabilize IS formation of P14 T cells from LCMV infection and restrain T cell motility in virus-infected spleen12.

It is noteworthy that, unlike the IS studies where the interactions between two cells or between a cell and a surface were largely mediated by adhesion molecules like integrins, there was no integrin binding involved in our dual-ligand presentation of pMHC and PD-L1. Cell spreading was due only to the interactions of TCR-CD8 and PD-1 with pMHC and PD-L1, respectively, making it possible to attribute the observed PD-1 effects at least partly to the downregulation of these molecular interactions. This was then confirmed in our 2D kinetic analysis, where T cell contact with a surrogate APC was externally controlled instead of resulting from the cells' own action.

The 2D kinetic analysis also underscored the molecular structures or their surface organizations that determine the sensitivity and responsiveness of a receptor to a specific "inside-out" signaling regulation. The lack of effect of SHP or Lck inhibitors on PD-1–PD-L1 binding might be partially explained by PD-1's relatively simple structure, which consists of a single Ig V-like domain connecting to the ITIM- and ITSM-containing cytoplasmic tail, functioning as a monomer10,65.
In contrast, both the number and the lifetime of the TCR–pMHC–CD8 bonds were greatly reduced when Lck kinase activity was abolished, recapitulating the previously reported Lck-dependent TCR-CD8 cooperative binding of pMHC17,24,25,26. This positive cooperativity of extracellular binding relies on CD8–Lck association and the phosphorylation of CD3 ITAMs and possibly other proximal signaling molecules, suggesting a dynamic molecular assembly that may share similar structure and composition with the TCR-CD4 coupling via CD4-associated Lck docking on phosphorylated CD3 ITAMs or the ITAM-bound ZAP7027,28,29,30,31,45,46. These distinct properties of PD-1 and TCR-CD8 therefore suggest that each interaction could be regulated differentially when crosstalk of the two signaling axes occurs.

Using a mutant MHC, we were able to eliminate the negative cooperativity by abolishing binding of the CD8 ectodomain, which corroborates the evidence obtained by disrupting CD8–Lck association and by perturbing the intracellular signaling apparatus. Applying force spectroscopic analysis, we resolved the lifetime differences among PD-1–PD-L1 (short), TCR–gp33:H2-Db (intermediate), and TCR–gp33:H2-Db–CD8 (long) bonds. This analysis has provided more evidence in addition to that from the bond number analysis: the skewed occurrence of more short-lived and fewer long-lived bonds when we tested T cells using probes co-presenting gp33:H2-Db, but not gp33:H2-Dbα3A2, with PD-L1. Finally, both TCR-CD8 positive cooperativity and TCR-PD-1 negative cooperativity were eliminated when the intracellular association between CD8 and Lck was disrupted by mutations. Together, our data suggest the PD-1 suppression of TCR-CD8 positive cooperativity for pMHC binding as the mechanism for the negative cooperativity.

This work adds PD-1 and SHPs to the TCR interaction network model22 built on the data that CD3 phosphorylation and Lck are required for the TCR-CD8 cooperative binding of pMHC24,25,26. Studies disrupting Lck association with CD8/CD4 or inhibiting its activity suggested that the initial step of TCR triggering is likely mediated by Lck not associated with co-receptors25,31,66 (Supplementary Fig. 4a). Instead, phosphorylated CD3 (and ZAP70) may serve as docking sites for CD8/CD4-associated Lck, recruiting CD8/CD4 to the TCR to augment pMHC binding29,30,67,68 (Supplementary Fig. 4b). The binding cooperativity reinforces TCR signaling by prolonging pMHC engagement, sustaining Lck co-localization, and possibly regulating Lck's activity. The assembly could be highly dynamic due to the dependency on the accumulation of proximal signaling, and possibly the exchange of Lck among co-receptor-associated, membrane-bound, CD3-associated69, and free states. The reliance of this mechanism on TCR proximal signaling also renders the augmentation by positive feedback sensitive to PD-1's inhibitory effect (Supplementary Fig. 4c). Although PD-1-associated SHP2 preferentially dephosphorylates CD28 in a cell-free system and in Jurkat cells13, phosphatases like SHP1 and SHP2 generally do not show high substrate specificity71. PD-1 was shown to significantly inhibit the phosphorylation of CD3 and ZAP70 in other T cell lines or primary T cells6,8,12 and reduce cytokine production in the absence of a CD28 signal70.
Lck and PD-1 were shown to cross-regulate the phosphorylation of each other in a cell-free system13, but how Lck activity could be affected by PD-1 in cells remains unclear and may depend on the location of Lck and PD-1 as well as other Lck regulators72. We speculate that PD-1 may also inhibit the accumulation of ZAP70 on phosphorylated CD3 and the recruitment of Lck, thereby disrupting the molecular bridge for CD8 recruitment (Supplementary Fig. 4c). In this way, PD-1 signaling can feed back on the most upstream steps of the TCR signal initiation process, inhibiting T cell functions at a very early stage. These observations help understand the potent inhibitory effect of PD-1 on T cell functions and provide a basis for future optimization of PD-1-based therapeutic interventions.

Mice and cells

Female C57BL/6 mice (6–8 weeks old) were purchased from The Jackson Laboratory. P14 transgenic, PDCD1−/− P14 transgenic, and C57BL/6 mice were housed at the Emory University Department of Animal Resources facility (daylight cycle 7 a.m. to 9 p.m.; dark cycle 9 p.m. to 7 a.m.; ambient temperature 72 °F; humidity 30–70%) and used in accordance with National Institutes of Health and Emory University Institutional Animal Care and Use Committee guidelines. For experiments using splenic CD8+ T cells, total splenocytes were prepared by mechanical grinding of the spleen followed by RBC lysis (eBiosciences) according to the manufacturer's instructions. P14 splenocytes were incubated at a density of 2 × 106 cells/ml for 2 h at 37 °C with 10 nM LCMV gp33–41 (KAVYNFATM). Cells were then washed with HBSS, resuspended in R10 medium (RPMI 1640 supplemented with 10% FBS, 100 U/mL penicillin, 100 μg/mL streptomycin, 2 mM l-glutamine, 20 mM HEPES, and 50 mM 2-Mercaptoethanol) and cultured at 4 × 106 cells/3 ml/well in a 12-well plate at 37 °C with 5% CO2. CD8+ T cells were purified on day 2 or 3 post activation via Ficoll gradient separation followed by CD8 negative purification using the EasySep™ Mouse CD8+ T Cell Isolation Kit (Stemcell Technology). For experiments using DN thymocytes, total thymic cells were prepared by mechanical grinding and stained with PE-conjugated anti-CD8a (1:100 dilution) and BV421-conjugated anti-CD4 (1:100 dilution) antibodies. DN thymocytes were sorted by gating on the CD8−CD4− population with a purity of ~97% (Fig. 5a) and cultured in R10 medium supplemented with 1 ng/ml recombinant mouse IL-7. Human RBCs were isolated from healthy donors and used according to a protocol approved by the Institutional Review Board of the Georgia Institute of Technology.

LCMV infection

To generate P14 chimeric mice, we transferred 1 × 103 P14 cells from the spleen of P14 mice (Thy1.1/1.1) into naïve B6 (Thy1.2/1.2) mice. The P14 chimeric mice were infected with LCMV Armstrong (LCMV Arm) (2 × 105 p.f.u. i.p.) for acute viral infection or with LCMV clone 13 (LCMV CL13; 2 × 106 p.f.u. i.v.) for chronic viral infection. Mice were sacrificed at the indicated days and P14 cells were sorted based on Thy1.1 staining.

Retroviral transduction of splenic T cells and thymocytes

We reconstituted PD-1 expression in activated PD-1 KO P14 cells and CD8WT or CD8SKSP mutant in DN thymocytes via retroviral transduction as previously described73,74,75. Briefly, mouse PD-1 or CD8α and CD8β chains joined by a P2A element was cloned into the pMSCV-IRES-GFP II vector (pMIG II, Addgene # 52107). The CD8SKSP mutant was generated by mutating both C227 and C229 of the CD8WT α chain into serine residues.
To produce retrovirus, 293T cells (ATCC) were transfected with the packaging plasmid pCL-Eco (Addgene # 12371) together with pMIG II or pMIG II containing mPD-1, CD8WT, or CD8SKSP using Lipofectamine 3000 (ThermoFisher Scientific) following the manufacturer's protocol. Cell culture medium (DMEM supplemented with 10% FBS, 6 mM L-glutamine, 0.1 mM MEM non-essential amino acids, and 1 mM sodium pyruvate) was replaced after overnight culture. Supernatant containing retrovirus was harvested 48–72 h later and frozen at −80 °C until use. For transducing splenic T cells, naïve CD8+ T cells purified from RBC-lysed splenocytes of PD-1 KO P14 mice using the EasySep™ Mouse CD8+ T Cell Isolation Kit were activated with plate-bound anti-CD3 (clone 145-2C11, Tonbo Biosciences), 0.5 μg/ml anti-CD28 (clone 37.51, Tonbo Biosciences), and 100 U/ml recombinant human IL-2 (Gemini Bio) for 24 h. Activated T cells were Percoll (GE Healthcare) purified and spinoculated for 1.5 h at 3 × 10⁶ cells/ml in a 1:1 mix of R10 and retroviral supernatant containing 0.5 μg/ml anti-CD28, 100 U/ml recombinant human IL-2, and 8 μg/ml polybrene (Sigma-Aldrich). Cell culture medium was replaced the next day with complete R10 with 100 U/ml recombinant human IL-2. GFP+ cells were sorted using a BD Fusion cell sorter (BD Biosciences) on day 3 or 4 after transduction. Sorted cells were allowed to rest for at least 24 h before use. For transducing DN thymocytes, retroviral supernatant was added to a RetroNectin-coated plate and centrifuged for 1.5 h. Because endogenous PD-1 expression decreases during in vitro culture, PD-1 retrovirus was mixed with CD8WT or CD8SKSP retrovirus for experiments analyzing PD-1's effect. Sorted CD8−CD4− thymocytes were rested in R10 medium containing 1 ng/ml recombinant IL-7 for 3 h before being added to the virus-coated plate. Cells were cultured overnight before flow cytometry and 2D kinetics analyses. A second round of transduction was performed as needed to boost the expression of CD8 and PD-1.

Proteins and chemicals

His6-tagged mPD-L1 with a BirA sequence was produced in CHO cells (ATCC) as described previously76. Biotinylation was performed in vitro using the BirA biotin-protein ligase kit (Avidity). Recombinant mouse ICAM-1 with a human IgG1 Fc tag was from Biolegend. Wild-type and α3A2 mutant gp33:H2-Db were made by the National Institutes of Health Tetramer Core Facility at Emory University. PE-conjugated anti-mPD-1 (clone J43, 1:20), anti-mTCR Vα2 (clone B20.1, 1:20), anti-mCD8α (clone 53-6.7, 1:20), isotype rat IgG2a,λ (clone B39-4, 1:20), isotype rat IgG2a,κ (clone A95-1, 1:20), and isotype hamster IgG2,κ (clone B81-3, 1:20) were from BD Biosciences. PE-conjugated anti-mPD-L1 (clone MIH5, 1:20) and biotinylated anti-human IgG1 (clone HP6070, 1:200) were from ThermoFisher Scientific. BV421-conjugated anti-CD4 (clone GK1.5, 1:100), PE-conjugated anti-mCD8α (clone 53-6.7, 1:100), anti-H2-Db (clone KH95, 1:20), and isotype rat IgG1,κ (clone RTK2071, 1:20) were from Biolegend. APC-conjugated anti-CD44 (clone IM7, 1:20) was from Tonbo Biosciences. Biotinylated anti-His tag (1:40) was from Qiagen. The SHP1 and SHP2 inhibitor NSC87877 was from Santa Cruz. The Lck inhibitor 7-cyclopentyl-5-(4-phenoxyphenyl)-7H-pyrrolo[2,3-d]pyrimidin-4-ylamine was from Sigma-Aldrich.

Flow cytometry

Samples were stained in 100 µl of FACS buffer (PBS without Ca²⁺ or Mg²⁺ supplemented with 5 mM EDTA and 2% FBS) containing 10 µg/ml (or as indicated above) of antibodies for 30 min at 4 °C.
Samples were washed twice with 2 ml of FACS buffer and fixed with 200 µl of 1% PFA for 15 min at 4 °C. Fixed samples were washed once with 2 ml of FACS buffer and resuspended in 300 µl of FACS buffer for analysis on an LSR II flow cytometer (BD Biosciences). Flow cytometric data were analyzed using FACS DIVA (BD Biosciences) and FlowJo (TreeStar).

Cell spreading assay

A 96-well imaging plate was cleaned with 1 N NaOH followed by thorough washes with diH₂O. The imaging surface was then prepared in the following incubation steps, with three PBS washes after each step: (1) coating with 50 µg/ml biotinylated bovine serum albumin (BSA); (2) coating with streptavidin; (3) coating with a mixture of biotinylated pMHC and biotinylated anti-PentaHis (Qiagen) or biotinylated anti-human IgG1 Fc; and (4) coating with His-tagged mPD-L1 or human IgG1-tagged mICAM-1. T cells from the various groups were resuspended in imaging buffer (HBSS with Ca²⁺ and Mg²⁺ supplemented with 2% FBS), added to the surface, and incubated for 20 min at room temperature. Cells were then imaged under a Nikon Ti microscope equipped with a ×60 TIRF objective, an RICM cube, and a mercury lamp for excitation at 560 nm. Images were acquired using Volocity (PerkinElmer), with cell spreading area calculated using ImageJ and customized Matlab (Mathworks) scripts by thresholding the RICM images for dark regions.

Ca²⁺ imaging

Imaging surfaces were prepared as described above for the cell spreading assay. T cells at a density of 1 × 10⁶ cells/ml were incubated with 5 μM X-Rhod-1 (ThermoFisher Scientific) in R10 medium for 30 min at 37 °C. Cells were washed twice with imaging buffer and immediately added onto the ligand-coated surface. Upon addition, cells were imaged under an Olympus IX70 microscope equipped with a ×20 air objective. Cells were excited with a Xenon lamp at 580/15 nm and emission was acquired at 620/60 nm with a 2-s frame interval for 20–30 min. The image stack was collected using Micro-Manager77 and analyzed using a customized Matlab application to calculate the fluorescence changes over time for each cell. Briefly, cells in each frame were identified by adaptive thresholding and tracked through the entire stack to calculate the fluorescence trace for each cell. After determining the cell landing frame (t0), fluorescence normalized against baseline was calculated to reflect the fold change. Normalized fluorescence traces were aligned by t0 and sorted based on their maximum fold change to generate the pseudo-images shown in Fig. 6a, c.

Micropipette adhesion frequency assay

The theoretical framework and detailed procedures of this assay have been reported previously16,35,36. Briefly, an RBC was used as a surrogate APC to present ligands and as a force transducer to detect binding to a T cell. RBCs were isolated from whole blood drawn from healthy donors according to a protocol approved by the Institutional Review Board of the Georgia Institute of Technology. RBCs were biotinylated using various concentrations of biotin-NHS linker (ThermoFisher Scientific). Biotinylated RBCs were first coated with a saturating amount of streptavidin (SA) and washed. SA-coated RBCs were then incubated with a saturating amount of biotinylated recombinant proteins and washed before the experiments. During the experiment, a T cell was aspirated by a piezo-driven micropipette whose movement was precisely controlled using Labview (National Instruments) programs. Each T cell was repeatedly brought into contact with a ligand-coated RBC and held for a pre-defined duration (tc).
Adhesion was detected during the separation of the two cells from the membrane stretch of the RBC. The adhesion frequency (Pa) was determined after 30–50 cycles and was used to calculate the average number of bonds per contact ⟨n⟩ and the effective 2D affinity as follows:

$$P_a = 1 - \exp(-\langle n \rangle)$$

$$\langle n \rangle = m_r m_l A_c K_a \left[1 - \exp(-k_{\mathrm{off}} t_c)\right].$$

Here m_r and m_l are the respective densities of the receptor on the T cell and the ligand on the RBC, measured using PE-labeled monoclonal antibodies together with QuantiBRITE PE standard beads (BD Biosciences); A_c is the contact area; K_a is the 2D affinity (in μm²); and k_off is the off-rate (in s⁻¹). With a long contact duration (e.g., 5 s), Pa and ⟨n⟩ approach equilibrium, and the effective 2D affinity A_cK_a was estimated by normalizing ⟨n⟩ against the receptor and ligand densities:

$$A_c K_a = \langle n \rangle / (m_r m_l)$$

Biomembrane force probe force-clamp assay

The procedures for the BFP force-clamp assay of single-bond lifetime under force have been described previously19. Briefly, a T cell was aspirated by a piezo-driven micropipette whose movement was precisely controlled using Labview (National Instruments) programs. Each T cell was repeatedly brought into contact with a ligand-coated glass bead attached to the apex of a micropipette-aspirated RBC, which serves as an ultrasensitive force transducer. After contact, the T cell was retracted and held at a distance corresponding to the set force level. The displacement of the bead, tracked at 1000 fps, was translated into a force reading with a preset RBC spring constant of 0.3 pN/nm. A molecular bond formed between the ligand on the bead and the receptor on the T cell pulls the bead away from its original position during T cell retraction, as reflected by the increase in force applied on the bond. The force (bead displacement) is sustained until the bond ruptures, with the total duration defining the bond lifetime under the clamped force level. Repeated measurement cycles at multiple force levels generated a pool of such events, which were presented in the form of average bond lifetime ⟨t⟩ versus average force after binning. Cumulative histograms of lifetime events for each bin were also calculated as the natural log of the number of events with a lifetime > t.

Cooperativity analysis of bond number and bond lifetime

Cooperativity analysis based on comparing the number of dual-species receptor–ligand bonds with the sum of the numbers of the two component single-species bonds has been described previously24,37,38,39. For the molecular systems in this study, the average bond numbers for RBCs coated with individual or mixed ligands were calculated as follows:

$$\langle n \rangle_{\mathrm{pMHC}} = -\ln(1 - P_{a,\mathrm{pMHC}}),$$

$$\langle n \rangle_{\mathrm{PD\text{-}L1}} = -\ln(1 - P_{a,\mathrm{PD\text{-}L1}}),$$

$$\langle n \rangle_{\mathrm{mix}} = -\ln(1 - P_{a,\mathrm{mix}}).$$

For concurrent and independent interactions of pMHC and PD-L1 with their respective receptors on the T cell, bond formation is governed by their affinities as defined in Eq. (3).
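For readers who want to reproduce this arithmetic, the following is a minimal sketch of the bond-number and affinity calculations above (our illustration with made-up numbers, not the authors' analysis code):

```python
import numpy as np

def bonds_per_contact(pa):
    """Invert Pa = 1 - exp(-<n>) to get the average bond number <n>."""
    return -np.log(1.0 - np.asarray(pa, dtype=float))

def effective_2d_affinity(pa, m_r, m_l):
    """AcKa = <n> / (m_r * m_l), valid at long contact times (e.g., tc = 5 s)."""
    return bonds_per_contact(pa) / (m_r * m_l)

# Made-up adhesion frequencies for the three coating conditions.
n_pmhc = bonds_per_contact(0.35)   # <n>_pMHC = -ln(1 - Pa,pMHC)
n_pdl1 = bonds_per_contact(0.20)   # <n>_PD-L1
n_mix = bonds_per_contact(0.45)    # <n>_mix
print(n_pmhc, n_pdl1, n_mix)
print(effective_2d_affinity(0.35, m_r=30.0, m_l=120.0))  # densities in molecules/um^2
```

These single-species bond numbers feed directly into the prediction of the total bond number described next.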
The predicted total bond number would be the sum of the individual bond numbers37,38,39:

$$\langle n \rangle_{\mathrm{pred}} = r_{\mathrm{pMHC}} \langle n \rangle_{\mathrm{pMHC}} + r_{\mathrm{PD\text{-}L1}} \langle n \rangle_{\mathrm{PD\text{-}L1}},$$

where r_pMHC and r_PD-L1 are the ratios of ligand densities on the dual-ligand bearing RBC versus the two single-ligand bearing RBCs, all of which were measured by flow cytometry. The net cooperativity is then calculated as the difference between the predicted total bond number and that measured in the mixed coating condition:

$$\Delta\langle n \rangle = \langle n \rangle_{\mathrm{mix}} - \langle n \rangle_{\mathrm{pred}}.$$

We name this apparent "positive cooperativity" or "negative cooperativity" in the cases Δ⟨n⟩ > 0 or Δ⟨n⟩ < 0, respectively. The percentage change in bond number is then defined as Δ⟨n⟩/⟨n⟩_pred. To reduce cell–cell variation among groups with different coatings, we tested the same T cell in randomized order against three RBCs coated with individual ligands or mixed ligands, which allowed for cooperativity analysis with a paired single-cell readout. Cooperativity analysis of bond lifetime is similar, except that the readout is now the lifetimes of single bonds from all possible interactions involved. By binning the events based on their clamped force, we calculated the average bond lifetime for each coating condition, ⟨t⟩_pMHC, ⟨t⟩_PD-L1, and ⟨t⟩_mix, as well as the cumulative lifetime histograms Pt_pMHC, Pt_PD-L1, and Pt_mix. The predicted average bond lifetime and cumulative histogram were defined as

$$\langle t \rangle_{\mathrm{pred}} = f_{\mathrm{pMHC}} \langle t \rangle_{\mathrm{pMHC}} + f_{\mathrm{PD\text{-}L1}} \langle t \rangle_{\mathrm{PD\text{-}L1}}$$

$$Pt_{\mathrm{pred}} = f_{\mathrm{pMHC}} Pt_{\mathrm{pMHC}} + f_{\mathrm{PD\text{-}L1}} Pt_{\mathrm{PD\text{-}L1}},$$

where f_pMHC and f_PD-L1 are the fractions of each species in the total molecular bonds as predicted by Eq. (3).

Statistics and reproducibility

Replications were performed 2–3 times independently to ensure reproducibility and increase the sample size (cells or bond lifetime events). Groups with fewer than 10 cells were from a single experiment. Statistical analysis was performed using Excel (Microsoft), Prism (GraphPad Software), and Matlab (MathWorks). Comparisons between two groups were based on the two-sided Mann–Whitney test unless a two-sided Student's t test or paired t test is noted. For Fig. 6e, bootstrapping (10,000 times) was applied to calculate the mean and standard error for each bar. Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Data availability

Data supporting the findings of this study are presented in the article and supplementary materials and are available from the corresponding author upon reasonable request. Source data are provided with this paper.

Code availability

Customized codes for imaging data analysis are available from the corresponding author upon reasonable request.

References

Okazaki, T. & Honjo, T. PD-1 and PD-1 ligands: from discovery to clinical application. Int. Immunol. 19, 813–824 (2007).
Baumeister, S. H., Freeman, G. J., Dranoff, G. & Sharpe, A. H. Coinhibitory pathways in immunotherapy for cancer. Annu. Rev. Immunol. 34, 539–573 (2016).
Hashimoto, M. et al. CD8 T cell exhaustion in chronic infection and cancer: opportunities for interventions. Annu. Rev. Med. 69, 301–318 (2018).
Okazaki, T., Maeda, A., Nishimura, H., Kurosaki, T. & Honjo, T. PD-1 immunoreceptor inhibits B cell receptor-mediated signaling by recruiting src homology 2-domain-containing tyrosine phosphatase 2 to phosphotyrosine. Proc. Natl Acad. Sci. USA 98, 13866–13871 (2001).
Chemnitz, J. M., Parry, R. V., Nichols, K. E., June, C. H. & Riley, J. L. SHP-1 and SHP-2 associate with immunoreceptor tyrosine-based switch motif of programmed death 1 upon primary human T cell stimulation, but only receptor ligation prevents T cell activation. J. Immunol. 173, 945–954 (2004).
Sheppard, K. A. et al. PD-1 inhibits T-cell receptor induced phosphorylation of the ZAP70/CD3zeta signalosome and downstream signaling to PKCtheta. FEBS Lett. 574, 37–41 (2004).
Parry, R. V. et al. CTLA-4 and PD-1 receptors inhibit T-cell activation by distinct mechanisms. Mol. Cell Biol. 25, 9543–9553 (2005).
Yokosuka, T. et al. Programmed cell death 1 forms negative costimulatory microclusters that directly inhibit T cell receptor signaling by recruiting phosphatase SHP2. J. Exp. Med. 209, 1201–1217 (2012).
Wei, F. et al. Strength of PD-1 signaling differentially affects T-cell effector functions. Proc. Natl Acad. Sci. USA 110, E2480–E2489 (2013).
Cheng, X. et al. Structure and interactions of the human programmed cell death 1 receptor. J. Biol. Chem. 288, 11771–11785 (2013).
Li, K., Cheng, X., Tilevik, A., Davis, S. J. & Zhu, C. In situ and in silico kinetic analyses of programmed cell death-1 (PD-1) receptor, programmed cell death ligands, and B7-1 protein interaction network. J. Biol. Chem. 292, 6799–6809 (2017).
Zinselmeyer, B. H. et al. PD-1 promotes immune exhaustion by inducing antiviral T cell motility paralysis. J. Exp. Med. 210, 757–774 (2013).
Hui, E. F. et al. T cell costimulatory receptor CD28 is a primary target for PD-1-mediated inhibition. Science 355, 1428–1433 (2017).
Courtney, A. H., Lo, W. L. & Weiss, A. TCR signaling: mechanisms of initiation and propagation. Trends Biochem. Sci. 43, 108–123 (2017).
Huppa, J. B. et al. TCR-peptide-MHC interactions in situ show accelerated kinetics and increased affinity. Nature 463, 963–967 (2010).
Huang, J. et al. The kinetics of two-dimensional TCR and pMHC interactions determine T-cell responsiveness. Nature 464, 932–936 (2010).
Liu, B. et al. 2D TCR-pMHC-CD8 kinetics determines T-cell responses in a self-antigen-specific TCR system. Eur. J. Immunol. 44, 239–250 (2014).
Hong, J. et al. Force-regulated in situ TCR-peptide-bound MHC class II kinetics determine functions of CD4+ T cells. J. Immunol. 195, 3557–3564 (2015).
Liu, B., Chen, W., Evavold, B. D. & Zhu, C. Accumulation of dynamic catch bonds between TCR and agonist peptide-MHC triggers T cell signaling. Cell 157, 357–368 (2014).
Liu, B. et al. The cellular environment regulates in situ kinetics of T-cell receptor interaction with peptide major histocompatibility complex. Eur. J. Immunol. 45, 2099–2110 (2015).
Seo, Y. J., Jothikumar, P., Suthar, M. S., Zhu, C. & Grakoui, A. Local cellular and cytokine cues in the spleen regulate in situ T cell receptor affinity, function, and fate of CD8+ T cells. Immunity 45, 988–998 (2016).
Zhu, C., Jiang, N., Huang, J., Zarnitsyna, V. I. & Evavold, B. D. Insights from in situ analysis of TCR-pMHC recognition: response of an interaction network. Immunol. Rev. 251, 49–64 (2013).
Zhu, C., Chen, W., Lou, J., Rittase, W. & Li, K. Mechanosensing through immunoreceptors. Nat. Immunol. 20, 1269–1278 (2019).
Jiang, N. et al. Two-stage cooperative T cell receptor-peptide major histocompatibility complex-CD8 trimolecular interactions amplify antigen discrimination. Immunity 34, 13–23 (2011).
Casas, J. et al. Ligand-engaged TCR is triggered by Lck not associated with CD8 coreceptor. Nat. Commun. 5, 5624 (2014).
Hong, J. et al. A TCR mechanotransduction signaling loop induces negative selection in the thymus. Nat. Immunol. 19, 1379–1390 (2018).
Rudd, C. E., Trevillyan, J. M., Dasgupta, J. D., Wong, L. L. & Schlossman, S. F. The CD4 receptor is complexed in detergent lysates to a protein-tyrosine kinase (pp58) from human T lymphocytes. Proc. Natl Acad. Sci. USA 85, 5190–5194 (1988).
Burgess, K. E. et al. Biochemical identification of a direct physical interaction between the CD4:p56lck and Ti(TcR)/CD3 complexes. Eur. J. Immunol. 21, 1663–1668 (1991).
Duplay, P., Thome, M., Herve, F. & Acuto, O. P56(Lck) interacts via its Src homology-2 domain with the Zap-70 kinase. J. Exp. Med. 179, 1163–1172 (1994).
Thome, M., Germain, V., DiSanto, J. P. & Acuto, O. The p56lck SH2 domain mediates recruitment of CD8/p56lck to the activated T cell receptor/CD3/zeta complex. Eur. J. Immunol. 26, 2093–2100 (1996).
Xu, H. & Littman, D. R. A kinase-independent function of Lck in potentiating antigen-specific T cell activation. Cell 74, 633–643 (1993).
Dustin, M. L. The immunological synapse. Cancer Immunol. Res. 2, 1023–1033 (2014).
Tabdanov, E. et al. Micropatterning of TCR and LFA-1 ligands reveals complementary effects on cytoskeleton mechanics in T cells. Integr. Biol. 7, 1272–1284 (2015).
Ju, L. et al. Dual Biomembrane Force Probe enables single-cell mechanical analysis of signal crosstalk between multiple molecular species. Sci. Rep. 7, 14185 (2017).
Chesla, S. E., Selvaraj, P. & Zhu, C. Measuring two-dimensional receptor-ligand binding kinetics by micropipette. Biophys. J. 75, 1553–1572 (1998).
Chen, W., Zarnitsyna, V. I., Sarangapani, K. K., Huang, J. & Zhu, C. Measuring receptor-ligand binding kinetics on cell surfaces: from adhesion frequency to thermal fluctuation methods. Cell Mol. Bioeng. 1, 276–288 (2008).
Williams, T. E., Selvaraj, P. & Zhu, C. Concurrent binding to multiple ligands: kinetic rates of CD16b for membrane-bound IgG1 and IgG2. Biophys. J. 79, 1858–1866 (2000).
Zhu, C. & Williams, T. E. Modeling concurrent binding of multiple molecular species in cell adhesion. Biophys. J. 79, 1850–1857 (2000).
Williams, T. E., Nagarajan, S., Selvaraj, P. & Zhu, C. Concurrent and independent binding of Fcgamma receptors IIa and IIIb to surface-bound IgG. Biophys. J. 79, 1867–1875 (2000).
Hui, E. & Vale, R. D. In vitro membrane reconstitution of the T-cell receptor proximal signaling network. Nat. Struct. Mol. Biol. 21, 133–142 (2014).
Arnold, L. D. et al. Pyrrolo[2,3-d]pyrimidines containing an extended 5-substituent as potent and selective inhibitors of lck I. Bioorg. Med. Chem. Lett. 10, 2167–2170 (2000).
Zarnitsyna, V. I. et al. Memory in receptor-ligand-mediated cell adhesion. Proc. Natl Acad. Sci. USA 104, 18037–18042 (2007).
Liu, B., Chen, W. & Zhu, C. Molecular force spectroscopy on cells. Annu. Rev. Phys. Chem. 66, 427–451 (2015).
Ju, L. & Zhu, C. Benchmarks of biomembrane force probe spring constant models. Biophys. J. 113, 2842–2845 (2017).
Arcaro, A. et al. CD8 beta endows CD8 with efficient coreceptor function by coupling T cell receptor/CD3 to raft-associated CD8/p56(lck) complexes. J. Exp. Med. 194, 1485–1495 (2001).
Yachi, P. P., Ampudia, J., Zal, T. & Gascoigne, N. R. Altered peptide ligands induce delayed CD8-T cell receptor interaction–a role for CD8 in distinguishing antigen quality. Immunity 25, 203–211 (2006).
Zamoyska, R. et al. Inability of CD8 alpha' polypeptides to associate with p56lck correlates with impaired function in vitro and lack of expression in vivo. Nature 342, 278–281 (1989).
Leng, Q., Ge, Q., Nguyen, T., Eisen, H. N. & Chen, J. Stage-dependent reactivity of thymocytes to self-peptide-MHC complexes. Proc. Natl Acad. Sci. USA 104, 5038–5043 (2007).
Legrand, N. & Freitas, A. A. CD8+ T lymphocytes in double αβ TCR transgenic mice. I. TCR expression and thymus selection in the absence or in the presence of self-antigen. J. Immunol. 167, 6150–6157 (2001).
Wolfer, A., Wilson, A., Nemir, M., MacDonald, H. R. & Radtke, F. Inactivation of Notch1 impairs VDJbeta rearrangement and allows pre-TCR-independent survival of early alpha beta lineage thymocytes. Immunity 16, 869–879 (2002).
Ishida, Y., Agata, Y., Shibahara, K. & Honjo, T. Induced expression of PD-1, a novel member of the immunoglobulin gene superfamily, upon programmed cell death. EMBO J. 11, 3887–3895 (1992).
Sharpe, A. H. & Pauken, K. E. The diverse functions of the PD1 inhibitory pathway. Nat. Rev. Immunol. 18, 153–167 (2018).
Callahan, M. K., Postow, M. A. & Wolchok, J. D. Targeting T cell co-receptors for cancer therapy. Immunity 44, 1069–1078 (2016).
Sugiura, D. et al. Restriction of PD-1 function by cis-PD-L1/CD80 interactions is required for optimal T cell responses. Science 364, 558–566 (2019).
Zhao, Y. et al. PD-L1:CD80 cis-heterodimer triggers the co-stimulatory receptor CD28 while repressing the inhibitory PD-1 and CTLA-4 pathways. Immunity 51, 1059–1073.e1059 (2019).
Dustin, M. L. & Springer, T. A. T-cell receptor cross-linking transiently stimulates adhesiveness through LFA-1. Nature 341, 619–624 (1989).
Springer, T. A. & Dustin, M. L. Integrin inside-out signaling and the immunological synapse. Curr. Opin. Cell Biol. 24, 107–115 (2012).
Chen, Y. et al. An integrin alphaIIbbeta3 intermediate affinity state mediates biomechanical platelet aggregation. Nat. Mater. 18, 760–769 (2019).
Pielak, R. M. et al. Early T cell receptor signals globally modulate ligand:receptor affinities during antigen discrimination. Proc. Natl Acad. Sci. USA 114, 12190–12195 (2017).
Fife, B. T. et al. Interactions between PD-1 and PD-L1 promote tolerance by blocking the TCR-induced stop signal. Nat. Immunol. 10, 1185–1192 (2009).
Honda, T. et al. Tuning of antigen sensitivity by T cell receptor-dependent negative feedback controls T cell effector function in inflamed tissues. Immunity 40, 235–247 (2014).
McClanahan, F. et al. PD-L1 checkpoint blockade prevents immune dysfunction and leukemia development in a mouse model of chronic lymphocytic leukemia. Blood 126, 203–211 (2015).
McClanahan, F. et al. Mechanisms of PD-L1/PD-1-mediated CD8 T-cell dysfunction in the context of aging-related immune defects in the Eµ-TCL1 CLL mouse model. Blood 126, 212–221 (2015).
Saunders, P. A., Hendrycks, V. R., Lidinsky, W. A. & Woods, M. L. PD-L2:PD-1 involvement in T cell proliferation, cytokine production, and integrin-mediated adhesion. Eur. J. Immunol. 35, 3561–3569 (2005).
Zhang, X. et al. Structural and functional analysis of the costimulatory receptor programmed death-1. Immunity 20, 337–347 (2004).
Lee-Fruman, K. K., Collins, T. L. & Burakoff, S. J. Role of the Lck Src homology 2 and 3 domains in protein tyrosine phosphorylation. J. Biol. Chem. 271, 25003–25010 (1996).
Thome, M., Duplay, P., Guttinger, M. & Acuto, O. Syk and ZAP-70 mediate recruitment of p56lck/CD4 to the activated T cell receptor/CD3/zeta complex. J. Exp. Med. 181, 1997–2006 (1995).
Mørch, A. M., Bálint, Š., Santos, A. M., Davis, S. J. & Dustin, M. L. Coreceptors and TCR signaling – the strong and the weak of it. Front. Cell Dev. Biol. 8, 597627 (2020).
Li, L. et al. Ionic CD3-Lck interaction regulates the initiation of T-cell receptor signaling. Proc. Natl Acad. Sci. USA 114, E5891–E5899 (2017).
Mizuno, R. et al. PD-1 primarily targets TCR signal in the inhibition of functional T cell activation. Front. Immunol. 10, 630 (2019).
Barr, A. J. et al. Large-scale structural analysis of the classical human protein tyrosine phosphatome. Cell 136, 352–363 (2009).
Rossy, J., Owen, D. M., Williamson, D. J., Yang, Z. & Gaus, K. Conformational states of the kinase Lck regulate clustering in early T cell signaling. Nat. Immunol. 14, 82–89 (2013).
Xu, X. et al. Autophagy is essential for effector CD8(+) T cell survival and memory formation. Nat. Immunol. 15, 1152–1161 (2014).
Kurachi, M. et al. Optimized retroviral transduction of mouse T cells for in vivo assessment of gene function. Nat. Protoc. 12, 1980 (2017).
Cornetta, K., Pollok, K. E. & Miller, A. D. Transduction of primary hematopoietic cells by retroviral vectors. Cold Spring Harb. Protoc. 2008, pdb.prot4884 (2008).
Collins, A. V. et al. The interaction properties of costimulatory molecules revisited. Immunity 17, 201–210 (2002).
Edelstein, A. D. et al. Advanced methods of microscope control using μManager software. J. Biol. Methods 1, e10 (2014).

Acknowledgements

This work was supported by NIH grants U01CA214354 (to C.Z.), R01CA243486 (to C.Z.), and U01CA250040 (to C.Z. and R.A.).

Author information

Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, GA, USA: Kaitao Li, Jintian Lyu & Cheng Zhu
Parker H. Petit Institute for Bioengineering and Biosciences, Georgia Institute of Technology, Atlanta, GA, USA: Kaitao Li, Zhou Yuan, Jintian Lyu & Cheng Zhu
George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA, USA: Zhou Yuan & Cheng Zhu
Emory Vaccine Center, Emory University School of Medicine, Atlanta, GA, USA: Eunseon Ahn & Rafi Ahmed
Department of Microbiology and Immunology, Emory University School of Medicine, Atlanta, GA, USA
Radcliffe Department of Medicine and Medical Research Council Human Immunology Unit, University of Oxford, John Radcliffe Hospital, Headington, Oxford, UK: Simon J. Davis

Author contributions

K.L., R.A., and C.Z. designed experiments; K.L., Z.Y., J.L., and E.A. performed experiments; K.L., Z.Y., J.L., E.A., and C.Z. analyzed the data. K.L., S.J.D., and C.Z. wrote the paper with contributions from other authors. Correspondence to Cheng Zhu.

Peer review information: Nature Communications thanks Christopher Rudd and Philip van der Merwe for their contribution to the peer review of this work. Peer reviewer reports are available.

Li, K., Yuan, Z., Lyu, J. et al. PD-1 suppresses TCR-CD8 cooperativity during T-cell antigen recognition. Nat Commun 12, 2746 (2021).
https://doi.org/10.1038/s41467-021-22965-9
Improving patient self-description in Chinese online consultation using contextual prompts

Xuedong Li¹, Dezhong Peng¹ & Yue Wang² (ORCID: orcid.org/0000-0002-0278-2347)

BMC Medical Informatics and Decision Making, volume 22, Article number: 170 (2022)

Abstract

Background
Online health care consultation has been widely adopted to supplement traditional face-to-face patient-doctor interactions. Patients benefit from this new modality of consultation because it offers time flexibility and eliminates the distance barrier. However, unlike the traditional face-to-face approach, the success of online consultation heavily relies on the accuracy of patient-reported conditions and symptoms. The asynchronous interaction pattern further requires clear and effective patient self-description to avoid lengthy conversation, facilitating timely support for patients.

Methods
Inspired by the observation that doctors talk to patients with the goal of eliciting information to reduce uncertainty about patients' conditions, we proposed and evaluated a machine learning-based computational model towards this goal. Key components of the model include (1) how a doctor diagnoses (predicts) a disease given a natural language description of a patient's conditions, (2) how to measure whether the patient's description is incomplete and more information is needed from the patient; and (3) given the patient's current description, what further information is needed to help a doctor reach a diagnosis decision. This model makes it possible for an online consultation system to immediately prompt a patient to provide more information if it senses that the current description is insufficient.

Results
We evaluated the proposed method by using classification-based metrics (accuracy, macro-averaged F-score, area under the receiver operating characteristics curve, and Matthews correlation coefficient) and an uncertainty-based metric (entropy) on three Chinese online consultation corpora. When there was one consultation round, our method delivered better disease prediction performance than the baseline method (No Prompts) and two heuristic methods (Uncertainty-based Prompts and Certainty-based Prompts). The disease prediction performance correlated with the uncertainty of patients' self-described symptoms and conditions. However, the heuristic solutions decreased large amounts of uncertainty while ignoring the context, which did not improve the prediction performance.

Conclusions
By elaborate design, a machine-learning algorithm can learn, from doctor-patient conversations, the inner connection between a patient's self-description and the specific information doctors need, and use it to provide prompts that enrich the information in patient self-descriptions for better disease prediction, thereby achieving online consultation with fewer rounds of doctor-patient conversation.

Background

Advances in information technologies have boosted the development and adoption of online consultation for health care. For example, haodf.com, the leading online consultation platform in China, had provided service for more than 58 million patients as of 2019 [1]. Supplementing traditional face-to-face consultation, this new channel of patient-doctor interaction has been playing an increasingly crucial role in modern health care systems for several reasons. First, online consultation can alleviate the problem of imbalanced distribution of precious health care resources.
Most medical centers are located in developed areas, which makes it difficult for many people living in rural areas to access high-quality health care services [2]. Online consultation can eliminate the physical distance between doctors and patients, allowing sick people to acquire timely diagnoses from doctors even thousands of miles away from their home. Second, online consultation helps decrease the load on the health care system. In populous countries such as China, hospitals are often over-crowded with patients. This phenomenon is partly caused by many people going to the hospital for periodic checkups of chronic diseases or preliminary diagnoses, whose health care needs are not critical or urgent. Internet-based diagnoses can triage these patients by helping them decide whether they really need to go see a doctor in person. This can reduce the over-crowding of hospitals and improve the throughput of the health care system. Third, the unexpected COVID-19 pandemic caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) in 2019 has attracted ever-increasing attention to contact-free diagnosis methods. As a result, the demand for online consultation sharply increased. According to a report from Xinhuanet, an official news website in China, the number of people consulting doctors through the Internet was more than 100,000 per day across China during the lockdown period, increasing by a factor of 6–7 compared to normal times (Footnote 1).

Indeed, online patient consultation has the advantage that patients can conveniently visit doctors almost anywhere, anytime, through the Internet. Its primary drawback, however, is communication inefficiency [3]. To better understand this problem, let us first recall the process of asynchronous consultation. Normally, a diagnosis starts with the doctor asking a patient what is wrong and the patient giving the doctor a self-description of how he or she feels. Then the doctor continues to ask more questions, such as "Does this part or that part hurt?", "What medicine did you take?", "What is your body temperature?", and so on. The patient answers these questions, and the doctor may follow up with more questions. This back-and-forth exchange does not end until the doctor reaches a confident diagnosis, or assessment, of the patient's medical problems. How many rounds of conversation this process requires largely depends on the completeness of the information provided by the patient. In the example above, the patient needed at least four rounds to make the doctor understand his condition. However, if a patient tells the doctor how he feels, which part(s) of his body hurt, and what medicine he has taken in the first round of self-description, he will greatly reduce the number of rounds of question-answering with the doctor before a diagnosis is made. In face-to-face diagnoses, the difference between four rounds and one round of conversation does not matter because any question can be answered almost immediately. However, when the diagnosis is done through the Internet, the difference is critical. For example, on haodf.com, consultations are not real-time but work like an asynchronous chat. The time it takes to receive a doctor's reply depends on how busy the doctor is with duties offline. Similarly, the patient may not give an instant response for various reasons. Under this situation, a four-round conversation can take much longer than a one-round one.
The problem of prolonged consultations can substantially degrade the user experience for both doctors and patients [4]. Such effects can be magnified in a society like China with a low doctor-to-patient ratio. To reduce the conversation rounds that doctors need in an online consultation, we seek an approach that can automatically prompt patients to provide more information during their first round of input, so that they provide as complete information as possible early in the conversation.

Some studies have directly used powerful machine-learning models to assign disease labels to patients according to their health records [5,6,7,8,9,10,11]. This kind of approach assumes that a patient's full information has already been collected, which is not the case in online consultation scenarios. Other studies resorted to dialogue systems to automatically guide patients to detail their conditions and then generate diagnosis results [4, 12,13,14]. These methods require not only nuanced understanding of the patient's language and fluent generation of the doctor's language, but also a large and diverse collection of labeled conversation data, which is extremely difficult to create. Other studies select existing answers given a patient's utterance to drive the dialogue [15,16,17]. The difficulty with such methods is that a large pool of question–answer pairs is required, which is also difficult to obtain.

Building on these studies, we are motivated by three characteristics observed in health care consultations: (1) to be certain, doctors need a certain amount of information; (2) doctor-patient conversations involve multiple-choice questions in many cases; and (3) doctors converse with patients in order to decrease their uncertainty and increase their confidence about the correct diagnosis. We designed a machine learning based framework to fulfill our aim. We first trained a disease diagnosis function on full information consisting of patient self-descriptions and doctor-patient conversations, in order to measure the certainty for an unseen patient's self-described symptoms; then, we built a collection of potential prompts by selecting the top k TFIDF words from doctor-patient conversations; finally, we used prompt and patient self-description pairs to train an information elicitation function, wherein a useful prompt is one that increases the predicted probability of the correct diagnosis. When a self-description is evaluated as under-informative by the diagnosis function, the elicitation function launches to provide prompts that help make it more informative.

In this study, our main contributions can be summarized as follows: (1) we designed a machine learning based framework to reduce the rounds of doctor-patient conversation in online consultations; (2) we instantiated the proposed framework with different models and prompt strategies; and (3) we conducted a number of experiments on the different instantiations over three Chinese online diagnosis datasets and found that the instantiation BERT + Learned Prompts delivered the best performance most of the time.

Prior work

Dialogue diagnosis

Some works study the dialogue between doctors and patients during medical consultation for diagnosis. Tang et al. [4] proposed a framework that casts dialogue diagnosis as a Markov decision process and trains the dialogue policy via reinforcement learning.
In general, the working process of the framework is like that of MYCIN [18]: it starts with a patient's self-report and inquires about symptoms from the patient, and this loop does not end until the system meets an ending condition. Wei et al. [12] used a similar scheme but adopted a deep Q-network to parameterize the policy. These two works mostly rely on data-driven learning. To utilize external information, Lin et al. [13] proposed an end-to-end knowledge-based dialogue system that incorporates a knowledge graph into dialogue management, and Xu et al. [14] used a symptom graph to implement a goal attention mechanism that captures more symptom-related information from the dialogue. Unlike all of these works, which use the conversation form to perform diagnosis, our work learns from conversations.

Answer selection

Guiding a user to complete their information can be done by providing the answers to the most related questions from a question–answer pool. Technically this is an answer selection task. Feng et al. [15] designed six different architectures based on convolutional neural networks (CNNs) to select the right answer for a question in the insurance domain. In that work, CNNs are used to extract representations of questions and answers at different steps in the proposed framework. Another work also adopted a CNN as the sentence representation extractor [16]; the difference from the previous one is a non-linear tensor layer at the final layer to compare the similarity of question and answer. Another popular deep learning model, long short-term memory (LSTM), has also been applied to this task. Tan et al. [17] designed a bidirectional LSTM (BiLSTM) based model as a baseline and further extended it by stacking a CNN on top of the BiLSTM. The difference between our method and the above ones is that we do not retrieve an answer directly but rather a kind of hint.

Disease diagnosis with machine learning

Machine learning technology has been widely applied in disease diagnosis. Garg et al. [5] applied several feature selection methods and machine learning algorithms to text-based electronic health records to classify ischemic stroke. Malik et al. [7] developed a general framework for recording diagnostic data and used machine learning algorithms to analyze patient data based on multiple features and clinical observations for eye disease classification. Lucas et al. [8] used support vector machines to search for patterns in electroencephalography epochs to differentiate patients with Alzheimer disease. Li et al. [6] used an existing knowledge base as an additional information source to improve rare disease classification performance. These methods tried to improve performance by giving the classifier better generalization ability or by incorporating external knowledge; in this study, we introduce a more human-like way to reach better performance.

Methods

Problem formulation

At a high level, prompting a patient to provide more complete information can be viewed as an active information-seeking problem [19]. To motivate our problem formulation, we describe a simplified example of diagnosis as follows. We assume the doctor can differentiate two diseases: pneumonia and enteritis. To reach the diagnosis of pneumonia, a patient has to simultaneously present the following conditions: fever, asthenia, and dry cough. To reach the diagnosis of enteritis, the conditions include fever, asthenia, and diarrhea. Suppose a patient comes to the doctor for consultation and says he has fever and feels asthenic.
In this case, according to the above diagnostic rules, the doctor cannot determine which of the two diseases the patient has; both are equally possible. To be certain about the diagnosis, the doctor needs to ask whether the patient experienced dry cough or diarrhea. When the third condition is confirmed, the doctor can reach a conclusion: if the patient has dry cough, he probably has pneumonia; otherwise, he is more likely to have enteritis (assuming that dry cough and diarrhea are mutually exclusive).

We make several observations from this example. First, when the information is incomplete, the doctor asks questions to elicit more information. Second, such questions are asked to decrease uncertainty in diagnosing a disease. Third, each question expects a categorical answer. After obtaining further information, the doctor combines it with the initial information towards making a diagnosis with more certainty.

We now formulate the consultation process as follows. Given a patient's self-description \(\mathbf{x}\) (represented as a vector of information), a doctor attempts to make a diagnosis by mapping \(\mathbf{x}\) to \(\mathbf{y}\), where \(\mathbf{y}\) is a probability distribution over the set of diseases in question. The doctor's mapping/reasoning process can be represented as a function \(f\), i.e., \(\mathbf{y} = f(\mathbf{x})\). The most probable disease \(y^{*} \in \mathbf{y}\) would be chosen as the diagnosed disease. If the patient's self-description \(\mathbf{x}\) is complete, the doctor will confidently assert a diagnosis \(y^{*}\) with high certainty. However, if \(\mathbf{x}\) is incomplete, the doctor may be uncertain about the diagnosis. In terms of the disease probability vector \(\mathbf{y}\), \(y^{*}\) may not have a high enough probability, or multiple diseases may have nearly as high probabilities as \(y^{*}\). To reduce uncertainty about the diagnosis, the doctor will need more information \(\mathbf{z}\) to be collected. \(\mathbf{z}\) is another vector of information that answers the doctor's follow-up questions after seeing \(\mathbf{x}\). After obtaining \(\mathbf{z}\), the doctor will make a diagnosis again by invoking \(\mathbf{y}' = f(\mathbf{x} + \mathbf{z})\). The hope is that this time, the candidate diagnosis \(y^{*} \in \mathbf{y}'\) is correctly identified as the most probable disease with high certainty.

Contextual prompts

Ideally, a computer algorithm can capture the doctor's follow-up questioning process as a model \(g\) that generates the questions \(\mathbf{z}\) based on \(\mathbf{x}\), i.e., \(\mathbf{z} = g(\mathbf{x})\). This allows the online platform to ask follow-up questions as soon as the user has typed in his initial description, instead of waiting for the doctor to ask such questions. We call \(\mathbf{z} = g(\mathbf{x})\) contextual prompts, as the prompts \(\mathbf{z}\) depend on the context \(\mathbf{x}\). Such contextual prompts can be useful as they can save doctors and patients from time-consuming asynchronous communications. Instead, the online platform can prompt the patient to enter more information based on what has been entered so far.
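To make the formulation concrete, the following is a minimal sketch of one consultation round with contextual prompts, under our own naming (it assumes \(f\) maps a text description to a disease probability vector, \(g\) maps a description to a utility score per term in a fixed prompt vocabulary, and \(\tau\) is the entropy threshold instantiated below):

```python
import math

def entropy(probs):
    """Shannon entropy of a predicted disease distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def consult(x, f, g, tau, top_q=3):
    """One consultation round: prompt only if the prediction is too uncertain."""
    y = f(x)
    if entropy(y) <= tau:                      # confident enough: no prompt needed
        return max(range(len(y)), key=y.__getitem__), []
    scores = g(x)                              # dict: prompt term -> utility score
    prompts = sorted(scores, key=scores.get, reverse=True)[:top_q]
    x_updated = x + " " + " ".join(prompts)    # updating operation: append prompts
    y2 = f(x_updated)
    return max(range(len(y2)), key=y2.__getitem__), prompts
```

In a deployed system the selected terms would instead be shown to the patient, whose revised description replaces the mechanical concatenation used here.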
Computational modeling of the consultation process with contextual prompts

The above conceptual formulation includes a few components, which we further instantiate below.

Patient's initial self-description \(\mathbf{x}^{(i)}\). This is a short natural language document written by the \(i\)-th patient when they initiate the request for online consultation. Here we assume different documents are written by different patients.

Ground-truth diagnosis result \(\mathbf{y}^{(i)}\). This is the actual diagnosis given by the doctor to the \(i\)-th patient. Formally, if there are \(m\) diseases, then \(\mathbf{y}^{(i)}\) is an \(m\)-dimensional one-hot vector with a 1 at the dimension corresponding to the diagnosed disease and 0 elsewhere.

Diagnosis function \(f\). The diagnosis function \(f\) takes a patient's self-description (with or without prompts) as input and outputs a probability distribution \(\mathbf{y}\) over the set of \(m\) predefined diseases. This function is instantiated as a text classification model trained on the online consultation corpus. Further, as a "simulated doctor", this function needs to reason like doctors who made the disease prediction after having complete information about the patient. Thus, we train \(f\) with complete information, where the input document is a concatenation of the patient's initial description and the follow-up doctor-patient conversation.

Uncertainty measure and threshold. The consultation process involves a decision point: if the predicted disease distribution has uncertainty higher than some threshold \(\tau\), follow-up questions (or contextual prompts) shall be invoked. Here we use Shannon's information entropy to measure the degree of uncertainty of a probability distribution [20]. Given a predicted disease distribution vector \(\mathbf{y}\), we calculate the entropy \(H(\mathbf{y})\) as its uncertainty measure:

$$H(\mathbf{y}) = -\sum_{j = 1}^{m} y_{j} \log(y_{j}),$$

where \(m\) is the number of diseases and \(y_{j}\) is the predicted probability for the \(j\)-th disease. In a pilot study, we also explored margin (absolute difference between the highest and the second highest probabilities) and confidence (absolute difference between the highest probability and \(1/m\)) [21] to measure uncertainty. The impact of different uncertainty measures on experimental results was minimal. We need a threshold \(\tau\) to decide whether the uncertainty is high enough to invoke contextual prompts. We set the average entropy value on the training data as the threshold. That is, for every patient's initial self-description \(\mathbf{x}^{(i)}\) in the training data, we apply \(f\) to obtain a predicted probability vector \(\mathbf{y}^{(i)}\), which has an uncertainty measure \(H(\mathbf{y}^{(i)})\). The threshold \(\tau\) equals the average of \(H(\mathbf{y}^{(i)})\) as \(i\) exhausts all \(\mathbf{x}^{(i)}\)'s in the training data.
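As a concrete reference, here is a minimal sketch of the entropy computation and the threshold \(\tau\) (assuming `f` wraps a fitted classifier and maps one raw-text description to a probability vector; the naming is ours):

```python
import numpy as np

def entropy(y):
    """H(y) = -sum_j y_j log(y_j) for a probability vector y."""
    y = np.asarray(y, dtype=float)
    nz = y[y > 0]                      # skip zero entries: 0 * log(0) -> 0
    return float(-(nz * np.log(nz)).sum())

def entropy_threshold(f, train_descriptions):
    """tau = average entropy of f's predictions over all training self-descriptions."""
    return float(np.mean([entropy(f(x)) for x in train_descriptions]))
```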
Contextual prompt vector \(\mathbf{z}\). Each dimension in \(\mathbf{z}\) represents whether to prompt the user to describe his experience with a specific medical term. Knowing information about these terms should help the doctor better assess the patient's condition and make a diagnosis. We apply a data-driven approach to construct this vocabulary of prompt terms. Specifically, we take the \(k\) words with the highest TFIDF weights from doctor-patient conversations in the training corpus. A predicted contextual prompt vector \(\mathbf{z} = [z_{1}, z_{2}, \ldots, z_{k}]\) has \(k\) dimensions. The elements can take real values indicating the predicted utility of prompting a user to mention a term in the follow-up conversation. In this work, our focus is the way of information elicitation, so to limit the computational complexity, we limit the prompt vocabulary size to \(k = 100\).

Information elicitation function \(g\). As formulated above, the function \(g\) generates the contextual prompt \(\mathbf{z}\) given the patient's initial description \(\mathbf{x}\). Since each of the \(k\) dimensions in \(\mathbf{z}\) represents whether a term should be present or absent, we can view \(g\) as a function that has \(k\) real-valued outputs, each estimating an importance score for a term given the context \(\mathbf{x}\). Our instantiations of the function \(g\) are described in the next subsection.

Updating operation \(+\). Patients use natural language to revise their initial self-description under the guidance of prompts in the real world. We therefore assume that the given prompts would be mentioned in the new description: \(\mathbf{x} + \mathbf{z}\) simply appends \(\mathbf{z}\) to \(\mathbf{x}\).

Instantiating the information elicitation function \(g\)

Following the way doctors ask questions, with the goal of decreasing their uncertainty about the correct diagnosis based on the patient's self-description, we designed a strategy named "Learned Prompts". We train a classification model \(\mathbf{z} = g(\mathbf{x})\), where \(\mathbf{z}\) is an array of \(k\) independent probabilities indicating the chance of prompting each term. This effectively translates \(g\) into \(k\) independent binary classifiers, each predicting the chance of prompting the \(l\)-th term, \(1 \le l \le k\). To construct the training data \(\{(\mathbf{x}^{(i)}, t_{l}^{(i)})\}\) for the \(l\)-th binary classifier, the ground truth label \(t_{l}^{(i)}\) for the \(i\)-th training instance is determined as follows: \(t_{l}^{(i)} = 1\) if adding \(z_{l}\) (the \(l\)-th prompt term) to the initial description \(\mathbf{x}^{(i)}\) increases the predicted probability of the diagnosed disease, and \(t_{l}^{(i)} = 0\) otherwise. The rationale here is that a term \(z_{l}\) should have a high chance of being prompted if mentioning it in later conversations would increase the doctor's certainty about the disease that is ultimately diagnosed. Formally, \(t_{l}^{(i)} = 1\) if \(\langle \mathbf{y}^{(i)}, f(\mathbf{x}^{(i)} + z_{l}) \rangle > \langle \mathbf{y}^{(i)}, f(\mathbf{x}^{(i)}) \rangle\), where \(\langle \mathbf{a}, \mathbf{b} \rangle\) is the dot product of \(\mathbf{a}\) and \(\mathbf{b}\), and \(\mathbf{x}^{(i)} + z_{l}\) concatenates the document \(\mathbf{x}^{(i)}\) and the word \(z_{l}\).
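The two data-driven pieces just described, selecting the top-\(k\) TFIDF terms and constructing the per-term labels \(t_{l}^{(i)}\), can be sketched as follows. The TFIDF aggregation (summing weights over documents) is one reasonable reading since the paper does not specify it, and `predict_proba` is assumed to map one raw-text description to a probability vector:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def prompt_vocabulary(conversations, k=100):
    """Top-k terms by aggregated TFIDF weight over doctor-patient conversations."""
    vec = TfidfVectorizer(token_pattern=r"(?u)\S+")  # text is pre-segmented Chinese
    tfidf = vec.fit_transform(conversations)         # (num_docs, num_terms)
    weights = np.asarray(tfidf.sum(axis=0)).ravel()  # summed weight per term
    terms = vec.get_feature_names_out()
    return [terms[j] for j in np.argsort(weights)[::-1][:k]]

def prompt_labels(x_i, y_i, vocab, predict_proba):
    """t_l = 1 iff appending term z_l raises the predicted probability of the
    diagnosed disease y_i; these labels train the l-th binary classifier of g."""
    base = predict_proba(x_i)[y_i]
    return [int(predict_proba(x_i + " " + z)[y_i] > base) for z in vocab]
```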
For comparison purposes, we also present three baseline strategies as follows.

No prompts. Under this strategy, \(g\) does not output a prompt. This effectively assumes no information elicitation process when making a diagnosis.

Certainty-based prompts. Given \(\mathbf{x}\), we measure uncertainty for all \(f(\mathbf{x} + z_{l})\), \(1 \le l \le k\). We then rank the prompt terms \(z_{l}\) such that the terms that made the disease prediction more certain (lower entropy) are ranked at the top. The rationale is that if knowing more about a term can increase the doctor's certainty about the diagnosis, that term may be useful.

Uncertainty-based prompts. This strategy is similar to Certainty-based Prompts but ranks prompt terms in reverse order (prioritizing terms that made the prediction more uncertain). This is inspired by the uncertainty-based sampling method in active learning [21].

Selecting top prompts

When an online platform prompts a user to say more about their medical conditions, the prompt may only contain a small number of terms (e.g., "Can you continue to describe aspects x, y, and z?"). Therefore, we only consider the top \(q\) terms ranked by the scores assigned by \(g\). In this study, we vary \(q\) across the range \(\{1, 2, 3, 4, 5, 6, 7, 8, 9, 10\}\).

Summarizing the above description, Fig. 1 depicts the overall workflow of the proposed method. The dashed box denotes a switch that controls which strategy the information elicitation function \(g\) chooses.

Fig. 1. Workflow of diagnosis with prompts.

Compared methods

Because both \(f\) and \(g\) are classifiers, we explored two kinds of models. One was a traditional classifier working on a sparse representation such as bag-of-words (BOW), namely logistic regression; we use BOW to denote this model in the rest of this paper. The other model was Bidirectional Encoder Representations from Transformers (BERT) [22]. Because the datasets we use below are Chinese, we configured the Chinese BERT-base model released by Google (Footnote 2). Each model can work with the four prompt strategies, so we had eight methods to compare. We named each method with the unified pattern "model name + prompt strategy". The eight methods are: BOW + No Prompts, BOW + Learned Prompts, BOW + Certainty-based Prompts, BOW + Uncertainty-based Prompts, BERT + No Prompts, BERT + Learned Prompts, BERT + Certainty-based Prompts, and BERT + Uncertainty-based Prompts.
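For reference, one plausible instantiation of the BOW model is a logistic regression over bag-of-words features of pre-segmented text (the paper does not report the exact features or hyperparameters, so this is only a sketch):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def make_bow_model():
    """Logistic regression over bag-of-words counts of whitespace-segmented text."""
    return make_pipeline(
        CountVectorizer(token_pattern=r"(?u)\S+"),  # keep whitespace-delimited tokens
        LogisticRegression(max_iter=1000),
    )
```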
Experiment and evaluation

We used three Chinese patient diagnosis datasets to demonstrate the effectiveness of our method. They are from three different areas of medicine: pediatrics, andrology, and cardiology. Figure 2a–c shows the distributions of the three datasets.

Fig. 2. The data distribution of the three departments.

The corpora are all from haodf.com, the largest Chinese online platform that connects patients to doctors. On the platform, a diagnosis starts with a patient's main concerns in text. Then a doctor converses with the patient to give his or her suggestion or ask more questions to better understand the patient's condition. In the end, the doctor labels the consultation with a disease. We illustrate the data pattern on haodf.com in Fig. 3.

Fig. 3. A screenshot of a diagnosis on haodf.com (source: https://www.haodf.com/bingcheng/8821240724.html, accessed June 2021). The doctor's statements are in light-blue bubbles; the patient's statements are in light-gray bubbles. English translation of the Chinese post is included to improve readability.

Each document consists of two parts: an initial description (ID) and a clarification. An initial description is a patient's self-description of symptoms used to consult a doctor, and a clarification is the conversation between the patient and the doctor. Following the notation used in the Problem Formulation section, we use \(X\) and \(C\) to denote the collections of IDs and clarifications respectively, such that \(X = \{x_{1}, x_{2}, \ldots, x_{n}\}\), where \(x_{i}\) denotes the \(i\)-th example's ID, and \(C = \{c_{1}, c_{2}, \ldots, c_{n}\}\), where \(c_{i}\) denotes the \(i\)-th example's clarification. Both \(x_{i}\) and \(c_{i}\) are text sequences. The full information of a diagnosis should incorporate both the ID and the clarification, so we denote it by \(X_{comp\text{-}info} = \{x_{1} + c_{1}, x_{2} + c_{2}, \ldots, x_{n} + c_{n}\}\), where \(x_{i} + c_{i}\) denotes putting the \(i\)-th example's ID and clarification together to form one text sequence. When training \(f\), \(X_{comp\text{-}info}\) is used; for testing, \(X\) is used. Table 1 summarizes basic statistics of the three corpora. The pkuseg package was used for Chinese word segmentation [23].

Table 1. Corpora statistics.

Metrics

All paths in Fig. 1 were designed to predict the disease diagnosis as the final output; thus we used the classification metrics of accuracy, macro-averaged F-score, macro-averaged area under the receiver operating characteristic (ROC) curve, and macro-averaged Matthews correlation coefficient (MCC) to evaluate performance. To reveal the correlation of prediction and diagnosis uncertainty, we also used entropy as a metric. Viewing the classification of each individual disease as a binary classification problem, results can be divided into true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN).

Accuracy. Accuracy measures the proportion of correct predictions without considering the difference among classes:

$$\text{accuracy} = \frac{TP + TN}{TP + FP + FN + TN}$$

Macro-averaged F-score. The F-score is the harmonic mean of precision and recall, a metric that balances the two [24]. Recall measures the percentage of TPs among all documents that truly mention a disease; precision measures the percentage of TPs among all documents predicted to mention that disease:

$$\text{F-score} = \frac{2 \times recall \times precision}{recall + precision} = \frac{2 \times TP}{2 \times TP + FP + FN}$$

To measure the classification performance over a set of diseases, we used the macro-averaged F-score:

$$\text{macro-averaged F-score} = \frac{1}{|C|} \sum_{i=1}^{|C|} \text{F-score}_{i},$$

where \(C\) is the set of diseases (classes), and \(\text{F-score}_{i}\) is the F-score of the \(i\)-th disease.

Macro-averaged area under the ROC curve (macro-averaged AUC). The ROC curve shows the performance of a classification model at all classification thresholds. The curve plots two parameters, the true-positive rate (TPR) and the false-positive rate (FPR), defined as follows:

$$TPR = \frac{TP}{TP + FN},$$

$$FPR = \frac{FP}{FP + TN}$$

The area under the ROC curve (AUC) measures the entire two-dimensional area underneath the ROC curve [24].
In practice, the calculation of AUC often adopts the Wilcoxon-Mann-Whitney statistic [25]:

$$\text{AUC} = \frac{\sum_{t_0 \in D^0} \sum_{t_1 \in D^1} \mathbf{1}\left[f(t_0) < f(t_1)\right]}{|D^0| \times |D^1|},$$

where \(\mathbf{1}[f(t_0) < f(t_1)]\) denotes an indicator function that returns 1 if \(f(t_0) < f(t_1)\) and 0 otherwise; \(D^0\) is the set of negative examples, and \(D^1\) is the set of positive examples. The macro-averaged AUC was used to calculate the average AUC over all diseases:

$$\text{macro-averaged AUC} = \frac{1}{|C|}\sum_{i=1}^{|C|} AUC_i,$$

where \(C\) is the set of diseases (classes) and \(AUC_i\) is the AUC of the \(i\)-th disease.

Macro-averaged Matthews Correlation Coefficient (Macro-averaged MCC). The Matthews correlation coefficient (MCC) calculates the Pearson product-moment correlation coefficient between actual and predicted values [26]. The formula to calculate MCC is as follows:

$$\text{MCC} = \frac{TP \times TN - FP \times FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}},$$

Weighting each disease equally, the macro-averaged MCC was calculated as follows:

$$\text{macro-averaged MCC} = \frac{1}{|C|}\sum_{i=1}^{|C|} MCC_i,$$

where \(C\) is the set of diseases (classes) and \(MCC_i\) is the MCC of the \(i\)-th disease.

Entropy. Entropy was introduced in the Method section. Here we use the averaged entropy as the metric:

$$\text{averaged } H = \frac{1}{N}\sum_{i=1}^{N} H_i,$$

where \(N\) is the number of test examples in a dataset.

Train-test split
To reduce the variance of results caused by the train-test split, we ran a fivefold cross-validation, where four folds of the data are used for training and one fold is used for testing. The final results are the averaged results over the 5 folds. To avoid the case where some classes do not appear in the training or test set, we applied stratified k-fold splitting.
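The sketch below shows one way these metrics could be computed with scikit-learn for a single fold, treating each disease one-versus-rest for AUC and MCC before macro-averaging, as the equations above do. The function name and the skipping of classes absent from a fold are our choices, not the paper's code.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score,
                             matthews_corrcoef, roc_auc_score)

def evaluate(y_true, y_pred, y_prob, classes):
    """y_true, y_pred: label arrays; y_prob: (n_samples, n_classes) scores
    aligned with `classes`. Returns the five metrics reported in Fig. 4."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    acc = accuracy_score(y_true, y_pred)
    macro_f = f1_score(y_true, y_pred, average="macro")
    aucs, mccs = [], []
    for i, c in enumerate(classes):          # one-vs-rest per disease
        t = (y_true == c).astype(int)
        if t.min() == t.max():               # class absent in this fold
            continue
        aucs.append(roc_auc_score(t, y_prob[:, i]))
        mccs.append(matthews_corrcoef(t, (y_pred == c).astype(int)))
    p = np.clip(y_prob, 1e-12, 1.0)          # averaged predictive entropy
    avg_h = float(np.mean(-np.sum(p * np.log(p), axis=1)))
    return acc, macro_f, float(np.mean(aucs)), float(np.mean(mccs)), avg_h
```

The stratified five-fold split itself is available off the shelf as sklearn.model_selection.StratifiedKFold, with the five per-fold results averaged afterward.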
Results
Figure 4 shows the accuracy, macro-averaged F-score, macro-averaged AUC, macro-averaged MCC and entropy of all eight methods under various numbers of prompts on the three evaluation corpora. The figure consists of 15 subplots, a-o; each column corresponds to one dataset and each row to one metric. In each subplot, the x-axis is the number of prompts and the y-axis is the corresponding metric. For instance, subplot (a) summarizes the accuracy of all methods from 1 to 10 prompts on the pediatrics dataset. The higher the accuracy, macro-averaged F-score, macro-averaged AUC and macro-averaged MCC, the better; the lower the entropy, the better.

In terms of accuracy (subplots a, b, c in Fig. 4), BERT + No Prompts consistently outperformed BOW + No Prompts across the three corpora. When learned prompts were involved, the two baseline methods improved accordingly: BERT + Learned Prompts consistently delivered better performance than BERT + No Prompts, and the performance of BOW + Learned Prompts exceeded that of BOW + No Prompts on two of the three datasets. Certainty-based prompts did not help much: both BERT + Certainty-based Prompts and BOW + Certainty-based Prompts performed slightly worse than the baseline methods. The two methods based on uncertainty-based prompts performed much worse than the baselines.

In terms of macro-averaged metrics, for F-score (subplots d, e, f in Fig. 4), BOW + No Prompts performed better than BERT + No Prompts over all corpora. Apart from andrology, learned prompts benefited the two models on the other corpora. But both certainty-based and uncertainty-based prompts almost always hurt the performance of the two models. As for AUC (subplots g, h, i in Fig. 4) and MCC (subplots j, k, l in Fig. 4), there was no consistent pattern for the baseline methods, but learned prompts consistently improved them across all datasets and helped BERT achieve the best performance on two of three datasets in AUC and all three datasets in MCC. Certainty- and uncertainty-based strategies still did not help in most cases on these two metrics. In terms of entropy (subplots m, n, o in Fig. 4), BERT showed much lower entropy than BOW, and there was a similar pattern in entropy over all corpora: BOW + Uncertainty-based Prompts > BOW + No Prompts > BOW + Learned Prompts > BOW + Certainty-based Prompts > BERT + Uncertainty-based Prompts > BERT + No Prompts > BERT + Learned Prompts > BERT + Certainty-based Prompts. As more prompts are added, learned prompts tend to be more helpful, although the benefit does not grow consistently with the number of prompts.

Discussion
One trend is easy to observe: there is a performance gap between the two classifiers in disease prediction. From the results delivered by BERT + No Prompts and BOW + No Prompts, we can see that BERT performs better than BOW in accuracy. This is partly due to the Transformer's multi-head attention mechanism, which allows BERT to learn long-distance dependencies efficiently. Another reason is BERT's unique pretraining objective, which efficiently incorporates sequence information from both directions of the text. When it comes to macro-averaged metrics, BOW was not always worse than BERT, especially in F-score, where BOW consistently outperformed BERT. This is because BERT performs relatively poorly compared to shallow conventional models, such as SVM, on classes with few samples [27]. Each dataset in the experiments had nearly 50% of classes (diseases) with fewer than 10 examples (eight for training): 15 of 31 in andrology, 41 of 79 in pediatrics and 28 of 53 in cardiology. As a result, the macro-averaged F-score delivered by BERT was lower than that of BOW. In addition, besides the high proportion of minority classes (those with limited examples), the data distribution was highly skewed, which made the classifiers biased toward predicting major classes (those with more examples) [28], and the F-scores of the two classifiers were much lower than their accuracy.

Another intriguing observation is that the trivial solution of decreasing uncertainty did not improve disease prediction. As we described, because of the lack of information in a self-description, doctors may be too uncertain to make an accurate diagnosis. So good performance on disease prediction should correspond with low uncertainty.
Uncertainty-based and learned prompts did follow this hypothesis: compared to the baseline, uncertainty-based prompts increased the uncertainty while decreasing the performance, and learned prompts decreased uncertainty while increasing performance. But certainty-based prompts failed to follow this path: seeking to quickly remove large amounts of uncertainty hurt the prediction performance in most cases. To explore the reason, we use an example (shown in Table 2) from the pediatrics department; this example was classified correctly by BERT + Learned Prompts but incorrectly by BERT + No Prompts, BERT + Certainty-based Prompts and BERT + Uncertainty-based Prompts.

Table 2: An example for the case study

In this example, the self-description is short and involves common symptoms related to several diseases, such as cough, cold and fever. It is unlikely to be classified correctly without additional information. But neither the certainty-based nor the uncertainty-based prompts helped the prediction. The certainty-based prompts were all related to the cough class. Naturally, these words guided the classifier to be biased toward the cough class; therefore, the predicted probability distribution was more concentrated than in the baseline method, lessening uncertainty. But the certainty-based strategy only considered the decrease of uncertainty and ignored the exactness of the prompts, making the classifier behave like a doctor who is eager to reach a decision but does not make comprehensive inquiries. In contrast, the uncertainty-based prompts were too general; those prompts seemed to relate to every disease. They were not helpful for assigning correct labels and might have led the classifiers to give a more even probability distribution over all diseases than the baseline methods, which resulted in the increase in uncertainty. By considering both uncertainty and exactness at the same time, the learned prompts complemented the self-description with related information, thus leading to a correct prediction. In reality, if the patient had followed these prompts to complete his initial self-description, the doctor might have had a better chance of reaching the correct diagnosis even without further conversation. At the same time, more learned prompts possibly compensated for the lack of provided information, which therefore resulted in better prediction performance.

However, when self-descriptions include little diagnosis-related information or are too complex, even learned prompts do not work well. We list two such examples, which BERT + Learned Prompts failed to classify, in Table 3. This is a reasonable phenomenon: for the under-informative cases, even doctors need to ask questions from scratch to get clues for the diagnosis, so it is understandable that the learned prompts did not know what to suggest and failed to give the right information in such cases. And for complicated cases, the intuitive but simple strategy was not capable of capturing the key points needed to give effective suggestions.

Table 3: Cases misclassified by BERT + Learned Prompts

In general, learned prompts can bring improvement, but they worked relatively poorly with BOW on the andrology corpus. This is related to the characteristics of the department: andrology has more across-disease keywords than the other two departments (22.22 in andrology, 19.83 in cardiology and 18.5 in pediatrics). Adding prompts consisting of those words directly might blur the differences among classes for the traditional classifier, which applies a bag-of-words model to represent examples [29]. In contrast, BERT benefits from the self-attention mechanism, which can capture such slight differences better than BOW. Therefore, the learned prompts hurt BOW but still benefited BERT.
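The counting behind these keyword figures is not spelled out in the paper; under one plausible reading, a term is an across-disease keyword if it ranks among the top TFIDF terms of more than one disease. A hypothetical sketch of such a count:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def across_disease_keywords(docs_by_disease, k=50):
    """docs_by_disease: dict mapping disease -> list of word-segmented
    documents. Returns the mean number of a disease's top-k TFIDF terms
    that also rank top-k for at least one other disease."""
    diseases = list(docs_by_disease)
    corpus = [" ".join(docs_by_disease[d]) for d in diseases]
    tfidf = TfidfVectorizer().fit(corpus)
    mat = tfidf.transform(corpus)
    vocab = np.array(tfidf.get_feature_names_out())
    top = {d: set(vocab[np.argsort(mat[i].toarray()[0])[::-1][:k]])
           for i, d in enumerate(diseases)}
    shared = [len(top[d] & set().union(*(top[e] for e in diseases if e != d)))
              for d in diseases]
    return float(np.mean(shared))
```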
There are some limitations in this study: (1) the way patients were cued to provide further information, and (2) the way the elicited information was incorporated. First, the natural way to elicit complementary information is the way doctors do it: by asking questions in understandable, complete sentences; our system's pop-up word prompts are not as user-friendly as a natural conversation and may cause some patient confusion. Second, when people see prompts, they tend to incorporate the new information by revising their self-description in natural language; however, the current updating operation is relatively primitive and might turn useful information into noise by ignoring the patient's syntax.

Conclusion
In this paper we introduced a method to deal with a problem in Chinese online health care consultation: low communication efficiency caused by under-informative patient self-descriptions. The method consists of several parts, including a diagnosis function, an uncertainty calculation function, a prompts pool and an information elicitation function. The diagnosis function was implemented with a disease classifier trained using the comprehensive information from both doctors and patients; the uncertainty calculation function was implemented with the entropy formula; the prompts pool was constructed using the top \(k\) TFIDF words from doctor-patient conversations; and the information elicitation function adopted a classifier trained with pairs of potential prompts and patient self-descriptions, where the prompt improved the prediction of the description on the right class. Through experiments conducted on three Chinese online medical consultation corpora, we demonstrated the effectiveness of our method. Although, in general, the better option for implementing our method is the powerful pretrained deep learning model BERT, the conventional logistic regression model (BOW) also delivered decent results, occasionally comparable with the BERT baseline method. This means that when computational resources are limited, our method still works for this task. In future work, we will conduct more evaluation studies to assess the performance of the method in real-world scenarios.

Availability of data and materials
The dataset analyzed in the current study is available in the following GitHub repository: https://github.com/bruceli518/Contextual-Prompts.

Footnotes
1. http://www.xinhuanet.com/2020-02/25/c_1125622948.htm
2. https://github.com/google-research/bert

Abbreviations
TFIDF: Term frequency-inverse document frequency; BERT: Bidirectional encoder representations from transformers; BOW: Bag-of-words

References
Zhou F, Wang Z, Mai X, Liu X, Reid C, Sandover S, et al. Online clinical consultation as a utility tool for managing medical crisis during a pandemic: retrospective analysis on the characteristics of online clinical consultations during the COVID-19 pandemic. J Prim Care Commun Health. 2020;11:2150132720975517.
Kurniawan FF, Shidiq FR, Sutoyo E. WeCare project: development of web-based platform for online psychological consultation using scrum framework. Bull Comput Sci Electr Eng. 2020;1(1):33–41.
Nie L, Wang M, Zhang L, Yan S, Zhang B, Chua TS.
Disease inference from health-related questions via sparse deep learning. IEEE Trans Knowl Data Eng. 2015;27(8):2107–19.
Tang KF, Kao HC, Chou CN, Chang EY. Inquire and diagnose: neural symptom checking ensemble using deep reinforcement learning. In: NIPS Workshop on Deep Reinforcement Learning; 2016.
Garg R, Oh E, Naidech A, Kording K, Prabhakaran S. Automating ischemic stroke subtype classification using machine learning and natural language processing. J Stroke Cerebrovasc Dis. 2019;28(7):2045–51.
Li X, Wang Y, Wang D, Yuan W, Peng D, Mei Q. Improving rare disease classification using imperfect knowledge graph. BMC Med Inform Decis Mak. 2019;19(5):1–10.
Malik S, Kanwal N, Asghar MN, Sadiq MAA, Karamat I, Fleury M. Data driven approach for eye disease classification with machine learning. Appl Sci. 2019;9(14):2789.
Trambaiolli LR, Lorena AC, Fraga FJ, Kanda PA, Anghinah R, Nitrini R. Improving Alzheimer's disease diagnosis with machine learning techniques. Clin EEG Neurosci. 2011;42(3):160–5.
Senturk ZK. Early diagnosis of Parkinson's disease using machine learning algorithms. Med Hypotheses. 2020;138:109603.
Şentürk ZK, Çekiç N. A machine learning based early diagnosis system for mesothelioma disease. Düzce Üniv Bilim ve Teknoloji Dergisi. 2020;8(2):1604–11.
Senturk ZK, Bakay MS. Machine learning based hand gesture recognition via EMG data. ADCAIJ: Advances in Distributed Computing and Artificial Intelligence Journal. 2021;10(2).
Wei Z, Liu Q, Peng B, Tou H, Chen T, Huang XJ, et al. Task-oriented dialogue system for automatic diagnosis. In: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers); 2018. p. 201–207.
Lin X, He X, Chen Q, Tou H, Wei Z, Chen T. Enhancing dialogue symptom diagnosis with global attention and symptom graph. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP); 2019. p. 5033–5042.
Xu L, Zhou Q, Gong K, Liang X, Tang J, Lin L. End-to-end knowledge-routed relational dialogue system for automatic diagnosis. In: Proceedings of the AAAI Conference on Artificial Intelligence. vol. 33; 2019. p. 7346–7353.
Feng M, Xiang B, Glass MR, Wang L, Zhou B. Applying deep learning to answer selection: a study and an open task. In: 2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU). IEEE; 2015. p. 813–820.
Qiu X, Huang X. Convolutional neural tensor network architecture for community-based question answering. In: Twenty-Fourth International Joint Conference on Artificial Intelligence; 2015.
Tan M, Santos Cd, Xiang B, Zhou B. LSTM-based deep learning models for non-factoid answer selection. arXiv preprint arXiv:1511.04108. 2015.
Buchanan BG, Shortliffe EH. Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project (The Addison-Wesley Series in Artificial Intelligence). Addison-Wesley Longman Publishing Co., Inc.; 1984.
McKenzie PJ. A model of information practices in accounts of everyday-life information seeking. J Doc. 2003;59(1):19–40. https://doi.org/10.1108/00220410310457993.
Shannon CE. A mathematical theory of communication. Bell Syst Tech J. 1948;27(3):379–423.
Settles B. Active learning literature survey. Computer Sciences Technical Report 1648; 2009.
Devlin J, Chang MW, Lee K, Toutanova K. BERT: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. 2018.
Luo R, Xu J, Zhang Y, Ren X, Sun X. PKUSEG: a toolkit for multi-domain Chinese word segmentation. CoRR. 2019;abs/1906.11455. Available from: https://arxiv.org/abs/1906.11455.
Powers DM. Evaluation: from precision, recall and F-measure to ROC, informedness, markedness and correlation. arXiv preprint arXiv:2010.16061. 2020.
Mason SJ, Graham NE. Areas beneath the relative operating characteristics (ROC) and relative operating levels (ROL) curves: statistical significance and interpretation. Q J R Meteorol Soc. 2002;128(584):2145–66.
Chicco D, Jurman G. The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation. BMC Genom. 2020;21(1):1–13.
Li X, Yuan W, Peng D, Mei Q, Wang Y. When BERT meets Bilbo: a learning curve analysis of pretrained language model on disease classification. In: 2020 IEEE International Conference on Healthcare Informatics (ICHI). IEEE; 2020. p. 1–2.
He H, Garcia EA. Learning from imbalanced data. IEEE Trans Knowl Data Eng. 2009;21(9):1263–84.
Ruojia W. Automatic triage of online doctor services based on machine learning. Data Anal Knowl Discov. 2019;3(9):88–97.

Acknowledgements
The authors would like to thank the anonymous reviewers for their valuable feedback and Dr. Qiaozhu Mei at the University of Michigan for his constructive comments on this work.

Funding
This work was funded by the Chengdu Science and Technology Project under grant number 2021-JB00-00025-GX.

Author information
College of Computer Science, Sichuan University, Chengdu, China: Xuedong Li & Dezhong Peng. School of Information and Library Science, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA: Yue Wang.

XL preprocessed the data, designed and implemented different algorithms, and drafted the manuscript. DP edited the manuscript. WY provided the data and edited the manuscript. All authors read and approved the final manuscript. Correspondence to Yue Wang.

Li, X., Peng, D. & Wang, Y. Improving patient self-description in Chinese online consultation using contextual prompts. BMC Med Inform Decis Mak 22, 170 (2022). https://doi.org/10.1186/s12911-022-01909-3

Keywords: Online health care consultation; Self-description. Collection: Big Data and Machine Learning in Bioinformatics and Medical Informatics.
Individual dosimetry system for targeted alpha therapy based on PHITS coupled with microdosimetric kinetic model

Tatsuhiko Sato (ORCID: orcid.org/0000-0001-9902-7083)1,2, Takuya Furuta1, Yuwei Liu3, Sadahiro Naka4, Shushi Nagamori5, Yoshikatsu Kanai6 & Tadashi Watabe3

EJNMMI Physics volume 8, Article number: 4 (2021)

An individual dosimetry system is essential for the evaluation of precise doses in nuclear medicine. The purpose of this study was to develop a system for calculating not only absorbed doses but also EQDX(α/β) from the PET-CT images of patients for targeted alpha therapy (TAT), considering the dose dependence of the relative biological effectiveness, the dose-rate effect, and the dose heterogeneity. A general-purpose Monte Carlo particle transport code, PHITS, was employed as the dose calculation engine in the system, while the microdosimetric kinetic model was used for converting the absorbed dose to EQDX(α/β). PHITS input files describing the geometry and source distribution of a patient are automatically created from PET-CT images using newly developed modules of the radiotherapy package based on PHITS (RT-PHITS). We examined the performance of the system by calculating several organ doses using the PET-CT images of four healthy volunteers after injecting 18F-NKO-035. The deposition energy map obtained from our system appears as a blurred image of the corresponding PET data because annihilation γ-rays deposit their energies rather far from the source location. The calculated organ doses agree with the corresponding data obtained from OLINDA 2.0 within 20%, indicating the reliability of our developed system. Test calculations replacing the labeled radionuclide 18F with 211At suggest that large dose heterogeneity in a target volume is to be expected in TAT, resulting in a significant decrease of EQDX(α/β) for higher-activity injections. As an extension of RT-PHITS, an individual dosimetry system for nuclear medicine was thus developed based on PHITS coupled with the microdosimetric kinetic model. It enables us to predict the therapeutic and side effects of TAT based on the clinical data largely available from conventional external radiotherapy.

Background
Recently, targeted alpha therapy (TAT) has been gaining ground as a novel treatment for refractory cancer, particularly after the excellent treatment effect of 225Ac-PSMA-617 [1]. We have already proved the therapeutic efficacies of [211At]NaAt against differentiated thyroid cancer, 211At-labeled phenylalanine for glioma, and 225Ac-labeled fibroblast activation protein inhibitors (FAPI) against pancreatic cancer in preclinical studies [2,3,4]. For clinical translation, a physician-initiated clinical trial using [211At]NaAt is under preparation in patients with differentiated thyroid cancer refractory to radio-iodine (131I) treatment. However, TAT drugs that are successful in clinical application are still limited, and we need practical tools to evaluate the precise doses in the target and risk organs to define the most suitable dose for individual patients.

The absorbed dose (Gy) has generally been used as the primary index for predicting the therapeutic effects on tumors and the unintended harmful effects on normal tissue, both in preclinical and clinical trials. In addition, a higher relative biological effectiveness (RBE) must be considered in this prediction because α particles densely deposit their energies along their tracks and induce cell killing more effectively than X-rays and β particles at the same dose.
For simplicity, a fixed RBE value of 5 is recommended for use in the dosimetry of TAT [5]. However, actual RBE values intrinsically depend on the absorbed dose. Thus, explicit consideration of the dose dependence of RBE in the design of TAT is desired, in the same way as in carbon-ion therapy [6]. In addition, the repair mechanism during irradiation must also be considered because of the relatively lower dose rate of TAT in comparison to external radiotherapy. Therefore, the equieffective dose formalism, EQDX(α/β), was proposed for use in TAT dosimetry [7], where EQDX represents the absorbed dose giving the same biological effect as a reference treatment, e.g., fractionated X-ray therapy [8]. The commonly used biological effective dose, BED [9], is a special case of EQDX(α/β). Using EQDX(α/β), the therapeutic and side effects of TAT can be predicted from the clinical data largely available from conventional external radiotherapy.

Dosimetry systems based on standardized phantoms, such as OLINDA/EXM [10] and IDAC-Dose 2.1 [11], are widely used to estimate organ doses in nuclear medicine. However, they have some shortcomings when applied to targeted radionuclide therapy (TRT), including TAT. For example, they cannot consider the detailed anatomical differences of each patient and cannot calculate the heterogeneity of absorbed doses in the target tumor and normal tissues, which may influence tumor response and normal tissue toxicity. Therefore, several authors [12,13,14,15,16,17,18] developed 3-dimensional dosimetry systems that automatically create patient-specific human phantoms and spatial distributions of radionuclides from CT and PET/SPECT images, respectively. These systems allow for a sophisticated design of TRT by calculating more detailed dosimetric quantities such as dose-mass histograms (DMH) in the target tumor and normal tissues. In addition, some of them have a function for evaluating BED based on the calculated absorbed doses and dose rates. However, none of the existing systems was capable of calculating EQDX(α/β) for TAT while considering the complex dose dependence of RBE.

Given this situation, we developed a patient-specific dosimetry system that can calculate EQDX(α/β) for TAT as well as other TRT, based on the Particle and Heavy Ion Transport code System (PHITS) [19] coupled with the microdosimetric kinetic model (MKM) [20]. The accuracy of the RBE estimated by PHITS coupled with MKM was well verified for proton therapy [21], carbon-ion therapy [22], and boron neutron capture therapy (BNCT) [23]. In the system, a voxel phantom and a cumulative activity distribution map of a patient are automatically created in the PHITS input format from the CT and PET images, respectively. After the PHITS simulation using these input files, EQDX(α/β) as well as the total absorbed dose and deposition energy in each voxel are estimated, considering the microscopic dose distribution and the dose rate. In this study, the performance of the system was examined using dynamic PET-CT data, and the results were compared with corresponding data obtained from OLINDA 2.0 [10].

Individual dosimetry system based on PHITS
Figure 1 shows the flowchart of our dosimetry system, which was developed as an extension of the radiotherapy package based on PHITS, the so-called RT-PHITS. It can be divided into three processes: (1) conversion from PET-CT images to PHITS input files, (2) calculation of absorbed doses using PHITS, and (3) estimation of EQDX(α/β) based on the PHITS results coupled with MKM.
EQDX(α/β) as well as the total dose and deposition energy in each voxel are converted into the DICOM RT-DOSE format. Thus, they can be imported into commercial DICOM software for further analysis. Details of each process are described below.

Fig. 1: Flowchart of the individual dosimetry system based on PHITS

Conversion from PET-CT images to PHITS input files
Firstly, the patient-specific voxel phantom in the PHITS input format is created from the patient's CT image using the CT2PHITS module, which was formerly called DICOM2PHITS [19]. We adopted the correlation between CT numbers (Hounsfield units) and tissue parameters proposed by Schneider et al. [24] in this conversion, though users can define their own formula to represent the correlation in our system. The tallies for scoring the absorbed doses in Gy and the deposition energies in MeV are also generated during this process. The resolutions of the created voxel phantom and mesh tallies are the same as those of the CT image.

A new module of RT-PHITS named PET2PHITS was developed in this study to create the maps of the cumulative activities as well as the biological decay constants of the radionuclides based on the PET images. There are two types of patient-specific dosimetry systems: one creates time-dependent activity maps and executes particle transport simulations for each time step, while the other creates a cumulative activity map and executes a single particle transport simulation. With the former method, dynamical dose evaluation is possible by fitting the calculated doses for each time step. However, it is very time-consuming because the Monte Carlo simulation must be continued until sufficiently small statistical uncertainties of the calculated doses in each voxel and time step are obtained to achieve a meaningful fit. We therefore adopted the latter method; our system determines the cumulative activities and the biological decay constants of the radionuclides by fitting the dynamic PET images. The dose rates are then estimated under the assumption that they are proportional to the sum of the physical and biological decay constants of nearby voxels. The detailed procedures for determining the cumulative activities and the biological decay constants are given in Appendix A.

Calculation of absorbed doses using PHITS
Using the input files created from the CT and PET images, a PHITS simulation is performed to calculate the absorbed doses in the patient. In this study, PHITS version 3.20 was employed, and the EGS5 mode [25] was used for the photon, electron, and positron transport. The fluences of the source particles, including the contributions from daughter nuclides, are determined with the RI source generation function in PHITS, based on ICRP Publication 107 [26]. The absorbed doses due to the ionization induced by α and β± particles (referred to as α and β doses, respectively) were calculated separately in the simulation. Note that the kerma approximation was not adopted; thus, the photon doses were categorized as their secondary-particle doses, i.e., the β dose. Before performing the particle transport simulation inside the patient body, another PHITS simulation must be performed to calculate the dose probability densities (PD) of lineal energy, d(y), in water for the α and β doses, which are provided to MKM for the RBE estimation. The definitions of the fundamental microdosimetric quantities such as the lineal energy y are given in Appendix B. This simulation is required only once for each radionuclide because it is not patient-specific.
The microdosimetric function of PHITS [27] is utilized for this calculation because the site size of y to be evaluated for MKM is too small (less than 1 μm) to be handled with the condensed-history method employed in EGS5. Note that the microdosimetric function was developed by fitting the results of track-structure simulations. Thus, it can analytically determine the PD of y down to nanometer scales, considering the dispersion of deposition energies from the production of δ-rays. Figure 2 shows examples of the calculated PD of y for the α and β doses of 211At.

Fig. 2: Calculated dose PD of y, yd(y), for rd = 0.282 μm for the α and β doses of 211At

Estimation of EQDX(α/β)
EQDX(α/β) is defined as the total absorbed dose delivered by the reference treatment plan (fraction size X) leading to the same biological effect as a test treatment plan [8]. Assuming that the biological effectiveness is proportional to the cell surviving fraction following a linear-quadratic (LQ) relationship, EQDX(α/β) for a test treatment with surviving fraction S can be calculated by

$$\mathrm{EQDX}\left(\frac{\alpha}{\beta}\right)=\frac{-\ln(S)}{\alpha+\beta X},$$ (1)

where α and β are the LQ parameters for the reference treatment. Based on MKM with the extensions of the saturation correction due to the overkill effect [28] and the dose-rate effect [29], the cell surviving fraction in any radiation field with an absorbed dose D can be estimated by

$$S(D)=\exp\left[-\left(\alpha_{0}+\beta z_{1\mathrm{D}}^{*}\right)D-G\beta D^{2}\right],$$ (2)

where α0 is the linear coefficient of the surviving fraction in the limit of LET → 0, G is the correction factor due to the dose-rate effect, and \(z_{1\mathrm{D}}^{*}\) is the saturation-corrected dose-mean specific energy, deduced by

$$z_{1\mathrm{D}}^{*}=\frac{y^{*}}{\pi r_{\mathrm{d}}^{2}}=\frac{y_{0}^{2}}{\pi r_{\mathrm{d}}^{2}}\int\frac{\left[1-\exp\left(-y^{2}/y_{0}^{2}\right)\right]d(y)}{y}\,\mathrm{d}y,$$ (3)

where y* is the saturation-corrected lineal energy, rd is the radius of a subcellular structure referred to as a domain, y0 is a so-called saturation parameter indicating the lineal energy above which the saturation correction due to the overkill effect becomes very important, and d(y) is the dose probability density in the domain. d(y) in each voxel can be determined from its α and β doses, Dα and Dβ, respectively, as

$$d(y)=\frac{D_{\alpha}d_{\alpha}(y)+D_{\beta}d_{\beta}(y)}{D_{\alpha}+D_{\beta}},$$ (4)

where dα(y) and dβ(y) are the dose PDs for each radionuclide precalculated by PHITS using the microdosimetric function. More detailed descriptions of the features of MKM, in addition to the definitions of the fundamental microdosimetric quantities, are given in Appendix B.

Assuming that the dose rates of TRT are expressed as a mono-exponential function with a decay constant of λphy + λbio, where λphy and λbio are the physical and biological decay constants, respectively, the value of G can be calculated using [13]

$$G=\frac{\lambda_{\mathrm{phy}}+\lambda_{\mathrm{bio}}}{\mu+\lambda_{\mathrm{phy}}+\lambda_{\mathrm{bio}}},$$ (5)

where μ is the recovery rate constant.

The parameters α, β, μ, α0, rd, and y0 depend on the cell line. Among them, α0, rd, and y0 are specific to MKM, and their determination requires experimental data of cell surviving fractions for various ion irradiations, which are generally not available.
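To make the chain from absorbed dose to EQDX(α/β) concrete, the following is a minimal sketch of Eqs. 1-5 for a single voxel, extended to a VOI via the mass-weighted mean survival introduced later as Eq. 6. It assumes the saturation-corrected dose-mean specific energies of the α and β components have already been derived from their d(y); all function and variable names are ours, not RT-PHITS interfaces.

```python
import numpy as np

def survival(D_alpha, D_beta, z1d_alpha, z1d_beta,
             alpha0, beta, mu, lam_phy, lam_bio):
    """Surviving fraction of Eq. 2 for one voxel. The z*_1D of the mixed
    field follows from Eq. 4 as the dose-weighted mean of the alpha and
    beta components; G is the dose-rate correction of Eq. 5. The decay
    constants lam_phy and lam_bio must share units with mu (h^-1)."""
    D = D_alpha + D_beta
    if D == 0.0:
        return 1.0
    z1d = (D_alpha * z1d_alpha + D_beta * z1d_beta) / D
    G = (lam_phy + lam_bio) / (mu + lam_phy + lam_bio)
    return float(np.exp(-(alpha0 + beta * z1d) * D - G * beta * D**2))

def eqdx(S, alpha, beta, X):
    """Eq. 1: equieffective dose for reference fraction size X."""
    return -np.log(S) / (alpha + beta * X)

def voi_eqdx(S_voxels, m_voxels, alpha, beta, X):
    """Mass-weighted mean survival over a VOI (Eq. 6), then Eq. 1."""
    return eqdx(np.average(S_voxels, weights=m_voxels), alpha, beta, X)

# Parameters used in the text (HSG cells, 2-Gy reference fractions):
alpha, beta, mu, X = 0.251, 0.0615, 1.5, 2.0   # Gy^-1, Gy^-2, h^-1, Gy
# alpha0 follows from alpha0 = alpha - beta * z1d_ref for the reference
# radiation, with the z*_1D values taken from PHITS-computed d(y) (Eq. 3).
```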
Given this lack of data, we fixed rd and y0 to 0.282 μm and 93.4 keV/μm, respectively, which were evaluated from the surviving fractions of the HSG cell line irradiated by various radiations including He ions [30, 31], and calculated α0 from α and the \(z_{1\mathrm{D}}^{*}\) of the reference radiation, \(z_{1\mathrm{D,ref}}^{*}\), using the relation \(\alpha_{0}=\alpha-\beta z_{1\mathrm{D,ref}}^{*}\). The user input parameters for our dosimetry system are then α, β, and μ, which can be obtained from the measured surviving fractions for the reference radiation, as well as the fraction size X. Referring to our previous works [23, 30], we set α = 0.251 Gy−1, β = 0.0615 Gy−2, μ = 1.5 h−1, and X = 2 Gy in the test simulations performed in this study. Consequently, the EQDX(α/β) calculated in this study can be expressed as EQD2(4.08), where 4.08 is the α/β ratio, i.e., 0.251/0.0615.

EQDX(α/β) in a certain voxel can be simply calculated from Eq. 1 by substituting the surviving fraction of the voxel obtained from Eq. 2. In contrast, special care must be taken when EQDX(α/β) is calculated for a volume of interest (VOI) consisting of multiple voxels, such as a tumor or normal tissue, because of the non-linear relationship between EQDX(α/β) and the surviving fraction. In such cases, the mean surviving fraction in the VOI, SVOI, is given by

$$S_{\mathrm{VOI}}=\frac{\sum_{i}S_{i}(D_{i})\,m_{i}}{\sum_{i}m_{i}},$$ (6)

where Si, Di, and mi are the surviving fraction, dose, and mass, respectively, of voxel i making up the VOI. EQDX(α/β) in the VOI can be obtained from Eq. 1 by supplying SVOI for S, similar to the concept of the equivalent uniform dose (EUD) [32]. The DMH in the VOI is the key quantity in this evaluation and can also be calculated with our dosimetry system. These calculations are performed by a newly developed module of RT-PHITS named PHITS2TRTDOSE.

Dynamic PET-CT acquisition and analysis
This study was approved by the institutional review board, and written informed consent was obtained from all participants. The performance of the system was examined using the dynamic PET-CT data of four healthy volunteers after injecting 221.6 ± 3.8 MBq of 18F-labeled NKO-035, a specific substrate of L-type amino acid transporter-1 (LAT1). The dynamic PET data were acquired in nine frames (total scan duration: 90 min) together with a low-dose CT scan. All images were depicted with OSIRIX (Newton Graphics, Inc., Sapporo, Japan). Details of the data acquisition procedures are described in Appendix C.

In the dose estimation, NKO-035 was assumed to be labeled not only with 18F but also with 211At, 131I, and 177Lu with the same distribution in the body. Volumes of interest were placed on major organs in the dynamic PET images using the PMOD software (PMOD Technologies Ltd., Zurich, Switzerland) with reference to the CT images. The residence times in major organs and tissues were estimated for each volunteer based on their dynamic PET data using the method described in Appendix A. Supplying those data to OLINDA 2.0, the organ doses were calculated and compared with the corresponding data obtained from our dosimetry system by Bland-Altman analysis.
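For reference, the Bland-Altman comparison named above can be set up in a few lines; this is a generic sketch of the analysis, not the authors' script, and the styling choices are arbitrary.

```python
import numpy as np
import matplotlib.pyplot as plt

def bland_altman(d_rtphits, d_olinda):
    """Plot the percent difference against the mean of paired organ doses,
    with the bias and the 1.96-S.D. limits of agreement."""
    a, b = np.asarray(d_rtphits, float), np.asarray(d_olinda, float)
    mean = (a + b) / 2.0
    pct = 100.0 * (a - b) / mean          # (D_RT-PHITS - D_OLINDA) / D_mean
    bias, sd = pct.mean(), pct.std(ddof=1)
    plt.scatter(mean, pct)
    for y in (bias, bias + 1.96 * sd, bias - 1.96 * sd):
        plt.axhline(y, linestyle="--")
    plt.xlabel("Mean organ dose")
    plt.ylabel("Percent difference (%)")
    plt.show()
```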
Results
Figure 3 shows the coronal view of the CT and PET scans for a volunteer after injecting 18F-NKO-035, and the corresponding deposition energy, absorbed dose, and EQD2(4.08) maps, where 4.08 is the α/β ratio. The history number of the PHITS simulation was set to 300 million so that the statistical uncertainties are very small. The deposition energy map appears as a blurred image of the PET data, particularly around high-activity organs such as the kidney and bladder, because annihilation γ-rays deposit their energies rather far from the source location. In contrast, the dose and EQD2(4.08) maps exhibit higher values even in low-activity regions such as the lungs. This is because the dose and EQDX(α/β) are closely related to the activity per mass (and not per volume) and consequently tend to be higher in low-density regions. The relative distributions of the dose and EQD2(4.08) are similar to each other, though the absolute values of EQD2(4.08) are approximately 74% of the corresponding dose, as discussed later.

Fig. 3: Representative CT and PET images (90 min average) of a volunteer after the injection of NKO-035 labeled with 18F, and the corresponding deposition energy, absorbed dose, and EQD2(4.08) maps obtained from RT-PHITS

Table 1 summarizes the absorbed doses in the brain, lung, liver, spleen, pancreas, and kidney obtained from RT-PHITS and OLINDA 2.0. It comprises the mean values and standard deviations for the four volunteers after injection of NKO-035 virtually labeled with 1 MBq of 18F, 211At, 131I, or 177Lu. The mean organ doses for the four volunteers calculated by RT-PHITS agree with the corresponding OLINDA data mostly within 20%. Figure 4 shows the Bland-Altman plot of the mean and percent difference of the organ doses calculated by RT-PHITS and OLINDA 2.0 for each volunteer, radioisotope, and organ. It is evident that the data are scattered randomly with respect to the mean organ doses.

Table 1: Absorbed doses (μGy) in the brain, lung, liver, spleen, pancreas, and kidney obtained from RT-PHITS and OLINDA 2.0. The data are the mean value and standard deviation (S.D.) for the four volunteers with the injection of NKO-035 virtually labeled with 1 MBq of 18F, 211At, 131I, or 177Lu

Fig. 4: Bland-Altman plot of the mean and percent difference of the organ doses calculated by RT-PHITS and OLINDA 2.0 for each volunteer, radioisotope, and organ. The percent differences were obtained from (D_RT-PHITS − D_OLINDA)/D_mean, where D_RT-PHITS and D_OLINDA are the organ doses calculated by RT-PHITS and OLINDA 2.0, respectively, while D_mean is the mean of the two doses

Discussion
We have developed an individual dosimetry system, including a function for calculating EQDX(α/β), based on PHITS coupled with the microdosimetric kinetic model. The agreement between the calculated doses obtained from RT-PHITS and OLINDA 2.0 is quite satisfactory, confirming the reliability of our developed system. In addition, no apparent trend is observed in the Bland-Altman plot drawn in Fig. 4, suggesting that the discrepancies between the RT-PHITS and OLINDA results are predominantly attributable to random factors such as anatomical differences between each volunteer and the standardized phantom adopted in OLINDA 2.0. For example, the data with percent differences outside ± 1.96 S.D. are for organs whose masses differ from those of the standardized phantom by more than 30%.

Figure 5 shows the activity dependence of the calculated dose and EQD2(4.08) in the kidney for a volunteer after injecting NKO-035 labeled with 211At or 18F. The calculated doses are directly proportional to the injection activity because the biokinetics of the radionuclides are assumed to be independent of their activity in this calculation. In contrast, EQD2(4.08) depends on the injection activity in a complicated manner.
For 211At, it is higher and lower than the corresponding dose at lower and higher activities, respectively, and vice versa for 18F.

Fig. 5: Activity dependences of the calculated dose and EQD2(4.08) in the kidney for volunteer 1 with the injection of NKO-035 labeled with 211At or 18F

In order to clarify these complicated relationships, we calculated EQD2(4.08) without considering the dose heterogeneity, by simply averaging EQD2(4.08) over the kidney, and without considering the dose-rate effect, by setting the recovery rate constant μ = 0. Figure 6 shows the ratios of each EQD2(4.08) to the corresponding absorbed dose as a function of the injection activity. It is evident from the graph that ignoring the dose heterogeneity results in an increase of EQD2(4.08), particularly when injecting 211At at higher activities. This tendency can be explained as follows. First, the surviving fractions at high-dose irradiation are predominantly determined by those of cells receiving relatively smaller doses, as discussed in our previous paper [33]. Second, the dose heterogeneity is relatively large for the injection of 211At in comparison to 18F, as shown in Fig. 7. Therefore, the consideration of the dose heterogeneity in a target volume is indispensable in the clinical design of TAT. Ignoring the dose-rate effect also increases EQD2(4.08), but its influence is less significant and is limited to higher activities. This is because the dose rates are not very low in the studied cases, owing to the rather short half-lives of 211At and 18F, and because the dose-rate effect reduces the coefficient of the quadratic term expressed in Eq. 2, which is important only at high-dose irradiation. Note that the ratio of EQD2(4.08) to dose at lower activities approaches 5.5 and 0.74 for 211At and 18F, respectively, which correspond to the RBE in the limit of D → 0, RBEM, multiplied by α/(α+βX).

Fig. 6: Activity dependences of the ratios of EQD2(4.08) to the corresponding dose for the kidney. The data for EQD2(4.08) calculated without considering the dose heterogeneity or the dose-rate effect are also plotted

Fig. 7: Dose-mass histogram (DMH) in the kidney for volunteer 1 with the injection of NKO-035 labeled with 211At or 18F. The mean doses were adjusted to 1 Gy for both DMHs

It should be mentioned that the model parameters used in these test calculations were determined from the surviving fractions of cells irradiated with external radiation, which might be inappropriate for representing the surviving fraction in TAT because the absorbed doses are heterogeneously distributed on a microscopic scale owing to the heterogeneity of the radionuclide distribution among cell compartments [34] and organ microstructures [35]. Thus, the evaluation of reliable model parameters is the key issue for introducing RT-PHITS in the preclinical study of TAT. For precisely calculating the doses in organs with fine structure, such as the stomach wall, the implementation of tetrahedral-mesh phantoms in RT-PHITS is ongoing, introducing the technology developed for another PHITS-based internal dosimetry tool, PARaDIM [36]. Reduction of the computational time is also desirable before the practical use of RT-PHITS in the clinic, because PET-CT data of patients are generally confidential and cannot be transferred to a publicly accessible high-performance computer.
Currently, a conventional organ dose calculation using RT-PHITS costs less than a few CPU hours, but the precise estimation of EQDX requires at least 100 CPU hours because the statistical uncertainties in each voxel must be very small in that calculation.

Conclusions
As an extension of RT-PHITS, we developed an individual dosimetry system dedicated to nuclear medicine, particularly TAT, based on PHITS coupled with the microdosimetric kinetic model. It calculates not only absorbed doses but also EQDX(α/β) from the PET-CT images, considering the dose dependence of RBE, the dose-rate effect, and the dose heterogeneity. With these functionalities, RT-PHITS enables us to predict the therapeutic and side effects of TAT based on the clinical data largely available from conventional external radiotherapy. RT-PHITS, including the modules developed in this study, has been implemented in the latest version of PHITS, which is freely available upon request to the Japan Atomic Energy Agency.

Availability of data and materials
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations
BED: Biological effective dose; DMH: Dose-mass histogram; EQD: Equieffective dose; EUD: Equivalent uniform dose; LET: Linear energy transfer; MKM: Microdosimetric kinetic model; PD: Probability density; PHITS: Particle and Heavy Ion Transport code System; RBE: Relative biological effectiveness; RT-PHITS: Radiotherapy package based on PHITS; TAT: Targeted alpha therapy; VOI: Volume of interest

References
Kratochwil C, Bruchertseifer F, Giesel FL, Weis M, Verburg FA, Mottaghy F, et al. 225Ac-PSMA-617 for PSMA-targeted alpha-radiation therapy of metastatic castration-resistant prostate cancer. J Nucl Med. 2016;57:1941–4. https://doi.org/10.2967/jnumed.116.178673.
Watabe T, Kaneda-Nakashima K, Liu Y, Shirakami Y, Ooe K, Toyoshima A, et al. Enhancement of 211At uptake via the sodium iodide symporter by the addition of ascorbic acid in targeted alpha-therapy of thyroid cancer. J Nucl Med. 2019;60:1301–7. https://doi.org/10.2967/jnumed.118.222638.
Watabe T, Kaneda-Nakashima K, Shirakami Y, Liu Y, Ooe K, Teramoto T, et al. Targeted alpha therapy using astatine (211At)-labeled phenylalanine: a preclinical study in glioma bearing mice. Oncotarget. 2020;11:1388–98. https://doi.org/10.18632/oncotarget.27552.
Watabe T, Liu Y, Kaneda-Nakashima K, Shirakami Y, Lindner T, Ooe K, et al. Theranostics targeting fibroblast activation protein in the tumor stroma: 64Cu- and 225Ac-labeled FAPI-04 in pancreatic cancer xenograft mouse models. J Nucl Med. 2020;61:563–9. https://doi.org/10.2967/jnumed.119.233122.
Sgouros G, Roeske JC, McDevitt MR, Palm S, Allen BJ, Fisher DR, et al. MIRD Pamphlet No. 22 (abridged): radiobiology and dosimetry of alpha-particle emitters for targeted radionuclide therapy. J Nucl Med. 2010;51:311–28. https://doi.org/10.2967/jnumed.108.058651.
Karger CP, Peschke P. RBE and related modeling in carbon-ion therapy. Phys Med Biol. 2017;63:01TR2. https://doi.org/10.1088/1361-6560/aa9102.
Hobbs RF, Howell RW, Song H, Baechler S, Sgouros G. Redefining relative biological effectiveness in the context of the EQDX formalism: implications for alpha-particle emitter therapy. Radiat Res. 2014;181:90–8. https://doi.org/10.1667/RR13483.1.
Bentzen SM, Dorr W, Gahbauer R, Howell RW, Joiner MC, Jones B, et al. Bioeffect modeling and equieffective dose concepts in radiation oncology: terminology, quantities and units. Radiotherapy Oncol. 2012;105:266–8. https://doi.org/10.1016/j.radonc.2012.10.006.
Barendsen GW.
Dose fractionation, dose rate and iso-effect relationships for normal tissue responses. Int J Radiat Oncol Biol Phys. 1982;8:1981–97. https://doi.org/10.1016/0360-3016(82)90459-x. Stabin MG, Sparks RB, Crowe E. OLINDA/EXM: the second-generation personal computer software for internal dose assessment in nuclear medicine. J Nucl Med. 2005;46:1023–7. Andersson M, Johansson L, Eckerman K, Mattsson S. IDAC-Dose 2.1, an internal dosimetry program for diagnostic nuclear medicine based on the ICRP adult reference voxel phantoms. EJNMMI Res. 2017;7:88. https://doi.org/10.1186/s13550-017-0339-3. Kolbert KS, Sgouros G, Scott AM, Bronstein JE, Malane RA, Zhang J, et al. Implementation and evaluation of patient-specific three-dimensional internal dosimetry. J Nucl Med. 1997;38:301–8. Prideaux AR, Song H, Hobbs RF, He B, Frey EC, Ladenson PW, et al. Three-dimensional radiobiologic dosimetry: application of radiobiologic modeling to patient-specific 3-dimensional imaging-based internal dosimetry. J Nucl Med. 2007;48:1008–16. https://doi.org/10.2967/jnumed.106.038000. Botta F, Mairani A, Hobbs RF, Vergara Gil A, Pacilio M, Parodi K, et al. Use of the FLUKA Monte Carlo code for 3D patient-specific dosimetry on PET-CT and SPECT-CT images. Phys Med Biol. 2013;58:8099–120. https://doi.org/10.1088/0031-9155/58/22/8099. Marcatili S, Pettinato C, Daniels S, Lewis G, Edwards P, Fanti S, et al. Development and validation of RAYDOSE: a Geant4-based application for molecular radiotherapy. Phys Med Biol. 2013;58:2491–508. https://doi.org/10.1088/0031-9155/58/8/2491. Kost SD, Dewaraja YK, Abramson RG, Stabin MG. VIDA: a voxel-based dosimetry method for targeted radionuclide therapy using Geant4. Cancer Biother Radiopharm. 2015;30:16–26. https://doi.org/10.1089/cbr.2014.1713. Besemer AE, Yang YM, Grudzinski JJ, Hall LT, Bednarz BP. Development and validation of RAPID: a patient-specific monte carlo three-dimensional internal dosimetry platform. Cancer Biother Radiopharm. 2018;33:155–65. https://doi.org/10.1089/cbr.2018.2451. Ljungberg M, Gleisner KS. 3-D image-based dosimetry in radionuclide therapy. IEEE TRPMS. 2018;2:527–40. Sato T, Iwamoto Y, Hashimoto S, Ogawa T, Furuta T, Abe S, et al. Features of particle and heavy ion transport code system PHITS Version 3.02. J Nucl Sci Technol. 2018;55:684–90. https://doi.org/10.1080/00223131.2017.1419890. Hawkins RB. A microdosimetric-kinetic model of cell death from exposure to ionizing radiation of any LET, with experimental and clinical applications. Int J Radiat Biol. 1996;69:739–55. https://doi.org/10.1080/095530096145481. Takada K, Sato T, Kumada H, Koketsu J, Takei H, Sakurai H, et al. Validation of the physical and RBE-weighted dose estimator based on PHITS coupled with a microdosimetric kinetic model for proton therapy. J Radiat Res. 2018;59:91–9. https://doi.org/10.1093/jrr/rrx057. Sato T, Kase Y, Watanabe R, Niita K, Sihver L. Biological dose estimation for charged-particle therapy using an improved PHITS code coupled with a microdosimetric kinetic model. Radiat Res. 2009;171:107–17. https://doi.org/10.1667/Rr1510.1. Sato T, Masunaga S, Kumada H, Hamada N. Microdosimetric modeling of biological effectiveness for boron neutron capture therapy considering intra- and intercellular heterogeneity in 10B distribution. Sci Rep. 2018;8:988. https://doi.org/10.1038/s41598-017-18871-0. Schneider W, Bortfeld T, Schlegel W. Correlation between CT numbers and tissue parameters needed for Monte Carlo simulations of clinical dose distributions. Phys Med Biol. 2000;45:459–78. 
https://doi.org/10.1088/0031-9155/45/2/314.
Hirayama H, Namito Y, Bielajew AF, Wilderman SJ, Nelson WR; SLAC National Accelerator Laboratory and High Energy Accelerator Research Organization. The EGS5 code system. SLAC-R-730 and KEK Report 2005-8; 2005.
International Commission on Radiological Protection. Nuclear decay data for dosimetric calculations. ICRP Publication 107, Ann. ICRP 38(3); 2008.
Sato T, Watanabe R, Niita K. Development of a calculation method for estimating specific energy distribution in complex radiation fields. Radiat Prot Dosim. 2006;122:41–5.
Kase Y, Kanai T, Matsumoto Y, Furusawa Y, Okamoto H, Asaba T, et al. Microdosimetric measurements and estimation of human cell survival for heavy-ion beams. Radiat Res. 2006;166:629–38. https://doi.org/10.1667/RR0536.1.
Matsuya Y, McMahon SJ, Tsutsumi K, Sasaki K, Okuyama G, Yoshii Y, et al. Investigation of dose-rate effects and cell-cycle distribution under protracted exposure to ionizing radiation for various dose-rates. Sci Rep. 2018;8:8287. https://doi.org/10.1038/s41598-018-26556-5.
Sato T, Watanabe R, Kase Y, Tsuruoka C, Suzuki M, Furusawa Y, et al. Analysis of cell-survival fractions for heavy-ion irradiations based on microdosimetric kinetic model implemented in the particle and heavy ion transport code system. Radiat Prot Dosim. 2011;143:491–6. https://doi.org/10.1093/Rpd/Ncq484.
Furusawa Y, Fukutsu K, Aoki M, Itsukaichi H, Eguchi-Kasai K, Ohara H, et al. Inactivation of aerobic and hypoxic cells from three different cell lines by accelerated He-3-, C-12- and Ne-20-ion beams. Radiat Res. 2000;154:485–96. https://doi.org/10.1667/0033-7587(2000)154[0485:Ioaahc]2.0.Co;2.
O'Donoghue JA. Implications of nonuniform tumor doses for radioimmunotherapy. J Nucl Med. 1999;40:1337–41.
Sato T, Furusawa Y. Cell survival fraction estimation based on the probability densities of domain and cell nucleus specific energies using improved microdosimetric kinetic models. Radiat Res. 2012;178:341–56. https://doi.org/10.1667/Rr2842.1.
Goddu SM, Howell RW, Rao DV. Cellular dosimetry: absorbed fractions for monoenergetic electron and alpha-particle sources and S-values for radionuclides uniformly distributed in different cell compartments. J Nucl Med. 1994;35:303–16.
Hobbs RF, Song H, Huso DL, Sundel MH, Sgouros G. A nephron-based model of the kidneys for macro-to-micro alpha-particle dosimetry. Phys Med Biol. 2012;57:4403–24. https://doi.org/10.1088/0031-9155/57/13/4403.
Carter LM, Crawford TM, Sato T, Furuta T, Choi C, Kim CH, et al. PARaDIM: a PHITS-based Monte Carlo tool for internal dosimetry with tetrahedral mesh computational phantoms. J Nucl Med. 2019;60:1802–11. https://doi.org/10.2967/jnumed.119.229013.
Sarrut D, Halty A, Badel JN, Ferrer L, Bardies M. Voxel-based multimodel fitting method for modeling time activity curves in SPECT images. Med Phys. 2017;44:6280–8. https://doi.org/10.1002/mp.12586.
International Commission on Radiation Units and Measurements. Microdosimetry. ICRU Report 36; 1983.
Kellerer AM, Rossi HH. A generalized formulation of dual radiation action. Radiat Res. 1978;75:471–88.

Acknowledgements
PET measurement was jointly funded by J-Pharma Co.

Funding
This study was funded by the QiSS program of OPERA from the Japan Science and Technology Agency (JST), Japan (Grant number: JPMJOP1721).
Author information
Nuclear Science and Engineering Center, Japan Atomic Energy Agency, Shirakata 2-4, Tokai, Ibaraki 319-1195, Japan: Tatsuhiko Sato & Takuya Furuta. Research Center for Nuclear Physics, Osaka University, Suita, Japan: Tatsuhiko Sato. Department of Nuclear Medicine and Tracer Kinetics, Graduate School of Medicine, Osaka University, Suita, Japan: Yuwei Liu & Tadashi Watabe. Department of Radiology, Osaka University Hospital, Suita, Japan: Sadahiro Naka. Department of Laboratory Medicine, The Jikei University School of Medicine, Tokyo, Japan: Shushi Nagamori. Department of Bio-system Pharmacology, Graduate School of Medicine, Osaka University, Suita, Japan: Yoshikatsu Kanai.

TS and TW contributed to the study conception and design. Calculations and code development were performed by TS, TF, and YL. PET-CT measurements and drug development were performed by SN, SN, YK, and TW. The first draft of the manuscript was written by TS. All authors read and approved the final manuscript. Correspondence to Tatsuhiko Sato.

Ethics
All procedures performed in studies involving human participants were in accordance with the ethical standards of the Osaka University institutional review board and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. Informed consent was obtained from all individual participants included in the study. There is no other potential conflict of interest relevant to this article to disclose.

Appendix A: Procedure for determining the cumulative activities and the biological decay constants
In general, the cumulative activities are estimated by integrating the time-activity curve determined from a mono-exponential or bi-exponential fit of the PET/SPECT images [37]. However, such fitting procedures occasionally fail, particularly when the statistical fluctuation of the measured activities is high or the number of time steps of the PET/SPECT images is small. We therefore employed a simple method for determining the decay-corrected activities, A(t): linear interpolation of the corresponding data obtained from the ith measurement of PET/SPECT, Ai, and extrapolation of the last data point, An, under the assumption of mono-exponential decay, as written below:

$$A(t)=a_{i}t+b_{i}=\frac{A_{i}-A_{i-1}}{t_{i}-t_{i-1}}t+\frac{A_{i-1}t_{i}-A_{i}t_{i-1}}{t_{i}-t_{i-1}}\quad\mathrm{for}\ t_{i-1}<t\le t_{i}\ (i=1\ldots n)$$ (7)

$$A(t)=A_{n}e^{-\lambda_{\mathrm{bio}}(t-t_{n})}\quad\mathrm{for}\ t>t_{n},$$ (8)

where ti is the reference time of the ith measurement, and λbio represents the decay constant of the radiopharmaceutical due to biological clearance. Note that we assumed A0 = t0 = 0 in this calculation. The numerical value of λbio is determined from a least-squares fit of Ai by the mono-exponential function, excluding the data before the peak or below a certain threshold value. The fit is regarded as failed if the fitted decay constant is negative. The actual value of λbio used in Eq. 8 as well as in Eq. 5 is calculated by averaging the fitted decay constants of the 10 nearby voxels where the fit succeeded. Figure 8 shows an example of the decay-corrected activities obtained from Eqs. 7 and 8 in comparison with the measured data.

Fig. 8: Example of the decay-corrected activities calculated from Eqs. 7 and 8 in comparison with the measured data. The fitting curve for determining λbio is also depicted.
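As a compact illustration of this procedure, the sketch below fits λbio to the post-peak activities, builds the piecewise-linear A(t) of Eqs. 7 and 8, and integrates it against the physical decay using the closed form derived next as Eq. 9. Variable names are ours, and the threshold handling of the real PET2PHITS module is omitted.

```python
import numpy as np

def fit_lambda_bio(t, A):
    """Log-linear least-squares fit of the mono-exponential tail (Eq. 8),
    using only data from the peak onward; None marks a failed fit."""
    i0 = int(np.argmax(A))
    lam = -np.polyfit(np.asarray(t)[i0:],
                      np.log(np.maximum(np.asarray(A)[i0:], 1e-12)), 1)[0]
    return lam if lam > 0 else None

def cumulative_activity(t, A, lam_bio, lam_phy, lam_phy_pet):
    """Eq. 9: integral of the interpolated A(t) weighted by the physical
    decay, rescaled from the PET nuclide to the therapeutic nuclide.
    Assumes A0 = t0 = 0 and measurement times t[0] > 0."""
    t = np.concatenate([[0.0], np.asarray(t, float)])
    A = np.concatenate([[0.0], np.asarray(A, float)])
    C = 0.0
    for i in range(1, len(t)):
        a = (A[i] - A[i-1]) / (t[i] - t[i-1])              # slope of Eq. 7
        b = (A[i-1]*t[i] - A[i]*t[i-1]) / (t[i] - t[i-1])  # intercept of Eq. 7
        e0, e1 = np.exp(-lam_phy*t[i-1]), np.exp(-lam_phy*t[i])
        C += a * ((lam_phy*t[i-1] + 1)*e0 - (lam_phy*t[i] + 1)*e1) / lam_phy**2
        C += b * (e0 - e1) / lam_phy
    C += A[-1] * np.exp(-lam_phy*t[-1]) / (lam_phy + lam_bio)  # tail of Eq. 8
    return C * lam_phy / lam_phy_pet
```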
The fitting curve for determining λbio is also depicted.
The cumulative activity, C, can be mathematically derived from A(t) as follows:
$$ \begin{aligned} C &= \frac{\lambda_{\text{phy}}}{\lambda_{\text{phy,PET}}}\int_0^{\infty}A(t)\,e^{-\lambda_{\text{phy}}t}\,\mathrm{d}t \\ &= \frac{\lambda_{\text{phy}}}{\lambda_{\text{phy,PET}}}\left[\sum_{i=1}^{n}\int_{t_{i-1}}^{t_i}\left(a_i t+b_i\right)e^{-\lambda_{\text{phy}}t}\,\mathrm{d}t+A_n e^{-\lambda_{\text{phy}}t_n}\int_{t_n}^{\infty}e^{-\left(\lambda_{\text{phy}}+\lambda_{\text{bio}}\right)\left(t-t_n\right)}\,\mathrm{d}t\right] \\ &= \frac{\lambda_{\text{phy}}}{\lambda_{\text{phy,PET}}}\left[\sum_{i=1}^{n}\left(\frac{\left(\lambda_{\text{phy}}t_{i-1}+1\right)e^{-\lambda_{\text{phy}}t_{i-1}}-\left(\lambda_{\text{phy}}t_i+1\right)e^{-\lambda_{\text{phy}}t_i}}{\lambda_{\text{phy}}^2}\,a_i+\frac{e^{-\lambda_{\text{phy}}t_{i-1}}-e^{-\lambda_{\text{phy}}t_i}}{\lambda_{\text{phy}}}\,b_i\right)+\frac{A_n e^{-\lambda_{\text{phy}}t_n}}{\lambda_{\text{phy}}+\lambda_{\text{bio}}}\right], \end{aligned} $$
where λphy and λphy,PET are the physical decay constants of the radionuclides used for TRT and for PET/SPECT, respectively.
Definition of fundamental microdosimetric quantities and basic features of MKM
The most important feature of microdosimetry in comparison to conventional-scale dosimetry is the consideration of the spatial and stochastic divergences of the deposition energies around the trajectories of charged particles. Thus, two stochastic quantities specially used in microdosimetry, i.e., the specific energy z in Gy and the lineal energy y in keV/μm, were defined and measured instead of the corresponding non-stochastic quantities used in conventional dosimetry, i.e., absorbed dose and linear energy transfer (LET), respectively. The definitions of z and y are as follows:
$$ z=\frac{\varepsilon }{m}, $$
$$ y=\frac{\varepsilon }{\overline{l}}, $$
where ɛ is the energy imparted to a target with mass m and mean chord length \( \overline{l} \). They are generally expressed by their frequency or dose probability density functions, f(z) and f(y) or d(z) and d(y), respectively. Detailed descriptions of the definitions of the microdosimetric quantities are given in International Commission on Radiation Units and Measurements (ICRU) Report 36 [38].
MKM [20] is one of the most successful models for explaining the biological effectiveness of radiation on the cellular surviving fraction. It mathematically interprets the LQ relation of the surviving fraction based on the theory of dual radiation action [39]. The concept of MKM is schematically drawn in Fig. 9. In MKM, the following six basic assumptions are made: (i) a cell nucleus can be divided into multiple domains with submicron scales; (ii) radiation exposure produces two types of DNA damage, named lethal and sublethal lesions, in cell nuclei; (iii) the number of lethal and sublethal lesions produced in a domain is proportional to the specific energy, z, in the domain; (iv) a sublethal lesion is either repaired or converted into a lethal lesion via spontaneous transformation or interaction with another sublethal lesion created in the same domain; (v) a domain is considered inactivated when an intra-domain lethal lesion is formed; and (vi) a cell is considered inactivated when any intranuclear domain is inactivated.
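To close the appendix on cumulative activities: because A(t) is piecewise linear (Eq. 7) with a mono-exponential tail (Eq. 8), the integral in Eq. 9 is available in closed form and can be evaluated directly, with no numerical quadrature per voxel. The sketch below is ours, not part of the published PHITS tool chain, and every input value (sampling times, activities, and decay constants) is a made-up placeholder:

```python
import numpy as np

def cumulative_activity(t, A, lam_phy, lam_phy_pet, lam_bio):
    """Closed-form evaluation of Eq. 9: piecewise-linear A(t) between
    measurements (Eq. 7) plus a mono-exponential tail after t_n (Eq. 8).
    t, A: measurement times and decay-corrected activities; the implicit
    A0 = t0 = 0 of the text is prepended here."""
    t = np.concatenate(([0.0], np.asarray(t, dtype=float)))
    A = np.concatenate(([0.0], np.asarray(A, dtype=float)))
    dt = np.diff(t)
    a = np.diff(A) / dt                                   # slopes a_i
    b = (A[:-1] * t[1:] - A[1:] * t[:-1]) / dt            # intercepts b_i
    e0, e1 = np.exp(-lam_phy * t[:-1]), np.exp(-lam_phy * t[1:])
    # per-segment integral of (a_i*t + b_i) * exp(-lam_phy*t)
    seg = ((lam_phy * t[:-1] + 1) * e0 - (lam_phy * t[1:] + 1) * e1) / lam_phy**2 * a \
        + (e0 - e1) / lam_phy * b
    tail = A[-1] * np.exp(-lam_phy * t[-1]) / (lam_phy + lam_bio)
    return lam_phy / lam_phy_pet * (seg.sum() + tail)

# Hypothetical example: activities sampled at 1, 2, 4, and 24 h post-injection
print(cumulative_activity(
    t=[1, 2, 4, 24.0], A=[5, 8, 6, 1.0],
    lam_phy=np.log(2) / (9.9 * 24),  # assumed therapy nuclide, 9.9-day half-life, per hour
    lam_phy_pet=np.log(2) / 1.5,     # assumed PET tracer half-life of 1.5 h
    lam_bio=0.05))                   # assumed biological clearance constant, per hour
```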
Schematic drawing of the concept of MKM
Based on these assumptions, it can be mathematically derived that the cell surviving fraction in any radiation field with an absorbed dose D, SMK(D), is calculated by
$$ S_{\text{MK}}(D)=\exp \left[-\left({\alpha}_0+\beta\, {\overline{z}}_{1\text{D}}\right)D-\beta {D}^2\right], $$
where α0 and β are parameters independent of the radiation field, and \( {\overline{z}}_{1\text{D}} \) denotes the dose-mean specific energy per event in a domain. Considering the saturation correction due to the overkill effect [28] and the dose-rate effect [29], Eq. 12 can be replaced by Eq. 2.
The most important feature of MKM in comparison to other conventional LQ models is that it can computationally determine the α parameter for any radiation field, considering the spatial and stochastic divergences of the deposition energies, owing to the use of the microdosimetric quantities z and y instead of LET. Thus, the RBE values of low-energy He ions calculated by MKM are higher than the corresponding data for high-energy C ions having the same LET, as expected from track-structure simulations and experimental data [31]. In addition, MKM suggests that the β parameter should be independent of the radiation field, whereas conventional LQ models regard it as a variable and generally presume its value to be 0 for high-LET radiation. Thus, the RBE values obtained from MKM are always greater than 1, in contrast to those calculated by conventional LQ models, which are occasionally less than 1 in the case of high-dose, high-LET irradiations with β = 0. These features are beneficial in estimating RBE for TAT.
Procedure for PET-CT data acquisition
Whole-body PET/CT images were acquired using a SET-3000BCT/X scanner (SHIMADZU, Kyoto, Japan) in 3-D mode (pixel size 4.0 mm, slice thickness 3.25 mm) with 9 min per frame (from the mid-thigh to the top of the skull). PET images were reconstructed by the Dynamic Row-Action Maximum Likelihood Algorithm (DRAMA) with an image matrix of 128 × 128 and a voxel size of 4.0 × 4.0 × 3.25 mm³. Attenuation correction was performed using a 137Cs source. The unenhanced low-dose CT was acquired after the PET scan (120 kVp and 37.5 mAs). The CT scans were reconstructed to a slice thickness of 5 mm.
Sato, T., Furuta, T., Liu, Y. et al. Individual dosimetry system for targeted alpha therapy based on PHITS coupled with microdosimetric kinetic model. EJNMMI Phys 8, 4 (2021). https://doi.org/10.1186/s40658-020-00350-7
Keywords: Individual dosimetry; Microdosimetry; EQDX
New techniques for bounding stabilizer rank
Benjamin Lovitz1 and Vincent Steffan2
1Institute for Quantum Computing and Department of Applied Mathematics, University of Waterloo, 200 University Ave W, Waterloo, ON, Canada
2QMATH, Department of Mathematical Sciences, University of Copenhagen, Universitetsparken 5, 2100 Copenhagen, Denmark
In this work, we present number-theoretic and algebraic-geometric techniques for bounding the stabilizer rank of quantum states. First, we refine a number-theoretic theorem of Moulton to exhibit an explicit sequence of product states with exponential stabilizer rank but constant approximate stabilizer rank, and to provide alternate (and simplified) proofs of the best-known asymptotic lower bounds on stabilizer rank and approximate stabilizer rank, up to a log factor. Second, we find the first non-trivial examples of quantum states with multiplicative stabilizer rank under the tensor product. Third, we introduce and study the generic stabilizer rank using algebraic-geometric techniques.
@article{Lovitz2022newtechniques, doi = {10.22331/q-2022-04-20-692}, url = {https://doi.org/10.22331/q-2022-04-20-692}, title = {New techniques for bounding stabilizer rank}, author = {Lovitz, Benjamin and Steffan, Vincent}, journal = {{Quantum}}, issn = {2521-327X}, publisher = {{Verein zur F{\"{o}}rderung des Open Access Publizierens in den Quantenwissenschaften}}, volume = {6}, pages = {692}, month = apr, year = {2022} }
[1] Sergey Bravyi, Graeme Smith, and John A. Smolin. ``Trading classical and quantum computational resources''. Physical Review X 6, 021043 (2016). doi: 10.1103/PhysRevX.6.021043.
[2] Sergey Bravyi and David Gosset. ``Improved classical simulation of quantum circuits dominated by Clifford gates''. Physical Review Letters 116, 250501 (2016). doi: 10.1103/PhysRevLett.116.250501.
[3] Sergey Bravyi, Dan Browne, Padraic Calpin, Earl Campbell, David Gosset, and Mark Howard. ``Simulation of quantum circuits by low-rank stabilizer decompositions''. Quantum 3, 181 (2019). doi: 10.22331/q-2019-09-02-181.
[4] Hammam Qassim, Hakop Pashayan, and David Gosset. ``Improved upper bounds on the stabilizer rank of magic states''. Quantum 5, 606 (2021). doi: 10.22331/q-2021-12-20-606.
[5] Shir Peleg, Amir Shpilka, and Ben Lee Volk. ``Lower Bounds on Stabilizer Rank''. Quantum 6, 652 (2022). doi: 10.22331/q-2022-02-15-652.
[6] David Petrie Moulton. ``Representing powers of numbers as subset sums of small sets''. Journal of Number Theory 89, 193–211 (2001). doi: 10.1006/jnth.2000.2646.
[7] A. A. Razborov. ``Lower bounds on the size of bounded depth circuits over a complete basis with logical addition''. Mathematical notes of the Academy of Sciences of the USSR 41, 333–338 (1987). doi: 10.1007/BF01137685.
[8] R. Smolensky. ``Algebraic methods in the theory of lower bounds for boolean circuit complexity''. In Proceedings of the Nineteenth Annual ACM Symposium on Theory of Computing. Pages 77–82. (1987). doi: 10.1145/28395.28404.
[9] R. Smolensky. ``On representations by low-degree polynomials''. In Proceedings of the 1993 IEEE 34th Annual Foundations of Computer Science. Pages 130–138. (1993). doi: 10.1109/SFCS.1993.366874.
[10] J. Landsberg and M. Michałek. ``Towards finding hay in a haystack: explicit tensors of border rank greater than 2.02m in $\mathbb{C}^m \otimes \mathbb{C}^m \otimes \mathbb{C}^m$.'' (2019). arXiv:1912.11927.
[11] Joseph Landsberg. ``Tensors: Geometry and Applications''. Volume 128 of Graduate Studies in Mathematics. AMS (2011). doi: 10.1090/gsm/128.
[12] Hammam Qassim. ``Classical simulations of quantum systems using stabilizer decompositions''. PhD thesis. University of Waterloo. (2021).
[13] Michael Beverland, Earl Campbell, Mark Howard, and Vadym Kliuchnikov. ``Lower bounds on the non-clifford resources for quantum computations''. Quantum Science and Technology 5, 035009 (2020). doi: 10.1088/2058-9565/ab8963.
[14] Jeroen Dehaene and Bart De Moor. ``Clifford group, stabilizer states, and linear and quadratic operations over GF(2)''. Physical Review A 68, 042318 (2003). doi: 10.1103/PhysRevA.68.042318.
[15] Maarten Van den Nest. ``Classical simulation of quantum computation, the Gottesman-Knill theorem, and slightly beyond''. Quantum Information and Computation 10, 258–271 (2010).
[16] Joe Harris. ``Algebraic geometry: A first course''. Graduate Texts in Mathematics. Springer New York. (2013). doi: 10.1007/978-1-4757-2189-8.
[17] Jay Goldman and Gian-Carlo Rota. ``On the foundations of combinatorial theory IV: Finite vector spaces and Eulerian generating functions''. Technical report. Harvard University (1970).
[18] David Gross. ``Hudson's theorem for finite-dimensional quantum systems''. Journal of Mathematical Physics 47, 122107 (2006). doi: 10.1063/1.2393152.
[19] William Feller. ``An introduction to probability theory and its applications''. Volume 1. John Wiley and Sons. (1991). Third edition.
[20] Benjamin Lovitz and Nathaniel Johnston. ``Entangled subspaces and generic local state discrimination with pre-shared entanglement'' (2020). arXiv:2010.02876.
[21] Kil-Chan Ha and Seung-Hyeok Kye. ``Multi-partite separable states with unique decompositions and construction of three qubit entanglement with positive partial transpose''. Journal of Physics A: Mathematical and Theoretical 48, 045303 (2015). doi: 10.1088/1751-8113/48/4/045303.
[22] Benjamin Lovitz and Fedor Petrov. ``A generalization of Kruskal's theorem on tensor decomposition'' (2021). arXiv:2103.15633.
[23] Daniel Gottesman. ``Stabilizer codes and quantum error correction''. PhD thesis. California Institute of Technology. (1997).
[24] Michael A. Nielsen and Isaac L. Chuang. ``Quantum computation and quantum information''. Cambridge University Press. (2000). doi: 10.1017/CBO9780511976667.
[25] Daniel Gottesman. ``The Heisenberg representation of quantum computers''. In 22nd International Colloquium on Group Theoretical Methods in Physics. Pages 32–43. (1998). arXiv:quant-ph/9807006.
[26] P. Oscar Boykin, Tal Mor, Matthew Pulver, Vwani Roychowdhury, and Farrokh Vatan. ``On universal and fault-tolerant quantum computing: A novel basis and a new constructive proof of universality for Shor's basis''. In Proceedings of the 40th Annual Symposium on Foundations of Computer Science. (1999). doi: 10.1109/SFFCS.1999.814621.
[27] Sergey Bravyi and Alexei Kitaev. ``Universal quantum computation with ideal clifford gates and noisy ancillas''. Physical Review A 71, 022316 (2005). doi: 10.1103/PhysRevA.71.022316.
Does the average primeness of natural numbers tend to zero?
This question was posted on MSE. It got many upvotes but no answer, hence I am posting it on MO.
A number is either prime or composite, hence primality is a binary concept. Instead, I wanted to assign a value of primality to every number using some function $f$ such that $f(n) = 1$ iff $n$ is a prime; otherwise $0 < f(n) < 1$, and as the number of divisors of $n$ increases, $f(n)$ decreases on average. Thus $f(n)$ is a measure of the degree of primeness of $n$, where 1 is a perfect prime and 0 is a hypothetical perfect composite. Hence $\frac{1}{N}\sum_{r \le N} f(r)$ can be interpreted as the average primeness of the first $N$ integers.
After trying several definitions and going through the ones in the literature, I came up with:
Define $f(n) = \dfrac{2s_n}{n-1}$ for $n \ge 2$, where $s_n$ is the standard deviation of the divisors of $n$. One reason for using the standard deviation was that I was already studying the distribution of the divisors of a number.
Question 1: Does the average primeness tend to zero? i.e., does the following hold? $$ \lim_{N \to \infty} \frac{1}{N}\sum_{r = 2}^N f(r) = 0 $$
Question 2: Is $f(n)$ injective over composites? i.e., do there exist composites $3 < m < n$ such that $f(m) = f(n)$ (which would show that it is not)?
The running average at $N = 4.35\times 10^8$ is approximately $0.5919$ and decreasing, so the limit, if it exists, must be between 0 and 0.5919.
For $2 \le i \le n$, computed data shows that the minimum value of $f(i)$ occurs at the largest highly composite number $\le n$.
Note: Here the standard deviation of $x_1, x_2, \ldots , x_n$ is defined as $\sqrt{\frac{\sum_{i=1}^{n} (\bar{x}-x_i)^2}{n}}$, where $\bar{x}$ is the mean. Also notice that even if we define the standard deviation as $\sqrt{\frac{\sum_{i=1}^{n} (\bar{x}-x_i)^2}{n-1}}$, our questions remain unaffected, because in this case, in the definition of $f$, we would multiply by $\sqrt 2$ instead of $2$ to normalize $f$ in the interval $(0,1)$.
nt.number-theory real-analysis analytic-number-theory prime-numbers
asked Apr 8, 2019 at 9:51 – Nilotpal Kanti Sinha
From the linked question it seems that $s_n$ grows faster than $n$ so that $f(n)$ doesn't go to zero. – lcv Apr 8, 2019 at 10:18
@lcv No, $s_n$ doesn't grow faster than $n$. What are you looking at? – Nilotpal Kanti Sinha
"...I wanted to have a continuous function...". In what topology is $f$ continuous? If you put the discrete topology on the natural numbers, then any function is continuous, so you probably have something else in mind.
I have verified that $f$ is injective over composites less than 10,000,000. – Matt F.
@NilotpalKantiSinha - This may be interesting for you. Have a look at this question. There is a definition of "compositeness" of a number. Here, the value approaches zero as numbers become more prime-like, and approaches infinity as they become more composite. – swami
The answer to Question 1 is "yes". To see this, notice that $s_n$ is at most the square root of the average squared divisor, i.e.
$$ s_n\leq \sqrt{\frac{\sum_{d\mid n}d^2}{\sum_{d\mid n} 1}}=\sqrt{\frac{\sigma_2(n)}{\sigma_0(n)}}, $$
where $\sigma_k(n)$ is the sum of $k$-th powers of divisors of $n$. Now,
$$ \sigma_2(n)=n^2\sigma_{-2}(n), $$
and since $\sigma_{-2}(n)<\zeta(2)=\pi^2/6$,
$$ \sigma_2(n)<\frac{\pi^2}{6}n^2 $$
for all $n$. Therefore we have
$$ f(n)\leq \frac{2}{n-1} \sqrt{\frac{\pi^2}{6}n^2/\sigma_0(n)}\leq \frac{5.14}{\sqrt{\sigma_0(n)}} $$
for all $n$. Now, almost all $n\leq N$ have at least $0.5\ln\ln N$ distinct prime factors (by the Hardy–Ramanujan theorem). In particular, for almost all $n\leq N$ we have $\sigma_0(n)\geq 0.5\ln\ln N$ (note that $\sigma_0(n)\geq 2^{\omega(n)}\geq \omega(n)$, where $\omega(n)$ is the number of distinct prime factors of $n$). Therefore, our bound for $f(n)$, together with the trivial observation that $0\leq f(n)\leq 1$, gives
$$ \sum_{n\leq N} f(n)\leq \sum_{n\leq N, \sigma_0(n)\geq 0.5\ln\ln N} \frac{5.14}{\sqrt{\sigma_0(n)}}+\sum_{n\leq N, \sigma_0(n)<0.5\ln\ln N} 1= o(N), $$
as needed. Using the contour integration method, one can even prove something like
$$ \sum_{n\leq N} f(n)=O(N(\ln N)^{1/\sqrt{2}-1}). $$
Alexander Kalmynin
That last expression reminds me of an XKCD alt-text: "If you ever find yourself raising $\log(\text{anything})^{1/\sqrt{2}}$, set down the marker and back away from the whiteboard; something has gone horribly wrong." – Michael Seifert
Bah, it was $\log(anything)^e$, not $\frac{1}{\sqrt{2}}$. Log to the power of one over the sqrt of 2 is mundane; log of something to the power e is a sign of insanity. – Yakk
@Yakk Yeah, but Randall also says that taking the $\pi$-th root of anything is insane. However, the paper "Mean values of multiplicative functions" by Montgomery and Vaughan, Theorem 5, contains $(\log x)^{1/\pi-1}$ and is totally fine! (P.S. Is the inequality $n_p<p^{\frac{1}{4\sqrt{e}}+o(1)}$ ok?..) – Alexander Kalmynin
@AsymptotiacK: Thanks for the good answer to Question 1. I think that in general, if $f(n)$ is any function whose value decreases from 1 to zero as its defined measure of primeness decreases, then the mean value of $f$ must tend to zero, because as we go higher up the number line, for any $k \ge 2$, the probability of finding numbers with $\le k$ factors should decrease. – Space Apr 9, 2019 at 6:11
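For readers who want to reproduce the computations described above, here is a small brute-force sketch (ours, not from the thread); the cutoff is kept tiny relative to the $4.35\times 10^8$ reported in the question, purely for runtime, and it also sanity-checks the answer's bound $f(n)\le 5.14/\sqrt{\sigma_0(n)}$:

```python
import numpy as np

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def f(n):
    """Primeness f(n) = 2*s_n/(n-1), with s_n the population standard
    deviation of the divisors of n (so f(p) = 1 for every prime p)."""
    d = np.array(divisors(n), dtype=float)
    return 2.0 * d.std() / (n - 1)

N = 2000  # tiny cutoff for illustration only
vals = [f(n) for n in range(2, N + 1)]
print("average primeness up to", N, "=", np.mean(vals))  # slowly decreasing in N
print("f(97) =", f(97))  # exactly 1 at a prime
# check the bound f(n) <= 5.14 / sqrt(sigma_0(n)) derived in the answer
assert all(f(n) <= 5.14 / np.sqrt(len(divisors(n))) + 1e-9 for n in range(2, 500))
```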
History of Science and Mathematics Stack Exchange is a question and answer site for people interested in the history and origins of science and mathematics.
Who discovered the indeterminate forms like 0/0?
Who discovered the indeterminate forms and how did they discover them? How did someone come to know that a particular form (fraction, product, sum/difference, exponent) is indeterminate?
For example, $\frac{0}{0}$ is an indeterminate form. How did they come to know that when both numerator and denominator approach zero, the fraction can be any number (which depends upon their respective rates)?
I'll be highly grateful if you explain it comprehensively; I'm a high school student.
mathematics calculus elementary-algebra
dRIFT sPEED
Related: hsm.stackexchange.com/questions/5065/… – Spencer Apr 5 '20 at 23:16
Special cases were handled algebraically even before "l'Hopital's" rule, which appears in l'Hopital's 1696 transcription of tips on calculus he purchased (literally) from Johann Bernoulli in 1694; see Indeterminate Forms Revisited, by Boas. For example, Descartes's method of finding tangents involved resolving "indeterminate forms" like $0/0$; see Is there a 'lost calculus'? So the phenomenon was known from examples (without a name or special attention paid to it) by the time it was singled out by Bernoulli, and then comprehensively systematized by Euler. That was done in Euler's textbook Institutionum Calculi Differentialis (1755), chapter 15 of part II. Fortunately, there is an English translation by Bruce. At the beginning, Euler explains how $0/0$ comes up and why such expressions are "indeterminate", and then gives some tricks for resolving them, including cancellation, "l'Hopital's" rule, and logarithmic differentiation. He freely manipulates infinitesimals, and at the end even deduces the famous sum of the Basel series by applying "l'Hopital's" rule thrice. Here is from the opening, where he shows that an arbitrary value is possible, with a remarkably simple example:
"If the fraction $\frac{P}{Q}$ were some function $y$ of $x$, the numerator and the denominator of which likewise may vanish on putting a certain value in place of $x$, then in that case the fraction $\frac{P}{Q}$ may arise expressing the value of the function $y=\frac00$; which expression thus may be considered indeterminate, since for each quantity either finite or infinite, or infinitely small it may become equal to, from that evidently in this case the value of $y$ cannot be deduced. Yet meanwhile it is easily seen, because in addition in this case the function $y$ takes a determined value always, whatever may be substituted for $x$, also in this case an indeterminate value of $y$ cannot be possible. This is made clear from this example, if there were $y=\frac{aa-xx}{a-x}$, so that on making $x=a$ certainly there becomes $y=\frac00$. But since with the numerator divided by the denominator it may become $y = a + x$, it is evident, if there is put $x= a$ to become $y=2a$, thus so that in this case that fraction $\frac00$ may be equivalent to the quantity $2a$."
Although Euler uses the (Latin version of) "indeterminate", he does not call them "indeterminate forms" or introduce the handy notations and classifications encountered in modern textbooks.
According to Jeff Miller's Earliest Uses, this taxonomic process began in the 1840s:
"The term INDETERMINATE FORM is used in French in 1840 in Moigno, abbé (François Napoléon Marie), (1804-1884): Leçons de calcul différentiel et de calcul intégral, rédigées d'après les méthodes et les ouvrages publiés ou inédits de M. A.-L. Cauchy, par M. l'abbé Moigno.
Indeterminate Forms is found in English as a chapter title in 1841 in An Elementary Treatise on Curves, Functions, and Forces by Benjamin Peirce.
Forms such as $0/0$ are called singular values and singular forms in 1849 in An Introduction to the Differential and Integral Calculus, 2nd ed., by James Thomson.
In Primary Elements of Algebra for Common Schools and Academies (1866) by Joseph Ray, $0/0$ is called "the symbol of indetermination.""
Conifold
TVT: TVT, 2010, Volume 48, Issue 2, Pages 206–209 (Mi tvt710)
This article is cited in 9 scientific papers (total in 9 papers)
Thermophysical Properties of Materials
The density and interdiffusion coefficients of bismuth-tin melts of eutectic and near-eutectic composition
R. A. Khairulina, S. V. Stankusa, R. N. Abdullaevb, V. M. Skljarchukc
a S.S. Kutateladze Institute of Thermophysics, Siberian Division of the Russian Academy of Sciences
b Novosibirsk State University
c Ivan Franko State University of L'viv
Abstract: The method of scanning samples with a narrow beam of gamma radiation is used to investigate the temperature dependences of the density of bismuth–tin liquid alloys containing $30.0$, $44.0$, and $54.0$ at.$\%$ Bi, at temperatures from the liquidus line to $920$ K. The density jump at the solid–liquid phase transition is directly measured for the first time for an alloy of eutectic composition ($44.0$ at.$\%$ Bi). The temperature and concentration dependences of the thermal properties of the liquid Bi–Sn system are constructed. The kinetics of homogenization of melts with average compositions of $44.0$ and $54.0$ at.$\%$ Bi are studied at temperatures from $550$ to $850$ K. The interdiffusion coefficients are determined from the results of these experiments.
High Temperature, 2010, 48:2, 188–191
UDC: 536.41:669.65'76+532.72:669.65'76
PACS: 61.25.Mv; 65.20.-w; 66.10.-x
Citation: R. A. Khairulin, S. V. Stankus, R. N. Abdullaev, V. M. Skljarchuk, "The density and interdiffusion coefficients of bismuth-tin melts of eutectic and near-eutectic composition", TVT, 48:2 (2010), 206–209; High Temperature, 48:2 (2010), 188–191
\Bibitem{KhaStaAbd10}
\by R.~A.~Khairulin, S.~V.~Stankus, R.~N.~Abdullaev, V.~M.~Skljarchuk
\paper The density and interdiffusion coefficients of bismuth-tin melts of eutectic and near-eutectic composition
\jour TVT
\mathnet{http://mi.mathnet.ru/tvt710}
\jour High Temperature
\crossref{https://doi.org/10.1134/S0018151X10020082}
http://mi.mathnet.ru/eng/tvt710
http://mi.mathnet.ru/eng/tvt/v48/i2/p206
Citing papers:
1. V. E. Sidorov, S. A. Uporov, D. A. Yagodin, K. I. Grushevskii, N. S. Uporova, D. V. Samokhvalov, "Density, electrical resistivity and magnetic susceptibility of $\mathrm{Sn}$–$\mathrm{Bi}$ alloys at high temperatures", High Temperature, 50:3 (2012), 348–353
2. Khairulin R.A., Stankus S.V., Abdullaev R.N., Morozov V.A., "The interdiffusion in sodium-lead melts of compositions from 2.5 to 41.1 at.% Pb", J. Phase Equilib. Diffus., 33:5 (2012), 369–374
3. S. V. Stankus, R. A. Khairulin, V. G. Martynets, P. P. Bezverkhii, "Studies of the thermophysical properties of substances and materials at the Novosibirsk Scientific Center of the Siberian Branch of the Russian Academy of Sciences, 2002–2012", High Temperature, 51:5 (2013), 695–711
4. Chen W.-M., Zhang L.-J., Liu D.-D., Du Y., Tan Ch.-Yu., "Diffusivities and atomic mobilities of Sn-Bi and Sn-Pb melts", J. Electron. Mater., 42:6 (2013), 1158–1170
5. Khairulin R.A., Abdullaev R.N., Stankus S.V., "Density of the ternary bismuth–indium–tin alloy of eutectic composition in the liquid state and in the melting–crystallization region" [in Russian], Vestnik Novosibirskogo Gosudarstvennogo Universiteta, Seriya: Fizika, 8:1 (2013), 104–106
6. Gibbs P.J., Imhoff S.D., Morris C.L., Merrill F.E., Wilde C.H., Nedrow P., Mariam F.G., Fezzaa K., Lee W.-K., Clarke A.J., "Multiscale X-ray and proton imaging of bismuth-tin solidification", JOM, 66:8 (2014), 1485–1492
7. Hooper R.J., Davis C.G., Johns P.M., Adams D.P., Hirschfeld D., Nino J.C., Manuel M.V., "Prediction and characterization of heat-affected zone formation in tin-bismuth alloys due to nickel-aluminum multilayer foil reaction", J. Appl. Phys., 117:24 (2015), 245104
8. Theodossiadis G.D., Zaeh M.F., "Study of the heat affected zone within metals joined by using reactive multilayered aluminum-nickel nanofoils", Prod. Eng.-Res. Dev., 11:4-5 (2017), 401–408
9. Astafieva I.M., Gerasimov D.N., Makseev R.E., International Conference Problems of Thermal Physics and Power Engineering (PTPPE-2017), Journal of Physics: Conference Series, 891, IOP Publishing Ltd, 2017
Mathematics Educators Mathematics Educators Meta Mathematics Educators Stack Exchange is a question and answer site for those involved in the field of teaching mathematics. Join them; it only takes a minute: Mathematics Educators Beta What female mathematician can I introduce to my High School students? I enjoy talking about Pythagoras when I teach the Pythagorean theorem. I sometimes mention Descartes when introducing Cartesian coordinates. And Leibniz and Newton are mentioned in many calculus classes. But all of these famous mathematicians accessible to high school students are male. What female mathematician can I introduce to my high school students? And what mathematical concept did she work with? secondary-education examples history women David Ebert David EbertDavid Ebert $\begingroup$ See the book Women in Mathematics by Lynn M. Osen for several good examples. $\endgroup$ – Jim Belk Apr 20 '14 at 23:03 $\begingroup$ What do you say about Pythagoras? I'd want to emphasize that stories about him are less reliable than stories about Johnny Appleseed. $\endgroup$ – user173 Apr 21 '14 at 2:10 $\begingroup$ Ada Lovelace and Vi Hart $\endgroup$ – Chloe Apr 21 '14 at 5:02 $\begingroup$ Few if any of the results that high school students will rely on are attributable to women. You can't much work around that because it's a feature of the syllabus, the difficulty of modern mathematics, and the small number of pre-modern female mathematicians. I can't think of an example not of the form, "here is an impressive female mathematician, here is why her work was important, but you won't be using that". AP classes might get near, but for example Emmy Noether's work starts after an average undergraduate algebra syllabus stops. $\endgroup$ – Steve Jessop Apr 21 '14 at 13:29 $\begingroup$ Also there was no Fields medal when Noether was under 40. You don't need to manufacture a female Fields medalist, as Noether was clearly a more important mathematician than the average Fields medalist. $\endgroup$ – Noah Snyder Apr 21 '14 at 16:19 Emmy Noether comes first to mind, as one of the most influential mathematicians in abstract algebra, specifically in the development of Noetherian rings (along with many properties of ideals). One aspect of her work that high school students might like is from another area, analysis. Noether's theorem says that every symmetry of the laws of nature (or the universe) gives rise to a Conservation Law. So, energy is conserved because of time symmetry (meaning that the laws of nature don't change after time). Momentum is conserved because of translation symmetry (meaning that the laws of nature are the same at every point). Angular momentum is conserved because of rotation symmetry (meaning that the laws of nature are the same in every direction). Edit: Recently I've learned that, in quantum mechanics, Noether's theorem has quite a few more implications. For instance, the fact that the phase factor is redundant (i.e. the direction of the complex number that you square to get a probability) gives rise to conservation of charge, and quite a few other things get conserved in QFT. Brian RushtonBrian Rushton 6,27266 gold badges4545 silver badges122122 bronze badges $\begingroup$ Noether's theorem is probably going to seem quite mind-blowing to anyone who isn't able to follow its mathematical derivation. Which could be a good or bad thing. ;-) $\endgroup$ – David Z Apr 21 '14 at 0:53 $\begingroup$ Can you change the link from the mobile wikipedia to the desktop? 
I tried to edit it but it said it has to be over 6 characters. $\endgroup$ – Cameron Martin Apr 21 '14 at 4:18 $\begingroup$ @CameronMartin done! $\endgroup$ – Brian Rushton Apr 21 '14 at 12:59 $\begingroup$ Even if you can't follow the derivation of Noether's theorem the examples are still pretty cool. It's pretty intuitive once it's pointed out that momentum should be related to shift symmetry and angular momentum to rotational symmetry (and then you can blow their minds with time shift symmetry and energy). $\endgroup$ – Noah Snyder Apr 21 '14 at 16:21 Julia Robinson! I recommend her for a high school audience for a few reasons: Mathematical reasons: She is best known for her work towards the solution of Hilbert's 10th Problem, regarding an algorithm for solving Diophantine Equations. High school students can absolutely recognize and solve particular Diophantine Equations. Furthermore, and more relevant to Robinson's work, I believe high school students can appreciate (upon seeing examples) the subtle dependence of such equations on their coefficients. For instance, ask a high school algebra student to experiment and find all the solutions to, say, $6x+15y=33$. Then, ask them to solve $6x+15y=34$. What changes? Give them some random coefficients for $ax+by=c$. What would they do? Can they generalize to more variables? Then, work with them on a quadratic equation, something like $x^2+1=y^2$. Can they find any solutions? What about $3x^2-5y^2=2$? Do they see how hard it is to get a general method? Then, you can start to talk about what Hilbert's 10th Problem says. This can facilitate several interesting discussions for a high school audience Solving equations in integers (Why have mathematicians been interested in solving them since pretty much antiquity? Where do they appear in real life?) Algorithms and decision procedures (What is an algorithm? What makes one better than another? Do we care about how efficiently they can run, or how easy they are to describe/implement?) Logic and philosophy of mathematics (What does it mean to prove something? What does it mean to disprove something? What does it mean to prove that there is no procedure that could solve every Diophantine equation?) (I understand this doesn't really address the full depth of Hilbert's 10th, but if you're looking to pique the interest of a high school audience, I believe this should suffice.) Historical reasons: Julia Robinson is one of the more notable and renowned mathematicians (no need for "female") of the 20th century, especially in America. Despite this, she had difficulty securing an academic position. But despite that, she went on to obtain such a position and became the first female president of the AMS. Along the way, she devoted much of her time to other interests, including political campaigns. Surely, she can spark many interesting discussions for a high school audience! ThomasW Brendan W. SullivanBrendan W. Sullivan $\begingroup$ Another good J.R. topic is one of the "positive aspects of a negative solution": there is a polynomial whose positive values are exactly the primes. $\endgroup$ – user173 Apr 21 '14 at 5:36 $\begingroup$ +1: It is hard for me to imagine anyone who would not regard Julia Bowman Robinson as being an inspirational figure in one manner or another. $\endgroup$ – Pete L. Clark May 20 '14 at 17:39 Perhaps not strictly a mathematician in the traditional sense, but I think Ada Lovelace might be a great woman to start with in today's digital world. 
She even has an important programming language named after her: Ada. Augusta Ada King, Countess of Lovelace (10 December 1815 – 27 November 1852), born Augusta Ada Byron and now commonly known as Ada Lovelace, was an English mathematician and writer chiefly known for her work on Charles Babbage's early mechanical general-purpose computer, the Analytical Engine. Her notes on the engine include what is recognised as the first algorithm intended to be carried out by a machine. Because of this, she is often described as the world's first computer programmer. Or, if you choose to go with the ancients, I'd suggest Hypatia: Hypatia (/haɪˈpeɪʃə/ hy-PAY-shə; Ancient Greek: Ὑπατία; Hypatía) (born c. AD 350 – 370; died 415) was a Greek Alexandrine Neoplatonist philosopher in Egypt who was one of the earliest mothers of mathematics. As head of the Platonist school at Alexandria, she also taught philosophy and astronomy. As a Neoplatonist philosopher, she belonged to the mathematic tradition of the Academy of Athens, as represented by Eudoxus of Cnidus; she was of the intellectual school of the 3rd century thinker Plotinus, which encouraged logic and mathematical study in place of empirical enquiry and strongly encouraged law in place of nature. Recent scholarship has determined that certain important commentaries were indeed written by Hypatia: Hypatia also wrote commentaries on the Arithmetica of Diophantus, the Conics of Apollonious and edited part of her father's Commentary on the Almagest by Ptolemy. Hypatia also met with a rather unfortunate end: According to the only contemporary source, Hypatia was murdered by a Christian mob after being accused of exacerbating a conflict between two prominent figures in Alexandria: the governor Orestes and the Bishop of Alexandria. More details @ Events leading to her murder. She was at times something of a popular romantic icon as well. See: In the 19th century, interest in the "literary legend of Hypatia" began to rise. Perhaps the dramatic aspects of Hypatia's life could be deemed distractions from her accomplishments in mathematics, when dealing with HS students. On the other hand, since her story is interesting and provocative, it might well serve to foster students' interest in her accomplishments. VectorVector $\begingroup$ I recommend against discussing Hypatia, since we have nothing written by her. How would you answer the OP's question: What mathematical concept did she work with? $\endgroup$ – user173 Apr 21 '14 at 3:00 $\begingroup$ @DanNeely - where someone tried to insist she was murdered for being a woman in science - that contention is absurd to anyone who knows anything about the ancient Greeks, and perhaps bringing her up will open the door to educating kids about good History and the role of women in the ancient world, which seems to be related to what the OP is interested in. Otherwise, why the question at all? $\endgroup$ – Vector Apr 21 '14 at 21:50 $\begingroup$ Additional insight for Ada Lovelace $\endgroup$ – user1167 Apr 22 '14 at 9:14 $\begingroup$ For another influential female computer scientist, there is Grace Hopper: smbc-comics.com/?id=2516 $\endgroup$ – Thomas Eding Apr 23 '14 at 17:36 $\begingroup$ I was hoping that I'd find this answer. $\endgroup$ – Jonast92 Apr 28 '14 at 10:12 Florence Nightingale, elected to the Royal Statistical Society and (honorarily) to the American Statistical Association, for her work on the importance of statistical data and statistical graphics. 
Her statistics and her graphics persuaded the British government to improve sanitation in military hospitals, saving many soldiers' lives. She was a statistician rather than a mathematician, but since statistics is part of high school math classes, she seems worth talking about. $\begingroup$ It is worth noting that according to some sources (en.wikipedia.org/wiki/Pie_chart) she popularised / invented the use of pie charts. $\endgroup$ – Rune Apr 21 '14 at 19:44 $\begingroup$ @Rune, she popularized/invented a variant of pie charts -- a creative variant that worked for her, but not a format I would advise students to use. If I showed those charts in a class, I would ask: "How else might you present this data to make it clear and visually striking?" $\endgroup$ – user173 Apr 21 '14 at 19:59 $\begingroup$ +1 Florence Nightingale was one of the first people to successfully use statistics and data visualisation to bring around a direct change in government policy which saved lives and changed the history of war and medicine. It's a perfect example of how maths can change the world, with loads to interest children (war and nursing and surprising counter-intuitive facts: she proved that more soldiers were dying of infections in dirty field hospitals than from battle, which led to modern nursing). She was genuinely one of the top pioneers in her field (nothing to do with her gender). $\endgroup$ – user568458 Apr 22 '14 at 14:17 $\begingroup$ +1 - Florence Nightingale as a mathematician - good catch! $\endgroup$ – Vector Apr 23 '14 at 5:05 $\begingroup$ Brilliant idea. There is vast potential here for interdisciplinary ideas, and a good reminder that not all of the people who developed statistics and probability were into gambling! $\endgroup$ – David Ebert Apr 27 '14 at 15:17 Sophie Germain and her work on Fermat's Last Theorem. Santiago CanezSantiago Canez $\begingroup$ Germain primes are a very accessible mathematical starting point. $\endgroup$ – jwg Apr 23 '14 at 5:59 $\begingroup$ I remember doing a report on her in 6th grade. But I remember boundary constrained differential equations (well, I didn't know that's what they were at the time). $\endgroup$ – Potatoswatter Apr 24 '14 at 10:02 $\begingroup$ I often quote Sophie Germain to elementary math students, saying, "Algebra is but written geometry and geometry is but figured algebra," or some slight paraphrase of that. I find this relevant when doing any kind of analytic geometry, and seeing the same facts revealed geometrically as well as in formulas. $\endgroup$ – G Tony Jacobs Jun 15 '17 at 14:02 Maryam Mirzakhani, who just won the Fields Medal, and also was the first Iranian student to win a gold medal in the IMO in 1995 with a perfect score. My colleague Mohammad Javaheri was on Iran's IMO team with her in 1995. He told us the other day that after Maryam won the gold, when the rest of the team went up to congratulate her she said "next, the Fields Medal". Nineteen years later, she did it! (I'm not sure what else to say that would be interesting to high school students...as moduli spaces of Riemann surfaces may not be so interesting to them...) Jon BannonJon Bannon $\begingroup$ This is especially relevant now. As for the last sentence, it would be worth mentioning that she helped bridge the gap between probability theory and geometry (?) 
the common pool table analogy seems particularly effective with kids, although she gave her preferred example here $\endgroup$ – Andres Mejia Jul 16 '17 at 22:58 $\begingroup$ Today's news was heartbreaking. Thanks for the link, Andres. $\endgroup$ – Jon Bannon Jul 17 '17 at 1:03 Any Living One who is friendly enough to come talk with them. Seriously, learning about "people in books" can sometimes be inspiring. But actual live role models are best. Write a local college, university, or business to find a woman who self-identifies as a mathematician. Invite her to your school to spend some time with your students. You want a real person who happens to be a mathematician, a woman, and reasonably happy about that arrangement. I feel awkward about leaving out women who might be working as mathematics instructors in your school, because I think they count as part of the community, too. But students are used to the whole "female high school teacher" archetype, and you seem like you want to adjust their view of normal jobs for women mathematicians, so you are better off finding someone new. TJ HitchmanTJ Hitchman Edit (Jan 2018) I recommend checking Annie Perkins' page: The Mathematicians Project: Mathematicians Are Not Just White Dudes If you scroll down, then you will find a section entitled Women (alphabetical by last name). There are some great sources/names there, and - as a bonus - the project keeps evolving! $$ $$ Edit: Marjorie Rice has recently passed away; for more on her life, see the Quantum story here. I think Marjorie Rice is a great example of an amateur mathematician who managed to make nontrivial discoveries with regard to tessellations. Interestingly, her notation was deciphered by a female professor of mathematics, Doris Schattschneider (also here), who helped to lead the development of Geometer's Sketchpad. (Perhaps you use this piece of software in your school?) Another great choice is Mary Dolciani (also here) who was a Ph.D mathematician especially known for her teaching abilities, and a prolific writer of textbooks and curricula (e.g., through the School Mathematics Study Group, SMSG, which developed New Math). I included a bit about her in an earlier MESE answer. Benjamin DickmanBenjamin Dickman $\begingroup$ (A recent update on tessellating pentagons!) $\endgroup$ – Benjamin Dickman Aug 13 '15 at 6:19 $\begingroup$ (& an even more recent update!) $\endgroup$ – Benjamin Dickman Jul 11 '17 at 23:23 Vi Hart, the self-termed Mathemusician. I especially enjoy her Doodling in Math Class YouTube series. EthanBEthanB $\begingroup$ I really think this answer needs more attention. Her videos were really inspiring for me in high school. $\endgroup$ – DLeh Apr 23 '14 at 20:13 $\begingroup$ They're inspiring for me and I've already graduated from college! $\endgroup$ – EthanB Apr 23 '14 at 21:21 $\begingroup$ This girl is awsome and very inspiring. She should definitely be a female mathematician model for high school students, firstly because she can easily inspire them, and also because she has worked on things that high school students can understand. I have created an account here, just to upvote your answer $\endgroup$ – Jim Blum Jul 3 '14 at 18:59 $\begingroup$ She's working as part of elevr which is a VR research group, of which one part is exploring mathematics via VR. The other women she is working with are all interesting in their own right too. $\endgroup$ – icc97 May 17 '17 at 14:26 Sonya Kovalevsky, correspondent of Weierstrass, for example. 
$\begingroup$ She won the Prix Bordin for her work on what is now known as the Kovalevsky top, and she proved the full version of the Cauchy-Kovalevsky theorem, which stood for nearly 50 years as the central result in the theory of partial differential equations, until the theory of weak solutions came along. $\endgroup$ – Bob Pego Apr 22 '14 at 13:41 If your main interest is to provide a role model that students can identify with, you might want to look at Danica McKellar. According to her Wikipedia entry: McKellar studied mathematics at UCLA, graduating summa cum laude in 1998. As an undergraduate, she coauthored a scientific paper with Professor Lincoln Chayes and fellow student Brandy Winn entitled "Percolation and Gibbs states multiplicity for ferromagnetic Ashkin-Teller models on $\mathbb{Z}^2$." Their results are termed the 'Chayes–McKellar–Winn theorem'. McKellar has authored several mathematics-related books primarily targeting adolescent readers [those in middle-school and high-school] interested in succeeding at the study of mathematics. Joel Reyes NocheJoel Reyes Noche $\begingroup$ This was going to be my answer! She was World News' Person of the Week (youtu.be/nXsfBQlXb6Y) and her Math Doesn't Suck book has great reviews. $\endgroup$ – aknauft Apr 22 '14 at 7:09 $\begingroup$ Winnie Cooper from The Wonder Years! That's pretty awesome (for anyone that still remembers). $\endgroup$ – icc97 May 17 '17 at 14:32 Classically speaking, Maria Agnesi is the best classical mathematician to study. She published calculus texts that expanded and reflected upon the works of Leonhard Euler. oemb1905oemb1905 $\begingroup$ There is also a curve called the Witch of Agnesi. $\endgroup$ – kan Apr 22 '14 at 19:33 $\begingroup$ Indeed! Fun for calculus students to explore! $\endgroup$ – oemb1905 Apr 23 '14 at 2:05 $\begingroup$ Yes. This is one to mention in calculus class. Although women were not allowed in the university, she learned mathematics from her father, a professor of mathematics at the University of Pisa. In later years, on occasion she would substitute for her father when he was away ... although the rules prohibited women students, there was (through oversight) no rule prohibiting women instructors... $\endgroup$ – Gerald Edgar Aug 14 '14 at 17:05 Adm. Grace Hopper earned a Ph.D. in mathematics at Yale (1934), helped program the Mark I (1944), developed the first compiler (1952) and some early computer languages, and worked on the development of the UNIVAC I. user1527user1527 $\begingroup$ I recently saw an old interview David Letterman did with her. She was a real character. $\endgroup$ – Glen_b Apr 29 '14 at 0:27 Maria Gramegna, the brilliant student of Giuseppe Peano. When you use matrices to solve systems of differential equations, you rely in many ways to her ideas. She defined the exponential function of a matrix through its power series and used it as we do it now. Though this is not strictly speaking high school mathematics, you can mention her story to every undergraduate. She also generalized this to infinite systems and integrodifferential equations, and many ideas of her 1910 thesis belong to the foundation of functional analysis. After that, she became a school teacher and died in an earthquake in 1915. András BátkaiAndrás Bátkai I think for young girls Ruth Lawrence is a great role model since she got her phd at age of 17: At the age of 9, Ruth Lawrence gained an O-level in mathematics, setting a new age record. 
Also at the age of 9 she achieved a Grade A at A-level Pure Mathematics. In 1981 Ruth Lawrence passed the Oxford University interview entrance examination in mathematics, coming first out of all 530 candidates sitting the examination, and joining St Hugh's College in 1983 at the age of just twelve. Another example would be Shelly Harvey from Rice University. 0x900x90 $\begingroup$ Under this answer I thought I'd mention Lisa Sauermann, whose accomplishments (IMO performances) are recent enough that most likely know of her. $\endgroup$ – Dave L Renfro Apr 21 '14 at 15:34 $\begingroup$ RL is fairly controversial as a role model, as many people at the time felt that her father was using her to live out his own dreams of being an Oxford undergraduate (he accompanied her there). (Only she is qualified to give a judgement on that.) Also, while she is clearly very bright, she has not produced any groundbreaking work as an adult. $\endgroup$ – S List Apr 24 '14 at 13:30 $\begingroup$ I'm not sure of the logic behind your first sentence: Does getting a Ph.D at a young age imply one is a great role model? $\endgroup$ – Benjamin Dickman Apr 25 '14 at 16:16 Alicia Boole Stott, the daughter of George Boole (Boolean Algebra), had a deep understanding of 4D geometry. She got married and lived the life that entailed back then (1890s and on). Coxeter gives her husband some credit for connecting her to Pieter Schoute. They worked together and published some papers on 4D polytopes. Coxeter's book, Regular Polytopes has a brief biography. Coxeter worked with her later in her life. George Boole's household was rather interesting according to Coxeter. Alicia was introduced to 4D geometry through C.H. Hinton, who married her sister and wrote a book on the fourth dimension. $\begingroup$ There is something wrong with the sentence "Coxeter gives her husband some credit for connecting her to Pieter Schoute". As Wikipedia shows and is probably well known, Coxeter is a man; however you write feminine adverb "her". $\endgroup$ – kan Apr 22 '14 at 19:32 $\begingroup$ @kan Coxeter gives Alicia Stott's husband Walter credit for putting Alicia in contact with Pieter Schoute. The "her" is Alicia Stott in both cases. $\endgroup$ – user1527 Apr 22 '14 at 19:54 $\begingroup$ That is now very clear. Thanks for the clarification. I am not a native speaker as you probably figured but if you think it is appropriate, you could consider rewording your answer a little bit. $\endgroup$ – kan Apr 22 '14 at 21:05 Grace Chisholm Young seems overlooked so far (13 answer so far) and in my opinion she is worth considering. She worked mostly in real analysis and what is sometimes called classical point set theory (among other things, she's the "Young" in the Denjoy-Young-Saks theorem and she wrote a well known survey paper on nowhere differentiable continuous functions in 1916), and she played a large role in her husband's 200+ papers. Besides math, she knew 6 languages, completed all the requirements for a medical degree except for residency, had 6 children in a period of 9 years, and wrote a book on reproduction for children. Despite all her activities, she was also very devoted to her children (teaching each to play a musical instrument), who went on to become: both a son and a daughter were fairly well known mathematicians (the son was also a chess grandmaster), a daughter became a medical doctor and the first female member of the Royal College of Surgeons, a son earned a Ph.D. 
in chemistry at University of Oxford and later pursued public finance and diplomacy, a daughter completed an undergraduate degree in math and was an Associate Professor of French at Bryn Mawr from 1927 to 1935, a son earned an undergraduate engineering degree and was killed as a pilot in World War 1. I recommend looking through Ivor Grattan-Guinness' 1972 paper A mathematical union: William Henry and Grace Chisholm Young, which contains many interesting details about her life. Dave L RenfroDave L Renfro More famous for computer science than maths, but a strong mathematician none the less and creator of the Liskov substitution principle (the L in SOLID), Barbara Liskov. JMKJMK The story of Sarah Flannery may interest high school students, as she worked on non-trivial mathematics related to codes as a sixteen year old. byron schmulandbyron schmuland I have several daughters and I like talking to them about my female friends in the computer graphics industry. In particular, an acquaintance of mine, Kelly Ward, who did the physics (and math) for the hair in Disney's Tangled. A few links: http://www.gpb.org/blogs/passion-for-learning/2012/06/06/disneys-tangled-an-exercise-in-physics-and-computer-animation http://unc.edu/meet-a-tar-heel/kelly-ward/ Émilie du Châtelet ,(17 December 1706 – 10 September 1749) was a French mathematician, physicist, and author during the Age of Enlightenment. Her crowning achievement is considered to be her translation and commentary on Isaac Newton's work Principia Mathematica. The translation, published posthumously in 1759, is still considered the standard French translation. Voltaire, one of her lovers, declared in a letter to his friend King Frederick II of Prussia that du Châtelet was "a great man whose only fault was being a woman" (From Wikipedia) AlanAlan While in high school Britney Gallivan folded a piece of (very special) paper twelve times, when most people thought it couldn't be folded more than 7 or 8 times, and wrote a paper about it. Like the amateur mathematician, Marjorie Rice, mentioned in another answer, she shows that people who are deeply engaged in a problem can make advances, and that mathematics has room for new discoveries. Sue VanHattum♦Sue VanHattum Ingrid Daubechies, known for Daubechies wavelets and former president of the IMU. Ruth Moufang, known for Moufang loops. FrunobulaxFrunobulax Mary Somerville (1780-1872) was a self-taught mathematician and an expert on theoretical astronomy. The Dictionary of National Biography (London, 1897) described her as 'the most remarkable woman of her generation'. See this article about her. CharoCharo Thanks to Google Doodle, today I learned about Olga Ladyzhenskaya and her fantastic life. From Wikipedia. Ladyzhenskaya was born and grew up in Kologriv. She was the daughter of a mathematics teacher who is credited with her early inspiration and love of mathematics. In October 1939 her father was arrested by the NKVD and soon killed. Young Olga was able to finish high school but, because her father was an "enemy of the people", she was forbidden to enter the Leningrad University. After the death of Joseph Stalin in 1953, Ladyzhenskaya presented her doctoral thesis and was given the degree she had long before earned. She went on to teach at the university in Leningrad and at the Steklov Institute, staying in Russia even after the collapse of the Soviet Union and the rapid salary deflation for professors. 
Ladyzhenskaya was on the shortlist for potential recipients for the 1958 Fields Medal, ultimately awarded to Klaus Roth and René Thom. Her Mathematical Work. She was known for her work on partial differential equations (especially Hilbert's nineteenth problem) and fluid dynamics. I think some aspect of the fluid dynamics might be introduced in high school. There is an attempt here. In particular, looking at the male-dominated history of the subject, I feel she should find her place in the photo used in the aforementioned attempt. edited Mar 7 at 9:41 answered Mar 7 at 6:18 Amir AsghariAmir Asghari Whitman, Betsey S. Women of Mathematics: A Biobibliographic Sourcebook, Louise Grinstein and Paul Campbell, Editors, Greenwood Press, 1987. Bailey, Martha J. American Women in Science: A Biographical Dictionary, ABC-CLIO, 1994. Herbert RobertsHerbert Roberts The Association for Women in Mathematics has a great annual essay contest. Here is a link. There are some great essays and the subjects run the gamut from industry to academia to the arts. ncrncr 2,64611 gold badge99 silver badges99 bronze badges Shakuntala Devi a.k.a Human Calculator Her talents eventually earned her a place in the 1982 edition of The Guinness Book of World Records.As a writer, Devi wrote a number of books, including novels and non-fiction texts about mathematics, puzzles, and astrology. Link to wikipedia Other Human calculators Mr_GreenMr_Green $\begingroup$ While perhaps an interesting person, this woman is not a mathematician. Mental calculation is closer to magic (a kind of performance art or theater) than to mathematics. $\endgroup$ – Dan Fox Jan 27 '18 at 13:50 I always make sure to mention Hypatia in my classes and at least show them parts of this video (The Story of Maths - Marcus Du Sautoy BBC). I've linked to the part where she's mentioned. http://youtu.be/rDBdT1Dl_QY?t=55m6s CarlosCarlos $\begingroup$ The video says "her cult status eclipsed her mathematical achievements" -- if I had time to discuss 30 mathematicians with a class, I wouldn't spend much time on one who requires a proviso like that. $\endgroup$ – user173 Apr 24 '14 at 14:07 35 years ago Florence Nightingale was taught at my school to be the female mathematician to look up to because of her many disciplines. However, I've replaced that image with Lady Ada Lovelace who is considered the first computer programmer. Dave KeaysDave Keays $\begingroup$ Welcome to the site! These are helpful suggestions, yet they are also both already mentioned in other answers. $\endgroup$ – Brendan W. Sullivan Apr 22 '14 at 19:04 $\begingroup$ Can you talk a little about what motivated you to make the switch and perhaps what you have noticed about any differences in how students talk about the two people? Has anything interesting happened in the wake of the switch? $\endgroup$ – JPBurke Apr 22 '14 at 19:23 protected by quid♦ Apr 22 '14 at 20:54 Not the answer you're looking for? Browse other questions tagged secondary-education examples history women or ask your own question. Was there an SMSG (New Math) "Algebra 2" text? What are some recent, interesting, accessible pieces of mathematics What are some good final project prompts for a calculus class (where they have to write a "paper")? What arguments can I give a high school student why mathematics is important? 
Team:Valencia Biocampus/Modeling

Modeling overview

The main goal of our modeling project is to accurately predict the behavior of our system in several respects, from the mounting of bacteria on C. elegans to the performance of our worms in reaching the place of interest. In order to do that, we use several modeling techniques. The movement of C. elegans in the presence of a chemoattractant, in order to carry our bacteria to that source, is the main issue of our project, so we model this aspect and employ it as a scaffold for the whole modeling project. The system has several layers that must be modeled in different ways. In our approach, we mathematically describe each layer, from the simplest to the most complex, integrating each one. The workflow for the C. elegans movement is the following:

Single worm: chemotaxis considerations, chemoattractant, modeling approach, random walk, biased random walk.
Group behavior: partial differential equations, ordinary differential equations.

The model is improved by adding the data obtained from the experiments, in order to achieve a holistic model which can predict the distribution of our worms, the kinetics of the bacteria present, and the concentration of the substrate at each moment. In this section, PHA production is modeled.

PHA production

By extracting data from fermentation assays, we obtained a good simple linear regression, by means of the least-squares method, between the dry weight of the microorganism (Pseudomonas putida) and the PHA weight produced. The relationship we obtained is as follows:

$$ PHA = 0.2022 \cdot CellDryWeight $$

where $ PHA $ and $ CellDryWeight $ are in grams ($ g $). The average of the prediction error is $\mu = 0.2008$ and the variance is $\sigma^2 = 1.9005$. The coefficient of determination is $R^2 = 0.9374\ (93.74\%)$. The equation with all the data is represented in the figure: the red dots near the origin correspond to a batch fermented in a flask, the blue dots correspond to an assay with two fed-batches in a 2-liter fermenter, and the black line is the linear regression line obtained by the least-squares method.
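To make the fitting procedure concrete, here is a minimal Python sketch of a through-origin least-squares fit of the same form. The data points below are hypothetical placeholders, not the actual assay measurements, so the fitted slope will not reproduce the 0.2022 figure exactly.

```python
import numpy as np

# Hypothetical (dry weight, PHA) pairs in grams -- placeholders only, not the
# actual fermentation measurements.
cell_dry_weight = np.array([1.2, 2.5, 4.1, 6.0, 8.3, 10.7])
pha = np.array([0.26, 0.49, 0.85, 1.18, 1.70, 2.14])

# Least-squares slope for a through-origin model PHA = b * CellDryWeight:
# b = sum(x * y) / sum(x^2)
b = np.sum(cell_dry_weight * pha) / np.sum(cell_dry_weight ** 2)

# Prediction errors and coefficient of determination for the fit
residuals = pha - b * cell_dry_weight
r2 = 1.0 - np.sum(residuals ** 2) / np.sum((pha - pha.mean()) ** 2)

print(f"slope = {b:.4f} g PHA / g dry weight, R^2 = {r2:.4f}")
```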
C. elegans behavior

In order to understand the system as a whole, we started with the simplest case, so our first study is based on the behavior of a single worm, initially in the absence of chemoattractant and later adding it. Our first approach consists in the study of a single worm on a surface without chemoattractant. Under this condition, it is observed that a C. elegans moves randomly; this is defined as a random walk. A random walk is the mathematical formalization of a trajectory that consists of taking successive random steps. We ran several simulations in Scilab and C++ in order to represent a single C. elegans moving this way in the absence of attractants. Random walks are used to describe the trajectories of many motile animals and microorganisms. They are useful for both qualitative and quantitative descriptions of the behavior of such creatures. In our case, a single C. elegans was simulated as a point $(x_t, y_t)$. At each instant $t$ in the simulation, the step length $l_t$ and direction $\theta_t$ are given by the following equations:

$$l_t = \Delta t\;\nu_t$$ $$\theta_t = \theta_{t-1} + \Delta t \left(\frac{d\theta_t}{dt} + \delta\right) $$

where $\Delta t$ is the duration of the time step, $\nu_t$ is the instantaneous speed, $\frac{d\theta_t}{dt}$ $(= \dot{\theta_t})$ is the instantaneous turning rate (using the convention that $\frac{d\theta_t}{dt} > 0$ is a right turn and $\frac{d\theta_t}{dt} < 0$ is a left turn), and $\delta$ is the turning bias. The values of these variables were drawn randomly from different Gaussian distributions, which we identified by sampling but also obtained from the literature:

$\nu_t$: normal distribution with $\sigma = 0.0152\;cm/s$ and $\mu = 0.00702\;cm/s$
$\dot{\theta_t}$: normal distribution with $\sigma = 0.0150273\;rad/s$ and $\mu = 0.6789331\;rad/s$
$\delta$: normal distribution with $\sigma = 0.0076969\;rad/s$ and $\mu = 0.0370010\;rad/s$

Simulations were performed in Scilab and C++, proceeding as described in Pierce-Shimomura et al. (1999). We contacted the authors, but they could not provide us with the code, so we developed our own script based on the article, obtaining similar results.

You can also simulate C. elegans random walk in our SimuElegans online application. Simulated with our own software SimuElegans. Please note that the actual movements of C. elegans are much slower, and thus the simulation in this video is accelerated for convenience. In these simulations we considered $\Delta t = 1\;s$, but we actually simulated every $0.01\;s$, so it is $\frac{1\;s}{0.01\;s} = 100$ times accelerated.

Chemotaxis of a single worm

However, we are interested in the movement of the nematode in a gradient of chemoattractant. Studies revealed that the behavior of E. coli during chemotaxis is remarkably similar to the behavior of a single C. elegans in the presence of a chemotactic source (Bargmann, 2006), a mechanism called the pirouette model in C. elegans (Pierce-Shimomura et al., 1999) and the biased random walk in bacteria (Berg, 1993; Berg, 1975). The basis of these models is a strategy that uses a short-term memory of attractant concentration to decide whether to maintain the current direction of movement or to change to a new one. Following this model, we defined the turning rate as a function of $\frac{\partial [C]}{\partial t}$, where $[C]$ is the concentration of chemoattractant ($C$). Simulations were performed as expected, showing a bias in the random walk.

You can also simulate C. elegans chemotaxis in our SimuElegans online application.

Chemotaxis of a single worm consuming attractant

As a last step of the study of a single worm, we decided to model the most realistic behavior, which occurs when, in addition to moving randomly but biased toward the gradient of chemoattractant, we consider the fact that the worm is constantly consuming its "food", so the gradients are no longer constant in time. In order to implement this model, we realized that it was impossible to keep treating the attractant as an analytic function of space, because it changes shape at every time step, which made the simulations really slow (possibly days of computation). So we meshed the space and gave every point a different weight, depending on time: that was the starting point of the matrix-based, approximate (numerical) calculations, as in the sketch below.
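Here is a minimal Python sketch of such a biased walker on a meshed, consumable attractant field. The Gaussian parameters are the ones quoted above; the exponential coupling between the sensed $\frac{\partial [C]}{\partial t}$ and the turning rate, the random turn direction, the grid resolution, and the consumption rate are illustrative assumptions of ours, not the actual SimuElegans implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Chemoattractant on a square mesh: a Gaussian source that the worm depletes.
N, L = 101, 4.0                                   # grid points per side, box size (cm)
xs = np.linspace(-L / 2, L / 2, N)
X, Y = np.meshgrid(xs, xs)
C = np.exp(-((X - 1.0) ** 2 + Y ** 2) / 0.2)      # attractant concentration

dt = 0.01                                         # time step (s)
x, y, theta = -1.0, 0.0, 0.0                      # worm position and heading
c_prev = 0.0

def cell(x, y):
    """Nearest mesh cell for a continuous position."""
    j = int(np.clip((x + L / 2) / L * (N - 1), 0, N - 1))
    i = int(np.clip((y + L / 2) / L * (N - 1), 0, N - 1))
    return i, j

for _ in range(50_000):
    # Gaussian draws with the means and SDs quoted in the text.
    nu = max(rng.normal(0.00702, 0.0152), 0.0)          # speed (cm/s)
    dtheta = rng.normal(0.6789331, 0.0150273)           # turning rate (rad/s)
    delta = rng.normal(0.0370010, 0.0076969)            # turning bias (rad/s)

    i, j = cell(x, y)
    dCdt = (C[i, j] - c_prev) / dt                      # short-term attractant memory
    c_prev = C[i, j]

    # Assumed pirouette-style coupling: turning is suppressed while the sensed
    # concentration is rising; the turn direction is chosen at random.
    turn = rng.choice([-1.0, 1.0]) * dtheta * np.exp(-50.0 * max(dCdt, 0.0))

    theta += dt * (turn + delta)
    x += dt * nu * np.cos(theta)
    y += dt * nu * np.sin(theta)

    C[i, j] = max(C[i, j] - 1e-6, 0.0)                  # local consumption of "food"
```

Suppressing turning while the sensed concentration rises is the heart of the pirouette strategy: runs lengthen up-gradient and reorientations concentrate down-gradient.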
What we obtained proved highly relevant to the course of our project, because we could interfere with the C. elegans path, and not only with its final position, making it move from one source of chemoattractant to another (simulated using Scilab; note that this simulation is accelerated to 600% speed). Moreover, these numerical methods led us to study the worms' movement as a whole, with partial differential equations.

References:
Bargmann CI (2006) Chemosensation in C. elegans. WormBook.
Berg HC (1993) Random Walks in Biology. Princeton, NJ: Princeton UP.
Pierce-Shimomura JT, Morse TM, Lockery SR (1999) The fundamental role of pirouettes in C. elegans chemotaxis. The Journal of Neuroscience, 19(21):9557-9569.

Group behavior

One of the 3D simulations performed in Scilab. In practice, a single C. elegans is not employed to perform the task but a group of worms. In this case, the random walk equations obtained for single-worm behavior can be transformed into partial differential equations (PDEs), using Taylor series, to depict the distribution of the population, like the ones describing a diffusion process (click here for a mathematical proof of the one-dimensional diffusion PDE derived from a random walk). However, in our case we face a diffusion system with drift, in which the last term arises from the biased movement of C. elegans. To obtain a PDE that reflects this behavior, we employed difference equations, again using Taylor series and appropriate limits (click here for a mathematical proof of the two-dimensional biased diffusion PDE (C. elegans chemotaxis behavior) derived from a biased random walk). The equation is the following:

$$\frac{\partial [C]}{\partial t} = D\;\nabla^2[C] - \nabla\cdot(\underline{\nu}\;[C]) $$

Once the equation that governs our system was obtained, we proceeded to study whether it works correctly for our purpose. However, analytical solutions for this equation are known only for a few ideal cases. For example, for a one-dimensional bistable potential the stationary state is the following (Okopinska, 2002):

$$ P_{ss}=\mathscr{N}^{-1} e^{\left(-V(x)/D\right)}$$

where $ \mathscr{N} $ is the normalization constant. As seen, our system can theoretically work as expected, depending only on the shape of the chemoattractant. This is a great result because it shows our worms are capable of reaching each region. Our final goal in this section is predicting the temporal and spatial behavior in 2D for non-ideal problems. In this case, using numerical methods is the only way to solve the system. For the 2-dimensional case, $[C] = [C(x, y, t)]$ is the concentration of C. elegans at a given instant $t$ and at a given point $(x, y)$, $D$ is the diffusion coefficient, and $\overrightarrow{\nu}$ is the attraction field for the C. elegans, which basically stands for its velocity. This expression naturally expands to:

$$\frac{\partial [C]}{\partial t} = D \; \left(\frac{\partial^2 [C]}{\partial x^2} + \frac{\partial^2 [C]}{\partial y^2}\right) - \left(\frac{\partial\nu_x}{\partial x}[C] + \nu_x\frac{\partial [C]}{\partial x} + \frac{\partial\nu_y}{\partial y}[C] + \nu_y\frac{\partial [C]}{\partial y}\right) $$

We first built an explicit finite-difference method; this is numerically fast, but it is prone to instabilities in the solutions. Therefore, we decided to develop a Crank-Nicolson method, which is unconditionally stable and thus suitable for carrying out parameter identification of a model; a one-dimensional sketch of the scheme follows.
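The following Python sketch shows the structure of such a scheme for the drift-diffusion equation above, reduced to one dimension. The drift field, parameter values, boundary treatment, and the dense linear solver are simplifying assumptions for illustration; the actual model is two-dimensional and was implemented in Scilab.

```python
import numpy as np

# 1-D Crank-Nicolson sketch for dC/dt = D * C_xx - d(v * C)/dx.
# D, the drift field v(x), the initial condition, and the simple Dirichlet
# boundaries are all illustrative assumptions.
Nx, Lx = 201, 10.0
dx = Lx / (Nx - 1)
x = np.linspace(0.0, Lx, Nx)
D, dt = 0.2, 0.01
v = 0.001 * (5.0 - x)                    # drift pointing toward x = 5

# Spatial operator A (central differences for both terms); boundary rows are
# left at zero, so the endpoint values stay fixed.
A = np.zeros((Nx, Nx))
for i in range(1, Nx - 1):
    A[i, i - 1] = D / dx**2 + v[i - 1] / (2 * dx)
    A[i, i] = -2 * D / dx**2
    A[i, i + 1] = D / dx**2 - v[i + 1] / (2 * dx)

I = np.eye(Nx)
lhs = I - 0.5 * dt * A                   # implicit half of the step
rhs = I + 0.5 * dt * A                   # explicit half of the step

C = np.exp(-((x - 2.0) ** 2) / 0.1)      # initial worm distribution
C /= C.sum() * dx

for _ in range(2_000):
    # One linear system must be solved per iteration (a banded or factorized
    # solver would be used in practice).
    C = np.linalg.solve(lhs, rhs @ C)
```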
Nevertheless, it has the drawback of being numerically more intensive, because a set of algebraic equations must be solved in each iteration (click here for a mathematical explanation of the Crank-Nicolson method). Now we can predict the behavior of a set of worms in the presence of a chemoattractant. One must take into account that the system may behave differently depending on some variables: for example, the diffusion coefficient $D$; a constant $k$ that determines the weight of the drift variable $\nu$; and the time constants for the chemoattractant diffusion, $t_1$ and $t_2$, where $t_1$ is the elapsed time since a first chemoattractant drop is put into the Petri plate, and $t_2$ stands for the elapsed time since a second drop of chemoattractant is put into the Petri plate until an initial distribution of C. elegans is added; the diffusion of chemoattractants is then assumed to be negligible for simplicity.

According to these parameters, we performed many simulations using Scilab; what they all have in common is the position of the four chemotactic sources. The simulations were performed by first varying only $t_1$ and $t_2$ for given $D$ and $k$ ($D = 0.2$ & $k = 0.001$):

$t_1$ = 5, $t_2$ = 6
$t_1$ = 5, $t_2$ = 12
$t_1$ = 10, $t_2$ = 12
$t_1$ = 10, $t_2$ = 24
$t_1$ = 15, $t_2$ = 24
$t_1$ = 15, $t_2$ = 48

By plotting the drift field for each case, it can be observed that the final distribution is mostly dependent on the mentioned drift field, which in turn is mostly dependent on $t_1$ and $t_2$. Here we present the drift fields for each case. Qualitatively, it can be seen through these simulations that increasing $t_1$ or $t_2$ disperses and smooths the drift field, which in turn disperses the final distribution of C. elegans. Videos corresponding to each simulation are available on the wiki.

We then varied only $D$ and $k$ (separately, of course) for given $t_1$ and $t_2$ ($t_1 = 10$ hours & $t_2 = 12$ hours):

D = 0.05, k = 0.001, D/k = 50
D = 0.1, k = 0.001, D/k = 100
D = 0.2, k = 0.001, D/k = 200
D = 0.05, k = 0.0005, D/k = 100
D = 0.1, k = 0.0005, D/k = 200
D = 0.2, k = 0.0005, D/k = 400

According to these results, qualitatively, higher values of $D$ or lower values of $k$ (and thus a greater $D/k$ ratio) give a much more disperse distribution of C. elegans. Videos corresponding to each simulation are also available. We also carried out some simulations with different numbers of attractant sources: one can find simulations with $1$, $2$ and $3$ sources, and parameters $D = 0.2$, $k = 0.001$ (thus $D/k = 200$), on the wiki.

Parameter identification in PDEs is a rather complex topic. We did some research concerning our group behavior model, which is a Fokker-Planck equation with a somewhat complex drift term $\nu$ that depends on position in our model (it could also depend on more factors, which would add even more complexity to the system, so we did not study them). We were not able to find anything related to the topic (at least in the time we had), so we implemented our own algorithms, based on an educated guess (as usual, all the code we wrote can be found under the Software section). We wanted to study the spatial time constant of the system; that is, how long does it take for the system to reach a stable spatial distribution? We thought about obtaining a variable $\mathcal{V}$ that would grow asymptotically after some time $\tau_{xy}$ (the spatial time constant).
That is, our variable $\mathcal{V} \rightarrow C$ as $t \rightarrow +\infty$. We then came up with the idea that local maxima would be a good way to define when the spatial distribution was stable, so we decided that the slowest local maximum (that is, the local maximum which is slowest in stabilizing to its final position) would be a good way to define the spatial time constant $\tau_{xy}$. Therefore we needed an algorithm to calculate all the local maxima at a given instant and also to determine when they have "stopped" moving, so we programmed an approximate algorithm to find the local maxima in Scilab and used a relatively narrow band centered around the final value to determine $\tau_{xy}$. An example of the graphs obtained through this process can be seen here.

We then carried out several simulations with different values of our parameters $D$ and $k$, in order to obtain the aforementioned relationship between $D$, $k$ and $\tau_{xy}$. First of all, we managed to plot and obtain the function relating $D$ to $\tau_{xy}$, for $k=0.00025,\;0.0003,\;0.000375,\;0.0005$ and $0.001$. As one can observe, different values of $k$ just shift the curves, but the main shape of the functionals remains the same, namely exponential (with rather high coefficients of determination). From the results we hypothesized that the general form of the functional (depending on $k$) was as follows:

$$D(\tau_{xy},k) = f(k)e^{\left(g(k)\tau_{xy}\right)}$$

We then plotted the coefficients $f(k)$ and $g(k)$ vs. $k$. We first obtained a quadratic regression, and the resulting equations for $f(k)$ and $g(k)$ were:

$$f(k) = 3\times10^{6}\;k^2 - 4658.1\;k+1.9523$$ $$g(k) = 6177.4\;k^2 - 11.907\;k-0.0013$$

But $R^2$ was about $80.17\%$ for $f(k)$, way too low, so we decided to obtain a better model by using Lagrange interpolating polynomials, which are represented by the green curves on the graphs above. Thus we obtained the following final equation for $D$:

$$D(\tau_{xy},k) = \left(-3.4061\times10^{13}\;k^4+4.22776\times10^{10}\;k^3+3.2955\times10^{6}\;k^2-15277\;k+4.19004 \right)\;e^{\left(2.31802\times10^{11}\;k^4-4.23644\times10^{8}\;k^3+241467\;k^2-57.5001\;k+0.0008832 \right)\;\tau_{xy}}$$

From this, the equation for $\tau_{xy}$ can be obtained:

$$\tau_{xy}(D,k) = \frac{1}{0.0008832 - 57.5001\; k + 241467\; k^2 - 4.23644\times10^{8}\; k^3+2.31802\times10^{11}\; k^4} \log\left(\frac{D}{4.19004 - 15277\; k + 3.2955\times10^{6}\; k^2 + 4.22776\times10^{10}\; k^3 - 3.4061\times10^{13}\; k^4}\right)$$

where $D$ and $k$ are the mentioned parameters of the model and $\log$ refers to the natural logarithm (base $e$); a small sketch evaluating this relationship is given at the end of this section. The average relative error is somewhat lower (at least in the studied region), about $8.00677\%$, compared to about $9.4\%$ for the first approximation. From these equations we tried to obtain the parameters $D$ and $k$ for our model by establishing a relationship that would not lead to Dirac deltas in the final solution, which is what usually happened, and which is in fact the theoretical solution provided the initial distribution of worms is itself a Dirac delta. We thus obtained the relationship $\frac{D}{k} = 502.2$ that would lead to a temporal stabilization of the distribution, meaning no Dirac deltas would arise. The intention was to provide our wetlab mates with theoretical data to improve their experiments, thereby creating a feedback loop in which they could provide data to us to improve our model even more, receive results back, and so on.
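As a convenience, the fitted relationship can be evaluated directly; the following Python sketch just transcribes the equation above. The coefficients are the reported ones, and the formula should only be trusted inside the region of $(D, k)$ that was actually simulated.

```python
import math

def tau_xy(D, k):
    """Spatial time constant from the fitted relationship above.

    The polynomial coefficients are the reported ones; the expression is
    only meaningful inside the region of (D, k) that was simulated, and it
    requires D / f(k) > 0.
    """
    g = (0.0008832 - 57.5001 * k + 241467.0 * k**2
         - 4.23644e8 * k**3 + 2.31802e11 * k**4)
    f = (4.19004 - 15277.0 * k + 3.2955e6 * k**2
         + 4.22776e10 * k**3 - 3.4061e13 * k**4)
    return math.log(D / f) / g

print(tau_xy(0.2, 0.001))   # one of the simulated parameter pairs
```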
But, in fact, with the obtained relationship and our data we were not able to find correct values for $D$ and $k$. We think this may be due to the restricted region of $\tau_{xy}$, $D$ and $k$ that we studied. We also think that, through the same process, we would be able to obtain correct relationships for the variables in other regions of interest; the problem is that increasingly small $D$ and $k$ parameters require increasingly long simulation times, and we were not able to run them on our personal computers. A future task would therefore be to carry out more simulations, perhaps on a computer cluster with parallelization; we think the principles we followed would still hold.

Reference:
Okopinska A (2002) Fokker-Planck equation for bistable potential in the optimized expansion. Physical Review E, 65, 062101.

Industrial applications

All of this modeling was made to track a set of worms in a given soil. It can be used to compare our system, a non-stirred solid bioreactor, with a conventional stirred liquid bioreactor. Simple ODEs for bacterial growth and substrate consumption are used in a qualitative fashion to assess the industrial applications. Modeling this part is difficult because many factors are involved; therefore, assumptions must be made. Our first approach is the study of the stationary state in both systems: in a stirred bioreactor, the concentrations of both bacteria and substrate are constant, whereas a non-stirred one exhibits a definite distribution. In the non-stirred bioreactor, the substrate concentration distribution is Gaussian, and the bacteria distribution (a consequence of the attraction of C. elegans to the substrate) is given by the stationary-state distribution for the one-dimensional case. The pictures are graphical examples of both situations: stirred case (left) and non-stirred (right); substrate is shown in purple, bacteria in blue, and the ratio between them in grey; the total concentration is the area below the curve; values are arbitrary. A small sketch reproducing this qualitative comparison is given below.

As can be seen, a higher bacteria/substrate ratio is achieved in our system than in the conventional one. One can notice that the ratio tends to one, lower than the conventional-case ratio, but in that region the substrate concentration is nearly zero, so it is not interesting to study. One question arises: is this significant? In fact, it is not: the average ratio in the non-stirred case is the same as in the stirred one. But this is not a drawback; it means that our system can perform the same as a conventional bioreactor, with the advantage of a higher local concentration, and therefore a higher production yield. In general, our system behaves as well as a conventional one in the stationary, linear, ideal case. Particular cases are not studied here because they have to be analyzed independently; a complex model could be developed for each case to study its viability for practical use, but that needs more time. Because of that, we made a qualitative study of the system. While time is a problem (stirring is faster than nematode crawling), higher yields and better suitability for several problems are important advantages of our system. Also, the combination of our model and the tracking device helps to determine the final result.
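The following Python sketch reproduces the qualitative comparison, assuming a Gaussian substrate profile, an attraction potential proportional to minus the substrate concentration, and $D = 1$; all forms and numbers are illustrative, not fitted.

```python
import numpy as np

# Qualitative stationary-state comparison (arbitrary units). The Gaussian
# substrate, the attraction potential V = -substrate, and D = 1 are assumed
# forms, not fitted ones.
x = np.linspace(-5.0, 5.0, 1001)
dx = x[1] - x[0]
width = x[-1] - x[0]

substrate = np.exp(-x ** 2 / 2.0)              # non-stirred substrate profile

# Worm/bacteria distribution from P_ss ~ exp(-V / D):
bacteria = np.exp(substrate)
bacteria /= bacteria.sum() * dx                # normalize the total amount to 1

# Stirred reactor: same totals, spread uniformly over the domain.
substrate_uniform = substrate.sum() * dx / width
bacteria_uniform = 1.0 / width

mask = substrate > 0.05                        # ignore near-empty tails
print("max local ratio, non-stirred:", (bacteria[mask] / substrate[mask]).max())
print("ratio, stirred:              ", bacteria_uniform / substrate_uniform)
```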
The relationship between the effect-site concentration of propofol and sedation scale in children: a pharmacodynamic modeling study

Young-Eun Jang, Sang-Hwan Ji, Ji-Hyun Lee, Eun-Hee Kim, Jin-Tae Kim & Hee-Soo Kim

Continuous infusion of propofol has been used to achieve sedation in children. However, the relationship between the effect-site concentration (Ce) of propofol and sedation scale has not been previously examined. The objective of this study was to investigate the relationship between the Ce of propofol and the University of Michigan Sedation Scale (UMSS) score in children with population pharmacodynamic modeling. A total of 30 patients (aged 3 to 6 years) who underwent surgery under general anesthesia with propofol and remifentanil lasting more than 1 h were enrolled in this study. Sedation levels were evaluated using the UMSS score every 20 s during a stepwise increase of 1 μg/mL in the Ce of propofol during the induction of anesthesia. The pharmacodynamic relationship between the Ce of propofol and UMSS score was analyzed by logistic regression with nonlinear mixed-effect modeling. The estimated Ce50 (95% confidence interval) values of propofol yielding UMSS scores equal to or greater than n were 1.84 (1.54–2.14), 2.64 (2.20–3.08), 3.98 (3.66–4.30), and 4.78 (4.53–5.03) μg/mL for n = 1, 2, 3, and 4, respectively. The slope steepness of the Ce-versus-sedative-response relationship for propofol (95% confidence interval) was 5.76 (4.00–7.52). We quantified the pharmacodynamic relationship between the Ce of propofol and UMSS score, and this finding may be helpful for predicting the sedation score at a target Ce of propofol in children. http://www.clinicaltrials.gov (No.: NCT03195686, Date of registration: 22/06/2017).

Procedural sedation induces anxiolysis, unconsciousness, and analgesia for the patient's comfort. Pediatric patients often require sedation for examinations, brief procedures, and imaging studies because of anxiety, fear, distress, and agitation owing to parental separation [1]. Propofol is widely used for pediatric procedural sedation, as it has a potent dose-dependent hypnotic action [2,3,4,5,6]. Additionally, propofol reduces airway reflexes, postoperative nausea and vomiting, and emergence delirium in pediatric patients, and it is necessary for children susceptible to malignant hyperthermia [7, 8]. The important issues during propofol sedation are avoiding overdosage or underdosage [9, 10] and maintaining adequate spontaneous ventilation and vital signs [5, 11,12,13]. For longer procedural sedation, continuous infusion of propofol should be used [14, 15]. Several studies have investigated the manual infusion dosage required to achieve sedation in children [7, 14, 15]. However, as the central volume of distribution and clearance of propofol change during development, manual infusion of propofol needs age-specific dosage adjustment [7, 8, 10, 14,15,16]. Additionally, manual infusion of propofol guided by clinical assessment of the sedation scale is associated with an increased risk of overdosage or underdosage when compared with target-controlled infusion (TCI) [8,9,10]. Hypotension, bradycardia, apnea, airway obstruction, and delayed recovery are associated with overdosage of propofol [11, 17, 18], while insufficient concentrations of propofol can result in wakefulness, sympathetic stimulation, and unsatisfactory procedural conditions [10, 11].
TCI of propofol has been used in anesthesia for more than 20 years, and its use has spread widely because of the convenience of use, stable blood concentration estimation, and rapid recovery it offers [10, 11, 19, 20]. The development of pediatric TCI models for propofol, such as those by Kataria et al. [21], Absalom et al. [22], and Choi et al. [16], has led to the widespread use of TCI of propofol for inducing sedation and general anesthesia in children [8, 10, 12, 16, 23,24,25]. Recently, studies have shown that TCI of propofol yields a more stable bispectral index (BIS) within the target range with less dose adjustment than manual infusion during general anesthesia in children [9, 10]. However, electroencephalography-derived monitors are not always feasible in the various clinical settings (e.g., magnetic resonance imaging) of pediatric procedural sedation. Therefore, establishing the relationship between the Ce of propofol and a sedation scale will be helpful for targeted procedural sedation in pediatric patients. We hypothesized that an adequate pharmacodynamic model linking the Ce of propofol and the University of Michigan Sedation Scale (UMSS) could be built in children, and we planned a prospective modeling study.

Patient recruitment and anesthetic methods

The study was approved by the Institutional Review Board of Seoul National University Hospital (Ref. No.: 1705–110-855). Written informed consent was obtained from one of the parents or legal guardians for minor patients, and patients were given a verbal explanation and had the opportunity to ask questions about the study methods and purposes. Informed consent was obtained from each patient. All procedures followed the principles of the Declaration of Helsinki and its subsequent revisions. This study was registered at http://www.clinicaltrials.gov (NCT03195686; Principal investigator: Hee-Soo Kim; Date of registration: 22/06/2017) prior to patient enrollment. A total of 30 patients (aged 3 to 6 years) who underwent surgery under general anesthesia lasting more than 1 h were enrolled in this study. The exclusion criteria were obstructive sleep apnea, expected difficult airway management, or any condition affected by propofol, such as mitochondrial diseases. All patients fasted according to the guidelines of the American Society of Anesthesiologists. Baseline heart rate (HR) and noninvasive blood pressure (BP) were measured on admission. An intravenous route was established before transferring the patient to the operating theater, and patients received intravenous midazolam 0.1 mg/kg as premedication. Standard monitoring, including electrocardiogram, HR, noninvasive BP at 1-min intervals, peripheral oxygen saturation (SpO2), and end-tidal carbon dioxide (ETCO2), was performed after arrival in the operating theater. A facial mask was applied, and oxygen (6 L/min) was administered. Lidocaine administration (0.5 mg/kg) was followed by propofol infusion in effect-site TCI mode (Kim and Choi's model [16]) set at a Ce of 1 μg/mL. The propofol infusion line was connected just near the patient's intravenous catheter site to minimize the dead space. After reaching the target, the Ce was maintained for 2 min to ensure equilibration, and the target Ce was then increased by 1 μg/mL at each step (up to Ce = 6 μg/mL). Kim and Choi's pediatric propofol model was recently developed and externally validated in previous studies [16, 25].
Administration of propofol was conducted using infusion control software (ASAN pump program, http://fit4NM.org/d_asanpump, last accessed: 03 Nov 2020) with a syringe pump (Pilot Anesthesia 2, Fresenius Kabi AG, Bad Homburg v.d.H., Germany). Detailed logs of the infusion, including the time and rate of infusion, were automatically recorded during the whole infusion period [26]. The criteria for determining the UMSS score are presented in Table 1 (Table 1: University of Michigan Sedation Scale). UMSS was assessed by an experienced pediatric anesthesiologist (Y.E.J.) who was blinded to the Ce of propofol. Response to verbal conversation (UMSS = 1, sleepy, appropriate response to sound) was assessed by calling the patient's name. Light tactile stimulation was applied (UMSS = 2, moderately sedated) by touching the patient's eyebrows, and significant physical stimulation was applied (UMSS = 3, deeply sedated) by squeezing the trapezius. After the patient exhibited UMSS = 4 (unarousable), rocuronium 0.6 mg/kg was administered to facilitate tracheal intubation, and remifentanil was started for general anesthesia. During anesthesia, the target Ce values for propofol and remifentanil were controlled by the attending anesthesiologists. Ventilation was adjusted to a tidal volume of 7 mL/kg, and the respiratory rate was adjusted to maintain an ETCO2 of 35–40 mmHg; the inspired oxygen fraction was 0.4 in 2 L of fresh gas. Body temperature was continuously monitored and maintained above 35.5 °C with active warming. On closure of the surgical wound, propacetamol 15 mg/kg was administered for postoperative pain control. Remifentanil administration was halted 10 min before emergence, and propofol administration was stopped at the attending anesthesiologist's discretion. In addition, HR, BP, SpO2, and ETCO2 were continuously monitored throughout the study period and anesthesia. When patients met the extubation criteria (UMSS = 0 or 1, train-of-four ratio > 0.9, and adequate tidal volume > 6 mL/kg), they were extubated and transferred to a post-anesthesia care unit (PACU). All adverse events (hypertension, BP increased more than 20% above baseline; hypotension, BP decreased more than 20% below baseline; tachycardia, HR increased more than 20% above baseline; bradycardia, HR decreased more than 20% below baseline; apnea, no spontaneous breathing for 15 s; and desaturation, SpO2 < 95%) were observed and recorded during the study period (from the induction of anesthesia to discharge from the PACU). On completion of the study, the infusion history for propofol was obtained from the ASAN pump program. To eliminate the confounding effect of surgical pain and analgesic medications (remifentanil and propacetamol), only the UMSS scores and Ce values of propofol recorded during the induction of anesthesia were used for pharmacodynamic modeling of propofol sedation. Statistical analyses for descriptive statistics were performed using SPSS 23.0 for Windows (IBM SPSS Statistical Software, Chicago, IL, USA).

Investigation of the relationship between Ce of propofol and sedation scale: pharmacodynamic modeling

To transfer ordinal UMSS scores into binary outcomes, we defined a UMSS score equal to or greater than a given level n as a "response" and otherwise as a "non-response," converted to 1 and 0, respectively. Logistic regression analysis was performed to examine the pharmacodynamic relationship between the Ce of propofol and UMSS scores.
Referring to previous similar studies [27, 28], the probability of a response at a given level of sedation score [P (UMSS ≥ n)] at a given Ce of propofol was analyzed using the following sigmoid Emax model:

$$P\ \left(\mathrm{UMSS}\ge n\right)=\frac{{C_e}^{\gamma }}{{C_{e50\mathrm{UMSS}\ge n}}^{\gamma }+{C_e}^{\gamma }},$$

where Ce50UMSS ≥ n is defined as the steady-state effect-site concentration of propofol with a 50% probability of a UMSS score equal to or greater than n, and γ is the Hill coefficient describing the slope steepness of the Ce-versus-sedative-response relationship. The value of γ was assumed to be the same for all sedation scores. Based on the notion of "response" and "non-response" mentioned above, observation of a specific UMSS score n at a given Ce can be explained as the co-occurrence of a "response" for levels 0, 1, ..., n and a "non-response" for levels n + 1, …, 4. Therefore, the probability of observing a UMSS score of n at a given Ce can be calculated as the product of the probabilities of a "response" for levels 0, 1, …, n and a "non-response" for levels n + 1, …, 4. Hence, P (UMSS = n) can be calculated as follows:

$$P\left(\mathrm{UMSS}=n\right)=\prod_{k=0}^nP\left(\mathrm{UMSS}\ge k\right)\times \prod_{k=n+1}^4\left(1-P\left(\mathrm{UMSS}\ge k\right)\right)$$ $$\left(0\le n\le 4\right).$$

By this estimation, we defined the predicted UMSS score for a given Ce as the score with the highest probability and compared the predicted and observed scores.

Model building and evaluation

We set our primary outcomes as the Ce50UMSS ≥ n values and their relative standard errors obtained from the pharmacodynamic model. Secondary outcomes were the distribution of UMSS scores against the Ce of propofol; other derived parameters from the model, such as gamma, P (UMSS ≥ n), or P (UMSS = n) for each UMSS score; demographic data; and the incidence of adverse events such as desaturation, apnea, or hemodynamic instability. Among the data obtained during the infusion of propofol, several points were selected, including the start of infusion and the increments of the UMSS score to 1, 2, 3, and 4. In addition, to reflect the change in Ce and obtain sufficient sample points, one additional point for each UMSS score was taken, set as the midpoint of the period during which that UMSS score was maintained, except for the score of 4. For a UMSS score of 4, data at the time point 20 s after the score had changed were extracted. The pharmacodynamic model was built using the Laplace method of NONMEM® 7.4.4 (ICON Development Solutions, Dublin, Ireland) via an interface program named Pirana® 2.9.9 (http://pirana-software.com, currently only provided by Certara, Princeton, NJ, USA). The NONMEM software calculated the likelihood (L) of the observed response on the UMSS score (R) as follows:

$$L=R\times P\ \left(\mathrm{UMSS}\ge n\right)+\left(1-R\right)\times \left[1-P\ \left(\mathrm{UMSS}\ge n\right)\right]$$

Inter-individual variability for each of the Ce50UMSS ≥ n values and gamma was estimated via a log-normal method or fixed to zero if necessary. To evaluate model performance, we used the methods described by previous investigators to calculate the prediction probability (Pk) of the predicted UMSS score given the observed UMSS score.
With a pharmacokinetic tool program named fit4NM 4.6.0 (Eun-Kyung Lee and Gyu-Jeong Noh; http://www.fit4nm.org/download/246; last accessed: 03 Nov 2020), Pk was calculated as follows [27,28,29]:

$$P_k=\frac{\text{Somers' } d+1}{2},$$

where Pk = 1 indicates complete agreement between the observed level of sedation and the calculated index, Pk = 0.5 indicates a purely random relationship between the two, and Pk = 0 indicates complete disagreement. In addition, we used a non-parametric bootstrap method to perform an internal validation of the model using the Perl-speaks-NONMEM (PsN) software ver. 4.9.0 (https://uupharmacometrics.github.io/PsN/). Bootstrapping was performed by resampling with replacement of individuals to create a new dataset with the same number of individuals as the original dataset (n = 30). Dataset formation and parameter estimation were repeated 1000 times. The 2.5th–97.5th percentiles of the distribution of the parameter estimates across the nonparametric bootstrap replicates were used to build a 95% confidence interval and compared with the parameter estimates of the final model.

Comparison with other pediatric models

As Kim and Choi's model is neither in widespread use nor commercially available, we compared the Ce values estimated with Kim and Choi's model against those from other, commercially available models, namely the Kataria model [21] and the Schüttler model [30] used in the Paedfusor. As we had the complete infusion history for each patient, we simulated the infusion and obtained the estimated plasma concentration and Ce of propofol for every time point using PKPD Tools for Excel by C. Minto and T. Schnider (http://pkpdtools.com/excel/downloads/). We assumed actual use of the Kataria model and the Paedfusor model via the commercially available Agilia® SP TIVA infusion pump (Software version 2.2, Fresenius Kabi AG, Bad Homburg v.d.H., Germany) and used pharmacokinetic-pharmacodynamic parameters based on the literature [21, 31] and offered by the manufacturer. Afterward, we sorted the estimated Ce values for the time points included in the pharmacodynamic modeling and compared the predictions from the Kataria model or the Paedfusor model with those from Kim and Choi's model via Bland-Altman plots using MedCalc® (ver. 20.008, MedCalc Software Ltd., Ostend, Belgium).

Sample size calculation

Since this was an exploratory study not intended to test a specific hypothesis, a formal sample size calculation was not necessary. We referred to a similar study on the pharmacodynamics of propofol in terms of the modified Observer's Assessment of Alertness/Sedation scale, in which data from 30 patients were used [32]. A total of 32 patients were recruited, and 30 completed the study. Incomplete data collection resulted in the loss of two patients from the study. Demographic data are presented in Table 2 (Table 2: Demographics and characteristics of the patients (n = 30)). The relationship between the Ce of propofol and the corresponding UMSS scores is presented in Fig. 1. At a given UMSS score, the Ce of propofol varied between individuals. (Fig. 1: UMSS score vs Ce of propofol; a scatter plot of observed University of Michigan Sedation Scale (UMSS) scores versus the effect-site concentration (Ce) of propofol.) All 237 pairs of Ce of propofol and UMSS score were used for the development of the model. The estimated population parameters of the Ce50 of propofol at a given UMSS score and gamma are presented in Table 3.
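Using the point estimates reported in the abstract and Table 3 (Ce50 of 1.84, 2.64, 3.98, and 4.78 μg/mL for UMSS ≥ 1 to 4, and γ = 5.76), the model's probabilities and predicted score can be reproduced directly. The following Python sketch is only an illustration of the published equations, not the NONMEM implementation.

```python
import numpy as np

# Point estimates reported in this study (abstract / Table 3).
ce50 = {1: 1.84, 2: 2.64, 3: 3.98, 4: 4.78}     # ug/mL for UMSS >= n
gamma = 5.76

def p_ge(n, ce):
    """P(UMSS >= n) from the sigmoid Emax model; P(UMSS >= 0) = 1."""
    if n == 0:
        return 1.0
    return ce ** gamma / (ce50[n] ** gamma + ce ** gamma)

def p_eq(n, ce):
    """P(UMSS = n): product of 'response' terms up to n and 'non-response'
    terms above n, as in the equation for P(UMSS = n)."""
    p = 1.0
    for k in range(0, n + 1):
        p *= p_ge(k, ce)
    for k in range(n + 1, 5):
        p *= 1.0 - p_ge(k, ce)
    return p

for ce in (1.0, 2.0, 3.0, 4.0, 5.0, 6.0):
    probs = [p_eq(n, ce) for n in range(5)]
    print(f"Ce = {ce:.1f} ug/mL -> predicted UMSS = {int(np.argmax(probs))}, "
          f"P = {[round(p, 2) for p in probs]}")
```

The score with the highest probability at each Ce corresponds to the predicted UMSS scores tabulated in Table 4.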
As the Ce of propofol increased, the depth of sedation increased and the UMSS score was higher. (Table 3: Parameter estimates of the population pharmacodynamic model for the Ce of propofol and UMSS score (n = 30).) Figure 2 presents the probability of exhibiting the degree of sedation corresponding to a given UMSS score or higher according to the Ce of propofol in children. In addition, the calculated probability of each specific UMSS score according to the Ce of propofol is presented. (Fig. 2: Estimation of probabilities of UMSS score. Panel 2A depicts the estimated probability that the University of Michigan Sedation Scale (UMSS) score is n or more (n = 1, 2, 3, 4) according to the effect-site concentration (Ce) of propofol; probabilities were calculated for Ce50UMSS ≥ n with a pharmacodynamic model using the Laplace method. Panel 2B shows the probability of each specific UMSS score according to the Ce of propofol; detailed calculation methods are presented in the Methods section. The probabilities for UMSS scores 1, 2, and 3 show a single peak, and the probability of UMSS score 4 increases gradually as the Ce of propofol increases. Ce50UMSS ≥ n: the steady-state effect-site concentration of propofol with a 50% probability of the UMSS score being equal to or greater than n.) The predicted UMSS scores with the highest probability according to changes in Ce are shown in Table 4 (Table 4: Predicted UMSS scores according to change in the Ce of propofol via Kim and Choi's model). The calculated Pk (95% CI) was 0.770 (0.731–0.809). This value implies an excellent degree of agreement between the observed and predicted UMSS scores and, therefore, acceptable performance of the model. In addition, the observed and predicted distributions of the UMSS score according to the range of Ce are shown in Fig. 3. (Fig. 3: Observed vs predicted distribution of UMSS. The proportions of each observed and predicted University of Michigan Sedation Scale (UMSS) score according to the range of the effect-site concentration (Ce) of propofol are shown; 3A is for the observed and 3B for the predicted UMSS score. The predicted UMSS score was determined as the score with the highest probability for a given Ce of propofol. Each section of Ce is set such that values rounded from the first decimal place are included in the same section.) For the comparison of the commercially available models with Kim and Choi's model, data from 23 patients were used, since the infusion logs for seven patients were flawed. Excluding points before the start of the infusion, a total of 166 time points were included. Figure 4 shows the Bland-Altman plots of the differences in the estimated Ce of propofol among the models. Kim and Choi's model predicted the Ce of propofol to be higher than the Kataria and Paedfusor models did, with biases (95% CI) of 24.2% (21.8–26.6%) and 27.9% (25.1–30.7%), respectively. (Fig. 4: Comparison of predicted Ce of propofol between models. Bland-Altman plots showing the agreement between the Ce of propofol estimated by Kim and Choi's model and by the Kataria model (4A) and the Paedfusor model (4B). The differences were obtained by subtracting the predictions of the Kataria model or the Paedfusor model from those of Kim and Choi's model; lines for the 95% limits of agreement are drawn.) There were no complications such as respiratory depression after premedication with midazolam. During the induction of sedation, all patients were able to breathe spontaneously, and there were no desaturation events.
At UMSS score = 4, five patients required a jaw thrust to maintain airway patency with spontaneous breathing, but desaturation did not occur. No hypertension, hypotension, bradycardia, or tachycardia was observed in any patient. All patients were transferred to the PACU without any desaturation events or other respiratory complications after extubation and had a UMSS score of 1. In the PACU, no patients experienced apnea or desaturation or had unstable blood pressure or heart rate.

This study was the first to quantify the relationship between the Ce of propofol and the sedation probability in children according to the UMSS score using a pharmacodynamic model. The findings of this study may provide important information based on which anesthesiologists can estimate the sedation probability in children, as assessed by the UMSS, at a given Ce of propofol. Previous studies by McFarlan et al. [14] and Steur et al. [15] suggested manual propofol dosage schemes for total intravenous anesthesia in children aged 3–11 years and < 3 years, respectively. To produce a steady-state blood concentration of 3 μg/mL by manual infusion of propofol in children aged 3–11 years [14], a loading dose of 2.5 mg/kg was followed by continuous infusion of 15 mg/kg/hr for the first 15 min, 13 mg/kg/hr from 15 to 30 min, 11 mg/kg/hr from 30 to 60 min, 10 mg/kg/hr from 1 to 2 h, and 9 mg/kg/hr from 2 to 4 h. For children aged < 3 years, Steur and colleagues proposed a different propofol dosage scheme for each of four age groups [15] (0–3 months, 3–6 months, 6–12 months, and 1–3 years), and they reduced the continuous infusion rate every 10 min to prevent delayed recovery. This complexity of manual infusion is due to the pharmacological properties of propofol in pediatric patients [14, 15]. Younger children require higher induction and maintenance doses because of a greater volume of distribution and elevated systemic clearance [16]. Additionally, lighter children require a higher weight-based infusion rate of propofol to maintain a certain level of Ce [14,15,16]. The context-sensitive half-time in children is significantly longer than that in adults, and it is increased by prolonged infusion [14]. Therefore, maintaining an appropriate level of sedation with manual infusion of propofol is cumbersome and difficult in the various clinical situations of pediatric sedation [10, 33]. TCI uses allometric scaling to describe age-related changes in the volume of distribution and metabolic clearance in pediatric patients of various ages [4, 16]. Changes in the pharmacological parameters are automatically calculated in the pharmacokinetic models for more accurate drug delivery and reduced variability [11]. TCI is typically associated with less respiratory depression, lower propofol doses, and faster recovery [11]. In pediatric patients, TCI of propofol shows less variability in the BIS with less dose adjustment compared with manual infusion during general anesthesia [9, 10]. Therefore, TCI gives more stable sedation than manual infusion does in pediatric patients. A previous study investigated the population pharmacodynamics of midazolam and sedation score in adult patients after coronary artery bypass grafting [28]. The basic idea of the present study was adapted from that study, and the clinical observational findings of our study were similar to those of that study.
Although the drugs investigated in the two studies (midazolam versus propofol) are different, the range of drug concentrations at a given sedation score varied, which is commonly observed during sedation (Fig. 1). Therefore, it is reasonable to model the probability of a certain sedation score at a given drug concentration. In our final model, inter-individual variability for Ce50 and gamma was fixed to zero. When we assumed inter-individual variability during the modeling, significant shrinkage, greater than 30%, occurred for all of the parameters except Ce50UMSS ≥ 2, while not substantially reducing the objective function value. As a high level of shrinkage is indicative of a high level of estimation error [34], we decided not to assume inter-individual variability. Therefore, we used a naïve pooled data approach that assumes inter-individual variability to be zero, and accordingly, within-subject correlation was not assumed. A similar method has been used in previous studies [27, 32]. The bootstrap results showed fair agreement with the original estimates from the final model. We did not present classical tools for model performance, such as a goodness-of-fit plot or visual predictive check, in this study. Such methods were difficult to apply to this model because the observed UMSS scores were ordinal variables rather than continuous variables and were expressed as integers, whereas the predicted values were probabilities, a completely different kind of quantity. Although we presented the distribution of predicted UMSS scores in Fig. 3, it was only intended to provide an interpretation to aid the application of our model in clinical settings. In pediatric patients, Munoz et al. investigated the Ce of propofol required to produce hypnosis in children aged 3–11 years using BIS monitoring [35]. In that study, the mean \({EC}_{e_{50}}\) for hypnosis, assessed at a BIS value of 50, was 3.65 μg/mL. Additionally, a retrospective study of propofol sedation with the Paedfusor plasma TCI model in children aged < 7 years reported that the target Ce of propofol for long-duration immobilization with spontaneous ventilation was 4.3 μg/mL during proton radiation therapy [12]. These values match the Ce50 of 3.98 [3.66–4.30] μg/mL corresponding to UMSS = 3 in our study. From the results of these two studies, we inferred that UMSS = 3 may correspond to a BIS value of 50 and can produce deep sedation for long-duration radiologic procedures. Based on the present study (Table 2), we suggest an initial target Ce of 1.5–2.0 μg/mL for minimal sedation (UMSS = 1), 2.0–3.0 μg/mL for moderate sedation with light tactile stimuli (UMSS = 2), and 3.5–4.0 μg/mL for deep sedation with significant physical stimuli or requiring immobilization (UMSS = 3). The Ce of propofol should be adjusted for specific procedure-related stimulation, such as noise, tactile stimuli, and pain, because the depth of sedation and degree of respiratory depression can change accordingly. Because the depth of sedation by the UMSS score changes rapidly in the Ce range of 2.0–4.0 μg/mL, and the change cannot be predicted with 100% accuracy, we suggest that the concentration be controlled more precisely, with smaller incremental doses and close observation of the patient. Consequently, monitoring the depth of sedation using electroencephalography-based devices would be helpful for propofol-induced sedation in pediatric patients [9].
On comparing Kim and Choi's model with the commercially available Kataria and Paedfusor models, Kim and Choi's model predicts the Ce to be about 24 to 28% higher than the other models. Considering that a bias within 30% is usually regarded as acceptable when evaluating the performance of pharmacokinetic-pharmacodynamic models, we can say that our model for the UMSS score can be applied to other popular models such as the Kataria model and the Paedfusor model. Still, we recommend using slightly lower target Ce values when using the models on the market, along with continuous feedback from the patient's observed state of sedation.

This study has several limitations. First, the relationship between the Ce of propofol and the UMSS score was evaluated during general anesthesia, not during procedural sedation. To eliminate various confounding effects, this model used data only during anesthesia induction. As TCI ensures a certain Ce of propofol, the UMSS score did not change during a constant target Ce of propofol. Therefore, although this study lacks data from the maintenance of sedation, the pharmacodynamic model can provide meaningful information for pediatric sedation using propofol TCI. Second, the depth of anesthesia was not measured with electroencephalography-based monitors in this study. Additionally, the depth of sedation after reaching UMSS = 4 could not be assessed. To minimize the bias in estimating the Ce50 of propofol at UMSS = 4, we used the Ce of propofol when the patient reached UMSS = 4 and 20 s after the subject reached UMSS = 4. Third, the plasma concentration of propofol was not measured in this study. As minimal premedication was given to the patients of this study, serial blood sampling was not possible. Kim and Choi's pediatric propofol model, which was used in this study, was externally validated and showed good performance in achieving the target propofol plasma concentration in children aged < 12 years [16, 25]. Also, as mentioned above, the prediction of Ce values with Kim and Choi's model is acceptable when predictions from previous pediatric models are regarded as the gold standard. Finally, premedication with intravenous midazolam could affect the sedation score at a given Ce of propofol. However, midazolam 0.1 mg/kg was not sufficient to cause a significant sedative effect, and as premedication with midazolam is widely performed in general anesthesia and procedural sedation in children [36], this would not differ from clinical practice.

In conclusion, the relationship between the Ce of propofol and the UMSS score, with probabilities obtained by a population pharmacodynamic approach, was established in children. This finding may be helpful in predicting the depth of sedation when using TCI of propofol in various pediatric procedural sedation settings.

The datasets generated and analysed during the current study are not publicly available, since we did not store the data in a public web-based archive, but they are available from the corresponding author on reasonable request.

Abbreviations: BIS: Bispectral index; Ce: Effect-site concentration; ETCO2: End-tidal carbon dioxide; PACU: Post-anesthesia care unit; Pk: Prediction probability; SpO2: Peripheral oxygen saturation; TCI: Target-controlled infusion; UMSS: University of Michigan Sedation Scale

Coté CJ, Wilson S; American Academy of Pediatrics; American Academy of Pediatric Dentistry. Guidelines for monitoring and management of pediatric patients before, during, and after sedation for diagnostic and therapeutic procedures. Pediatrics. 2019;143(6):e20191000.
Roback MG, Carlson DW, Babl FE, Kennedy RM. Update on pharmacological management of procedural sedation for children. Curr Opin Anaesthesiol. 2016;29(Suppl 1):S21–35.
Na SH, Song Y, Kim SY, Byon HJ, Jung HH, Han DW. A simulation study of propofol effect-site concentration for appropriate sedation in pediatric patients undergoing brain MRI: pharmacodynamic analysis. Yonsei Med J. 2017;58(6):1216–21.
Eleveld DJ, Colin P, Absalom AR, Struys M. Pharmacokinetic-pharmacodynamic model for propofol for broad application in anaesthesia and sedation. Br J Anaesth. 2018;120(5):942–59.
Kim S, Hahn S, Jang MJ, Choi Y, Hong H, Lee JH, et al. Evaluation of the safety of using propofol for paediatric procedural sedation: a systematic review and meta-analysis. Sci Rep. 2019;9(1):12245.
Chidambaran V, Costandi A, D'Mello A. Propofol: a review of its role in pediatric anesthesia and sedation. CNS Drugs. 2015;29(7):543–63.
Gaynor J, Ansermino JM. Paediatric total intravenous anaesthesia. BJA Educ. 2016;16(11):369–73.
Anderson BJ, Bagshaw O. Practicalities of total intravenous anesthesia and target-controlled infusion in children. Anesthesiology. 2019;131(1):164–85.
Louvet N, Rigouzzo A, Sabourdin N, Constant I. Bispectral index under propofol anesthesia in children: a comparative randomized study between TIVA and TCI. Paediatr Anaesth. 2016;26(9):899–908.
Mu J, Jiang T, Xu XB, Yuen VM, Irwin MG. Comparison of target-controlled infusion and manual infusion for propofol anaesthesia in children. Br J Anaesth. 2018;120(5):1049–55.
Green SM, Krauss BS. Target-controlled infusions could improve the safety and efficacy of emergency department propofol sedation. Anesth Analg. 2016;122(1):283–4.
Oh TK, Lee SJ, Kim JH, Park B, Eom W. The administration of high-dose propofol sedation with manual and target-controlled infusion in children undergoing radiation therapy: a 7-year clinical investigation. Springerplus. 2016;5:376.
Burton FM, Lowe DJ, Millar J, Corfield AR, Watson MJ, Sim MAB. Propofol target-controlled infusion in emergency department sedation (ProTEDS): a multicentre, single-arm feasibility study. Emerg Med J. 2021;38(3):205–10.
McFarlan CS, Anderson BJ, Short TG. The use of propofol infusions in paediatric anaesthesia: a practical guide. Paediatr Anaesth. 1999;9(3):209–16.
Steur RJ, Perez RS, De Lange JJ. Dosage scheme for propofol in children under 3 years of age. Paediatr Anaesth. 2004;14(6):462–7.
Choi BM, Lee HG, Byon HJ, Lee SH, Lee EK, Kim HS, et al. Population pharmacokinetic and pharmacodynamic model of propofol externally validated in children. J Pharmacokinet Pharmacodyn. 2015;42(2):163–77.
Cravero JP, Blike GT. Review of pediatric sedation. Anesth Analg. 2004;99(5):1355–64.
Lamond DW. Review article: safety profile of propofol for paediatric procedural sedation in the emergency department. Emerg Med Australas. 2010;22(4):265–86.
Absalom AR, Glen JI, Zwart GJ, Schnider TW, Struys MM. Target-controlled infusion: a mature technology. Anesth Analg. 2016;122(1):70–8.
Struys MM, De Smet T, Glen JI, Vereecke HE, Absalom AR, Schnider TW. The history of target-controlled infusion. Anesth Analg. 2016;122(1):56–69.
Kataria BK, Ved SA, Nicodemus HF, Hoy GR, Lea D, Dubois MY, et al. The pharmacokinetics of propofol in children using three different data analysis approaches. Anesthesiology. 1994;80(1):104–22.
Absalom A, Amutike D, Lal A, White M, Kenny GN. Accuracy of the 'Paedfusor' in children undergoing cardiac surgery or catheterization. Br J Anaesth. 2003;91(4):507–13.
Anderson BJ, Hodkinson B. Are there still limitations for the use of target-controlled infusion in children? Curr Opin Anaesthesiol. 2010;23(3):356–62.
McCormack J, Mehta D, Peiris K, Dumont G, Fung P, Lim J, et al. The effect of a target controlled infusion of propofol on predictability of recovery from anesthesia in children. Paediatr Anaesth. 2010;20(1):56–62.
Ji SH, Lee JH, Cho JY, Kim HS, Jang YE, Kim EH, et al. External validation of a pharmacokinetic model of propofol for target-controlled infusion in children under two years old. J Korean Med Sci. 2020;35(11):e70.
Malviya S, Voepel-Lewis T, Tait AR, Merkel S, Tremper K, Naughton N. Depth of sedation in children undergoing computed tomography: validity and reliability of the University of Michigan Sedation Scale (UMSS). Br J Anaesth. 2002;88(2):241–5.
Kim TK, Niklewski PJ, Martin JF, Obara S, Egan TD. Enhancing a sedation score to include truly noxious stimulation: the extended Observer's Assessment of Alertness and Sedation (EOAA/S). Br J Anaesth. 2015;115(4):569–77.
Somma J, Donner A, Zomorodi K, Sladen R, Ramsay J, Geller E, et al. Population pharmacodynamics of midazolam administered by target controlled infusion in SICU patients after CABG surgery. Anesthesiology. 1998;89(6):1430–43.
Smith WD, Dutton RC, Smith NT. Measuring the performance of anesthetic depth indicators. Anesthesiology. 1996;84(1):38–51.
Schuttler J, Ihmsen H. Population pharmacokinetics of propofol: a multicenter study. Anesthesiology. 2000;92(3):727–38.
Absalom A, Kenny G. 'Paedfusor' pharmacokinetic data set. Br J Anaesth. 2005;95(1):110.
Ki S, Kim KM, Lee YH, Bang JY, Choi BM, Noh GJ. Phase lag entropy as a hypnotic depth indicator during propofol sedation. Anaesthesia. 2019;74(8):1033–40.
Lauder GR. Total intravenous anesthesia will supercede inhalational anesthesia in pediatric anesthetic practice. Paediatr Anaesth. 2015;25(1):52–64.
Xu XS, Yuan M, Karlsson MO, Dunne A, Nandy P, Vermeulen A. Shrinkage in nonlinear mixed-effects population models: quantification, influencing factors, and impact. AAPS J. 2012;14(4):927–36.
Munoz HR, Cortinez LI, Ibacache ME, Leon PJ. Effect site concentrations of propofol producing hypnosis in children and adults: comparison using the bispectral index. Acta Anaesthesiol Scand. 2006;50(7):882–7.
Gupta A, Dalvi NP, Tendolkar BA. Comparison between intranasal dexmedetomidine and intranasal midazolam as premedication for brain magnetic resonance imaging in pediatric patients: a prospective randomized double blind trial. J Anaesthesiol Clin Pharmacol. 2017;33(2):236–40.

Young-Eun Jang and Sang-Hwan Ji contributed equally to this work as first authors.

Department of Anesthesiology and Pain Medicine, Seoul National University Hospital, College of Medicine, Seoul National University, #101 Daehak-no, Jongno-gu, 03080, Seoul, Republic of Korea: Young-Eun Jang, Sang-Hwan Ji, Ji-Hyun Lee, Eun-Hee Kim, Jin-Tae Kim & Hee-Soo Kim

YEJ helped in the study design, patient recruitment, data collection, data analysis, writing, and revision of the manuscript. SHJ helped in the study design, patient recruitment, data collection, writing, and revision of the manuscript. JHL helped in the study design, patient recruitment, data collection, and writing of the manuscript. EHK helped in the study design, patient recruitment, data collection, and writing of the manuscript.
JTK helped in the study design, patient recruitment, data collection, and writing of the manuscript. HSK done the study design, patient recruitment, data collection, data analysis, and writing of the manuscript. All authors read and approved the final manuscript. Correspondence to Hee-Soo Kim. The study was approved by the Institutional Review Board of Seoul National University Hospital (Ref No: 1705–110-855). Written informed consent was obtained from one of the parents or legal guardians for minor patients, and patients were given a verbal explanation and had the opportunity to ask questions about the study methods and purposes. Verbal consent was obtained from each patient. Jang, YE., Ji, SH., Lee, JH. et al. The relationship between the effect-site concentration of propofol and sedation scale in children: a pharmacodynamic modeling study. BMC Anesthesiol 21, 222 (2021). https://doi.org/10.1186/s12871-021-01446-y Anesthesia depth assessment
CommonCrawl
Annual Report Readability and the Cost of Equity Capital
Rjiba, Hatem; Saadi, Samir; Boubaker, Sabri; Ding, Xiaoya (Sara)
Using a large panel of U.S. public firms, we examine the relation between annual report readability and the cost of equity capital. We hypothesize that complex textual reporting impairs investors' ability to process and interpret annual reports, leading to higher information risk and thus a higher cost of equity financing. Consistent with our prediction, we find that greater textual complexity is associated with a higher cost of equity capital. Our results are robust to a battery of sensitivity checks, including the use of multiple estimation methods, alternative proxies for annual report readability and the cost of equity capital, and potential endogeneity concerns. In addition, we hypothesize and test whether the nature of the relation between readability and cost of capital depends on the tone of 10-K filings. Our results show that the effect of annual report complexity on the cost of equity is greater when disclosure tone is more negative or more ambiguous. We also find that the effect of annual report readability on the cost of equity capital depends on the degree of stock market competition, the level of institutional ownership, and analyst coverage.

Asymptotics for rough stochastic volatility models
Martin Forde, Hongzhong Zhang
Using the large deviation principle (LDP) for a re-scaled fractional Brownian motion $B^H_t$, where the rate function is defined via the reproducing kernel Hilbert space, we compute small-time asymptotics for a correlated fractional stochastic volatility model of the form $dS_t=S_t\sigma(Y_t) (\bar{\rho} dW_t +\rho dB_t), \,dY_t=dB^H_t$, where $\sigma$ is $\alpha$-Hölder continuous for some $\alpha\in(0,1]$; in particular, we show that $t^{H-\frac{1}{2}} \log S_t$ satisfies the LDP as $t\to0$ and that the model has a well-defined implied volatility smile as $t \to 0$ when the log-moneyness $k(t)=x t^{\frac{1}{2}-H}$. Thus the smile steepens to infinity or flattens to zero depending on whether $H\in(0,\frac{1}{2})$ or $H\in(\frac{1}{2},1)$. We also compute large-time asymptotics for a fractional local-stochastic volatility model of the form $dS_t= S_t^{\beta} |Y_t|^p dW_t,\,dY_t=dB^H_t$, and we generalize two identities in Matsumoto and Yor (2005) to show that $\frac{1}{t^{2H}}\log \frac{1}{t}\int_0^t e^{2 B^H_s} ds$ and $\frac{1}{t^{2H}}(\log \int_0^t e^{2(\mu s+B^H_s)} ds-2 \mu t)$ converge in law to $2\mathrm{max}_{0 \le s \le 1} B^H_{s}$ and $2B_1$, respectively, for $H \in (0,\frac{1}{2})$ and $\mu>0$ as $t \to \infty$.

COVID-19 Information Consumption and Stock Market Return
Muktadir-al-Mukit, Dewan
This paper examines the impact of COVID-19 information attention on stock market return. We use the Google search volume index to proxy investor information attention with respect to cases, deaths, lockdowns, and vaccines. We uncover that information attention towards lockdowns and vaccines positively affects investor sentiment, reflected in an increase in stock market return. By contrast, we do not find a significant impact of case information attention on stock market return. However, we uncover that COVID-19 death information attention is negatively associated with stock market return for only the initial 90-day trading period. Our results are robust to a battery of robustness tests. Our study contributes to the understanding of the relationship between investor information attention and asset pricing during economic uncertainty and, in particular, during a pandemic.
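The small-time regime in Forde and Zhang's rough-volatility abstract above is easy to explore numerically. Below is a minimal Monte Carlo sketch, assuming the uncorrelated case rho = 0 and the illustrative choice sigma(y) = sigma0 * e^y (both our simplifications, not the paper's general setting); the fractional Brownian motion is sampled exactly via a Cholesky factorization of its covariance on a modest grid.

```python
import numpy as np

def rough_vol_log_prices(H=0.1, T=0.01, n=200, n_paths=2000, sigma0=0.2, seed=42):
    """Monte Carlo for log S_T in dS = S sigma(Y) dW, Y = B^H, uncorrelated case (rho = 0).
    fBM is sampled exactly via Cholesky of Cov(B^H_s, B^H_t) = (s^2H + t^2H - |t-s|^2H)/2."""
    rng = np.random.default_rng(seed)
    dt = T / n
    t = T * np.arange(1, n + 1) / n
    s, u = np.meshgrid(t, t)
    L = np.linalg.cholesky(0.5 * (s**(2*H) + u**(2*H) - np.abs(s - u)**(2*H)))
    out = np.empty(n_paths)
    for i in range(n_paths):
        bh = L @ rng.standard_normal(n)                           # one fBM path
        vol = sigma0 * np.exp(np.concatenate(([0.0], bh[:-1])))   # sigma(y) = sigma0 * e^y, left point
        out[i] = np.sum(vol * rng.standard_normal(n) * np.sqrt(dt) - 0.5 * vol**2 * dt)
    return out

# The paper's normalization: t^(H - 1/2) * log S_t has a nondegenerate small-time limit.
x = rough_vol_log_prices(H=0.1, T=0.01)
print(np.std(0.01 ** (0.1 - 0.5) * x))
```

Rerunning with shrinking T and checking that the rescaled standard deviation stabilizes gives a quick numerical sanity check on the claimed t^(H-1/2) scaling.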
Can Machine Learning Help to Select Portfolios of Mutual Funds?
DeMiguel, Victor; Gil-Bazo, Javier; Nogales, Francisco J.; A. P. Santos, Andre
Identifying outperforming mutual funds ex ante is a notoriously difficult task. We use machine learning methods to exploit the predictive ability of a large set of mutual fund characteristics that are readily available to investors. Using data on US equity funds in the 1980–2018 period, these methods allow us to construct portfolios of funds that earn positive and significant out-of-sample risk-adjusted after-fee returns as high as 4.2% per year. We further show that such outstanding performance is the joint outcome of exploiting the information contained in multiple fund characteristics and allowing for flexibility in the relationship between predictors and fund performance. Our results confirm that even retail investors can benefit from investing in actively managed funds. However, we also find that the performance of all our portfolios has declined over time, consistent with increased competition in the asset market and diseconomies of scale at the industry level.

Central Bank Digital Currency and Balance Sheet Policy
Fraschini, Martina; Somoza, Luciano; Terracciano, Tammaro
This paper studies a stylized economy in which the central bank can hold either treasuries or risky securities against central bank digital currency (CBDC) deposits. The key mechanism driving the results is the reduction in bank deposits that follows the introduction of a CBDC and its impact on the banking sector. With CBDC funds invested in treasuries, the central bank channels funds back to the banking sector via open market operations, and the introduction of a CBDC is neutral, consistent with the equivalence theorem of Brunnermeier and Niepelt (2019). However, it is not neutral once we account for liquidity requirements, quantitative easing, or CBDC deposits held against risky securities. We reach two main conclusions. First, current monetary policy regimes do matter for the equilibrium effects of a CBDC. Second, there is a trade-off between bank lending to the economy and taxes, as holding risky assets against CBDC deposits leads to lower expected taxes and lower bank lending.

Chinese Outward Foreign Direct Investment in Belt and Road Countries: Trends, Characteristics and Policies
Chang, Le; Cheong, Kee-Cheok
Chinese outward foreign direct investment (OFDI) has attracted increasing attention in recent years, especially after the Chinese government proclaimed the Belt and Road Initiative (BRI) in 2013. The BRI countries play a key role in receiving Chinese OFDI. This paper analyzes the characteristics and trends of Chinese investment in BRI countries from geographical and industrial perspectives, using both micro- and macro-level data. It also explores the Chinese government policies that promote investment in BRI countries. This reveals the shape of Chinese investment strategy, especially in terms of the countries and industries chosen. The analysis points to the motives behind Chinese investment, which are natural-resource-seeking and market-seeking. Meanwhile, Chinese government policy affects the decisions of Chinese enterprises through economic incentives.

Complete and competitive financial markets in a complex world
Gianluca Cassese
We investigate the possibility of completing financial markets in a model with no exogenous probability measure and market imperfections. A necessary and sufficient condition is obtained for such an extension to be possible.
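As an illustration of the kind of exercise DeMiguel, Gil-Bazo, Nogales and Santos describe above, here is a minimal sketch: fit a flexible learner on fund characteristics in an estimation window, then sort funds into a long portfolio by predicted performance. The characteristics, sample sizes, and data-generating process are all invented for the example.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n_funds, n_chars = 600, 10
X = rng.standard_normal((n_funds, n_chars))        # toy fund characteristics (fees, flows, ...)
alpha = 0.3 * X[:, 0] - 0.2 * X[:, 1] ** 2 + 0.5 * rng.standard_normal(n_funds)  # future net alpha

# Train on the first 500 funds (the "estimation window"), predict the rest out of sample.
model = GradientBoostingRegressor(n_estimators=300, max_depth=3, learning_rate=0.05)
model.fit(X[:500], alpha[:500])
pred = model.predict(X[500:])

cutoff = np.quantile(pred, 0.9)
long_portfolio = np.flatnonzero(pred >= cutoff)    # top-decile funds by predicted alpha
print(long_portfolio.size, alpha[500:][long_portfolio].mean())
```

The quadratic term in the toy alpha is the point of the exercise: a linear sort on characteristics would miss it, while the boosted trees pick it up, which is one way to read the paper's emphasis on "flexibility in the relationship between predictors and fund performance".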
Credit Refinancing and Tax Avoidance
Alexander, Anna; Pisa, Magdalena
This paper examines the disciplining effect of credit markets on corporate tax avoidance strategies. We show that, during adverse credit market conditions, firms with refinancing needs prefer to forgo the after-tax cash flow benefits of tax avoidance in order to regain access to traditionally risk-averse credit markets. Our results show that firms increase their effective tax rate by 2 percentage points when facing refinancing constraints. This effect is more pronounced for firms with lower asset tangibility and higher default probability. Adverse credit market conditions together with refinancing needs prompt firms to become more conservative in their tax avoidance strategy for up to three years. Moreover, we show that firms engage in fewer tax deferral strategies, while leaving their leverage and debt shield unchanged. Overall, these findings are consistent with credit markets putting pressure on tax-avoiding firms.

Cross Currency Valuation and Hedging in the Multiple Curve Framework
Alessandro Gnoatto, Nicole Seiffert
We generalize the results of Bielecki and Rutkowski (2015) on funding and collateralization to a multi-currency framework and link their results with those of Piterbarg (2012), Moreni and Pallavicini (2017), and Fujii et al. (2010b). In doing so, we provide a complete study of absence of arbitrage in a multi-currency market where, in each single monetary area, multiple interest rates coexist. We first characterize absence of arbitrage in the case without collateral. After that, we study collateralization schemes in a very general situation: the cash flows of the contingent claim and those associated with the collateral agreement can be specified in any currency. We study both segregation and rehypothecation and allow for cash and risky collateral in arbitrary currency specifications. Absence of arbitrage and pricing in the presence of collateral are discussed under all possible combinations of conventions. Our work provides a reference for the analysis of wealth dynamics, and we also provide valuation formulas that are a useful foundation for cross-currency curve construction techniques. Our framework also provides a solid foundation for the construction of multi-currency simulation models for the generation of exposure profiles in the context of xVA calculations.

Decentralising the United Kingdom: the Northern Powerhouse strategy and urban ownership links between firms since 2010
Natalia Zdanowska, Robin Morphet
This paper explores a decentralisation initiative in the United Kingdom, the Northern Powerhouse strategy (NPS), in terms of its main goal: strengthening connectivity between Northern cities of England. It focuses on the economic interactions of these cities, defined by ownership linkages between firms, since the NPS's launch in 2010. The analysis reveals a relatively weak increase in the intensity of economic regional patterns in the North, in spite of a shift away from NPS cities' traditional manufacturing base. These results suggest potential directions for policy-makers in terms of the future implementation of the NPS.
Determinants of Earnout as Acquisition Payment Currency and Bidder's Value Gains
Barbopoulos, Leonidas G.; Sudarsanam, Puliyur
We examine the wealth effects of a comprehensive sample of UK bidders offering contingent payment, or earnout, as consideration for their acquisitions. We show that bidders using earnout generate significantly higher announcement and post-acquisition value gains than bidders using non-earnout currencies (such as cash, stock exchange, or mixed payments). We construct a logistic model to predict when it is optimal for a bidder to offer earnout. We show that bidders offering earnout optimally enjoy significantly higher announcement and post-acquisition gains than bidders offering non-earnout currencies, consistent with our model of the choice of the optimal method of payment. Overall, we provide robust evidence that earnout is an effective payment mechanism to mitigate valuation risk to acquirers, and that it also enhances acquirer value during the announcement and post-acquisition periods. Our paper contributes to the broader literature on how corporate acquirers use payment currency to manage information asymmetry and the attendant valuation risk.

Digesting Three-Factor Model by XGBoost and SHAP
Huang, Weige
This paper digests the three-factor model by exploring the average impacts of the factors on portfolio returns and how the factors interact with each other. To do this, we use the SHapley Additive exPlanations (SHAP) method to interpret the results obtained by XGBoost. We find that the factors have different impacts on portfolio returns and interact with each other in different (sometimes surprising) ways. We also find that the average impacts of the factors on portfolio returns are similar before and after the publication of the three-factor model and the 2008 financial crisis, but the interactions between factors vary across time.
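Huang's XGBoost-plus-SHAP workflow above reduces to a few lines. The sketch below is illustrative only: the simulated returns, factor loadings, and interaction term are ours, and the `xgboost`/`shap` hyperparameters are arbitrary.

```python
import numpy as np
import xgboost
import shap

rng = np.random.default_rng(0)
# Toy monthly data: columns = factor realizations (Mkt-RF, SMB, HML), target = portfolio return.
X = rng.standard_normal((1200, 3))
y = (0.9 * X[:, 0] + 0.3 * X[:, 1] - 0.2 * X[:, 2]
     + 0.15 * X[:, 0] * X[:, 1]                 # a built-in factor interaction
     + 0.1 * rng.standard_normal(1200))

model = xgboost.XGBRegressor(n_estimators=300, max_depth=3, learning_rate=0.05).fit(X, y)
explainer = shap.TreeExplainer(model)
phi = explainer.shap_values(X)                  # per-observation factor attributions
inter = explainer.shap_interaction_values(X)    # pairwise interaction attributions

print(np.abs(phi).mean(axis=0))                 # mean |SHAP|: average impact of each factor
print(np.abs(inter).mean(axis=0))               # 3x3 matrix: the Mkt-SMB cell should stand out
```

Averaging absolute SHAP values recovers the "average impacts", while the interaction matrix is what lets the paper say how factors interact, rather than just how much each one matters marginally.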
Dissemination, Publication, and Impact of Finance Research: When Novelty Meets Conventionality
Dai, Rui; Donohue, Lawrence; Drechsler, Qingyi (Freda) Song; Jiang, Wei
Using numeric and textual data extracted from over 50,000 finance articles posted on the Social Science Research Network (SSRN) between 2001 and 2019, this study examines the relationship between measured qualities and a paper's eventual outlet as well as its impact. Based on semantic characteristics uncovered with newly developed machine learning tools, we find that conventionality (semantic similarity with existing research) helps boost readership and publication prospects, after controlling for various article and author characteristics, including the reputation and research resources of the affiliated institution and the author network. Novelty through investigating emerging topics and adopting fresh databases boosts attention from a broader audience and also the probability of top-tier publication, but novelty from introducing atypical ideas from non-finance-focused fields or inter-field research does not achieve the same goals.

Do Corporations Learn from Mispricing? Evidence from Takeovers and Corporate Performance
Adra, Samer; Barbopoulos, Leonidas G.
In this article we form the simple prediction that mispricing encourages traders to collect costly information that guides managerial decisions at the corporate level. Our findings support this prediction based on evidence derived from both the US market for corporate control and the overall variation in aggregate corporate profits. The trading activity in response to the temporary mispricing of the merging companies provides useful information that leads to the design of high-synergy deals. Such synergies are reflected in an increase in announcement-period acquirer abnormal returns and are not reversed in the long run. At the market-wide level, our results suggest that growth in overall stock trading volume in response to market mispricing is associated with high future corporate profit growth. Overall, after controlling for several economic and financial conditions, temporary mispricing in a developed and generally efficient stock market stimulates informative trading, ultimately leading to value- and performance-enhancing corporate decisions.

Do Macroprudential Policies Affect Non-Bank Financial Intermediation?
Claessens, Stijn; Cornelli, Giulio; Gambacorta, Leonardo; Manaresi, Francesco; Shiina, Yasushi
We analyse how macroprudential policies (MaPs), largely applied to banks and to a lesser extent borrowers, affect non-bank financial intermediation (NBFI). Using data for 24 of the jurisdictions participating in the Financial Stability Board's monitoring exercise over the period 2002–17, we study the effects of MaP episodes on bank assets and on those NBFI activities that may involve bank-like financial stability risks (the narrow measure of NBFI). We find that a net tightening of domestic MaPs increases these NBFI activities and decreases bank assets, raising the NBFI share in total financial assets. By contrast, a net tightening of MaPs in foreign jurisdictions leads to a reduction of the NBFI share: the effect of a drop in NBFI activities and an increase in domestic banking assets. Tightening and easing MaPs have largely symmetric effects on NBFI. We find that the effect of MaPs (both domestic and foreign) is economically and statistically significant for all those NBFI economic functions that may pose risks to financial stability.

Does Board Gender Diversity Matter? Evidence From Hostile Takeover Vulnerability
Chatjuthamard, Pattanaporn; Jiraporn, Pornsit; Lee, Sang Mook; Uyar, Ali; Kilic, Merve
Purpose: Theory suggests that the market for corporate control, which constitutes an important external governance mechanism, may substitute for internal governance. Consistent with this notion, using a novel measure of takeover vulnerability primarily based on state legislation, we investigate the effect of the takeover market on board characteristics, with special emphasis on board gender diversity. Design/methodology/approach: We exploit a novel measure of takeover vulnerability based on state legislation. This measure is likely exogenous, as the legislation was imposed from outside the firm. By using an exogenous measure, our analysis is less vulnerable to endogeneity and is thus more likely to show a causal effect. Findings: Our results show that a more active takeover market leads to lower board gender diversity. Specifically, a rise in takeover vulnerability by one standard deviation results in a decline in board gender diversity of 10.01%. Moreover, stronger takeover market susceptibility also brings about larger board size and less board independence, corroborating the substitution effect. Additional analysis confirms the results, including propensity score matching, generalized method of moments dynamic panel data analysis, and instrumental-variable analysis. Originality: Our study is the first to explore the effect of the takeover market on board gender diversity. Unlike most of the previous research in this area, which suffers from endogeneity, we use a novel measure of takeover vulnerability that is plausibly exogenous. Our results are thus much more likely to demonstrate causality.
Earnout Financing in the Financial Services Industry
Barbopoulos, Leonidas G.; Molyneux, Philip; Wilson, John O. S.
This paper explores the effects of earnout contracts used in US financial services M&A. We use propensity score matching (PSM) to address selection bias issues with regard to the endogeneity of the decision of financial institutions to use such contracts. We find that the use of earnout contracts leads to significantly higher acquirer abnormal returns (short- and long-run) compared to counterpart acquisitions (control deals) that do not use such contracts. The larger the size of the deferred (earnout) payment as a fraction of the total transaction value, the higher the acquirers' gains in the short and long run. Both short- and long-run acquirer gains increase when the management team of the target institution is retained in the post-acquisition period.

Electricity intraday price modeling with marked Hawkes processes
Thomas Deschatre, Pierre Gruet
We consider a 2-dimensional marked Hawkes process with increasing baseline intensity in order to model prices on electricity intraday markets. This model allows us to represent different empirical facts such as increasing market activity, random jump sizes and, above all, microstructure noise through the signature plot. This last feature is of particular importance for practitioners and has not yet been modeled on those particular markets. We provide analytic formulas for first and second moments and for the signature plot, extending the classic results of Bacry et al. (2013) in the context of Hawkes processes with random jump sizes and time-dependent baseline intensity. The tractable model we propose is estimated on German data and seems to fit the data well. We also provide a result about the convergence of the price process to a Brownian motion with increasing volatility at macroscopic scales, highlighting the Samuelson effect.
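Deschatre and Gruet's model above is built on Hawkes processes, which are straightforward to simulate by Ogata's thinning algorithm. Here is a minimal one-dimensional, unmarked sketch with an exponential kernel and a linearly increasing baseline; the paper's model is two-dimensional and marked, and the parameter values below are arbitrary.

```python
import numpy as np

def hawkes_thinning(mu0, mu1, alpha, beta, T, seed=0):
    """Ogata thinning for intensity lam(t) = mu0 + mu1*t + sum_{t_i < t} alpha*exp(-beta*(t - t_i))."""
    rng = np.random.default_rng(seed)
    events, t = [], 0.0
    while True:
        past = np.asarray(events)
        excite = alpha * np.exp(-beta * (t - past)).sum() if events else 0.0
        lam_bar = mu0 + mu1 * T + excite              # dominates lam(s) for s in (t, T]
        t += rng.exponential(1.0 / lam_bar)           # candidate arrival from dominating rate
        if t >= T:
            return np.asarray(events)
        lam_t = mu0 + mu1 * t + (alpha * np.exp(-beta * (t - past)).sum() if events else 0.0)
        if rng.uniform() < lam_t / lam_bar:           # accept candidate with prob lam(t)/lam_bar
            events.append(t)

times = hawkes_thinning(mu0=0.5, mu1=2.0, alpha=0.8, beta=1.5, T=10.0)
print(times.size, "events; clustering shows up in the inter-arrival times:", np.diff(times)[:5])
```

The dominating rate is valid because the exponential excitation only decays between candidate points while the linear baseline is bounded by its value at T; the trade-off is more rejected candidates late in the horizon.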
Endogenous Option Pricing
Gamba, Andrea; Saretto, Alessio
We show that a dynamic model of investment and capital structure choices, where the firm faces real and financial frictions, can generate option prices and implied volatilities that are in line with those of the average optionable stock. As the balance between the fundamental economic forces that are responsible for the way options are priced is state-dependent, the model is also able to generate a wide cross-sectional dispersion in implied volatility surfaces that matches what we observe in the data.

Estimating Peer Effects in Corporate Innovation: Evidence from Compensation Peer Groups
Hsu, Yuan-Teng; Huang, Chia-Wei; Koedijk, Kees
This paper uses compensation peer groups to measure peer effects in corporate innovation. This approach provides a true peer group and a better leader-follower link, and thus can mitigate the reflection problem suggested by Manski (1993). We find that the average innovation activity of the compensation peers is a significant and first-order predictor of corporate innovation. Further analyses show that (1) the peer effect is stronger when peer companies experience higher innovation competition and are closer to the median peer company in the peer group, (2) the results are not likely to be attributable to the knowledge spillover mechanism but rather are more consistent with the peer pressure mechanism, and (3) the SEC's 2006 executive compensation disclosure rules can generate the peer effects. Overall, the results suggest that corporate innovation decisions are responses to the innovation activities and, to a lesser extent, the characteristics of the compensation peer groups.

Feature Learning for Stock Price Prediction Shows a Significant Role of Analyst Rating
Jaideep Singh, Matloob Khushi
To reject the Efficient Market Hypothesis, a set of 5 technical indicators and 23 fundamental indicators was identified to establish the possibility of generating excess returns on the stock market. Leveraging these data points and various classification machine learning models, trading data of the 505 equities on the US S&P 500 over the past 20 years was analysed to develop an effective classifier. From any given day, we were able to predict the direction of a 1% change in price up to 10 days in the future. The predictions had an overall accuracy of 83.62%, with a precision of 85% for buy signals and a recall of 100% for sell signals. Moreover, we grouped equities by sector and repeated the experiment to see if grouping similar assets together positively affected the results, but concluded that it showed no significant improvement in performance, rejecting the idea of sector-based analysis. Also, using feature ranking we could identify an even smaller set of 6 indicators while maintaining accuracies similar to those from the original 28 features, and we uncovered the importance of buy, hold and sell analyst ratings, which came out as the top contributors in the model. Finally, to evaluate the effectiveness of the classifier in real-life situations, it was backtested on FAANG equities using a modest trading strategy, where it generated returns of above 60% over the term of the testing dataset. In conclusion, our proposed methodology with its combination of purposefully picked features shows an improvement over previous studies, and our model predicts the direction of 1% price changes on the 10th day with high confidence and with enough buffer to even build a robotic trading system.

Financial Flexibility Robustness
Qi, Qian
I examine how the robustness of investment opportunities influences firm payout policy and cash holdings. By exploiting new measures, the perturbations of q, a novel, counterintuitive yet reasonable fact emerges: low robustness of investment opportunities (high perturbations of q) can simultaneously spur firms' propensity to pay dividends, lower share repurchases, and decrease the cash a firm holds. Specifically, firms are likely to hold less cash when the robustness of investment opportunities is low, which is distinct from the standard channel of uncertainty. These results are consistent with firms' liquidity management policies being significantly shaped by robustness concerns.
Foreign Direct Investment in Emerging Markets and Acquirers' Value Gains
Barbopoulos, Leonidas G.; Marshall, Andrew P.; MacInnes, Cameron; McColgan, Patrick
We investigate the shareholder wealth effects of 306 foreign direct investment (FDI) announcements by UK firms in seventy-five emerging markets (EM). Our results show that acquirers enjoy highly significant gains during the announcement period of FDI. Perhaps surprisingly, the highest gains accrue to acquirers investing in countries with high political risk and high corruption ratings. The type of asset acquired also has a significant effect on the gains of acquirers' shareholders, with the highest gains accruing to acquirers of physical assets. Also, investments in physical assets in EM with a high corruption rating elicit the highest gains. We contend that UK firms following resource-seeking strategies in EM with a high corruption rating gain access to resources on favorable terms, and this is viewed positively by market participants. Our results are robust to alternative model specifications and the endogenous choice to expand internationally.

Further Evidence on Calendar Anomalies
Hsu, Yuan-Teng; Koedijk, Kees; Liu, Hung-Chun; Wang, Jying-Nan
This study aims to investigate the day-of-the-week (DoW) effect of cross-market leveraged exchange-traded funds (LETFs) in the Taiwanese stock market. We find that Wednesday's overnight returns are significantly positive for bull 2X LETFs tracking major stock indices of the Chinese market, whereas no such effect is found for ETFs tracking local or other international stock markets. The "T+1" trading rule and a lagged Monday effect potentially explain this anomaly. Finally, simulation analysis of various simple trading rules further shows that exploitable profit opportunities exist in cross-market bull 2X LETF markets.

Gender Differences in Stock Market Participation: Evidence From Chinese Households
Li, Ling; Lee, Shuyi; Luo, Deming
Using micro household survey data from China, this study finds that women are less likely to participate in the stock market than men. This gender pattern in market participation rates holds when we control for household features, characteristics of both spouses, and their interactions and relative decision-making power. We also show that the gender difference in stock market participation operates through investors' behavioral factors, particularly their risk preferences. Moreover, we confirm the influence of cultural gender norms among older households. The gender difference in stock market participation is reduced where women's influence over the decision of stock investments is likely constrained by a male-biased social norm. The study's findings suggest the crucial role of gender and gender-related cultural norms in shaping household financial decisions in the presence of gender inequality, such as in China. It extends our understanding of the interrelations between financial decisions and social and behavioral dimensions in a developing context.

Has Financial Inclusion Made the Financial Sector Riskier?
Ozili, Peterson K
This paper examines whether high levels of financial inclusion are associated with greater financial risk. The findings reveal that higher account ownership is associated with greater financial risk through high non-performing loans and high cost inefficiency in the financial sectors of developed countries, advanced countries and transition economies. Increased use of debit cards, credit cards and digital finance products reduced risk in the financial sectors of advanced and developed countries but not in transition economies and developing countries. The findings also show that the combined use of digital finance products with increased formal account ownership improves financial sector efficiency in developing countries, while the combined use of credit cards with increased formal account ownership reduces insolvency risk and improves financial sector efficiency in developing countries.
Hedging and Competition
Giambona, Erasmo; Kumar, Anil; Phillips, Gordon M.
We study how risk management through hedging impacts firms and competition among firms in the insurance industry. We show that firms that are likely to face costly external finance increase hedging after staggered state-level financial reforms that reduce the costs of hedging. These firms have lower risk and higher market stability. Post hedging, product market competition is also affected: firms that increase hedging lower prices and increase policy sales relative to unaffected companies. These firms increase their market share after they engage in hedging.

Is the Merger of Banks on the Path of Expected Yields?
Venkatesh, K. A.; Narasimhan, Pushkala
The Government of India announced the merger of 10 public sector banks into 4 banks in August 2019 in order to reduce NPAs and to ease the burden of capital infusion required to comply with Basel norms. Further mergers of public sector banks led to the emergence of 12 bigger banks. This article examines whether this consolidation of banks is really on the path of expected yields, taking efficiency as the parameter and using inverse DEA. M&A in the banking sector has been an evergreen research topic in the global arena, and that gives impetus to the current research on Indian mergers. Generally, global research studies on M&A have focused on the positive impact, and the outcomes tend to be of a conflicting nature. The inverse DEA (InvDEA) model identifies the necessary level of inputs and outputs for a given decision-making unit (bank) to reach a predefined efficiency. This model facilitates identifying the outcomes of mergers in terms of efficiency, even though the objectives of the mergers were defined from a different perspective. The elements of cost and profit are considered as inputs and outputs in ascertaining efficiency, and this reveals the true picture of attaining the expected yield of the merger in the short term.

Judicial Efficiency and Banks Credit Risk Exposure
Canzian, Giulia; Ferrara, Antonella
We exploit the introduction of a regulation seeking to improve judicial efficiency through the rearrangement of court geography in Italy to provide causal evidence on the relationship between judicial structural reforms and banks' financial stability. To this end, we apply a difference-in-differences approach to a dataset of annual proceedings handled by each court over the period 2010–2017, complemented by banks' balance sheet information. Our findings yield a negative effect of the reform on both judicial efficiency and the non-performing loans ratio. Furthermore, we identify heterogeneous effects based on the courts' existing capacity to dispose of pending proceedings and on geographical location. Digging deeper into this mechanism, we set up a causal mediation analysis to show that the judicial system affects banks' credit risk exposure both indirectly (through judicial efficiency) and directly, by influencing borrowers who react to the perceived enforcement.
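Several papers in this digest, including Canzian and Ferrara's above, rest on a difference-in-differences design. As a reminder of the mechanics, here is a minimal two-way fixed-effects sketch on simulated data; the variable names, panel dimensions, and effect size are all invented for the example.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
units, periods, effect = 100, 8, -0.5          # e.g. banks, years, true reform effect on NPLs
df = pd.DataFrame([(u, t) for u in range(units) for t in range(periods)],
                  columns=["unit", "year"])
df["treated"] = (df["unit"] < 50).astype(int)  # first half of units are eventually treated
df["post"] = (df["year"] >= 4).astype(int)     # reform hits in period 4
df["npl"] = (0.2 * df["treated"] + 0.1 * df["year"]
             + effect * df["treated"] * df["post"] + rng.standard_normal(len(df)))

# Unit and year dummies absorb levels and common trends; the interaction is the DiD estimate.
fit = smf.ols("npl ~ treated:post + C(unit) + C(year)", data=df).fit()
print(fit.params["treated:post"])              # should be close to -0.5
```

The identifying assumption, here true by construction, is that treated and control units would have moved in parallel absent the reform.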
Legal Systems and Gains from Cross-Border Acquisitions
Barbopoulos, Leonidas G.; Paudyal, Krishna; Pescetto, Gioia
While cross-border acquisitions (CBA) constitute a significant and expanding proportion of total mergers and acquisitions transactions, their benefits and costs to the acquiring firm's shareholders are not fully understood. The documented evidence that a country's legal environment and level of investor protection affect firm value suggests that these same factors should also impact the profitability of CBA. The paper contributes to the literature by examining the implications of the legal tradition of the target's country of domicile for bidders' gains, using a sample of UK acquiring firms. The findings, which control for several factors that are known to affect acquirers' gains, show that the difference in the legal tradition of the target's nation plays a significant role in influencing the gains from CBA of UK acquirers. They also show that, during both the announcement period and the long run, acquirers of targets based in civil-law countries outperform acquirers of targets based in common-law countries. In addition, acquiring a firm in markets that have higher restrictions on capital mobility, or stricter capital controls, can add more value to shareholders' wealth. Finally, the legal tradition of the target's nation interacts with several deal- and firm-specific factors in shaping the gains to acquiring firms.

Liquidity and Information Asymmetry Considerations in Corporate Takeovers
We examine how stock market liquidity and information asymmetry considerations influence the wealth effects of Mergers and Acquisitions (M&As). We present a simple model predicting that M&As of listed targets that have relatively illiquid stocks are profitable for acquirers due to (a) the weak bargaining power of the targets' shareholders, and (b) the limited information asymmetry concerns when evaluating takeover synergies. Our results show that cash-financed M&As of listed targets that have relatively illiquid stocks are associated with an increase in acquirer risk-adjusted returns. These gains are equivalent to those realized from comparable private-target M&As. When engaging in stock-financed listed-target M&As, acquirers with liquid stocks enjoy significant gains when the targets have relatively illiquid stocks. This result holds especially when the deal is announced during periods of deterioration in overall stock market liquidity. Lastly, we find that liquidity considerations affect the acquirer's choice of the target firm's listing status, as well as the M&A method of payment.

Local Banks and the Effects of Oil Price Shocks
Wang, Teng
In this paper, I study the effects of oil price shocks on local banks, and the propagation of the shocks through banks' branching networks. Exposed banks with significant operations in oil-concentrated counties experienced a decline in demand deposits, a surge in credit line drawdowns, and a jump in troubled loans. Facing liquidity pressure, banks were forced to sell liquid assets, offer higher deposit rates, and reduce lending to small businesses and mortgage borrowers in unaffected markets. The effect is magnified when banks do not have strong community ties, while it is mitigated if banks have higher liquidity buffers or sufficiently dispersed branching networks. I further document that healthy unexposed banks' capacity to substitute credit supply is quite limited, providing fresh evidence from the perspective of bank competition.
Mathematics, Psychology, and Law: The Legal Ramifications of the Exponential Growth Bias
Zamir, Eyal; Teichman, Doron
Many human decisions, ranging from the taking of loans with compound interest to fighting deadly pandemics, involve phenomena that entail exponential growth. Yet a wide and robust body of empirical studies demonstrates that people systematically underestimate exponential growth. This phenomenon, dubbed the exponential growth bias (EGB), has been documented in numerous contexts, across different populations, using both experimental and observational methods. Despite its centrality to human decision making, legal scholarship has thus far failed to account for the EGB. This Article presents the first comprehensive study of the EGB and the law. Incorporating the EGB into legal analysis sheds new light on legal measures that are already in use and highlights new solutions to numerous problems that the law strives to solve. More concretely, the EGB calls for the introduction of new disclosure duties that would help people grasp the long-term implications of their choices; the imposition of new mandatory rules that would minimize the exploitation of the EGB by savvy profit-maximizing entrepreneurs; and the adoption of new debiasing techniques that could improve policymakers' decisions.

Modeling Credit Spreads: The Cross-Section Method Revised
Vermylen, Aurélien
The cross-section method for estimating credit spreads as proposed in [NOM01] has a drawback when used in applications other than estimating a CVA VaR. Indeed, we show that by optimizing in least squares on log-spreads, the method tends to approximate the geometric average of observed spreads instead of the arithmetic average, which by the AM-GM inequality will always be higher. This can lead to significant underestimation of credit risk if used carelessly in financial institutions. We thus propose an alternative methodology that optimizes on simple differences instead of log-differences.
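Vermylen's point above is a one-liner to verify numerically: a least-squares fit on log-spreads recovers the geometric mean, which the AM-GM inequality puts below the arithmetic mean. A toy check with made-up spread values:

```python
import numpy as np

spreads = np.array([80.0, 120.0, 300.0])      # hypothetical observed spreads (bp) in one bucket

# argmin_m sum_i (log s_i - log m)^2  has solution  m = exp(mean(log s_i))  (geometric mean)
m_log = np.exp(np.log(spreads).mean())        # ~142.3 bp
# argmin_m sum_i (s_i - m)^2          has solution  m = mean(s_i)           (arithmetic mean)
m_lin = spreads.mean()                        # ~166.7 bp

print(m_log, m_lin)   # the log-fit sits below the arithmetic average, understating average risk
```

The gap widens with the dispersion of spreads in the bucket, which is exactly when the understatement of credit risk matters most.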
Modern Information Technology and Firm Acquisitiveness: Evidence from EDGAR Implementation
Gu, Ming; Li, Dongxu; Ni, Xiaoran
Exploiting the staggered implementation of the EDGAR system from 1993 to 1996 as an exogenous shock to information dissemination technologies, we find evidence that internet dissemination of corporate disclosures reduces firm acquisitiveness. This effect is more pronounced for growth firms than value firms. In addition, EDGAR implementation discourages equity-based acquisitions and results in lower announcement returns, especially for growth firms. Our overall findings indicate that EDGAR implementation discourages managerial learning by dampening revelatory price efficiency, yielding inefficient acquisition decisions.

Pareto-optimal Reinsurance with Default Risk and Solvency Regulation
Boonen, Tim J.; Jiang, Wenjun
This paper studies an optimal reinsurance problem of Pareto optimality when the contract is subject to default of the reinsurer. We assume that the reinsurer can invest a share of its wealth in a risky asset and that default occurs when the reinsurer's end-of-period wealth is insufficient to cover the indemnity. We show that without solvency regulation, the optimal indemnity function is of excess-of-loss form, regardless of the investment decision. We model solvency regulation as a constraint on the probability of default. Under solvency regulation, assuming the investment decision remains the same as in the unconstrained solution, the optimal indemnity function is derived element-wise. Partial results are given when the indemnity function and investment decision are jointly constrained by the solvency regulation. Numerical examples are provided to illustrate the implications of our results and the sensitivity of the solutions to the model parameters.

Predicting the Behavior of Dealers in Over-The-Counter Corporate Bond Markets
Yusen Lin, Jinming Xue, Louiqa Raschid
Trading in over-the-counter (OTC) markets is facilitated by broker-dealers, in comparison to public exchanges, e.g., the New York Stock Exchange (NYSE). Dealers play an important role in stabilizing prices and providing liquidity in OTC markets. We apply machine learning methods to model and predict the trading behavior of OTC dealers for US corporate bonds. We create sequences of daily historical transaction reports for each dealer over a vocabulary of US corporate bonds. Using this history of dealer activity, we predict the future trading decisions of the dealer. We consider a range of neural network-based prediction models. We propose an extension, the Pointwise-Product ReZero (PPRZ) Transformer model, and demonstrate its improved performance. We show that individual history provides the best predictive model for the most active dealers. For less active dealers, a collective model provides improved performance. Further, clustering dealers based on their similarity can improve performance. Finally, prediction accuracy varies based on the activity level of both the bond and the dealer.

Premium Rating Without Losses: How To Estimate the Loss Frequency of Loss-Free Risks
Fackler, Michael
In insurance, and even more in reinsurance, it occurs that about a risk one only knows that it has suffered no losses in the past, say, seven years. Some of these risks are, furthermore, so particular or novel that there are no similar risks from which to infer the loss frequency. In this paper we propose a loss frequency estimator that copes with such situations by relying solely on the information coming from the risk itself: the "amended sample mean". It is derived from a number of practice-oriented first principles and turns out to have desirable statistical properties. Some variants are possible, which enables insurers to align the method to their preferred business strategy by trading off between low initial premiums for new business and moderate premium increases for renewal business after a loss. We further give examples where it is possible to assess the average loss from some market or portfolio information, such that overall one has an estimator of the risk premium.

Pricing of debt and equity in a financial network with comonotonic endowments
Tathagata Banerjee, Zachary Feinstein
In this paper we present formulas for the valuation of debt and equity of firms in a financial network under comonotonic endowments. We demonstrate that the comonotonic setting provides a lower bound, and Jensen's inequality an upper bound, to the price of debt under Eisenberg-Noe financial networks with bankruptcy costs. Such financial networks encode the interconnection of firms through debt claims. The proposed pricing formulas consider the realized, endogenous recovery rate on debt claims. Special consideration is given to the CAPM setting, in which firms invest in correlated portfolios so as to provide analytical stress-testing formulas. We endogenously construct the comonotonic endowment setting from an equity-maximizing standpoint with capital transfers. We conclude by numerically comparing the network valuation problem with two single-firm baseline heuristics which can, respectively, approximate the price of debt and equity.
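The Eisenberg-Noe clearing mechanism underlying Banerjee and Feinstein's abstract above can be computed by a simple monotone fixed-point iteration. A minimal sketch without bankruptcy costs; the two-bank numbers are invented:

```python
import numpy as np

def clearing_payments(L, e, tol=1e-12):
    """Eisenberg-Noe clearing vector. L[i, j] = nominal liability of bank i to bank j,
    e[i] = outside endowment of bank i. Iterates p -> min(p_bar, e + Pi' p) from p_bar."""
    p_bar = L.sum(axis=1)                               # total nominal obligations per bank
    Pi = np.divide(L, p_bar[:, None], out=np.zeros_like(L), where=p_bar[:, None] > 0)
    p = p_bar.copy()
    while True:
        p_new = np.minimum(p_bar, e + Pi.T @ p)         # pay in full, or pay out all you have
        if np.max(np.abs(p_new - p)) < tol:
            return p_new
        p = p_new

L = np.array([[0.0, 10.0],
              [5.0,  0.0]])
p = clearing_payments(L, e=np.array([3.0, 8.0]))
print(p)   # [8., 5.]: bank 0 defaults (pays 8 of 10), bank 1 pays in full
```

Starting from the full-payment vector and iterating downward converges to the greatest clearing vector; the realized recovery rate on bank 0's debt (8/10 here) is endogenous, which is the feature the paper's pricing formulas build on.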
Randentropy: a software to measure inequality in random systems
Guglielmo D'Amico, Stefania Scocchera, Loriano Storchi
The software Randentropy is designed to estimate inequality in a random system where several individuals interact, moving among many communities and producing dependent random quantities of an attribute. The overall inequality is assessed by computing the random Theil entropy. First, the software estimates a piecewise homogeneous Markov chain by identifying the change-points and the relative transition probability matrices. Second, it estimates the multivariate distribution function of the attribute using a copula function approach, and finally, through a Monte Carlo algorithm, it evaluates the expected value of the random Theil entropy. Possible applications are discussed in relation to the fields of finance and human mobility.
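The building block of the Randentropy pipeline above is Theil's entropy. A minimal sketch of the deterministic index (the income vector is made up; as we read the abstract, the package's random version averages this quantity over Monte Carlo trajectories of the system):

```python
import numpy as np

def theil_t(x):
    """Theil's T index: mean(r * log r) with r = x / mean(x); 0 means perfect equality."""
    r = np.asarray(x, dtype=float) / np.mean(x)
    return float(np.mean(r * np.log(r)))

print(theil_t([10, 10, 10, 10]))   # 0.0, perfectly equal
print(theil_t([1, 1, 1, 97]))      # ~1.22, highly concentrated
```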
Reinventing Pareto: Fits for All Losses, Small and Large
Fitting loss distributions in insurance is sometimes a dilemma: either you get a good fit for the small and medium losses or for the very large losses. To be able to get both at the same time, this paper studies generalizations and extensions of the Pareto distribution. This leads not only to a classification of potentially suitable, piecewise defined, distribution functions, but also to new insights into tail behavior and exposure rating.

Relative Equity Market Valuation Conditions and Acquirers' Gains
Andriosopoulos, Dimitris; Barbopoulos, Leonidas G.
We examine whether the relative equity market valuation conditions (EMVCs) in the countries of merging firms help acquirers' managers to time the announcements of both domestic and foreign targets. After controlling for several deal- and merging-firm-specific features, we find that the number of acquisitions and acquirers' gains are higher during periods of high EMVCs at home, irrespective of the domicile of the target. We also find that the higher gains of foreign target acquisitions realized during periods of high EMVCs at home stem from acquiring targets based in the RoW (=World-G7), rather than the G6 (=G7-UK) group of countries. We argue that this is due to the low correlation of EMVCs between the UK (home) and the RoW group of countries. However, these gains disappear or even reverse during the post-announcement period. Moreover, acquisitions of targets domiciled in the RoW (G6) countries yield higher (lower) gains than acquisitions of domestic targets during periods of high EMVCs at home. This suggests that the relative EMVCs between the merging firms' countries allow acquirers' managers to time the market and acquire targets at a discount, particularly in countries in which acquirers' stocks are likely to be more overvalued than the targets' stocks.

Risk-dependent centrality in the Brazilian stock market
Michel Alexandre, Kauê Lopes de Moraes, Francisco Aparecido Rodrigues
The purpose of this paper is to calculate the risk-dependent centrality (RDC) of the Brazilian stock market. We computed the RDC for assets traded on the Brazilian stock market between January 2008 and June 2020 at different levels of external risk. We observed that the ranking of assets based on the RDC depends on the external risk. The rankings' volatility is related to crisis events, capturing the recent Brazilian economic-political crisis. Moreover, we have found a negative correlation between the average volatility of assets' rankings based on the RDC and the average daily returns on the stock market. This is consistent with the hypothesis that the rankings' volatility is higher in periods of crisis.

Robust equilibrium strategies in a defined benefit pension plan game
Guohui Guan, Jiaqi Hu, Zongxia Liang
This paper investigates robust non-zero-sum games in an aggregated overfunded defined benefit (DB) pension plan. The sponsoring firm is concerned with the investment performance of the fund surplus, while the participants act as a union to claim a share of the fund surplus. The financial market consists of one risk-free asset and $n$ risky assets. The firm and the union are both ambiguous about the financial market and care about the robust strategies under the worst-case scenario. The union's objective is to maximize the expected discounted utility of the additional benefits; the firm's two different objectives are to maximize the expected discounted utility of the fund surplus and to maximize the probability of the fund surplus reaching an upper level before hitting a lower level in the worst-case scenario. We formulate the two corresponding robust non-zero-sum games for the firm and the union. Explicit forms and optimality of the solutions are shown by the stochastic dynamic programming method. At the end of the paper, numerical results illustrate the economic behaviour of the robust equilibrium strategies in these two different games.

Tax-Loss Harvesting Under Uncertainty
McKeever, Daniel; Rydqvist, Kristian
Numerical calculations imply that tax-loss harvesting is valuable to holders of taxable stock accounts. These calculations are based on the assumption that a capital loss on a stock portfolio can always be netted against ordinary income (up to a limit) or against a capital gain on the same stock portfolio. We provide market-based evidence that a capital loss that is realized at the beginning of the year is substantially less valuable than a loss that is taken at the end of the year. A simple binomial tree model that captures the resolution of tax rate uncertainty closely mimics observed market prices. Allowing investors to postpone unused losses into the future does not alter the conclusion that realized losses are less valuable early in the year.

The Adoption of Blockchain-based Decentralized Exchanges: A Market Microstructure Analysis of the Automated Market Maker
Agostino Capponi, Ruizhe Jia
We analyze the market microstructure of the Automated Market Maker (AMM) with constant product function, the most prominent type of blockchain-based decentralized crypto exchange. We show that, even without information asymmetries, the order execution mechanism of the blockchain-based exchange induces adverse selection problems for liquidity providers if token prices are volatile. An AMM is more likely to be adopted for pairs of coins which are stable or of high personal use for investors. For high-volatility tokens, there exists a market breakdown such that rational liquidity providers do not deposit their tokens in the first place. The adoption of AMMs leads to a surge of transaction fees on the underlying blockchain if token prices are subject to high fluctuations.
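The constant-product mechanism Capponi and Jia analyze above fits in a few lines. A minimal sketch; the reserve sizes are illustrative, and the 0.3% fee is a parameter here (it matches a common convention but is not taken from the paper):

```python
def swap_x_for_y(x, y, dx, fee=0.003):
    """Constant-product AMM: deposit dx of token X, withdraw dy of token Y,
    keeping (x + dx_eff) * (y - dy) = x * y, where dx_eff is dx net of the LP fee."""
    dx_eff = dx * (1.0 - fee)
    dy = y * dx_eff / (x + dx_eff)       # solves the invariant for dy
    return x + dx, y - dy, dy

x, y = 1_000.0, 1_000.0                  # initial pool reserves; marginal price y/x = 1.0
x, y, received = swap_x_for_y(x, y, 100.0)
print(received)                          # ~90.66: price impact, the trader gets less than 100
print(y / x)                             # ~0.83: the quote has moved against further X sellers
```

The mechanical price impact is what exposes liquidity providers to adverse selection: when the off-chain price moves, arbitrageurs trade against the stale pool quote at the providers' expense, which is the paper's channel for market breakdown under high volatility.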
The Banking Sector and National Economy
Uddin, Godwin; Ashogbon, Bode; Martins, Bolaji; Momoh, Omowumi; Agbonrofo, Hope E.; Alika, Samson; Oserei, Kingsley
Banks are central elements of a market economy. In more than one way, they facilitate business transactions by acting as depositor and lender for many actors in the domestic and international economy. The banking industry in Nigeria has expanded in size, in terms of assets, in the 60 years since the country's independence from British colonial rule, and it has undergone large-scale reforms amid transformations in the global economy. What are the dimensions of this growth? How has it affected market efficiency and the economic wellbeing of the people? This article provides answers to these questions and argues that growth has indeed occurred in the banking sector, as shown by a quantification of liquid assets, investment securities and loans. It also captures the sector's transnational dimension and how that has boosted international transactions as well as the repatriation of diaspora transfers to the national economy. The article further examines the contradictions in the economy arising from inconsistent government policies and the meddlesomeness of global financial institutions, and their impact on the banking sector. It ends on a prescriptive note by suggesting ways to make the banking sector more relevant in promoting productive activities in the national economy.

The Check Is in the Mail: Can Disclosure Reduce Late Payments to Suppliers?
Chuk, Elizabeth; Lourie, Ben; Yoo, Il Sun
A common corporate cash management strategy is to delay payments owed to suppliers. We examine whether buyers pay suppliers faster in response to a recent regulatory change in the United Kingdom that mandates the public disclosure of buyers' payment practices. We find that after the regulatory change, UK firms subject to this regulation shortened their payment periods relative to several control samples of firms that were not subject to the regulation. In cross-sectional tests, we predict and find that this shortening of the payment period is attenuated for firms that are less able to bear the costs of paying suppliers faster. Specifically, we find that the reduction in the payment period is smaller for buyers that (i) have longer operating cycles, (ii) depend more heavily on trade credit as a source of external financing, and (iii) pay dividends. In supplemental tests using proprietary data, we find that buyers subject to this regulation reduced their trade credit that is overdue by 30 days or more. However, this reduction is partially offset by an increase in trade credit that is overdue by less than 30 days. Our findings are important in light of the ongoing debate in other regimes (e.g., the United States) on whether to require additional disclosures of trade credit.

The Earnout Structure Matters: Takeover Premia and Acquirer Gains in Earnout Financed M&As
Barbopoulos, Leonidas G.; Adra, Samer
In this article, based on both parametric and non-parametric methods, we provide a robust solution to the long-standing issue of how earnouts in corporate takeovers are structured and how their structure influences the takeover premia and the abnormal returns earned by acquirers. First, we quantify the effect of the terms of the earnout contract (relative size and length) on the takeover premia. Second, we demonstrate how adverse selection considerations lead the merging firms to set the initial payment in an earnout financed deal at a level that is lower than, or equal to, the full deal payment in a comparable non-earnout financed deal. Lastly, we show that while acquirers in non-earnout financed deals experience negative abnormal returns from an increase in the takeover premia, this effect is neutralised in earnout financed deals.
The Impact of Margin Requirements on Voluntary Clearing Decisions
Onur, Esen; Reiffen, David; Sharma, Rajiv
We analyze the determinants of financial entities' choices of whether to voluntarily designate trades for central clearing. In particular, we evaluate the impact of the Uncleared Margin Rule (UMR) on the trading and clearing decisions of firms transacting in the multi-trillion-dollar cash-settled foreign exchange swap markets. The UMR is a multi-jurisdiction rule that requires certain market participants to exchange collateral (margin) for uncleared transactions between themselves. We specifically study the effect of this rule for non-deliverable forwards (NDF), since similar contracts (FX deliverable forwards and swaps) are exempt from the rule in the U.S. and hence constitute a useful control group. In a difference-in-differences setting, we find that, compared to FX forwards and swaps, the UMR resulted in firms choosing to clear a higher percentage of their swaps in the NDF market. To understand the forces behind firms' clearing decisions, we make use of a regulatory data set that includes the identities of the parties to each swap. We show that increased clearing is due almost exclusively to the actions of clearing members, suggesting that the differential cost of clearing for these entities plays a major role in clearing decisions. Consistent with theory, we find that the ability to net swap exposure at the central clearing party is one of the drivers behind the decision to clear.

The Impact of Monetary Policy on M&A Outcomes
Adra, Samer; Barbopoulos, Leonidas G.; Saunders, Anthony
Monetary policy influences a wide range of Mergers and Acquisitions (M&A) outcomes. First, an increase in the federal funds rate predicts a negative market reaction to M&A announcements, an increase in the likelihood of deal withdrawal, and significant financing challenges for the acquirer in the post-acquisition phase. Second, M&As announced during periods of high monetary policy uncertainty are associated with significant declines in acquirer value. This negative market reaction reflects a unique discount to compensate for the high riskiness of M&As in an uncertain monetary environment. Finally, we show that monetary contraction, rather than monetary policy uncertainty, is a key contributor to the decline in aggregate M&A activity.

The Influence of Policy Uncertainty on Exchange Rate Forecasting
Smales, Lee A.
Using the economic policy uncertainty (EPU) index of Baker et al. (2016), we examine the influence of EPU on the characteristics of USD/JPY exchange rate forecasts. Our sample period, which spans two decades, incorporates a range of economic and political conditions for the US and Japan. Consistent with higher EPU engendering a more complex information environment, our results clearly demonstrate that analyst forecast errors and forecast dispersion increase with EPU. US monetary policy uncertainty and Japanese trade policy uncertainty are particularly important in generating forecast dispersion. The empirical findings are consistent across forecast horizons ranging from 1 month to 1 year. This has important implications for market participants who use exchange rate forecasts when making business and investment decisions.
The Role of Real Options in the Takeover Premia in Mergers and Acquisitions
Barbopoulos, Leonidas G.; Cheng, Louis T. W.; Cheng, Yi; Marshall, Andrew P.
This paper applies a real option framework to suggest that the takeover premia in mergers and acquisitions can be influenced by (a) the pre-bid ownership of the target and (b) the real option characteristics of both acquirer and target firms. Our findings show that pre-bid ownership reduces the takeover premia, which is consistent with the argument that pre-bid ownership reduces information asymmetry. However, we find that the takeover premia are higher when both the acquirer and target firms exhibit real option capacity, as measured by positive risk-return sensitivity. As a result, an acquirer with real option capacity is willing to pay higher takeover premia for an option embedded in the target firm.
In the US, the predominant choice seems to be structuring the board so as to leave the decision to independent directors, a strategy that Italy has, on one hand, sought to enhance with the obligatory involvement of a minority appointed director but, on the other hand, has weakened by allowing the board to override a recommendation of the independent directors. Germany also relies on board structuring in that it requires supervisory board approval of RPTs, but compared to the use of independent directors, the cooperation between the two boards provides a basis for manager-friendly results one would expect only from a jurisdiction that openly promotes board empowerment. 4. The most far-reaching advance of the corporate purpose debate relates to a further structuring of the board so as to provide employee representatives with a voice, as known from German co-determination. Proposals to reallocate a proportion of the appointment rights from shareholders to employees have not found their way into legal reform in the US or the UK. Out of the governance strategies discussed in this chapter, it is only employee co-determination that calls for a basic governance structure which solely a two-tier board model can provide. The Valuation Effects of Investor Attention in Stock-Financed Acquisitions Limited investor attention allows overvalued companies to engage in stock-financed acquisitions of listed target firms without experiencing significant reductions in existing valuations. Our robust findings show that overvalued stock-paying acquirers that are subject to limited investor attention do not experience significant announcement period wealth losses. However, the overvaluation of these acquirers is corrected in the post-announcement period. By contrast, the overvalued acquirers that receive high investor attention and use stock as the payment method in their listed target acquisitions experience negative announcement period abnormal returns. The widely documented evidence that stock-financed acquisitions are associated with significant announcement period wealth losses is primarily driven by deals in which the acquirers are subject to high investor attention. The Value of Countercyclical Capital Requirements: Evidence from COVID-19 Koont, Naz,Walz, Stefan We evaluate the implications of relaxing the Supplementary Leverage Ratio during the COVID-19 market disruption for bank balance sheet composition and credit provision. To the best of our knowledge, we are the first to causally identify the effect of the SLR regulation change on bank level outcomes. We find that the relaxation may have eased Treasury market liquidity by allowing banks to hold modestly greater inventories of Treasuries, and further allowed for a significant expansion of traditional bank credit. Our findings suggest that this risk-invariant leverage ratio was binding for banks during COVID-19, weakly affected bank liquidity provision in Treasury markets, and strongly affected banks' portfolio composition across asset classes, amounting to a shift of banks' loan supply schedules. Thus, we highlight that countercyclical relaxation of uniform leverage constraints can increase bank credit provision during economic downturns. Given the binding nature of the SLR, the relaxation of this constraint may be more effective than other countercyclical measures in allowing banks to extend credit. Voice at Work Harju, Jarkko,Jäger, Simon,Schoefer, Benjamin We estimate the effects of worker voice on job quality and separations.
We leverage the 1991 introduction of worker representation on boards of Finnish firms with at least 150 employees. In contrast to exit-voice theory, our difference-in-differences design reveals no effects on voluntary job separations, and at most small positive effects on other measures of job quality (job security, health, subjective job quality, and wages). Worker voice slightly raised firm survival, productivity, and capital intensity. A 2008 introduction of shop-floor representation had similarly limited effects. Interviews and surveys indicate that worker representation facilitates information sharing rather than boosting labor's power. Why ROE, ROA, and Tobin's Q in Regressions Aren't Good Measures of Corporate Financial Performance for ESG Criteria Gregory, Richard Paul It is demonstrated theoretically why ROE, ROA, and Tobin's Q aren't good measures of corporate financial performance for ESG criteria. Because E, S, and G criteria affect productivity and debt costs, regressions of ROE, ROA, and Tobin's Q on E, S, and G criteria suffer from measurement error in the dependent variable that is correlated with the independent variables. This results in biased estimators and inflated standard errors.
Addition and Subtraction of Fractions

Complete the test and get an award.

1. Which of the following is correct for $\frac{3}{4}+\frac{7}{4}=$ ?
(a) $\frac{3+7}{4+4}=\frac{10}{8}=\frac{5}{4}$
(b) $\frac{3\cdot 4+7\cdot4}{4+4}=\frac{40}{8}=5$
(c) $\frac{3+7}{4}=\frac{10}{4}=\frac{5}{2}$
(d) None of these.

2. Peter and Mary bought a pizza divided into 16 pieces. Peter ate 5 pieces and Mary ate 4 pieces. How much of the pizza is left?
(a) $\frac{1}{3}$
(b) $\frac{8}{16}$
(c) $\frac{7}{16}$

3. What is the least common multiple (LCM) of 2, 3 and 6?

4. Which of the following is correct for $\frac{9}{2}+\frac{10}{5}=$ ?
(a) $\frac{5 \cdot 9+2 \cdot 10}{2 \cdot 5}=\frac{65}{10}=\frac{13}{2}$
(b) $\frac{9+10}{2\cdot 5}=\frac{19}{10}$
(c) $\frac{9+10}{2+5}=\frac{19}{7}$

5. Which of the following is correct?
$\frac{7}{3}+\frac{5}{3}-\frac{1}{3}=$
(a) $\frac{7+5-1}{3+3+3}=\frac{11}{9}$
(b) $\frac{7\cdot 3+5\cdot 3-1\cdot 3}{3+3-3}=\frac{33}{3}=11$
(c) $\frac{7+5-1}{3}=\frac{11}{3}$

6. $\frac{1}{2}+\frac{5}{3}+\frac{1}{5}=$
(a) $\frac{3 \cdot 5+2 \cdot 5 \cdot 5+2 \cdot 3}{2+3+5}=\frac{71}{10}$
(b) $\frac{3\cdot 5+2\cdot 5 \cdot 5+2 \cdot 3}{2\cdot 3 \cdot 5}=\frac{71}{30}$
(c) $\frac{1+5+1}{2 \cdot 3 \cdot 5}=\frac{7}{30}$

7. $\frac{5}{6}-\frac{1}{10}+\frac{2}{15}=$
(a) $LCM(6,10,15)=30\Longrightarrow \frac{(30\div 6)\cdot 5-(30\div 10)\cdot 1+(30\div 15)\cdot 2}{30}=\frac{5\cdot 5-3\cdot 1+2\cdot 2}{30}=\frac{26}{30}=\frac{13}{15}$
(b) $LCM(6,10,15)=30\Longrightarrow \frac{5-1+2}{30}=\frac{6}{30}=\frac{1}{5}$
(c) $LCM(6,10,15)=30\Longrightarrow \frac{\frac{5}{6}-\frac{1}{10}+\frac{2}{15}}{30}$

8. Bearing in mind that $LCM(8,10)=40$, how do we add the fractions $\frac{3}{10}+\frac{9}{8}=$ ?
(a) $\frac{3}{10} \cdot 40+\frac{9}{8} \cdot 40=3\cdot 4+9 \cdot 5=57$
(b) $\frac{3+9}{40}=\frac{12}{40}=\frac{3}{10}$
(c) $\frac{(40\div 10)\cdot3+(40\div 8)\cdot9}{40}=\frac{4\cdot3+5\cdot9}{40}=\frac{57}{40}$

9. Bearing in mind that $LCM(2,3)=6$ and $LCM(3,5)=15$, how do we solve this? $\frac{\frac{1}{2}+\frac{5}{3}}{\frac{2}{3}+\frac{1}{5}}=$
(a) $\frac{\frac{1}{2}+\frac{5}{3}}{\frac{2}{3}+\frac{1}{5}}=\frac{\frac{6}{5}}{\frac{3}{8}}=\frac{6\cdot 8}{5\cdot 3}=\frac{48}{15}=\frac{16}{5}$
(b) $\frac{\frac{1}{2}+\frac{5}{3}}{\frac{2}{3}+\frac{1}{5}}=\frac{\frac{1+5}{6}}{\frac{2+1}{15}}=\frac{\frac{6}{6}}{\frac{3}{15}}=\frac{1}{\frac{1}{5}}=5$
(c) $\frac{\frac{1}{2}+\frac{5}{3}}{\frac{2}{3}+\frac{1}{5}}=\frac{\frac{3+10}{6}}{\frac{10+3}{15}}=\frac{\frac{13}{6}}{\frac{13}{15}}=\frac{13\cdot 15}{13\cdot 6}= \frac{15}{6}=\frac{5}{2}$

10. Given $\frac{1}{12}+\frac{3}{25}-\frac{5}{18}$ and $LCM(12,25,18)=p$. Which of these is correct?
(a) $\frac{1}{12}+\frac{3}{25}-\frac{5}{18}=\frac{(p\cdot 12)+(p \cdot 25)-(p \cdot 18)}{p}$
(b) $\frac{1}{12}+\frac{3}{25}-\frac{5}{18} =\frac{(p\div 12)\cdot 1+(p\div 25)\cdot 3-(p\div 18)\cdot 5}{p}$
(c) $\frac{1}{12}+\frac{3}{25}-\frac{5}{18}=\frac{1+3-5}{p}=-\frac{1}{9}$
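The correct options above all rely on the same least common multiple rule, which can be restated in general form (a summary for reference, not one of the test's answer choices): for $L=LCM(b,d)$,

$\frac{a}{b}\pm\frac{c}{d}=\frac{(L\div b)\cdot a\pm(L\div d)\cdot c}{L}$

For instance, with $L=LCM(6,10,15)=30$,

$\frac{5}{6}-\frac{1}{10}+\frac{2}{15}=\frac{5\cdot 5-3\cdot 1+2\cdot 2}{30}=\frac{26}{30}=\frac{13}{15}$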
US20130069980A1 - Dynamically Cropping Images
Inventors: Beau R. Hartshorne, Nathaniel Gregory Roman
Assignee: Facebook, Inc.
Application US13/233,245, filed 2011-09-15

In one embodiment, in response to an action from a user, which results in an image to be displayed on a user device for the user: accessing information about the user and the image; cropping the image based at least on the information about the user and the image; and causing the cropped image to be displayed on the user device. This disclosure generally relates to image processing and more specifically relates to dynamically cropping images for display to users based on available information on the images, the users, the circumstances surrounding the display, and/or predefined policies. Cropping an image refers to the process of removing some parts of the image, usually for the purposes of improving framing, accentuating subject matter, or changing the aspect ratio of the image. Much image-processing software (e.g., Adobe Photoshop) provides an image-cropping capability, although most of it supports only manual cropping, in which case a user needs to manually select (e.g., using a mouse or a stylus) which portion of an image is to remain, and the software removes the unselected portions of the image. Some image-processing software also supports a basic auto-cropping capability. For example, the software is able to automatically remove white space along the edges of an image. In particular embodiments, in response to an action from a user, which results in an image to be displayed on a user device for the user: accessing information about the user and the image; cropping the image based at least on the information about the user and the image; and causing the cropped image to be displayed on the user device. These and other features, aspects, and advantages of the disclosure are described in more detail below in the detailed description and in conjunction with the following figures. FIG. 1 illustrates an example image of a relatively high resolution. FIG. 2 illustrates resizing the image illustrated in FIG. 1 in order to display it in a relatively small area. FIGS. 3-4 illustrate cropping the image illustrated in FIG. 1 in order to display it in a relatively small area.
FIG. 5 illustrates an example method for dynamically and automatically cropping an image. FIG. 6 illustrates an example computer system. FIG. 7 illustrates an example image containing an object. FIG. 8 illustrates cropping the image illustrated in FIG. 7 in order to display it in a relatively small area. FIG. 9 illustrates an example image with a largely featureless portion. FIG. 10 illustrates cropping the image illustrated in FIG. 9 in order to display it in a relatively small area. DESCRIPTION OF EXAMPLE EMBODIMENTS This disclosure is now described in detail with reference to a few embodiments thereof as illustrated in the accompanying drawings. In the following description, numerous specific details are set forth in order to provide a thorough understanding of this disclosure. However, this disclosure may be practiced without some or all of these specific details. In other instances, well known process steps and/or structures have not been described in detail in order not to unnecessarily obscure this disclosure. In addition, while the disclosure is described in conjunction with the particular embodiments, it should be understood that this description is not intended to limit the disclosure to the described embodiments. To the contrary, the description is intended to cover alternatives, modifications, and equivalents as may be included within the spirit and scope of the disclosure as defined by the appended claims. Sometimes, an image of a relatively high resolution and thus a relatively large size needs to be displayed within a relatively small area. For example, a relatively large image may, at times, need to be presented as a thumbnail or an icon. In practice, this often happens when the screen of the device on which the image is displayed is relatively small, such as the screen of a mobile device (e.g., mobile telephone, tablet computer, etc.), although the same need may also arise with a device (e.g., a desktop computer) having a relatively large screen. FIG. 1 illustrates an example image 100 of a relatively high resolution and thus having a relatively large size. For example, image 100 may be a digital photograph, and there are four people 111, 112, 113, 114 captured in image 100. Suppose that image 100 needs to be displayed within a relatively small area on the screen of a computing device (e.g., a mobile telephone or a tablet computer). That is, the area available for displaying image 100 is smaller, and sometimes, much smaller, than the actual size of image 100. One way to achieve this is to resize image 100 down so that it fits inside the display area, as illustrated in FIG. 2. In FIG. 2, a display area 200 is smaller than the original size of image 100. Thus, image 100 is downsized using an appropriate resizing algorithm to get a smaller version of image 100, image 100R, which can fit inside display area 200. However, downsizing a larger image so that it fits into a smaller display area has several drawbacks. For example, once the image is downsized, many details in the original image are lost, and it is hard to determine which people or objects are captured in the image. Moreover, the aspect ratio of the display area may differ from the aspect ratio of the image (e.g., as in the case illustrated in FIG. 2), and so parts of the display area may be left blank. Instead of resizing a larger image to fit it inside a smaller display area, particular embodiments may automatically crop the larger image to fit the image inside the smaller display area, as illustrated in FIG. 3. In FIG. 3, again, a display area 300 is smaller than the original size of image 100.
However, image 100 is cropped to obtain image 100C-1, which is the portion of image 100 that contains person 112. Image 100C-1 is displayed in area 300. In this case, because there is no downsizing of image 100, image 100C-1 retains all the details showing person 112 as contained in original image 100 so that person 112 is easily recognizable when cropped image 100C-1 is displayed in area 300. Moreover, image 100C-1 may be cropped to have the same aspect ratio as display area 300 so that it fills display area 300 completely. In particular embodiments, images are cropped dynamically. That is, the same image may be cropped differently when it is displayed at different times. In FIG. 3, cropped image 100C-1 contains person 112. On the other hand, in FIG. 4, image 100 is cropped differently to obtain image 100C-2, which is the portion of image 100 that contains person 113 and person 114. Image 100C-2 is displayed in an area 400. In particular embodiments, a cropped image may still need to be resized, up or down, in order to fit it inside a particular display area. For example, in FIG. 4, in order to fit both person 113 and person 114 inside display area 400, cropped image 100C-2 still needs to be downsized slightly. However, the amount of downsizing required to fit cropped image 100C-2 inside display area 400 is much less than the amount of downsizing required to fit original image 100 inside display area 400. Thus, not much detail is lost when downsizing cropped image 100C-2, and person 113 and person 114 are still easily recognizable when image 100C-2 is displayed in area 400. In particular embodiments, each time a specific image needs to be displayed in an area smaller than the original size of the image, the image may be cropped based on various factors and policies. Optionally, the cropped image may be resized, either up or down, so that it fits the display area better. FIG. 5 illustrates an example method for dynamically and automatically cropping an image. Suppose that an image is to be displayed to a user on a device associated with the user, as illustrated in STEP 501. This may be the result of a user action performed by the user on the user device. For example, the user may have requested to view the profile (e.g., social profile) of another person, and the image of that other person needs to be displayed as a part of the profile. The user may have conducted a search for images relating to a subject matter specified as a search query (e.g., images of the Golden Gate Bridge in San Francisco, or images of Angelina Jolie), and the image is one of the search results. The user may have clicked on a news article supplied to the user device through a news feed to review its content, and there is an image contained in the article or related to the news story. The present disclosure contemplates any applicable cause that results in an image being displayed to a user on a user device associated with the user. Particular embodiments may collect available information on the image, the user, the user device, and/or other relevant factors, as illustrated in STEP 503. Particular embodiments may also access predefined rules or policies that govern how the image should be cropped and/or resized. 
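The flow of STEPs 501 through 507 can be sketched concretely. The following minimal PHP sketch (the function and parameter names are illustrative assumptions, not the patent's actual API) selects a crop window of the display area's size centered on a point of interest, such as a tagged face, and clamps it to the image bounds:

<?php
// Hypothetical sketch: pick a crop window of the display's size,
// centered on a point of interest, clamped to the image bounds.
function computeCropWindow($imageW, $imageH, $displayW, $displayH,
                           $focusX, $focusY) {
  // Center the crop window on the point of interest.
  $left = (int)($focusX - $displayW / 2);
  $top  = (int)($focusY - $displayH / 2);
  // Clamp so the window stays inside the image.
  $left = max(0, min($left, $imageW - $displayW));
  $top  = max(0, min($top,  $imageH - $displayH));
  return array('left' => $left, 'top' => $top,
               'width' => $displayW, 'height' => $displayH);
}

// Example: a 3000x2000 photo shown in a 160x160 thumbnail area,
// focused on a face near (2200, 600):
$crop = computeCropWindow(3000, 2000, 160, 160, 2200, 600);
// => array('left' => 2120, 'top' => 520, 'width' => 160, 'height' => 160)

Because the window is taken at the display's own size, no detail is lost to downscaling; only the clamping step is needed to keep the window inside the photo.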
First, with respect to the image, particular embodiments may determine the original size or resolution of the image, metadata (e.g., tags) associated with the image (e.g., names of the people captured in the image, descriptions of the objects captured in the image, names of the places in the image, etc.), features (e.g., people, objects, places, etc.) captured in the image, the context in which the image is displayed (e.g., the image is displayed in connection with a profile of a person, a news item, an advertisement, etc.), the owner of the image and/or whether the owner is captured in the image, the profile of the owner of the image, the album, if any, to which the image belongs, and other relevant information about the image. For example, the image may be associated with various tags. If the image captures one or more people (e.g., image 100), the name of each person captured in the image may be tagged with the image or next to the person. If the image captures a specific location (e.g., the Golden Gate Bridge), the name of the location may be tagged with the image. If the image captures one or more objects, the description of each object captured in the image may be tagged with the image. Various image processing algorithms (e.g., facial recognition, object detection, etc.) may be applied to the image to extract the features (e.g., people, objects, places, etc.) captured in the image. For example, facial recognition algorithms may be applied to image 100 to determine the names (e.g., Jane, John, Michael, and Mary) of persons 111, 112, 113, and 114, respectively. The tags associated with image 100, if available, may also help determine the names of persons 111, 112, 113, and 114. As another example, FIG. 7 illustrates an example image 700 that contains an object 711 (e.g., a car). Object detection algorithms may be applied to image 700 to determine what object 711 is, and the tags associated with image 700, if available, may also help identify object 711. The image may be displayed in a specific context. For example, the image may be associated with a person whose profile the user has requested. The image may be associated with a news story from a news feed or a relationship story. The image may be a part of an online photo album the user wishes to view. The image may be associated with a social advertisement. The image may be one of the search results identified for a search query provided by the user (e.g., image search or browsing). Second, with respect to the user to whom the image is to be presented (i.e., the viewer of the image), particular embodiments may determine who the user is, the relationship between the user and the people, objects, or locations captured in the image or the owner of the image, the reason that the image is to be presented to the user, and other relevant information about the user. In particular embodiments, the user may be a member of a social-networking website. A social network, in general, is a social structure made up of entities, such as individuals or organizations, that are connected by one or more types of interdependency or relationships, such as friendship, kinship, common interest, financial exchange, dislike, or relationships of beliefs, knowledge, or prestige. In more recent years, social networks have taken advantage of the Internet. There are social-networking systems existing on the Internet in the form of social-networking websites. 
Such social-networking websites enable their members, who are commonly referred to as website users, to perform various social activities. For example, the social-networking website operated by Facebook, Inc. at www.facebook.com enables its users to communicate with their friends via emails, instant messages, or blog postings, organize social events, share photos, receive news of their friends or interesting events, play games, etc. For example, if the image captures one or more persons (e.g., image 100), some or all the people captured in the image may also be members of the same social-networking website to which the user belongs. There may be social connections or social relationships between the user and some or all the people captured in the image since they are all members of the same social-networking website. The connection between the user and one person captured in the image may differ from, or may be closer than, the connection between the user and another person captured in the image. Sometimes, a person may specify who is or is not authorized to view an image containing the person or belonging to the person. In this case, for each person captured in the image, particular embodiments may determine whether the user is authorized to view an image of that person. As another example, if the image captures one or more objects (e.g., image 700), the user may own, may be interested in, may be an expert on, or may wish to purchase one of the objects captured in the image. As a third example, if the image captures a place, the user may have been to the place, or may wish to visit the place. Third, with respect to the user device on which the image is to be displayed, particular embodiments may determine the size and aspect ratio of the display area in which the image is to be displayed, the size of the screen of the user device, the location on the screen where the image is to be displayed, and other relevant factors. In addition, in particular embodiments, there may be predefined image-processing rules or policies that help specify how an image should be cropped. The present disclosure contemplates any applicable image-processing rule or policy. For example, aesthetically speaking, it is usually desirable to place a subject matter (e.g., person, object, or place) of an image near the center of the image. In the case of image 700, object 711 is placed to the left side of image 700, while the right side of image 700 is left blank. Thus, from an aesthetic point of view, when cropping an image, a policy may indicate that it is desirable to remove large blank portions of the image. Accordingly, when cropping image 700, the blank right side of image 700 may be removed in order to move object 711 near the center of the cropped image, as illustrated in FIG. 8. As another example, FIG. 9 illustrates an image 900 where the bottom portion of the image contains some interesting landscaping features but the top portion of the image is mainly featureless sky. Based on what is generally considered good image composition, when cropping an image, a policy may indicate that it is desirable to remove large featureless portions of the image. Thus, when cropping image 900, it may be desirable to remove the mostly featureless top portion of image 900 in order to move the landscaping features near the center of the cropped image, as illustrated in FIG. 10. In addition, as illustrated in FIG. 10, the cropped image may be repositioned inside the display area so that it looks more aesthetically pleasing.
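A "remove featureless portions" policy of this kind might be expressed as a simple scan for low-variance border regions. The following sketch is one plausible form such a policy could take; the function name, the grayscale-row representation, and the variance threshold are assumptions made for illustration, not the patent's method:

<?php
// Hypothetical sketch: trim border rows whose pixel intensities barely
// vary (e.g., blank margins or featureless sky). $rows is a matrix of
// grayscale values; $threshold is an assumed tuning knob.
function trimFeaturelessMargins(array $rows, $threshold = 40.0) {
  $variance = function (array $vals) {
    $mean = array_sum($vals) / count($vals);
    $sq = 0.0;
    foreach ($vals as $v) { $sq += ($v - $mean) * ($v - $mean); }
    return $sq / count($vals);
  };
  $top = 0;
  $bottom = count($rows) - 1;
  // Advance past featureless rows at the top, then at the bottom.
  while ($top < $bottom && $variance($rows[$top]) < $threshold) { $top++; }
  while ($bottom > $top && $variance($rows[$bottom]) < $threshold) { $bottom--; }
  // Remaining rows [top, bottom] hold the visually interesting band.
  return array('top' => $top, 'bottom' => $bottom);
}

The same scan applied to columns would trim blank left or right margins, as in the image 700 example above.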
As another example, when cropping an image, a policy may indicate that the cropped area preferably contains one or more people. Thus, if there is any person captured in an image, the cropped area may focus on the person. If there is no person captured in an image, the cropped area may then focus on objects or places. Particular embodiments may dynamically crop the image based on the collected information and/or the predefined image-processing policies, as illustrated in STEP 505. Different information may be used differently when determining how an image should be cropped. For example, consider image 100, which captures four people. Suppose that the user has requested to view the profile of person 111. In this case, persons 112, 113, and 114 are probably of no interest to the user. Therefore, when cropping image 100, the cropped image may only contain the face of person 111, and persons 112, 113, and 114 are left out of the cropped image. Alternatively, suppose that the user is one of the persons captured in image 100 (e.g., person 112), and the user wishes to view an album of photographs taken at an event, which both the user and person 113 have attended. In this case, when cropping image 100, the cropped image may contain both person 112 (i.e., the user) and person 113. Alternatively, suppose that the user has requested a search for images of Mary (i.e., person 114). In this case, when cropping image 100 as one of the search results, the cropped image may contain only person 114. Alternatively, suppose that the user wishes to read a news story about person 112. In this case, when cropping image 100 to be displayed in connection with the news story, the cropped image may contain only person 112. As another example, suppose that person 113 has specified viewing rights for his images, and the user is not authorized to view images that contain person 113. If image 100 is to be displayed for the user, person 113 may be cropped out so that his face is not shown to the user. As another example, suppose that an image contains several objects, and the user is interested in one specific object in particular (e.g., the user has expressed a desire to purchase the object). When cropping such an image for the user, the object of interest to the user may be included in the cropped image, while the other objects may be left out. As another example, again consider image 100, which captures four people. Suppose that the user has social connections or relationships with persons 113 and 114 but does not know persons 111 and 112 (e.g., according to their profiles with the social-networking website). In this case, when cropping image 100 for the user, the cropped image may only contain persons 113 and 114, as persons 111 and 112 are strangers and thus of no interest to the user. As another example, the aspect ratio of the display area may help determine the aspect ratio of the cropped image. The size of the display area may help determine whether the cropped image needs to be resized up or down so that it may fit better inside the display area. If the cropped image is smaller than the display area, it may be resized up accordingly. Conversely, if the cropped image is larger than the display area, it may be resized down accordingly.
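The resize decision just described reduces to comparing the cropped region's dimensions with those of the display area; a brief sketch follows (the function name is an assumption for illustration):

<?php
// Hypothetical sketch: uniform scale factor that fits a cropped region
// inside a display area; > 1 resizes up, < 1 resizes down, 1 leaves it alone.
function fitScale($cropW, $cropH, $displayW, $displayH) {
  return min($displayW / $cropW, $displayH / $cropH);
}

// Example: a 480x320 cropped region shown in a 240x240 area is
// scaled down by 0.5; a 120x80 region is scaled up by 2.0.
$down = fitScale(480, 320, 240, 240); // 0.5
$up   = fitScale(120,  80, 240, 240); // 2.0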
As the above examples illustrate, information about the user (i.e., the viewer of the image), about the image itself, about the user device, and other types of relevant information may all help determine how an image should be cropped for a given user in a given context at a given time and location. The same image may be cropped differently for different users or in different contexts or different times and locations. In particular embodiments, the cropped image may contain subject matter (e.g., person, object, location, etc.) that is relevant to the given user in the given context. In addition, image-processing policies may help determine how an image should be cropped so that the cropped image appears more aesthetically pleasing. In particular embodiments, the image may be stored on a server and sent to the user device (i.e., the client) for display when needed. The cropping of the image may be performed on the server (i.e., before the image is sent to the user device) or on the user device (i.e., after the image is received at the user device). If the cropping is performed at server side, in one implementation, the server may determine how the image should be cropped, generate a new image that only contains the cropped area of the original image (i.e., the cropped image), and send the cropped image to the user device for display. In this case, a client application (e.g., a web browser) executing on the user device may simply display the entire cropped image received from the server, as illustrated in STEP 507. Alternatively, in another implementation, the server may determine how the image should be cropped, obtain the coordinates of the cropped area in reference to the original image (e.g., using X and Y coordinates and width and height), send the original image together with the coordinates of the cropped area (e.g., using Cascading Style Sheets (CSS) code) to the user device for display, as illustrated in STEP 507. In this case, the client application (e.g., a web browser) executing on the user device may interpret the coordinates of the cropped area in reference to the original image so that when the image is displayed to the user, only the cropped area is visible. If the cropping is performed at client side (i.e., by the user device), in one implementation, the server may send the original image to the user device. The user device may collect additional information from various information sources (e.g., the social-networking website, other servers, the Internet, the storage of the user device, etc.), and then determine how the image should be cropped. The user device may then display the cropped area of the image to the user, as illustrated in STEP 507. The dynamic cropping of an image may be implemented as computer software, and the code may be written in any suitable programming language (e.g., C, Java, server-side or client-side scripting language such as PHP or JavaScript, etc.). An example code, written in PHP, for dynamically cropping an image is illustrated in the Appendix. Particular embodiments may be implemented on one or more computer systems. FIG. 6 illustrates an example computer system 600. In particular embodiments, one or more computer systems 600 perform one or more steps of one or more methods described or illustrated herein. In particular embodiments, one or more computer systems 600 provide functionality described or illustrated herein. 
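Returning briefly to the client-side display option described above: when the server sends the original image together with the coordinates of the cropped area, the generated CSS might take a form like the following. This is a sketch only; the selector, markup, and helper function are invented for illustration and are not taken from the patent:

<?php
// Hypothetical sketch: emit CSS that displays only the cropped area of
// the full image, using a fixed-size viewport with hidden overflow and
// the image shifted so the crop window lands inside the viewport.
function croppedThumbCss($selector, $crop) {
  return sprintf(
    "%s { width: %dpx; height: %dpx; overflow: hidden; }\n" .
    "%s img { margin-left: -%dpx; margin-top: -%dpx; }",
    $selector, $crop['width'], $crop['height'],
    $selector, $crop['left'], $crop['top']);
}

echo croppedThumbCss('.thumb',
  array('left' => 2120, 'top' => 520, 'width' => 160, 'height' => 160));
// .thumb { width: 160px; height: 160px; overflow: hidden; }
// .thumb img { margin-left: -2120px; margin-top: -520px; }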
In particular embodiments, software running on one or more computer systems 600 performs one or more steps of one or more methods described or illustrated herein or provides functionality described or illustrated herein. Particular embodiments include one or more portions of one or more computer systems 600. This disclosure contemplates any suitable number of computer systems 600. This disclosure contemplates computer system 600 taking any suitable physical form. As an example and not by way of limitation, computer system 600 may be an embedded computer system, a system-on-chip (SOC), a single-board computer system (SBC) (such as, for example, a computer-on-module (COM) or system-on-module (SOM)), a desktop computer system, a laptop or notebook computer system, an interactive kiosk, a mainframe, a mesh of computer systems, a mobile telephone, a personal digital assistant (PDA), a server, or a combination of two or more of these. Where appropriate, computer system 600 may include one or more computer systems 600; be unitary or distributed; span multiple locations; span multiple machines; or reside in a cloud, which may include one or more cloud components in one or more networks. Where appropriate, one or more computer systems 600 may perform without substantial spatial or temporal limitation one or more steps of one or more methods described or illustrated herein. As an example and not by way of limitation, one or more computer systems 600 may perform in real time or in batch mode one or more steps of one or more methods described or illustrated herein. One or more computer systems 600 may perform at different times or at different locations one or more steps of one or more methods described or illustrated herein, where appropriate. In particular embodiments, computer system 600 includes a processor 602, memory 604, storage 606, an input/output (I/O) interface 608, a communication interface 610, and a bus 612. Although this disclosure describes and illustrates a particular computer system having a particular number of particular components in a particular arrangement, this disclosure contemplates any suitable computer system having any suitable number of any suitable components in any suitable arrangement. In particular embodiments, processor 602 includes hardware for executing instructions, such as those making up a computer program. As an example and not by way of limitation, to execute instructions, processor 602 may retrieve (or fetch) the instructions from an internal register, an internal cache, memory 604, or storage 606; decode and execute them; and then write one or more results to an internal register, an internal cache, memory 604, or storage 606. In particular embodiments, processor 602 may include one or more internal caches for data, instructions, or addresses. This disclosure contemplates processor 602 including any suitable number of any suitable internal caches, where appropriate. As an example and not by way of limitation, processor 602 may include one or more instruction caches, one or more data caches, and one or more translation lookaside buffers (TLBs). Instructions in the instruction caches may be copies of instructions in memory 604 or storage 606, and the instruction caches may speed up retrieval of those instructions by processor 602.
Data in the data caches may be copies of data in memory 604 or storage 606 for instructions executing at processor 602 to operate on; the results of previous instructions executed at processor 602 for access by subsequent instructions executing at processor 602 or for writing to memory 604 or storage 606; or other suitable data. The data caches may speed up read or write operations by processor 602. The TLBs may speed up virtual-address translation for processor 602. In particular embodiments, processor 602 may include one or more internal registers for data, instructions, or addresses. This disclosure contemplates processor 602 including any suitable number of any suitable internal registers, where appropriate. Where appropriate, processor 602 may include one or more arithmetic logic units (ALUs); be a multi-core processor; or include one or more processors 602. Although this disclosure describes and illustrates a particular processor, this disclosure contemplates any suitable processor. In particular embodiments, memory 604 includes main memory for storing instructions for processor 602 to execute or data for processor 602 to operate on. As an example and not by way of limitation, computer system 600 may load instructions from storage 606 or another source (such as, for example, another computer system 600) to memory 604. Processor 602 may then load the instructions from memory 604 to an internal register or internal cache. To execute the instructions, processor 602 may retrieve the instructions from the internal register or internal cache and decode them. During or after execution of the instructions, processor 602 may write one or more results (which may be intermediate or final results) to the internal register or internal cache. Processor 602 may then write one or more of those results to memory 604. In particular embodiments, processor 602 executes only instructions in one or more internal registers or internal caches or in memory 604 (as opposed to storage 606 or elsewhere) and operates only on data in one or more internal registers or internal caches or in memory 604 (as opposed to storage 606 or elsewhere). One or more memory buses (which may each include an address bus and a data bus) may couple processor 602 to memory 604. Bus 612 may include one or more memory buses, as described below. In particular embodiments, one or more memory management units (MMUs) reside between processor 602 and memory 604 and facilitate accesses to memory 604 requested by processor 602. In particular embodiments, memory 604 includes random access memory (RAM). This RAM may be volatile memory, where appropriate. Where appropriate, this RAM may be dynamic RAM (DRAM) or static RAM (SRAM). Moreover, where appropriate, this RAM may be single-ported or multi-ported RAM. This disclosure contemplates any suitable RAM. Memory 604 may include one or more memories 604, where appropriate. Although this disclosure describes and illustrates particular memory, this disclosure contemplates any suitable memory. In particular embodiments, storage 606 includes mass storage for data or instructions. As an example and not by way of limitation, storage 606 may include an HDD, a floppy disk drive, flash memory, an optical disc, a magneto-optical disc, magnetic tape, or a Universal Serial Bus (USB) drive or a combination of two or more of these. Storage 606 may include removable or non-removable (or fixed) media, where appropriate. Storage 606 may be internal or external to computer system 600, where appropriate. 
In particular embodiments, storage 606 is non-volatile, solid-state memory. In particular embodiments, storage 606 includes read-only memory (ROM). Where appropriate, this ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory or a combination of two or more of these. This disclosure contemplates mass storage 606 taking any suitable physical form. Storage 606 may include one or more storage control units facilitating communication between processor 602 and storage 606, where appropriate. Where appropriate, storage 606 may include one or more storages 606. Although this disclosure describes and illustrates particular storage, this disclosure contemplates any suitable storage. In particular embodiments, I/O interface 608 includes hardware, software, or both providing one or more interfaces for communication between computer system 600 and one or more I/O devices. Computer system 600 may include one or more of these I/O devices, where appropriate. One or more of these I/O devices may enable communication between a person and computer system 600. As an example and not by way of limitation, an I/O device may include a keyboard, keypad, microphone, monitor, mouse, printer, scanner, speaker, still camera, stylus, tablet, touch screen, trackball, video camera, another suitable I/O device or a combination of two or more of these. An I/O device may include one or more sensors. This disclosure contemplates any suitable I/O devices and any suitable I/O interfaces 608 for them. Where appropriate, I/O interface 608 may include one or more device or software drivers enabling processor 602 to drive one or more of these I/O devices. I/O interface 608 may include one or more I/O interfaces 608, where appropriate. Although this disclosure describes and illustrates a particular I/O interface, this disclosure contemplates any suitable I/O interface. In particular embodiments, communication interface 610 includes hardware, software, or both providing one or more interfaces for communication (such as, for example, packet-based communication) between computer system 600 and one or more other computer systems 600 or one or more networks. As an example and not by way of limitation, communication interface 610 may include a network interface controller (NIC) or network adapter for communicating with an Ethernet or other wire-based network or a wireless NIC (WNIC) or wireless adapter for communicating with a wireless network, such as a WI-FI network. This disclosure contemplates any suitable network and any suitable communication interface 610 for it. As an example and not by way of limitation, computer system 600 may communicate with an ad hoc network, a personal area network (PAN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), or one or more portions of the Internet or a combination of two or more of these. One or more portions of one or more of these networks may be wired or wireless. As an example, computer system 600 may communicate with a wireless PAN (WPAN) (such as, for example, a BLUETOOTH WPAN), a WI-FI network, a WI-MAX network, a cellular telephone network (such as, for example, a Global System for Mobile Communications (GSM) network), or other suitable wireless network or a combination of two or more of these. Computer system 600 may include any suitable communication interface 610 for any of these networks, where appropriate. 
Communication interface 610 may include one or more communication interfaces 610, where appropriate. Although this disclosure describes and illustrates a particular communication interface, this disclosure contemplates any suitable communication interface. In particular embodiments, bus 612 includes hardware, software, or both coupling components of computer system 600 to each other. As an example and not by way of limitation, bus 612 may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HYPERTRANSPORT (HT) interconnect, an Industry Standard Architecture (ISA) bus, an INFINIBAND interconnect, a low-pin-count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI Express (PCIe) bus, a serial advanced technology attachment (SATA) bus, a Video Electronics Standards Association local (VLB) bus, or another suitable bus or a combination of two or more of these. Bus 612 may include one or more buses 612, where appropriate. Although this disclosure describes and illustrates a particular bus, this disclosure contemplates any suitable bus or interconnect. Herein, reference to a computer-readable storage medium encompasses one or more non-transitory, tangible computer-readable storage media possessing structure. As an example and not by way of limitation, a computer-readable storage medium may include a semiconductor-based or other integrated circuit (IC) (such as, for example, a field-programmable gate array (FPGA) or an application-specific IC (ASIC)), a hard disk, an HDD, a hybrid hard drive (HHD), an optical disc, an optical disc drive (ODD), a magneto-optical disc, a magneto-optical drive, a floppy disk, a floppy disk drive (FDD), magnetic tape, a holographic storage medium, a solid-state drive (SSD), a RAM-drive, a SECURE DIGITAL card, a SECURE DIGITAL drive, or another suitable computer-readable storage medium or a combination of two or more of these, where appropriate. Herein, reference to a computer-readable storage medium excludes any medium that is not eligible for patent protection under 35 U.S.C. §101. Herein, reference to a computer-readable storage medium excludes transitory forms of signal transmission (such as a propagating electrical or electromagnetic signal per se) to the extent that they are not eligible for patent protection under 35 U.S.C. §101. A computer-readable non-transitory storage medium may be volatile, non-volatile, or a combination of volatile and non-volatile, where appropriate. This disclosure contemplates one or more computer-readable storage media implementing any suitable storage. In particular embodiments, a computer-readable storage medium implements one or more portions of processor 602 (such as, for example, one or more internal registers or caches), one or more portions of memory 604, one or more portions of storage 606, or a combination of these, where appropriate. In particular embodiments, a computer-readable storage medium implements RAM or ROM. In particular embodiments, a computer-readable storage medium implements volatile or persistent memory. In particular embodiments, one or more computer-readable storage media embody software. Herein, reference to software may encompass one or more applications, bytecode, one or more computer programs, one or more executables, one or more instructions, logic, machine code, one or more scripts, or source code, and vice versa, where appropriate.
In particular embodiments, software includes one or more application programming interfaces (APIs). This disclosure contemplates any suitable software written or otherwise expressed in any suitable programming language or combination of programming languages. In particular embodiments, software is expressed as source code or object code. In particular embodiments, software is expressed in a higher-level programming language, such as, for example, C, Perl, or a suitable extension thereof. In particular embodiments, software is expressed in a lower-level programming language, such as assembly language (or machine code). In particular embodiments, software is expressed in JAVA, C, or C++. In particular embodiments, software is expressed in Hyper Text Markup Language (HTML), Extensible Markup Language (XML), or other suitable markup language. Herein, "or" is inclusive and not exclusive, unless expressly indicated otherwise or indicated otherwise by context. Therefore, herein, "A or B" means "A, B, or both," unless expressly indicated otherwise or indicated otherwise by context. Moreover, "and" is both joint and several, unless expressly indicated otherwise or indicated otherwise by context. Therefore, herein, "A and B" means "A and B, jointly or severally," unless expressly indicated otherwise or indicated otherwise by context. This disclosure encompasses all changes, substitutions, variations, alterations, and modifications to the example embodiments herein that a person having ordinary skill in the art would comprehend. Similarly, where appropriate, the appended claims encompass all changes, substitutions, variations, alterations, and modifications to the example embodiments herein that a person having ordinary skill in the art would comprehend. Moreover, reference in the appended claims to an apparatus or system or a component of an apparatus or system being adapted to, arranged to, capable of, configured to, enabled to, operable to, or operative to perform a particular function encompasses that apparatus, system, component, whether or not it or that particular function is activated, turned on, or unlocked, as long as that apparatus, system, or component is so adapted, arranged, capable, configured, enabled, operable, or operative. // Copyright 2004-present Facebook. All Rights Reserved. 
class PhotoCrop {

  private $cropHeight, $cropWidth, $faceboxes, $photo, $tags, $thumbsize;

  /**
   * Constructor for the class
   * @param EntPhoto Photo to be cropped
   * @param string Constant thumbnail size
   * @param int Width of the cropping area
   * @param int Height of the cropping area
   */
  public function __construct($photo, $thumbsize,
                              $crop_width = PhotobarConsts::THUMB_WIDTH,
                              $crop_height = PhotobarConsts::THUMB_HEIGHT) {
    $this->photo = $photo;
    $this->thumbsize = $thumbsize;
    $this->cropWidth = $crop_width;
    $this->cropHeight = $crop_height;
  }

  public function setTags($tags) {
    $this->tags = $tags;
    return $this;
  }

  public function setFaceboxes($faceboxes) {
    $this->faceboxes = $faceboxes;
    return $this;
  }

  private function checkSubInterval($x1, $x2, $x3, $x4) {
    // Is interval [x1,x2] a proper sub interval of [x3,x4]?
    return $x1 >= $x3 && $x1 <= $x4 && $x2 >= $x3 && $x2 <= $x4;
  }

  private function computeBestOffset($face_pos) {
    $normal_size = $this->photo->getDimensionsVector2(PhotoSizeConst::NORMAL);
    $size = $this->photo->getDimensionsVector2($this->thumbsize);
    $invalid = (
      !$face_pos ||
      !$size->getWidth() ||
      !$normal_size->getWidth() ||
      !$size->getHeight() ||
      !$normal_size->getHeight()
    );
    if ($invalid) {
      return new Vector2(0, 0);
    }
    $offset_x = null;
    $offset_y = null;
    $scaling_factor_x = $size->getWidth() / $normal_size->getWidth();
    $scaling_factor_y = $size->getHeight() / $normal_size->getHeight();
    $possible_x = array();
    $possible_y = array();
    foreach ($face_pos as &$face) {
      // Transform the face dimensions from normal dimensions
      // to thumb size scale
      $face['left'] *= $scaling_factor_x;
      $face['right'] *= $scaling_factor_x;
      $face['top'] *= $scaling_factor_y;
      $face['bottom'] *= $scaling_factor_y;
      $possible_x[] = $face['left'];
      $possible_y[] = $face['top'];
    }
    unset($face);
    $best_region = null;
    $max_face_cnt = -1;
    // If we ignore the photo boundaries there will be an
    // optimal bounding rectangle which is along the current
    // top and left boundaries of faces
    foreach ($possible_x as $left) {
      foreach ($possible_y as $top) {
        $current_region = array();
        $current_region['left'] = $left;
        $current_region['top'] = $top;
        $current_region['right'] = $left + $this->cropWidth;
        $current_region['bottom'] = $top + $this->cropHeight;
        $current_face_cnt = 0;
        foreach ($face_pos as $face) {
          $x_overlap = $this->checkSubInterval(
            $face['left'], $face['right'],
            $current_region['left'], $current_region['right']);
          $y_overlap = $this->checkSubInterval(
            $face['top'], $face['bottom'],
            $current_region['top'], $current_region['bottom']);
          if ($x_overlap && $y_overlap) {
            $current_face_cnt++;
          }
        }
        if ($current_face_cnt > $max_face_cnt) {
          $max_face_cnt = $current_face_cnt;
          $best_region = $current_region;
        }
      }
    }
    // we can't be more than _play away from 0
    $x_play = $size->getWidth() - $this->cropWidth;
    $y_play = $size->getHeight() - $this->cropHeight;
    // center the faces
    $center_x = ($best_region['right'] - $best_region['left']) / 2 + $best_region['left'];
    $center_y = ($best_region['bottom'] - $best_region['top']) / 2 + $best_region['top'];
    $offset_x = min($x_play, round($center_x - $this->cropWidth / 2));
    $offset_y = min($y_play, round($center_y - $this->cropHeight / 2));
    $offset_x = max($offset_x, 0);
    $offset_y = max($offset_y, 0);
    return new Vector2(-$offset_x, -$offset_y);
  }

  /**
   * Get the position attribute for fb:photos:cropped-thumb
   * @param int The user to be focused; if this parameter
   *   is not passed or set to null, the position which focuses on
   *   the maximum number of people is returned
   * @return Vector2 with the given position
   */
  public function getBestPosition($focus_user = null) {
    $face_collection = array();
    $normal_size = $this->photo->getDimensionsVector2(PhotoSizeConst::NORMAL);
    // NOTE:
    // PhotoTagConstants::MAX_WIDTH stores the radius
    // instead of the actual width
    $tag_size = PhotoTagConstants::MAX_WIDTH;
    foreach ((array)$this->tags as $tag) {
      $current_face = array();
      $x = $tag->getX() / 100 * $normal_size->getWidth();
      $y = $tag->getY() / 100 * $normal_size->getHeight();
      $current_face['left'] = $x - $tag_size;
      $current_face['right'] = $x + $tag_size;
      $current_face['top'] = $y - $tag_size;
      $current_face['bottom'] = $y + $tag_size;
      if ($focus_user === $tag->getSubjectID()) {
        return $this->computeBestOffset(array($current_face));
      }
      $face_collection[] = $current_face;
    }
    foreach ((array)$this->faceboxes as $facebox) {
      $rect = $facebox->getRect();
      $current_face = array();
      $current_face['left'] = $rect->getLeft();
      $current_face['right'] = $rect->getRight();
      $current_face['top'] = $rect->getTop();
      $current_face['bottom'] = $rect->getBottom();
      $face_collection[] = $current_face;
    }
    return $this->computeBestOffset($face_collection);
  }
}

1. A method comprising: by one or more computing devices, in response to an action from a user, which results in an image to be displayed on a user device for the user: accessing information about the user and the image; cropping the image based at least on the information about the user and the image; and causing the cropped image to be displayed on the user device.

2. The method of claim 1, wherein the information about the user comprises an identity of the user, and social-networking information about the user.

3. The method of claim 1, wherein the information about the image comprises a resolution of the image, metadata associated with the image, a context in which the image is to be displayed, an owner of the image, one or more features captured in the image, and one or more relationships between the user and at least one feature captured in the image.

4. The method of claim 1, further comprising accessing information about the user device, wherein the image is cropped further based on the information about the user device.

5. The method of claim 4, wherein the information about the user device comprises a display area inside which the cropped image is to be displayed, and a location of the display area on the user device.

6. The method of claim 1, further comprising accessing one or more predefined policies, wherein the image is cropped further based on the one or more predefined policies.

7. The method of claim 1, wherein causing the cropped image to be displayed on the user device comprises: cropping the image to obtain the cropped image; and sending the cropped image to the user device to be displayed.

8. The method of claim 1, wherein causing the cropped image to be displayed on the user device comprises: specifying a cropped area within the image; and sending the image and the specification of the cropped area to the user device.

9. A system comprising: a memory comprising instructions executable by one or more processors; and the one or more processors coupled to the memory and operable to execute the instructions, the one or more processors being operable when executing the instructions to, in response to an action from a user, which results in an image to be displayed on a user device for the user: access information about the user and the image; crop the image based at least on the information about the user and the image; and cause the cropped image to be displayed on the user device.

10. The system of claim 9, wherein the information about the user comprises an identity of the user, and social-networking information about the user.
The system of claim 9, wherein the information about the image comprises a resolution of the image, metadata associated with the image, a context in which the image is to be displayed, an owner of the image, one or more features captured in the image, and one or more relationships between the user and at least one feature captured in the image.

12. The system of claim 9, wherein the one or more processors are further operable when executing the instructions to access information about the user device, wherein the image is cropped further based on the information about the user device.

13. The system of claim 12, wherein the information about the user device comprises a display area inside which the cropped image is to be displayed, and a location of the display area on the user device.

14. The system of claim 9, wherein the one or more processors are further operable when executing the instructions to access one or more predefined policies, wherein the image is cropped further based on the one or more predefined policies.

15. The system of claim 9, wherein causing the cropped image to be displayed on the user device comprises: crop the image to obtain the cropped image; and send the cropped image to the user device to be displayed.

16. The system of claim 9, wherein causing the cropped image to be displayed on the user device comprises: specify a cropped area within the image; and send the image and the specification of the cropped area to the user device.

17. One or more computer-readable non-transitory storage media embodying software operable when executed by one or more computer systems to: access information about the user and the image; crop the image based at least on the information about the user and the image; and cause the cropped image to be displayed on the user device.

18. The media of claim 17, wherein: the information about the user comprises an identity of the user, and social-networking information about the user; and the information about the image comprises a resolution of the image, metadata associated with the image, a context in which the image is to be displayed, an owner of the image, one or more features captured in the image, and one or more relationships between the user and at least one feature captured in the image.

19. The media of claim 17, wherein the software is further operable when executed by one or more computer systems to access information about the user device, wherein: the image is cropped further based on the information about the user device; and the information about the user device comprises a display area inside which the cropped image is to be displayed, and a location of the display area on the user device.

20. The media of claim 17, wherein the software is further operable when executed by one or more computer systems to access one or more predefined policies, wherein the image is cropped further based on the one or more predefined policies.

Application US13/233,245, filed 2011-09-15: Dynamically Cropping Images (publication US20130069980A1, abandoned); assignee Facebook, Inc.
A Note on Neutron Capture Correlation Signals, Backgrounds, and Efficiencies
N. S. Bowden,M. Sweany,S. Dazeley
Abstract: A wide variety of detection applications exploit the timing correlations that result from the slowing and eventual capture of neutrons. These include capture-gated neutron spectrometry, multiple neutron counting for fissile material detection and identification, and antineutrino detection. There are several distinct processes that result in correlated signals in these applications. Depending on the application, one class of correlated events can be a background that is difficult to distinguish from the class that is of interest. Furthermore, the correlation timing distribution depends on the neutron capture agent and detector geometry. Here, we explain the important characteristics of the neutron capture timing distribution, making reference to simulations and data from a number of detectors currently in use or under development. We point out several features that may assist in background discrimination, and that must be carefully accounted for if accurate detection efficiencies are to be quoted.
Large-scale Gadolinium-doped Water Cerenkov Detector for Non-Proliferation
M. Sweany,A. Bernstein,N. S. Bowden,S. Dazeley,G. Keefer,R. Svoboda,M. Tripathi
Abstract: Fission events from Special Nuclear Material (SNM), such as highly enriched uranium or plutonium, can produce simultaneous emission of multiple neutrons and high energy gamma-rays. The observation of time correlations between any of these particles is a significant indicator of the presence of fissionable material. Cosmogenic processes can also mimic these types of correlated signals. However, if the background is sufficiently low and fully characterized, significant changes in the correlated event rate in the presence of a target of interest constitutes a robust signature of the presence of SNM. Since fission emissions are isotropic, adequate sensitivity to these multiplicities requires a high efficiency detector with a large solid angle with respect to the target. Water Cerenkov detectors are a cost-effective choice when large solid angle coverage is required. In order to characterize the neutron detection performance of large-scale water Cerenkov detectors, we have designed and built a 3.5 kL water Cerenkov-based gamma-ray and neutron detector, and modeled the detector response in Geant4 [1]. We report the position-dependent neutron detection efficiency and energy response of the detector, as well as the basic characteristics of the simulation.
Study of wavelength-shifting chemicals for use in large-scale water Cherenkov detectors
M. Sweany,A. Bernstein,S. Dazeley,J. Dunmore,J. Felde,R. Svoboda,M. Tripathi
Abstract: Cherenkov detectors employ various methods to maximize light collection at the photomultiplier tubes (PMTs). These generally involve the use of highly reflective materials lining the interior of the detector, reflective materials around the PMTs, or wavelength-shifting sheets around the PMTs. Recently, the use of water-soluble wavelength-shifters has been explored to increase the measurable light yield of Cherenkov radiation in water. These wave-shifting chemicals are capable of absorbing light in the ultraviolet and re-emitting the light in a range detectable by PMTs.
Using a 250 L water Cherenkov detector, we have characterized the increase in light yield from three compounds in water: 4-Methylumbelliferone, Carbostyril-124, and Amino-G Salt. We report the gain in PMT response at a concentration of 1 ppm as: 1.88 $\pm$ 0.02 for 4-Methylumbelliferone, stable to within 0.5% over 50 days, 1.37 $\pm$ 0.03 for Carbostyril-124, and 1.20 $\pm$ 0.02 for Amino-G Salt. The response of 4-Methylumbelliferone was modeled, resulting in a simulated gain within 9% of the experimental gain at 1 ppm concentration. Finally, we report an increase in neutron detection performance of a large-scale (3.5 kL) gadolinium-doped water Cherenkov detector at a 4-Methylumbelliferone concentration of 1 ppm. NEST: A Comprehensive Model for Scintillation Yield in Liquid Xenon M. Szydagis,N. Barry,K. Kazkaz,J. Mock,D. Stolp,M. Sweany,M. Tripathi,S. Uvarov,N. Walsh,M. Woods Abstract: A comprehensive model for explaining scintillation yield in liquid xenon is introduced. We unify various definitions of work function which abound in the literature and incorporate all available data on electron recoil scintillation yield. This results in a better understanding of electron recoil, and facilitates an improved description of nuclear recoil. An incident gamma energy range of O(1 keV) to O(1 MeV) and electric fields between 0 and O(10 kV/cm) are incorporated into this heuristic model. We show results from a Geant4 implementation, but because the model has a few free parameters, implementation in any simulation package should be simple. We use a quasi-empirical approach, with an objective of improving detector calibrations and performance verification. The model will aid in the design and optimization of future detectors. This model is also easy to extend to other noble elements. In this paper we lay the foundation for an exhaustive simulation code which we call NEST (Noble Element Simulation Technique). A search for cosmogenic production of $β$-neutron emitting radionuclides in water S. Dazeley,M. Askins,M. Bergevin,A. Bernstein,N. S. Bowden,P. Jaffke,S. D. Rountree,T. M. Shokair,M. Sweany Abstract: Here we present the first results of WATCHBOY, a water Cherenkov detector designed to measure the yield of $\beta$-neutron emitting radionuclides produced by cosmic ray muons in water. In addition to the $\beta$-neutron measurement, we also provide a first look at isolating single-$\beta$ producing radionuclides following showering muons as a check of the detection capabilities of WATCHBOY. The data taken over $207$ live days indicates a $^{9}$Li production yield upper limit of $1.9\times10^{-7}\mu^{-1}g^{-1}\mathrm{cm}^2$ at $\sim400$ meters water equivalent (m.w.e.) overburden at the $90\%$ confidence level. In this work the $^{9}$Li signal in WATCHBOY was used as a proxy for the combined search for $^{9}$Li and $^{8}$He production. This result will provide a constraint on estimates of antineutrino-like backgrounds in future water-based antineutrino detectors. Towards energy resolution at the statistical limit from a negative ion time projection chamber Peter Sorensen,Mike Heffner,Adam Bernstein,Josh Renner,Melinda Sweany Abstract: We make a proof-of-principle demonstration that improved energy resolution can be obtained in a negative-ion time projection chamber, by individually counting each electron produced by ionizing radiation. The Physics and Nuclear Nonproliferation Goals of WATCHMAN: A WAter CHerenkov Monitor for ANtineutrinos M. Askins,M. Bergevin,A. Bernstein,S. Dazeley,S. T. Dye,T. 
Handler,A. Hatzikoutelis,D. Hellfeld,P. Jaffke,Y. Kamyshkov,B. J. Land,J. G. Learned,P. Marleau,C. Mauger,G. D. Orebi Gann,C. Roecker,S. D. Rountree,T. M. Shokair,M. B. Smy,R. Svoboda,M. Sweany,M. R. Vagins,K. A. van Bibber,R. B. Vogelaar,M. J. Wetstein,M. Yeh
Abstract: This article describes the physics and nonproliferation goals of WATCHMAN, the WAter CHerenkov Monitor for ANtineutrinos. The baseline WATCHMAN design is a kiloton scale gadolinium-doped (Gd) light water Cherenkov detector, placed 13 kilometers from a civil nuclear reactor in the United States. In its first deployment phase, WATCHMAN will be used to remotely detect a change in the operational status of the reactor, providing a first-ever demonstration of the potential of large Gd-doped water detectors for remote reactor monitoring for future international nuclear nonproliferation applications. During its first phase, the detector will provide a critical large-scale test of the ability to tag neutrons and thus distinguish low energy electron neutrinos and antineutrinos. This would make WATCHMAN the only detector capable of providing both direction and flavor identification of supernova neutrinos. It would also be the third largest supernova detector, and the largest underground in the western hemisphere. In a follow-on phase incorporating the IsoDAR neutrino beam, the detector would have world-class sensitivity to sterile neutrino signatures and to non-standard electroweak interactions (NSI). WATCHMAN will also be a major, U.S.-based integration platform for a host of technologies relevant for the Long-Baseline Neutrino Facility (LBNF) and other future large detectors. This white paper describes the WATCHMAN conceptual design, and presents the results of detailed simulations of sensitivity for the project's nonproliferation and physics goals. It also describes the advanced technologies to be used in WATCHMAN, including high quantum efficiency photomultipliers, Water-Based Liquid Scintillator (WbLS), picosecond light sensors such as the Large Area Picosecond Photo Detector (LAPPD), and advanced pattern recognition and particle identification methods.
Intraspecific Aflatoxin Inhibition in Aspergillus flavus Is Thigmoregulated, Independent of Vegetative Compatibility Group and Is Strain Dependent
Changwei Huang, Archana Jha, Rebecca Sweany, Catherine DeRobertis, Kenneth E. Damann
PLOS ONE, 2011, DOI: 10.1371/journal.pone.0023470
Abstract: Biological control of preharvest aflatoxin contamination by atoxigenic strains of Aspergillus flavus has been demonstrated in several crops. The assumption is that some form of competition suppresses the fungus's ability to infect or produce aflatoxin when challenged. Intraspecific aflatoxin inhibition was demonstrated by others. This work investigates the mechanistic basis of that phenomenon. A toxigenic and atoxigenic isolate of A. flavus which exhibited intraspecific aflatoxin inhibition when grown together in suspended disc culture were not inhibited when grown in a filter insert-plate well system separated by a 0.4 or 3 μm membrane. Toxigenic and atoxigenic conidial mixtures (50:50) placed on both sides of these filters restored inhibition. There was ~50% inhibition when a 12 μm pore size filter was used. Conidial and mycelial diameters were in the 3.5–7.0 μm range and could pass through the 12 μm filter. Larger pore sizes in the initially separated system restored aflatoxin inhibition. This suggests isolates must come into physical contact with one another.
This negates a role for nutrient competition or for soluble diffusible signals or antibiotics in aflatoxin inhibition. The toxigenic isolate was maximally sensitive to inhibition during the first 24 hrs of growth while the atoxigenic isolate was always inhibition competent. The atoxigenic isolate when grown with a green fluorescent protein (GFP) toxigenic isolate failed to inhibit aflatoxin, indicating that there is specificity in the touch inhibition. Several atoxigenic isolates were found which inhibited the GFP isolate. These results suggest that an unknown signaling pathway is initiated in the toxigenic isolate by physical interaction with an appropriate atoxigenic isolate in the first 24 hrs which prevents or down-regulates normal expression of aflatoxin after 3–5 days growth. We suspect thigmo-downregulation of aflatoxin synthesis is the mechanistic basis of intraspecific aflatoxin inhibition and the major contributor to biological control of aflatoxin contamination.
The LUX Prototype Detector: Heat Exchanger Development
D. S. Akerib,X. Bai,S. Bedikian,A. Bernstein,A. Bolozdynya,A. Bradley,S. Cahn,D. Carr,J. J. Chapman,K. Clark,T. Classen,A. Curioni,C. E. Dahl,S. Dazeley,L. deViveiros,M. Dragowsky,E. Druszkiewicz,S. Fiorucci,R. J. Gaitskell,C. Hall,C. Faham,B. Holbrook,L. Kastens,K. Kazkaz,J. Kwong,R. Lander,D. Leonard,D. Malling,R. Mannino,D. N. McKinsey,D. Mei,J. Mock,M. Morii,J. Nikkel,P. Phelps,T. Shutt,W. Skulski,P. Sorensen,J. Spaans,T. Steigler,R. Svoboda,M. Sweany,J. Thomson,M. Tripathi,N. Walsh,R. Webb,J. White,F. L. H. Wolfs,M. Woods,C. Zhang
Abstract: The LUX (Large Underground Xenon) detector is a two-phase xenon Time Projection Chamber (TPC) designed to search for WIMP-nucleon dark matter interactions. As with all noble element detectors, continuous purification of the detector medium is essential to produce a large ($>$1 ms) electron lifetime; this is necessary for efficient measurement of the electron signal which in turn is essential for achieving robust discrimination of signal from background events. In this paper we describe the development of a novel purification system deployed in a prototype detector. The results from the operation of this prototype indicated heat exchange with an efficiency above 94% up to a flow rate of 42 slpm, allowing for an electron drift length greater than 1 meter to be achieved in approximately two days and sustained for the duration of the testing period.
Status of the LUX Dark Matter Search
S. Fiorucci,D. S. Akerib,S. Bedikian,A. Bernstein,A. Bolozdynya,A. Bradley,D. Carr,J. Chapman,K. Clark,T. Classen,A. Curioni,E. Dahl,S. Dazeley,L. de Viveiros,E. Druszkiewicz,R. Gaitskell,C. Hall,C. Hernandez Faham,B. Holbrook,L. Kastens,K. Kazkaz,R. Lander,K. Lesko,D. Malling,R. Mannino,D. McKinsey,D. Mei,J. Mock,J. Nikkel,P. Phelps,U. Schroeder,T. Shutt,W. Skulski,P. Sorensen,J. Spaans,T. Stiegler,R. Svoboda,M. Sweany,J. Thomson,J. Toke,M. Tripathi,N. Walsh,R. Webb,J. White,F. Wolfs,M. Woods,C. Zhang
Physics, 2009, DOI: 10.1063/1.3327777
Abstract: The Large Underground Xenon (LUX) dark matter search experiment is currently being deployed at the Homestake Laboratory in South Dakota. We will highlight the main elements of design which make the experiment a very strong competitor in the field of direct detection, as well as an easily scalable concept. We will also present its potential reach for supersymmetric dark matter detection, within various timeframes ranging from 1 year to 5 years or more.
Spatial Memory and Taxis-Driven Pattern Formation in Model Ecosystems

Jonathan R. Potts (ORCID: orcid.org/0000-0002-8564-2904) & Mark A. Lewis

Bulletin of Mathematical Biology, volume 81, pages 2725–2747 (2019)

Abstract

Mathematical models of spatial population dynamics typically focus on the interplay between dispersal events and birth/death processes. However, for many animal communities, significant arrangement in space can occur on shorter timescales, where births and deaths are negligible. This phenomenon is particularly prevalent in populations of larger, vertebrate animals who often reproduce only once per year or less. To understand spatial arrangements of animal communities on such timescales, we use a class of diffusion–taxis equations for modelling inter-population movement responses between \(N \ge 2\) populations. These systems of equations incorporate the effect on animal movement of both the current presence of other populations and the memory of past presence encoded either in the environment or in the minds of animals. We give general criteria for the spontaneous formation of both stationary and oscillatory patterns, via linear pattern formation analysis. For \(N=2\), we classify completely the pattern formation properties using a combination of linear analysis and nonlinear energy functionals. In this case, the only patterns that can occur asymptotically in time are stationary. However, for \(N \ge 3\), oscillatory patterns can occur asymptotically, giving rise to a sequence of period-doubling bifurcations leading to patterns with no obvious regularity, a hallmark of chaos. Our study highlights the importance of understanding between-population animal movement for understanding spatial species distributions, something that is typically ignored in species distribution modelling, and so develops a new paradigm for spatial population dynamics.

Introduction

Mathematical modelling of spatial population dynamics has a long history of uncovering the mechanisms behind a variety of observed patterns, from predator–prey interactions (Pascual 1993; Lugo and McKane 2008; Sun et al. 2012) to biological invasions (Petrovskii et al. 2002; Hastings et al. 2005; Lewis et al. 2016) to inter-species competition (Hastings 1980; Durrett and Levin 1994; Girardin and Nadin 2015). These models typically start with a mathematical description of the birth and death processes and then add spatial aspects in the form of dispersal movements. Such movements are often assumed to be diffusive (Okubo and Levin 2013), but sometimes incorporate elements of taxis (Kareiva and Odell 1987; Lee et al. 2009; Potts and Petrovskii 2017). The resulting models are often systems of reaction–advection–diffusion (RAD) equations, which are amenable to pattern formation analysis via a number of established mathematical techniques (Murray 2003). An implicit assumption behind these RAD approaches is that the movement processes (advection and diffusion) take place on the same temporal scale as the birth and death processes (reaction). However, many organisms will undergo significant movement over much shorter timescales. For example, many larger animals (e.g. most birds, mammals, and reptiles) will reproduce only once per year, but may rearrange themselves in space quite considerably in the intervening period between natal events. These rearrangements can give rise to emergent phenomena such as the 'landscape of fear' (Laundré et al.
2010), aggregations of coexistent species (Murrell and Law 2003), territoriality (Potts and Lewis 2014), home ranges (Briscoe et al. 2002; Börger et al. 2008), and spatial segregation of interacting species (Shigesada et al. 1979). Indeed, the study of organism movements has led, in the past decade or two, to the emergence of a whole subfield of ecology, dubbed 'movement ecology' (Nathan et al. 2008; Nathan and Giuggioli 2013). This is gaining increasing attention by both statisticians (Hooten et al. 2017) and empirical ecologists (Kays et al. 2015; Hays et al. 2016), in part driven by recent rapid technological advances in biologging (Williams et al., in review). Often, a stated reason for studying movement is to gain insight into space-use patterns (Vanak et al. 2013; Avgar et al. 2015; Fleming et al. 2015; Avgar et al. 2016). Yet despite this, we lack a good understanding of the spatial pattern formation properties of animal movement models over timescales where birth and death effects are minimal. To help rectify this situation, we introduce here a class of models that focuses on one particular type of movement: taxis of a population in response to the current or recent presence of foreign populations. This covers several ideas within the ecological literature. One is the movement of a species away from areas where predator or competitor species reside, often dubbed the 'landscape of fear' (Laundré et al. 2010; Gallagher et al. 2017). The opposing phenomenon is that of predators moving towards prey, encapsulated in prey-taxis models (Kareiva and Odell 1987; Lee et al. 2009). Many species exhibit mutual avoidance, which can be either inter-species avoidance or intra-species avoidance. The latter gives rise to territoriality, and there is an established history of modelling efforts devoted to its study (Adams 2001; Lewis and Moorcroft 2006; Potts and Lewis 2014). Likewise, some species exhibit mutual attraction due to benefits of coexistence (Murrell and Law 2003; Kneitel and Chase 2004; Vanak et al. 2013). Since some of these phenomena are inter-specific and others are intra-specific, we use the word 'population' to mean a group of organisms that are all modelled using the same equation, noting once and for all that 'population' may be used to mean an entire species (for modelling inter-species interactions, e.g. the landscape of fear), or it may refer to a group within a single species (for intra-species interactions, e.g. territoriality). There are various processes by which one population can sense the presence of others. One is by directly sensing organism presence by sight or touch. However, it is perhaps more common for the presence of others to be advertised indirectly. This could either be due to marks left in the landscape, a process sometimes known as stigmergy (Giuggioli et al. 2013), or due to memory of past interactions (Fagan et al. 2013; Potts and Lewis 2016a). We show here that these three interaction processes (direct, stigmergic, memory) can all be subsumed under a single modelling framework. The resulting model is a system of N diffusion–taxis equations, one for each of N populations. We analyse this system using a combination of linear pattern formation analysis (Turing 1952), energy functionals (nonlinear), and numerical bifurcation analysis. We classify completely the pattern formation properties for \(N=2\), noting that here only stationary patterns can form. 
For \(N=3\), we show that, as well as there being parameter regimes where stationary patterns emerge, oscillatory patterns can emerge for certain parameter values, where patterns remain transient and never settle to a steady state. In these regimes, we observe both periodic behaviour and behaviour where the period is much less regular. These irregular regimes emerge through a sequence of period-doubling bifurcations, a phenomenon often associated with the emergence of chaos. The fact that inter-population taxis processes can give rise to perpetually changing, possibly chaotic, spatial patterns is a key insight into the study of species distributions. Researchers often look to explain such transient spatial patterns by examining changes in the underlying environment. However, we show that continually changing patterns can emerge without the need to impose any environmental effect. As such, our study highlights the importance of understanding inter-population movement responses for gaining a full understanding of the spatial distribution of ecological communities, and helps link movement ecology to population dynamics in a non-speculative way. The Modelling Framework Our general modelling framework considers N populations, each of which has a fixed overall size. For each population, the constituent individuals move in space through a combination of a diffusive process and a tendency to move towards more attractive areas and away from those that are less attractive. Denoting by \(u_i(\mathbf{x},t)\) the probability density function of population i at time t (\(i \in \{1,\dots ,N\}\)), and by \(A_i(\mathbf{x},t)\) the attractiveness of location \(\mathbf{x}\) to members of population i at time t, we construct the following movement model $$\begin{aligned} \frac{\partial u_i}{\partial t} = D_i\nabla ^2 u_i - c_{i} \nabla \cdot \left( u_i \nabla A_i \right) , \end{aligned}$$ where \(D_i>0\) is the magnitude of the diffusive movement of population i and \(c_i\ge 0\) is the magnitude of the drift tendency towards more attractive parts of the landscape. Here, we assume that the attractiveness of a point \(\mathbf{x}\) on the landscape at time t is determined by the presence of individuals from other populations. We look at three scenarios. For some organisms, particularly very small ones such as amoeba, there may be sufficiently many individuals constituting each population so that the probability density function is an accurate descriptor of the number of individuals present at each part of space. This is Scenario 1. In this case, the attractiveness of a part of space to population i may simply be proportional to the weighted sum of the probability density functions of all the other populations, or possibly a locally averaged probability density. In other words $$\begin{aligned} \mathbf{Scenario}\,\mathbf{1{:}}\,A_i(\mathbf{x},t) = \sum _{j\ne i} a_{ij} {\bar{u}}_j(\mathbf{x},t), \end{aligned}$$ where \(a_{ij}\) are constants, which can be either positive, if population i benefits from the presence of population j, or negative, if population i seeks to avoid population j, and $$\begin{aligned} {\bar{u}}_j(\mathbf{x},t) = \frac{1}{|C_\mathbf{x}|}\int _{C_\mathbf{x}}u_j(\mathbf{z},t)\mathrm{d}{} \mathbf{z}, \end{aligned}$$ where \(C_\mathbf{x}\) is a small neighbourhood of \(\mathbf{x}\) and \(|C_\mathbf{x}|\) is the Lebesgue measure of \(C_\mathbf{x}\). The importance of this spatial averaging will become apparent in Sect. 3. For larger organisms (e.g. 
mammals, birds, reptiles, etc.), individuals may be more spread out on the landscape. Here, the presence may be advertised by one of two processes (Scenarios 2 and 3). In Scenario 2, we model individuals as leaving marks on the landscape (e.g. urine, faeces, footprints etc.) to which individuals of the other populations respond. Denoting by \(p_i\) the presence of marks that are foreign to population i, we can model this using the following differential equation (cf. Lewis and Murray 1993; Lewis and Moorcroft 2006; Potts and Lewis 2016b) $$\begin{aligned} \frac{\partial p_i}{\partial t} = \sum _{j\ne i}\alpha _{ij} u_j-\mu p_i, \end{aligned}$$ where \(\mu >0\) and \(\alpha _{ij} \in {{\mathbb {R}}}\) are constants. If \(\alpha _{ij}>0\) (resp. \(\alpha _{ij}<0\)), then population i is attracted towards (resp. repelled away from) population j. In this scenario, we model \(A_i(\mathbf{x},t)\) as a spatial averaging of \(p_i(\mathbf{x},t)\) so that $$\begin{aligned} \mathbf{Scenario}\,\mathbf{2{:}}\,A_i(\mathbf{x},t) = {\bar{p}}_i(\mathbf{x},t), \end{aligned}$$ where \({\bar{p}}_i(\mathbf{x},t)\) is defined in an analogous way to \({\bar{u}}_j(\mathbf{x},t)\) in Eq. (3). Finally, Scenario 3 involves individuals remembering places where they have had recent encounters with individuals of another population, and moving in a manner consistent with a cognitive map. We assume here that individuals within a population are able to transmit information between themselves so that they all share common information regarding the expected presence of other populations, which we denote by \(k_i(\mathbf{x},t)\) for population i. This can be modelled as follows (cf. Potts and Lewis 2016a) $$\begin{aligned} \frac{\partial k_i}{\partial t} = \sum _{j\ne i}\beta _{ij} u_iu_j-(\zeta +\nu u_i) k_i, \end{aligned}$$ where \(\nu > 0, \zeta \ge 0\), and \(\beta _{ij}\in {{\mathbb {R}}}\) are constants. Here, \(\beta _{ij}\) refers to the tendency for animals from population i to remember a spatial location, given an interaction with an individual from population j, \(\zeta \) is the rate of memory decay, and \(\nu \) refers to the tendency for animals from population i to consider a location not part of j's range if individuals from i visit that location without observing an individual from j there. See Potts and Lewis (2016a) for more explanation of the motivation and justification for the functional form in Eq. (6), in the context of avoidance mechanisms. In this scenario, we model \(A_i(\mathbf{x},t)\) as a spatial averaging of \(k_i(\mathbf{x},t)\) so that $$\begin{aligned} \mathbf{Scenario}\,\mathbf{3{:}}\,A_i(\mathbf{x},t) = {\bar{k}}_i(\mathbf{x},t), \end{aligned}$$ where \({\bar{k}}_i(\mathbf{x},t)\) is defined in an analogous way to \({\bar{u}}_j(\mathbf{x},t)\) in Eq. (3). Note the similarity between Scenarios 2 and 3 and the idea of a "landscape of fear", which has become increasingly popular in the empirical literature (Laundré et al. 2010). The landscape of fear invokes the idea that there are certain parts of space that individuals in a population tend to avoid because they perceive those areas to have a higher risk of aggressive interactions (either due to predation or competition). The degree to which this danger is perceived across space creates a spatial distribution of fear, and animals may be modelled as advecting down the gradient of this distribution.
General Results in 1D Although our modelling framework can be defined in arbitrary dimensions, we will focus our analysis on the following 1D version of Eq. (1) $$\begin{aligned} \frac{\partial u_i}{\partial t} = D_i\frac{\partial ^2 u_i}{\partial x^2} - c_{i} \frac{\partial }{\partial x}\left( u_i \frac{\partial A_i}{\partial x} \right) . \end{aligned}$$ We also work on a line segment, so that \(x \in [0,L]\) for some \(L>0\). It is convenient for analysis to assume that, for Scenarios 2 and 3, the quantities \(p_i({x},t)\) and \(k_i({x},t)\) equilibrate much faster than \(u_i({x},t)\), so we can make the approximations \({\partial p_i}/{\partial t}\approx 0\) and \({\partial k_i}/{\partial t}\approx 0\). Making the further assumption that there is no memory decay (\(\zeta =0\) in Eq. 6), which turns out later to be convenient for unifying the three scenarios, we have the following approximate versions of Eqs. (5) and (7) $$\begin{aligned} \mathbf{Scenario}\,\mathbf{2{:}}\,A_i({x},t)&\approx \sum _{j\ne i}\frac{\alpha _{ij}}{\mu } {\bar{u}}_j({x},t), \end{aligned}$$ $$\begin{aligned} \mathbf{Scenario}\,\mathbf{3{:}}\,A_i({x},t)&\approx \sum _{j\ne i}\frac{\beta _{ij}}{\nu } {\bar{u}}_j({x},t). \end{aligned}$$ We non-dimensionalise our system by setting \({\tilde{u}}_i=Lu_i\), \({\tilde{x}}=x/L\), \({\tilde{t}}=tD_1/L^2\), \(d_i=D_i/D_1\) and $$\begin{aligned} \gamma _{ij}= {\left\{ \begin{array}{ll} \frac{c_ia_{ij}}{LD_1},&{}\quad \text{ in } \text{ Scenario } \text{1 }, \\ \frac{c_i\alpha _{ij}}{\mu LD_1},&{}\quad \text{ in } \text{ Scenario } \text{2 }, \\ \frac{c_i\beta _{ij}}{\nu LD_1},&{}\quad \text{ in } \text{ Scenario } \text{3 }. \end{array}\right. } \end{aligned}$$ Then, dropping the tildes over \({\tilde{u}}_i\), \({\tilde{x}}\), and \({\tilde{t}}\) for notational convenience, we obtain the following non-dimensional model for space use $$\begin{aligned} \frac{\partial u_i}{\partial t}&= d_i\frac{\partial ^2 u_i}{\partial x^2} - \frac{\partial }{\partial x}\left( u_i \sum _{j\ne i}\gamma _{ij}\frac{\partial {\bar{u}}_j}{\partial x} \right) , \end{aligned}$$ where \(d_1=1\), by definition. For simplicity, we assume that boundary conditions are periodic, so that $$\begin{aligned} u_i(0,t)=u_i(1,t). \end{aligned}$$ With this identification in place, we can define the 1D spatial averaging kernel from Eq. (3) to be \(C_x=\{z \in [0,1] | (x-\delta ) (\text{ mod } 1)<z<(x+\delta ) (\text{ mod } 1)\}\) for \(0<\delta \ll 1\). Here, \(z(\text{ mod } 1)\) is used so as to account for the periodic boundary conditions and is defined to be the unique real number \(z' \in [0,1)\) such that \(z-z' \in {\mathbb {Z}}\). Then, Eq. (3) becomes $$\begin{aligned} {\bar{u}}_j = \frac{1}{2\delta }\int _{(x-\delta )(\text{ mod } 1)}^{(x+\delta )(\text{ mod } 1)} u_j(z,t)\mathrm{d}z. \end{aligned}$$ Finally, since \(u_i(x,t)\) are probability density functions of x, defined on the interval \(x \in [0,1]\), we also have the integral condition $$\begin{aligned} \int _0^1 u_i(x,t)\mathrm{d}x=1. \end{aligned}$$ This condition means that we have a unique spatially homogeneous steady state, given by \(u_i^*(x)=1\) for all \(i \in \{1,\dots ,N\}, x \in [0,1]\). Our first task for analysis is to see whether this steady state is unstable to non-constant perturbations. 
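On a uniform periodic grid, the average in Eq. (14) reduces to a circular moving average. A minimal numerical sketch (our own code, not the authors'; assuming numpy, and the function name is ours):

import numpy as np

def local_average(u, delta, h):
    # bar{u} of Eq. (14): circular moving average of u over (x - delta, x + delta).
    # u samples a density on a uniform grid over [0, 1) with spacing h;
    # the periodic boundary conditions of Eq. (13) are handled by np.roll.
    half = max(1, int(round(delta / h)))
    return sum(np.roll(u, s) for s in range(-half, half + 1)) / (2 * half + 1)

# e.g. a step profile gets smoothed at its edges:
u = np.ones(100)
u[40:60] = 2.0
print(local_average(u, delta=0.1, h=0.01)[35:45])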
We set \(\mathbf{w}(x,t)=(u_1-1,\dots ,u_N-1)^\mathrm{T}=(u_{1}^{(0)},\dots ,u_{N}^{(0)})^\mathrm{T}\exp (\sigma t + \mathrm{i}\kappa x)\), where \(u_1^{(0)},\dots ,u_N^{(0)}\) and \(\sigma ,\kappa \) are constants, and the superscript T denotes matrix transpose. By neglecting nonlinear terms, Eq. (12) becomes $$\begin{aligned} \sigma \mathbf{w}=\kappa ^2 M(\kappa ,\delta )\mathbf{w}, \end{aligned}$$ where \(M(\kappa ,\delta )=[M_{ij}(\kappa ,\delta )]_{i,j}\) is a matrix with $$\begin{aligned} M_{ij}(\kappa ,\delta )={\left\{ \begin{array}{ll} -d_i, &{}\quad \text{ if } i=j, \\ \gamma _{ij}\text{ sinc }(\kappa \delta ), &{}\quad \text{ otherwise, } \end{array}\right. } \end{aligned}$$ where \(\text{ sinc }(\xi )=\sin (\xi )/\xi \). Therefore, patterns form whenever there is some \(\kappa \) such that there is an eigenvalue of \(M(\kappa ,\delta )\) with positive real part. It is instructive to examine the limit case \(\delta \rightarrow 0\). Here $$\begin{aligned} M_{ij}(\kappa ,0)={\left\{ \begin{array}{ll} -d_i, &{}\quad \hbox { if}\ i=j \\ \gamma _{ij}, &{}\quad \text{ otherwise, } \end{array}\right. } \end{aligned}$$ so \(M_{ij}(\kappa ,0)\) is, in fact, independent of \(\kappa \), and so we define the constant matrix \(M_0=[M_{ij}(\kappa ,0)]_{i,j}\). When \(\delta \rightarrow 0\), there are two cases pertinent to pattern formation: (1) All the eigenvalues of \(M_0\) have negative real part, in which case no patterns form. (2) At least one eigenvalue of \(M_0\) has positive real part, in which case the dominant eigenvalue of \(\kappa ^2 M_0\) is an increasing function of \(\kappa \). Therefore, patterns can form at arbitrarily high wavenumbers. In other words, the pattern formation problem is ill-posed. The problem posed by point (2) above can often be circumvented by using a strictly positive \(\delta \). For example, Fig. 1 shows the dispersion relation (plotting the dominant eigenvalue against \(\kappa \)) for a simple case where \(N=2\), \(d_i=1\), \(\gamma _{ij}=-5\) for all i, j, and \(\delta \) is varied. In this example, the dominant eigenvalue is real for all \(\kappa \). We see that, for \(\delta \rightarrow 0\), the dispersion relation is monotonically increasing. However, a strictly positive \(\delta \) means the eigenvalues are \(\kappa ^2[-2 \pm 10\,\text{ sinc }(\kappa \delta )]/2\), which is asymptotically \(\sigma \approx -\kappa ^2\) as \(\kappa \rightarrow \infty \). Hence, the dominant eigenvalue is positive only for a finite range of \(\kappa \) values, as long as \(\delta >0\). Example dispersion relations. Here we give dispersion relations for the system described by Eq. (12) with \(N=2\), \(d_i=1\), and \(\gamma _{ij}=-5\) for all i, j. In the left-hand panel, we examine three values of \(\delta \), showing that, for \(\delta \rightarrow 0\), the dispersion relation is monotonic, but this monotonicity is tamed by setting \(\delta >0\). In the right-hand panel, we extend the dispersion relation plot for \(\delta =0.1\) to a larger range of \(\kappa \) values, together with the analytically derived asymptotic trend The fact that the pattern formation problem is ill-posed for \(\delta \rightarrow 0\) suggests that classical solutions may not exist in this case. This phenomenon is not new and has been observed in very similar systems studied by Briscoe et al. (2002), Potts and Lewis (2016a, b).
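The dispersion relation is cheap to explore numerically. The sketch below (our own code, not the paper's; parameters are those of the Fig. 1 example) assembles \(M(\kappa ,\delta )\) from Eq. (17) and scans the dominant growth rate, recovering the behaviour just described: unbounded growth as \(\delta \rightarrow 0\), and a finite band of unstable wavenumbers for \(\delta > 0\).

import numpy as np

def growth_rate(kappa, delta, d, gamma):
    # Largest real part of the eigenvalues of kappa^2 * M(kappa, delta), Eq. (17).
    # d: length-N array of diffusion ratios d_i (d[0] = 1);
    # gamma: N x N array of gamma_ij (its diagonal is overwritten by -d_i).
    sinc = np.sinc(kappa * delta / np.pi)        # np.sinc(x) = sin(pi x)/(pi x)
    M = gamma * sinc
    M[np.diag_indices(len(d))] = -d
    return (kappa**2 * np.linalg.eigvals(M)).real.max()

# Fig. 1 example: N = 2, d_i = 1, gamma_ij = -5
d = np.array([1.0, 1.0])
gamma = np.full((2, 2), -5.0)
kappas = np.linspace(0.1, 200, 2000)
for delta in [1e-6, 0.05, 0.1]:
    sigma = np.array([growth_rate(k, delta, d, gamma) for k in kappas])
    print(delta, sigma.max(), (sigma > 0).sum())  # the unstable band shrinks as delta grows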
More generally, there are various studies that deal with regularisation of such ill-posed problems in slightly different contexts using other techniques, which incorporate existence proofs (e.g. Padrón 1998, 2004). We therefore conjecture that classical solutions do exist for the system given by Eq. (12) in the case where \(\delta >0\), and the numerics detailed in this paper give evidence to support this. However, we do not prove this conjecture here, since it is a highly non-trivial question in general, and the purpose of this paper is just to introduce the model structure and investigate possible types of patterns that could arise. Nonetheless, it is an important subject for future research. In the next two sections, we will examine specific cases where \(N=2\) and \(N=3\). The Case of Two Interacting Populations (\(N=2\)) When \(N=2\), the system given by Eqs. (12, 14, 15) is simple enough to categorise its linear pattern formation properties in full. Here $$\begin{aligned} M(\kappa ,\delta )=\left( \begin{array}{cc} -1 &{}\quad \gamma _{12}\text{ sinc }(\kappa \delta ) \\ \gamma _{21}\text{ sinc }(\kappa \delta ) &{}\quad -d_2 \end{array} \right) . \end{aligned}$$ The eigenvalues of \(M(\kappa ,\delta )\) are therefore $$\begin{aligned} \sigma (\kappa ) = \frac{-(1+d_2)\pm \sqrt{(1+d_2)^2+4[\gamma _{12}\gamma _{21}\text{ sinc }^2(\kappa \delta )-d_2]}}{2}. \end{aligned}$$ Notice first that if \(\sigma (\kappa )\) is not real, then the real part is \(\text{ Re }[\sigma (\kappa )]=-(1+d_2)/2\), which is always negative, since \(d_2>0\). Hence, patterns can only form when \(\sigma (\kappa )\in {{\mathbb {R}}}\), meaning that the discriminant, \(\varDelta =(1+d_2)^2+4[\gamma _{12}\gamma _{21}\text{ sinc }^2(\kappa \delta )-d_2]\), must be positive. In addition, \(\sigma (\kappa )>0\) only when \(\varDelta >(1+d_2)^2\). This occurs whenever \(\gamma _{12}\gamma _{21}\text{ sinc }^2(\kappa \delta ) > d_2\). Since the maximum value of \(\text{ sinc }^2(\kappa \delta )\) is 1, which is achieved at \(\kappa =0\), we arrive at the following necessary criterion for pattern formation, which is also sufficient if we either drop the boundary conditions or take the \(\delta \rightarrow 0\) limit $$\begin{aligned} \gamma _{12}\gamma _{21} > d_2. \end{aligned}$$ Furthermore, any patterns that do form are stationary patterns, since the eigenvalues are always real if their real part is positive. There are three distinct biologically relevant situations, which correspond to different values of \(\gamma _{12}\) and \(\gamma _{21}\), as follows Mutual avoidance: \(\gamma _{12},\gamma _{21}<0\) Mutual attraction: \(\gamma _{12},\gamma _{21}>0\) Pursue-and-avoid: \(\gamma _{12}<0<\gamma _{21}\) or \(\gamma _{21}<0<\gamma _{12}\) There are also the edge cases where \(\gamma _{12}=0\) or \(\gamma _{21}=0\), which we will not focus on. Notice that the 'pursue-and-avoid' case cannot lead to the emergence of patterns (Fig. 2e), as it is inconsistent with the inequality in (21). However, the other two situations can. Mutual avoidance leads to spatial segregation if Inequality (21) is satisfied (Fig. 2c). Some previous models of territory formation in animal populations by the present authors have a very similar form to the mutual avoidance model here, so we refer to Potts and Lewis (2016a, b) for details of this situation. 
Mutual attraction leads to aggregation of both populations in a particular part of space, whose width roughly corresponds to the width of the spatial averaging kernel, \((x-\delta ,x+\delta )\) (Fig. 2a), as long as Inequality (21) is satisfied. The characterisation of between-population movement responses into 'mutual avoidance', 'mutual attraction', and 'pursue-and-avoid' enables us to categorise examples of the system in Eqs. (12, 14, 15) by means of a simple schematic diagram. We construct one node for each population, ensuring that no three distinct nodes are in a straight line. Then, an arrow is added from node i to node j if \(\gamma _{ij}>0\). If \(\gamma _{ij}<0\), an arrow is added from node i in the direction anti-parallel to the line from node i to node j. These diagrams allow us to see quickly the qualitative relationship between the populations (see Fig. 2b,d,f for the \(N=2\) case and Fig. 4b for some examples in the \(N=3\) case). Dynamics for a two-species system. Here, there are three cases: mutual attraction, mutual avoidance, and pursue-and-avoid. a The steady state of a model of mutual attraction, with \(\gamma _{12}=\gamma _{21}=2\), and \(\delta =0.1\), with a schematic of this situation in b. c The steady state of a mutual avoidance model with \(\gamma _{12}=\gamma _{21}=-2\), and \(\delta =0.1\), with corresponding schematic in d. e The steady state of a pursue-and-avoid model (where patterns never form) with \(\gamma _{12}=2, \gamma _{21}=-2\) and \(\delta =0.1\), with corresponding schematic in f An Energy Functional Approach to Analysing Patterns We can gain qualitative understanding of the patterns observed in Fig. 2a–d via use of an energy functional approach, by assuming \(\gamma _{1,2}=\gamma _{2,1}=\gamma \), and \(d_2=1\). In particular, this approach gives a mathematical explanation for the appearance of aggregation patterns when \(\gamma >0\) and segregation patterns when \(\gamma <0\). The results rely on the assumption that, for all i, \(u_i(x,0)> 0\) implies \(u_i(x,t)> 0\) for all t, which can be shown by the application of a comparison theorem to Eqs. (8, 13), assuming \({\partial }A_i(x)/{\partial }x\) is bounded. Throughout this section, our spatial coordinates will be defined on the quotient space \([0,1]/\{0,1\}\), which is consistent with our use of periodic boundary conditions. Our method makes use of the following formulation of Eq. (12) $$\begin{aligned} \frac{\partial u_i}{\partial t}&= \frac{\partial }{\partial x}\left[ u_i\frac{\partial }{\partial x}\left( d_i\ln (u_i) -\sum _{j\ne i}\gamma _{ij}{{\mathcal {K}}}*u_j \right) \right] , \end{aligned}$$ and also the energy functional $$\begin{aligned} E(u_1,u_2)=\int _0^1\{u_1[2\ln (u_1)-\gamma {\mathcal K} *u_2]+u_2[2\ln (u_2)-\gamma {{\mathcal {K}}} *u_1]\}\mathrm{d}x, \end{aligned}$$ where \({{\mathcal {K}}}(x)\) is a bounded function (i.e. \(\left||{{\mathcal {K}}}\right||_{\infty } < \infty \)), symmetric about \(x=0\) on the domain \([0,1]/\{0,1\}\), with \(\left||{\mathcal K}\right||_1=1\), and \(*\) denotes the following spatial convolution $$\begin{aligned} {{\mathcal {K}}} *u_i(x,t) = \int _0^1 {\mathcal K}(x-y) u_i(y,t)\mathrm{d}y. \end{aligned}$$ In our situation, Eq. (14) implies that \({\mathcal K}(x)=1/(2\delta )\) for \(-\delta< x < \delta (\text{ mod } 1)\) and \({{\mathcal {K}}}(x)=0\) for \(\delta \le x \le 1-\delta \). We consider solutions \(u_1(x,t)\) and \(u_2(x,t)\) that are continuous functions of x and t. We show that the energy functional from Eq. 
(23) decreases over time to a minimum, which represents the steady-state solution of the system. The monotonic decrease of E over time is shown as follows $$\begin{aligned} \frac{\partial E}{\partial t}&= \int _0^1\left\{ \frac{\partial u_1}{\partial t}[2\ln (u_1)-\gamma {{\mathcal {K}}} *u_2]+\frac{\partial u_2}{\partial t}[2\ln (u_2)-\gamma {{\mathcal {K}}} *u_1]\right\} \mathrm{d}x \nonumber \\&\quad +\int _0^1 \left[ 2 \frac{\partial u_1}{\partial t}+2 \frac{\partial u_2}{\partial t}-\gamma u_1 {{\mathcal {K}}} *\frac{\partial u_2}{\partial t}-\gamma u_2 {{\mathcal {K}}} *\frac{\partial u_1}{\partial t}\right] \mathrm{d}x\nonumber \\&=\int _0^1\left\{ 2 \frac{\partial u_1}{\partial t}+2 \frac{\partial u_2}{\partial t}+\frac{\partial u_1}{\partial t}[2\ln (u_1)-2\gamma {{\mathcal {K}}}*u_2]+\frac{\partial u_2}{\partial t}[2\ln (u_2)-2\gamma {{\mathcal {K}}}*u_1]\right\} \mathrm{d}x \nonumber \\&=2\int _0^1\biggl \{\frac{\partial }{\partial x}\left[ u_1\frac{\partial }{\partial x}\left( \ln (u_1) -\gamma {{\mathcal {K}}}*u_2\right) \right] [1+\ln (u_1)-\gamma {{\mathcal {K}}}*u_2]\nonumber \\&\quad +\frac{\partial }{\partial x}\left[ u_2\frac{\partial }{\partial x}\left( \ln (u_2) -\gamma {{\mathcal {K}}}*u_1\right) \right] [1+\ln (u_2)-\gamma {{\mathcal {K}}}*u_1]\biggr \}\mathrm{d}x \nonumber \\&=2\biggl [u_1\frac{\partial }{\partial x}(\ln (u_1)-\gamma {{\mathcal {K}}}*u_2)(1+\ln (u_1)-\gamma {{\mathcal {K}}}*u_2)\nonumber \\&\quad +u_2\frac{\partial }{\partial x}(\ln (u_2)-\gamma {{\mathcal {K}}}*u_1)(1+\ln (u_2)-\gamma {{\mathcal {K}}}*u_1)\biggr ]^1_0\nonumber \\&\quad -2 \int _0^1\biggl \{\left[ u_1\frac{\partial }{\partial x}(\ln (u_1)-\gamma {{\mathcal {K}}}*u_2)\right] \frac{\partial }{\partial x}(\ln (u_1)-\gamma {{\mathcal {K}}}*u_2)\nonumber \\&\quad +\left[ u_2\frac{\partial }{\partial x}(\ln (u_2)-\gamma {{\mathcal {K}}}*u_1)\right] \frac{\partial }{\partial x}(\ln (u_2)-\gamma {{\mathcal {K}}}*u_1)\biggr \}\mathrm{d}x \nonumber \\&=-2 \int _0^1\biggl \{\left[ u_1\frac{\partial }{\partial x}(\ln (u_1)-\gamma {{\mathcal {K}}}*u_2)\right] \frac{\partial }{\partial x}(\ln (u_1)-\gamma {{\mathcal {K}}}*u_2)\nonumber \\&\quad +\left[ u_2\frac{\partial }{\partial x}(\ln (u_2)-\gamma {{\mathcal {K}}}*u_1)\right] \frac{\partial }{\partial x}(\ln (u_2)-\gamma {{\mathcal {K}}}*u_1)\biggr \}\mathrm{d}x \nonumber \\&=-2 \int _0^1\left\{ u_1\left[ \frac{\partial }{\partial x}(\ln (u_1)-\gamma {{\mathcal {K}}}*u_2)\right] ^2+u_2\left[ \frac{\partial }{\partial x}(\ln (u_2)-\gamma {{\mathcal {K}}}*u_1)\right] ^2\right\} \mathrm{d}x \nonumber \\&\le 0. \end{aligned}$$ Here, the first equality uses Eq. (23), the second uses the fact that \(\int _0^1 f(x){{\mathcal {K}}}*h(x)\mathrm{d}x=\int _0^1 h(x){{\mathcal {K}}}*f(x)\mathrm{d}x\) as long as \({{\mathcal {K}}}(x)\) is symmetric about 0 in \([0,1]/\{0,1\}\), and also requires that \(\gamma _{1,2}=\gamma _{2,1}=\gamma \), the third uses Eq. (22), the fourth is integration by parts, the fifth uses the fact that \(u_i(0)=u_i(1)\) and \({{\mathcal {K}}}*u_i(0)={{\mathcal {K}}}*u_i(1)\) for \(i \in \{1,2\}\) (i.e. periodic boundary conditions, Eq. 13), the sixth is just a rearrangement, and the inequality at the end uses the fact that \(u_i(x,t)> 0\) for all i, x, t. In all, Eq. (25) shows that \(E(u_1,u_2)\) is decreasing over time. 
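Before turning to boundedness, we note that this monotone decrease is also easy to check numerically along any solution of Eqs. (12, 14, 15). A sketch (our own code; the kernel is the top-hat of Eq. (14), so \({\mathcal {K}}*u\) is again a circular moving average) that evaluates E by the rectangle rule; monitoring it along the finite-difference solutions described later shows the predicted decay:

import numpy as np

def tophat_convolve(u, delta, h):
    # K * u of Eq. (24) with K = 1/(2 delta) on (-delta, delta): a circular moving average.
    half = max(1, int(round(delta / h)))
    return sum(np.roll(u, s) for s in range(-half, half + 1)) / (2 * half + 1)

def energy(u1, u2, gamma, delta, h):
    # E(u1, u2) of Eq. (23), rectangle rule on a uniform periodic grid over [0, 1).
    Ku1 = tophat_convolve(u1, delta, h)
    Ku2 = tophat_convolve(u2, delta, h)
    integrand = (u1 * (2.0 * np.log(u1) - gamma * Ku2)
                 + u2 * (2.0 * np.log(u2) - gamma * Ku1))
    return integrand.sum() * h

# sanity check at the homogeneous steady state u1 = u2 = 1: E = -2 * gamma
x = np.arange(0.0, 1.0, 1e-2)
print(energy(np.ones_like(x), np.ones_like(x), gamma=-2.0, delta=0.1, h=1e-2))  # 4.0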
The following shows that \(E(u_1,u_2)\) is bounded below $$\begin{aligned} E(u_1,u_2)&=2\int _0^1 [u_1\ln (u_1)+u_2\ln (u_2)]\mathrm{d}x-\int _0^1 [u_1 {{\mathcal {K}}}*u_1+u_2 {{\mathcal {K}}}*u_2]\mathrm{d}x\nonumber \\&\ge -4\mathrm{e}^{-1}-\int _0^1 [u_1 {{\mathcal {K}}}*u_1+u_2 {{\mathcal {K}}}*u_2]\mathrm{d}x\nonumber \\&\ge -4\mathrm{e}^{-1}-\left||u_1\right||_1\left||{{\mathcal {K}}}*u_1\right||_\infty -\left||u_2\right||_1\left||{{\mathcal {K}}}*u_2\right||_\infty \nonumber \\&\ge -4\mathrm{e}^{-1}-\left||u_1\right||_1\left||{{\mathcal {K}}}\right||_\infty \left||u_1\right||_1-\left||u_2\right||_1\left||{{\mathcal {K}}}\right||_\infty \left||u_2\right||_1\nonumber \\&\ge -4\mathrm{e}^{-1}-2\left||{{\mathcal {K}}}\right||_\infty . \end{aligned}$$ Here, the first inequality uses the fact that \(\text{ inf }_{u_i\ge 0} \{u_i \ln (u_i)\}=-\mathrm{e}^{-1}\), the second uses Hölder's inequality, the third uses Young's inequality, and the fourth uses the fact that \(\left||u_1\right||_1=1\) (Eq. 15). For the avoidance of doubt, the definition \(\left||f\right||_p=\left( \int _0^1|f(x,t)|^p\mathrm{d}x\right) ^{1/p}\), for \(p \in [1,\infty ]\), is used throughout (26). Again, note that the inequality \(u(x,t)>0\) is required for the sequence of inequalities in (26) to hold. The inequalities in (25) and (26) together demonstrate that \(E(u_1,u_2)\) moves towards a minimum as \(t\rightarrow \infty \), which is given at the point where \(\frac{\partial E}{\partial t}=0\). The latter equation is satisfied when the following two conditions hold $$\begin{aligned} \ln (u_1)-\gamma {{\mathcal {K}}}*u_2&=\eta _1, \\ \ln (u_2)-\gamma {{\mathcal {K}}}*u_1&=\eta _2, \end{aligned}$$ where \(\eta _1\) and \(\eta _2\) are constants. Understanding the patterns from Fig. 2 using energy functionals. a An example of \(\frac{\partial ^2 u_i}{\partial x^2}\) as a function of \(u_i\) (Eq. 32) when the energy is minimised (Eqs. 27–28) and the moment closure approximation from Eq. (31) is applied, for the aggregation case, \(u_1 \approx u_2\). We see that \(\frac{\partial ^2 u_i}{\partial x^2}\) is positive for \(a<u_i<b\) and negative when \(u_i<a\) or \(u_i>b\). There are various possible smooth solutions, \(u_i(x,\infty )\), that satisfy this property. b An example corresponding qualitatively to Fig. 2a. c, d Are analogous to a and b, respectively, but for the situation where we have segregation, so \(u_1 \approx 2-u_2\). Note that d qualitatively resembles Fig. 2c Equations (27–28) can be used to give qualitative properties of the long-term distribution of the system in Eqs. (12, 14, 15) for \(N=2\) and \(\gamma _{1,2}=\gamma _{2,1}=\gamma \). First, by differentiating Eqs. (27–28) with respect to x, we find that $$\begin{aligned} \frac{\partial u_1}{\partial x}\frac{1}{u_1}=\gamma \frac{\partial }{\partial x}({{\mathcal {K}}}*u_2), \end{aligned}$$ $$\begin{aligned} \frac{\partial u_2}{\partial x}\frac{1}{u_2}=\gamma \frac{\partial }{\partial x}({{\mathcal {K}}}*u_1). \end{aligned}$$ Thus, \(\gamma >0\) implies that \(\frac{\partial u_1}{\partial x}\) has the same sign as \(\frac{\partial }{\partial x}({{\mathcal {K}}}*u_2)\) so any patterns that may form will be aggregation patterns (Fig. 2a, b). Furthermore, \(\gamma <0\) implies that \(\frac{\partial u_1}{\partial x}\) has the opposite sign to \(\frac{\partial }{\partial x}({{\mathcal {K}}}*u_2)\) so any patterns that form will be segregation patterns (Fig. 2c, d).
Second, by making the following moment closure approximation $$\begin{aligned} {{\mathcal {K}}}*u_i \approx u_i+\sigma ^2 \frac{\partial ^2 u_i}{\partial x^2}, \end{aligned}$$ where \(\sigma ^2\) is the variance of \({{\mathcal {K}}}(x)\), we can gain insight by examining the plot of \(\frac{\partial ^2 u_i}{\partial x^2}\) against \(u_i\) in particular cases. To give an example in the case of aggregation, if \(u_1 \approx u_2\) (as in Fig. 2a), then we have \(\gamma >0\) by Eqs. (29–30). Equation (28) implies $$\begin{aligned} \sigma ^2 \frac{\partial ^2 u_1}{\partial x^2} \approx \frac{1}{\gamma }[\ln (u_1)-\eta _2]-u_1. \end{aligned}$$ The right-hand side of Eq. (32) has a unique maximum, which is above the horizontal axis as long as \(\eta _2<-1-\ln (\gamma )\) (Fig. 3a). In this case, there are two numbers \(a,b \in {{\mathbb {R}}}_{>0}\) such that \(\frac{\partial ^2 u_i}{\partial x^2}>0\) when \(a<u_i<b\) and \(\frac{\partial ^2 u_i}{\partial x^2}<0\) for \(u_i<a\) or \(u_i>b\). A possible curve that satisfies this property is given in Fig. 3b and qualitatively resembles Fig. 2a. To give an example in the case of segregation (\(\gamma <0\)), suppose that \(u_1\approx 2-u_2\). Then, by a similar argument to the \(u_1 \approx u_2\) case, \(\frac{\partial ^2 u_i}{\partial x^2}\) has a unique minimum as long as \(\eta _2<-1-\ln (-\gamma )-2\gamma \). In this case, there are two numbers \(a,b \in {{\mathbb {R}}}\) such that \(\frac{\partial ^2 u_i}{\partial x^2}<0\) when \(a<u_i<b\) and \(\frac{\partial ^2 u_i}{\partial x^2}>0\) for \(u_i<a\) or \(u_i>b\). A possible curve that satisfies this property is given in Fig. 3d and qualitatively resembles Fig. 2c. The Case of Three Interacting Populations (\(N=3\)) Although the \(N=2\) case only allows for stationary pattern formation [often called a Turing instability after Turing (1952)], for \(N>2\) we can observe both stationary and oscillating patterns. The latter arise from what is sometimes known as a wave instability, where the dominant eigenvalue of \(M(\kappa ,\delta )\) is not real but has positive real part, for some \(\kappa \). For \(N>2\), the situation becomes too complicated for analytic expressions of the eigenvalues to give any meaningful insight [and indeed, these expressions cannot be found for \(N>4\) by a classical result of Galois Theory; see Stewart (2015)], so we begin by examining the eigenvalues for certain example cases in the limit \(\delta \rightarrow 0\). This involves finding eigenvalues of the matrix \(M_0\) given in Eq. (18). Dynamics for example three-species systems. a The pattern formation regions, as predicted by linear analysis, for the system in Eqs. (12, 14, 15) in the case \(N=3\), where \(d_2=d_3=\gamma _{21}=\gamma _{31}=\gamma _{32}=1\), \(\gamma _{13}=-1\), and \(\gamma _{12},\gamma _{23}\) are varied. b The schematic diagrams of the systems, corresponding to the four quadrants of (\(\gamma _{12},\gamma _{23}\))-space Figure 4 gives an example of how (i) stationary patterns, (ii) oscillatory patterns, and (iii) no patterns can emerge in different regions of parameter space when \(N=3\). Here, we have fixed all the \(\gamma _{ij}\) except \(\gamma _{12}\) and \(\gamma _{23}\). Specifically, \(d_2=d_3=\gamma _{21}=\gamma _{31}=\gamma _{32}=1\), and \(\gamma _{13}=-1\). When \(\gamma _{12}<0<\gamma _{23}\), this corresponds to a mutual attraction between populations 2 and 3 with both 2 and 3 pursuing 1 in a pursue-and-avoid situation (Fig. 4b, top-left). 
When \(\gamma _{12},\gamma _{23}>0\), 3 is pursuing 1 in a pursue-and-avoid, whilst 2 is mutually attracted to both 1 and 3 (Fig. 4b, top-right). If \(\gamma _{23}<0<\gamma _{12}\), 3 is pursuing both 1 and 2 in a pursue-and-avoid, whilst 1 and 2 are mutually attracting (Fig. 4b, bottom-right). Finally, if \(\gamma _{12},\gamma _{23}<0\), then 3 is pursuing both 1 and 2 in a pursue-and-avoid and 2 is pursuing 1 in a pursue-and-avoid (Fig. 4b, bottom-left).

We solved the system in Eqs. (12–15) for various examples from both the stationary and oscillatory pattern regimes shown in Fig. 4. For this, we used periodic boundary conditions as in Eq. (13). We used a finite difference method, coded in Python, with a spatial granularity of \(h=10^{-2}\) and a temporal granularity of \(\tau =10^{-5}\). Initial conditions were set to be small random fluctuations about the spatially homogeneous steady state.

Fig. 5 Example three-species systems with stationary distributions. a, b Two stable steady-state distributions for the system in (12, 14, 15) in the case \(N=3\), where \(d_2=d_3=\gamma _{21}=\gamma _{31}=\gamma _{32}=1\), \(\gamma _{13}=-1\), \(\gamma _{12}=\gamma _{23}=-4\), and \(\delta =0.1\). c [resp. (d)] The initial condition that led to the stationary distribution in a [resp. (b)]

Stationary patterns can give rise to space partitioned into different areas for use by different populations (Fig. 5, Supplementary Video SV1), with differing amounts of overlap. Interestingly, the precise location of the segregated regions depends upon the initial conditions (compare panels (a) and (b) in Fig. 5), but the rough size of the regions appears to be independent of the initial condition (at least for the parameter values we tested). Considering the abundance of individuals as a whole (i.e. \(u_1+u_2+u_3\)), notice that certain regions of space emerge that contain more animals than others, despite the fact that there is no environmental heterogeneity in the model.

The extent to which populations use the same parts of space depends upon the strength of attraction or repulsion. In Fig. 5a, b, the demarcation between populations 1 and 2 is quite stark, owing to the strong avoidance of population 2 by population 1 (\(\gamma _{12}=-4\)) combined with a relatively weak attraction of population 2 to population 1 (\(\gamma _{21}=1\)). Population 1 also seeks to avoid population 3, but here the strength of avoidance is smaller (\(\gamma _{13}=-1\)) and is matched in magnitude by the attraction of population 3 to population 1 (\(\gamma _{31}=1\)); therefore, populations 1 and 3 overlap considerably.

Fig. 6 Example three-species systems with oscillatory distributions. Here, we show the change in \(u_1(x,t)\) over space and time for two sets of parameter values. Both panels have parameter values identical to the fixed parameters from Fig. 4a, additionally fixing \(\gamma _{23}=-4\) and \(\delta =0.1\). In a, we have \(\gamma _{12}=3.3\), and b has \(\gamma _{12}=4\). We started with random initial conditions and then ran the system to (dimensionless) time \(t=20\). The plots display values of \(u_1(x,t)\) for \(x \in [0,1]\) and \(t \in [18,20]\). Plots for \(t \in [14,16]\) and \(t \in [16,18]\) (not shown) are very similar, indicating that the system has reached its attractor (color figure online)

Oscillatory patterns can be quite complex (Supplementary Video SV2), varying from situations where there appear to be periodic oscillations (Fig. 6a) to those where the periodicity is much less clear (Fig. 6b).
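The finite-difference scheme described above can be sketched in a few lines of Python. The snippet below is a minimal illustration only: the exact right-hand side of Eq. (12) and the averaging kernel \({{\mathcal {K}}}\) are not reproduced in this excerpt, so the dimensionless form \(\partial u_i/\partial t = d_i\,\partial ^2 u_i/\partial x^2 - \partial /\partial x[u_i \sum _{j\ne i}\gamma _{ij}\,\partial ({{\mathcal {K}}}*u_j)/\partial x]\), the top-hat kernel of width \(2\delta \), and \(d_1=1\) are assumptions; the taxis coefficients are those quoted for Fig. 5.

```python
import numpy as np

# Minimal sketch of the explicit finite-difference scheme (assumed model form;
# see the caveats above). Periodic domain [0,1), h = 1e-2, tau = 1e-5 as in the text.
h, tau = 1e-2, 1e-5
x = np.arange(0.0, 1.0, h)
n, N = x.size, 3
d = np.array([1.0, 1.0, 1.0])                 # d_2 = d_3 = 1 from the text; d_1 = 1 assumed
gamma = np.array([[0.0, -4.0, -1.0],          # gamma_ij values quoted for Fig. 5
                  [1.0,  0.0, -4.0],          # (the diagonal is unused: no self-taxis)
                  [1.0,  1.0,  0.0]])
delta = 0.1
w = max(1, round(delta / h))                  # kernel half-width in grid cells
kern = np.ones(2 * w + 1) / (2 * w + 1)       # assumed top-hat averaging kernel

def smooth(u):                                # periodic convolution K*u
    return np.convolve(np.tile(u, 3), kern, mode='same')[n:2 * n]

def ddx(f):                                   # periodic central difference
    return (np.roll(f, -1) - np.roll(f, 1)) / (2 * h)

def lap(f):                                   # periodic Laplacian
    return (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / h ** 2

rng = np.random.default_rng(1)
u = 1.0 + 0.01 * rng.standard_normal((N, n))  # small noise about the homogeneous state

for _ in range(100_000):                      # integrate to t = 1
    grad = [ddx(smooth(u[j])) for j in range(N)]
    for i in range(N):
        vel = sum(gamma[i, j] * grad[j] for j in range(N) if j != i)
        u[i] = u[i] + tau * (d[i] * lap(u[i]) - ddx(u[i] * vel))
```

An explicit Euler step is stable here because \(\tau \le h^2/(2\max _i d_i)\); the stationary (Fig. 5) and oscillatory (Fig. 6) solutions discussed above were generated with this type of scheme.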
To understand the behaviour of these oscillatory patterns, we use a method of numerical bifurcation analysis adapted from Painter and Hillen (2011). This method begins with a set of parameters in the region of no pattern formation but close to the region of oscillatory patterns. In particular, we choose parameter values identical to the fixed values for Fig. 4a (i.e. \(d_2=d_3=\gamma _{21}=\gamma _{31}=\gamma _{32}=1\), \(\gamma _{13}=-1\)) and also \(\gamma _{23}=-2.5\) and \(\gamma _{12}=3\). We then perform the following iterative procedure (a code sketch is given below, after Fig. 7):

1. Solve the system numerically until \(t=10\), by which time the attractor has been reached.
2. Increment \(\gamma _{12}\) by a small value (we used 0.005) and set the initial conditions for the next iteration to be the final values of \(u_1(x,t)\), \(u_2(x,t)\), and \(u_3(x,t)\) from the present numerical solution.

This method is intended to approximate a continuous bifurcation analysis. To analyse the resulting patterns, we focus on the value of the system at a fixed point \(x=0.5\) and examine how the attractor of the system, \((u_1(0.5,t),u_2(0.5,t),u_3(0.5,t))\), changes as we increase \(\gamma _{12}\) into the region of oscillatory patterns.

Figure 7 shows these attractors for various \(\gamma _{12}\) values. First, we observe a small loop appearing just after the system goes through the bifurcation point (Fig. 7a). This loop then grows (Fig. 7b, c) and, when \(\gamma _{12}\approx 4.1\), undergoes a period-doubling bifurcation (Fig. 7d). The attractor remains a double-period loop (Fig. 7e, f) until \(\gamma _{12}\approx 5.77\), where it doubles again (Fig. 7g, h). Such a sequence of period doublings is a hallmark of a period-doubling route to chaos. Indeed, as \(\gamma _{12}\) is increased further, the patterns cease to have an obvious period (Fig. 7i) and gain a rather more irregular look, suggestive of chaos.

Fig. 7 Numerical bifurcation analysis. This sequence of plots shows the attractors just after the system passes through a bifurcation point from a region of no patterns to one of oscillatory patterns. Each panel shows the locus of the point \((u_1(0.5,t),u_2(0.5,t),u_3(0.5,t))\) as time changes for a particular set of parameter values. In all panels, \(d_2=d_3=\gamma _{21}=\gamma _{31}=\gamma _{32}=1\), \(\gamma _{13}=-1\), and \(\gamma _{23}=-2.5\). The value of \(\gamma _{12}\) increases from a to i and is given in the panel title. As \(\gamma _{12}\) increases, we observe a sequence of period-doubling bifurcations leading to irregular patterns, suggestive of a chaotic system
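A minimal driver for this continuation procedure might look as follows. It assumes a function solve(u0, gamma12, t_end), a hypothetical wrapper around the finite-difference scheme sketched earlier that returns the final state together with the time series of \((u_1,u_2,u_3)\) at \(x=0.5\); the parameter values mirror those quoted above.

```python
import numpy as np

def continuation_scan(solve, u0, g_start=3.0, g_stop=4.2, step=0.005):
    """Sweep gamma_12 upward, re-using each run's final state as the next
    initial condition (the iterative procedure used to generate Fig. 7)."""
    attractors = {}
    u, g = u0, g_start
    while g <= g_stop + 1e-12:
        u, trace = solve(u, gamma12=g, t_end=10.0)  # trace: (u1,u2,u3) at x = 0.5
        attractors[round(g, 3)] = trace             # locus plotted in one panel
        g += step
    return attractors

# Usage sketch: start from small noise about the homogeneous steady state.
rng = np.random.default_rng(0)
u0 = 1.0 + 0.01 * rng.standard_normal((3, 100))
# results = continuation_scan(solve, u0)            # `solve` as described above
```

Plotting each stored trace in \((u_1,u_2,u_3)\)-space then reproduces the sequence of loops and period doublings shown in Fig. 7.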
We have shown that, when there is just one more population in the mix (\(N=3\)), the possible patterns that emerge can be extremely rich, incorporating stationary patterns, periodic oscillations, and irregular patterns that may be chaotic. Although irregular and chaotic spatio-temporal patterns have been observed in spatial predator–prey systems (Sherratt et al. 1995, 1997), this is one of the few times they have been shown to arise from inter-population avoidance models (but see White et al. 1996, Section 8.2). These possibilities will extend to the situation of \(N>3\), which is typical of most real-life ecosystems (e.g. Vanak et al. 2013).

The models studied here are closely related to aggregation models, which are well studied, often with applications to cell biology in mind (Alt 1985; Mogilner and Edelstein-Keshet 1999; Topaz et al. 2006; Painter et al. 2015). In these models, populations exhibit self-attraction alongside diffusion and are usually framed with just a single population in mind [although some incorporate more, e.g. Painter et al. (2015), Burger et al. (2018)]. In contrast to our situation, this self-attraction process can enable spontaneous aggregation to occur in a single population. Similar to our situation, in these self-attraction models it is typical to observe ill-posed problems unless some form of regularisation is in place, either through non-local terms (Mogilner and Edelstein-Keshet 1999; Briscoe et al. 2002; Topaz et al. 2006) or other means such as mixed spatio-temporal derivatives (Padrón 1998).

We have decided not to incorporate self-attraction into our framework. This is both for simplicity of analysis and because the animal populations we have in mind will tend to spread on the landscape in the absence of interactions, so are well described using diffusion as a base model (Okubo and Levin 2013; Lewis et al. 2016). However, in principle it is a simple extension to incorporate self-interaction into our framework, simply by dropping the \(j\ne i\) restriction in Eq. (12). Indeed, for \(N=2\), very similar models have been studied for aggregation/segregation properties (Burger et al. 2018) and pattern formation (Painter et al. 2015). In those studies, a combination of self-attraction and pursue-and-avoid can, contrary to the pure pursue-and-avoid case studied here, lead to moving spatial patterns where one aggregated population (the avoiders) leads the other (the pursuers) in a 'chase' across the landscape (Painter 2009), a phenomenon observed in certain cell populations (Theveneau et al. 2013). For \(N>2\), however, we have shown that the story regarding spatial patterns can already be very rich and complicated without self-attraction, so understanding the effect of this extra complication would be a formidable exercise.

Another natural extension of our work, from a mathematical perspective, would be to add reaction terms (a.k.a. kinetics) into our model, accounting for deaths (e.g. due to predation or as a result of competition) and births, by adding a function \(f_i(u_1,\dots ,u_N)\) to Eq. (12) for each i. Biologically, this would change the timescale over which our model is valid, since in the present study we have explicitly set out to model timescales over which births and deaths are negligible. Nonetheless, this extension is worthy of discussion since the addition of such terms leads to a class of so-called cross-diffusion models, which are well studied (Shigesada et al. 1979; Gambino et al. 2009; Shi et al. 2011; Tania et al. 2012; Potts and Petrovskii 2017).
The term 'cross-diffusion' has been used in various guises, but the general form can incorporate both taxis terms of the type described here and other terms that model various movement responses between populations. These cross-diffusion terms can combine with the reaction terms to drive pattern formation (Shi et al. 2011; Tania et al. 2012), as well as altering spreading speeds (Gambino et al. 2009; Girardin and Nadin 2015) and the outcome of competitive dynamics (Potts and Petrovskii 2017). The key difference between our work and traditional studies of cross-diffusion is that rich patterns form in our model despite the lack of kinetics. As such, we separate out the effect of taxis on pattern formation from any interaction with the reaction terms.

Our mathematical insights suggest that there is an urgent need to understand the extent to which the underlying movement processes in our model are prevalent in empirical ecosystems. Much effort is spent in understanding species distributions (Manly et al. 2002; Araujo and Guisan 2006; Jiménez-Valverde et al. 2008), often motivated by highly applied questions such as understanding the effect of climate change on biodiversity loss (Gotelli and Stanton-Geddes 2015), planning conservation efforts (Rodríguez et al. 2007; Evans et al. 2015), and mitigating the negative effects of disease spread (Fatima et al. 2016) and biological invasions (Mainali et al. 2015). Species distribution models typically seek to link the distribution of species with environmental covariates, whereas the effect of between-population movement responses is essentially ignored, presumably because it is considered 'noise' that likely averages out over time. In contrast, this study suggests that the patterns emerging from between-population movements may be fundamental drivers of both transient and asymptotic species distributions.

Fortuitously, recent years have seen the development of techniques for measuring the effects of foreign populations on animal movement. Animal biologging devices have become increasingly small and cheap, and able to gather data at much higher frequencies than ever before (Wilmers et al. 2015; Williams et al., in review). Furthermore, statistical techniques have become increasingly refined to uncover the behavioural mechanisms behind animals' movement paths (Albertsen et al. 2015; Avgar et al. 2016; Michelot et al. 2016; Potts et al. 2018). In particular, these include inferring interactions between wild animals, both direct (Vanak et al. 2013) and mediated by environmental markers (Latombe et al. 2014; Potts et al. 2014). Consequently, the community of movement ecologists is in a prime position to measure between-population movement responses and to understand the prevalence of the movement-induced spatial distribution patterns reported here. Our hope is that the theoretical results presented here will serve as a motivating study for understanding the effect of between-population movement responses on spatial population dynamics in empirical systems, as well as highlighting the need for such studies if we are to understand accurately the drivers behind observed species distributions.

References

Adams ES (2001) Approaches to the study of territory size and shape. Ann Rev Ecol Syst 32:277–303
Albertsen CM, Whoriskey K, Yurkowski D, Nielsen A, Flemming JM (2015) Fast fitting of non-Gaussian state-space models to animal movement data via template model builder. Ecology 96(10):2598–2604
Alt W (1985) Degenerate diffusion equations with drift functionals modelling aggregation. Nonlinear Anal Theory Methods Appl 9(8):811–836
Araujo MB, Guisan A (2006) Five (or so) challenges for species distribution modelling. J Biogeogr 33(10):1677–1688
Avgar T, Baker JA, Brown GS, Hagens JS, Kittle AM, Mallon EE, McGreer MT, Mosser A, Newmaster SG, Patterson BR et al (2015) Space-use behaviour of woodland caribou based on a cognitive movement model. J Anim Ecol 84(4):1059–1070
Avgar T, Potts JR, Lewis MA, Boyce MS (2016) Integrated step selection analysis: bridging the gap between resource selection and animal movement. Methods Ecol Evol 7(5):619–630
Börger L, Dalziel BD, Fryxell JM (2008) Are there general mechanisms of animal home range behaviour? A review and prospects for future research. Ecol Lett 11(6):637–650. https://doi.org/10.1111/j.1461-0248.2008.01182.x
Briscoe B, Lewis M, Parrish S (2002) Home range formation in wolves due to scent marking. Bull Math Biol 64(2):261–284. https://doi.org/10.1006/bulm.2001.0273
Burger M, Francesco MD, Fagioli S, Stevens A (2018) Sorting phenomena in a mathematical model for two mutually attracting/repelling species. SIAM J Math Anal 50(3):3210–3250
Durrett R, Levin S (1994) The importance of being discrete (and spatial). Theor Popul Biol 46(3):363–394
Evans TG, Diamond SE, Kelly MW (2015) Mechanistic species distribution modelling as a link between physiology and conservation. Conserv Physiol 3(1):cov056
Fagan WF, Lewis MA, Auger-Méthé M, Avgar T, Benhamou S, Breed G, LaDage L, Schlägel UE, Tang Ww, Papastamatiou YP, Forester J, Mueller T (2013) Spatial memory and animal movement. Ecol Lett 16(10):1316–1329. https://doi.org/10.1111/ele.12165
Fatima SH, Atif S, Rasheed SB, Zaidi F, Hussain E (2016) Species distribution modelling of Aedes aegypti in two dengue-endemic regions of Pakistan. Trop Med Int Health 21(3):427–436
Fleming CH, Fagan WF, Mueller T, Olson KA, Leimgruber P, Calabrese JM (2015) Rigorous home range estimation with movement data: a new autocorrelated kernel density estimator. Ecology 96(5):1182–1188
Gallagher AJ, Creel S, Wilson RP, Cooke SJ (2017) Energy landscapes and the landscape of fear. Trends Ecol Evol 32(2):88–96
Gambino G, Lombardo MC, Sammartino M (2009) A velocity–diffusion method for a Lotka–Volterra system with nonlinear cross and self-diffusion. Appl Numer Math 59(5):1059–1074
Girardin L, Nadin G (2015) Travelling waves for diffusive and strongly competitive systems: relative motility and invasion speed. Eur J Appl Math 26(4):521–534
Giuggioli L, Potts JR, Rubenstein DI, Levin SA (2013) Stigmergy, collective actions, and animal social spacing. Proc Natl Acad Sci 110:16904–16909
Gotelli NJ, Stanton-Geddes J (2015) Climate change, genetic markers and species distribution modelling. J Biogeogr 42(9):1577–1585
Hastings A (1980) Disturbance, coexistence, history, and competition for space. Theor Popul Biol 18(3):363–373
Hastings A, Cuddington K, Davies KF, Dugaw CJ, Elmendorf S, Freestone A, Harrison S, Holland M, Lambrinos J, Malvadkar U et al (2005) The spatial spread of invasions: new developments in theory and evidence. Ecol Lett 8(1):91–101
Hays GC, Ferreira LC, Sequeira AM, Meekan MG, Duarte CM, Bailey H, Bailleul F, Bowen WD, Caley MJ, Costa DP et al (2016) Key questions in marine megafauna movement ecology. Trends Ecol Evol 31(6):463–475
Hooten MB, Johnson DS, McClintock BT, Morales JM (2017) Animal movement: statistical models for telemetry data. CRC Press, Boca Raton
Jiménez-Valverde A, Lobo JM, Hortal J (2008) Not as good as they seem: the importance of concepts in species distribution modelling. Divers Distrib 14(6):885–890
Kareiva P, Odell G (1987) Swarms of predators exhibit "prey taxis" if individual predators use area-restricted search. Am Nat 130(2):233–270
Kays R, Crofoot MC, Jetz W, Wikelski M (2015) Terrestrial animal tracking as an eye on life and planet. Science 348(6240):aaa2478
Kneitel JM, Chase JM (2004) Trade-offs in community ecology: linking spatial scales and species coexistence. Ecol Lett 7(1):69–80
Latombe G, Fortin D, Parrott L (2014) Spatio-temporal dynamics in the response of woodland caribou and moose to the passage of grey wolf. J Anim Ecol 83(1):185–198
Laundré JW, Hernández L, Ripple WJ (2010) The landscape of fear: ecological implications of being afraid. Open Ecol J 3:1–7
Lee J, Hillen T, Lewis M (2009) Pattern formation in prey-taxis systems. J Biol Dyn 3(6):551–573
Lewis M, Moorcroft P (2006) Mechanistic home range analysis. Princeton University Press, Princeton
Lewis MA, Murray JD (1993) Modelling territoriality and wolf–deer interactions. Nature 366:738–740
Lewis MA, Petrovskii SV, Potts JR (2016) The mathematics behind biological invasions, vol 44. Springer, Berlin
Lugo CA, McKane AJ (2008) Quasicycles in a spatial predator–prey model. Phys Rev E 78(5):051911
Mainali KP, Warren DL, Dhileepan K, McConnachie A, Strathie L, Hassan G, Karki D, Shrestha BB, Parmesan C (2015) Projecting future expansion of invasive species: comparing and improving methodologies for species distribution modeling. Glob Change Biol 21(12):4464–4480
Manly B, McDonald L, Thomas D, McDonald T, Erikson W (2002) Resource selection by animals: statistical design and analysis for field studies. Elsevier Academic Press, Chapman and Hall, New York
Michelot T, Langrock R, Patterson TA (2016) moveHMM: an R package for the statistical modelling of animal movement data using hidden Markov models. Methods Ecol Evol 7(11):1308–1315
Mogilner A, Edelstein-Keshet L (1999) A non-local model for a swarm. J Math Biol 38(6):534–570
Murray JD (2003) Mathematical biology II: spatial models and biomedical applications. Springer, New York
Murrell DJ, Law R (2003) Heteromyopia and the spatial coexistence of similar competitors. Ecol Lett 6(1):48–59
Nathan R, Giuggioli L (2013) A milestone for movement ecology research. Move Ecol 1(1)
Nathan R, Getz WM, Revilla E, Holyoak M, Kadmon R, Saltz D, Smouse PE (2008) A movement ecology paradigm for unifying organismal movement research. Proc Natl Acad Sci 105(49):19052–19059. https://doi.org/10.1073/pnas.0800375105
Okubo A, Levin SA (2013) Diffusion and ecological problems: modern perspectives, vol 14. Springer, Berlin
Padrón V (1998) Sobolev regularization of a nonlinear ill-posed parabolic problem as a model for aggregating populations. Commun Partial Differ Equ 23(3–4):457–486
Padrón V (2004) Effect of aggregation on population recovery modeled by a forward–backward pseudoparabolic equation. Trans Am Math Soc 356(7):2739–2756
Painter KJ (2009) Continuous models for cell migration in tissues and applications to cell sorting via differential chemotaxis. Bull Math Biol 71(5):1117
Painter KJ, Hillen T (2011) Spatio-temporal chaos in a chemotaxis model. Physica D 240:363–375
Painter K, Bloomfield J, Sherratt J, Gerisch A (2015) A nonlocal model for contact attraction and repulsion in heterogeneous cell populations. Bull Math Biol 77(6):1132–1165
Pascual M (1993) Diffusion-induced chaos in a spatial predator–prey system. Proc R Soc Lond B 251(1330):1–7
Petrovskii SV, Morozov AY, Venturino E (2002) Allee effect makes possible patchy invasion in a predator–prey system. Ecol Lett 5(3):345–352
Potts JR, Lewis MA (2014) How do animal territories form and change? Lessons from 20 years of mechanistic modelling. Proc R Soc B 281(1784):20140231
Potts JR, Lewis MA (2016a) How memory of direct animal interactions can lead to territorial pattern formation. J R Soc Interface 13:20160059
Potts JR, Lewis MA (2016b) Territorial pattern formation in the absence of an attractive potential. J Math Biol 72(1–2):25–46
Potts JR, Petrovskii SV (2017) Fortune favours the brave: movement responses shape demographic dynamics in strongly competing populations. J Theor Biol 420:190–199
Potts JR, Mokross K, Lewis MA (2014) A unifying framework for quantifying the nature of animal interactions. J R Soc Interface 11(96):20140333
Potts JR, Börger L, Scantlebury DM, Bennett NC, Alagaili A, Wilson RP (2018) Finding turning-points in ultra-high-resolution animal movement data. Methods Ecol Evol 9(10):2091–2101
Rodríguez JP, Brotons L, Bustamante J, Seoane J (2007) The application of predictive modelling of species distribution to biodiversity conservation. Divers Distrib 13(3):243–251
Sherratt JA, Lewis MA, Fowler AC (1995) Ecological chaos in the wake of invasion. Proc Natl Acad Sci 92(7):2524–2528
Sherratt JA, Eagan BT, Lewis MA (1997) Oscillations and chaos behind predator–prey invasion: mathematical artifact or ecological reality? Philos Trans R Soc Lond Ser B Biol Sci 352(1349):21–38
Shi J, Xie Z, Little K (2011) Cross-diffusion induced instability and stability in reaction–diffusion systems. J Appl Anal Comput 1(1):95–119
Shigesada N, Kawasaki K, Teramoto E (1979) Spatial segregation of interacting species. J Theor Biol 79(1):83–99
Stewart IN (2015) Galois theory. CRC Press, Boca Raton
Sun GQ, Zhang J, Song LP, Jin Z, Li BL (2012) Pattern formation of a spatial predator–prey system. Appl Math Comput 218(22):11151–11162
Tania N, Vanderlei B, Heath JP, Edelstein-Keshet L (2012) Role of social interactions in dynamic patterns of resource patches and forager aggregation. Proc Natl Acad Sci 109(28):11228–11233
Theveneau E, Steventon B, Scarpa E, Garcia S, Trepat X, Streit A, Mayor R (2013) Chase-and-run between adjacent cell populations promotes directional collective migration. Nat Cell Biol 15(7):763
Topaz CM, Bertozzi AL, Lewis MA (2006) A nonlocal continuum model for biological aggregation. Bull Math Biol 68(7):1601
Turing AM (1952) The chemical basis of morphogenesis. Phil Trans R Soc Lond B 237(641):37–72
Vanak A, Fortin D, Thaker M, Ogden M, Owen C, Greatwood S, Slotow R (2013) Moving to stay in place—behavioral mechanisms for coexistence of African large carnivores. Ecology 94:2619–2631
White K, Lewis M, Murray J (1996) A model for wolf-pack territory formation and maintenance. J Theor Biol 178(1):29–43
Williams HJ, Taylor LA, Benhamou S, Bijleveld AI, Clay TA, de Grissac S, Demšar U, English HM, Franconi N, Gómez-Laich A, Griffiths RC, Kay WP, Morales JM, Potts JR, Rogerson KF, Rutz C, Spelt A, Trevail AM, Wilson RP, Börger L (in review) Optimising the use of bio-loggers for movement ecology research
Wilmers CC, Nickel B, Bryce CM, Smith JA, Wheat RE, Yovovich V (2015) The golden age of bio-logging: how animal-borne sensors are advancing the frontiers of ecology. Ecology 96(7):1741–1753
Acknowledgements
JRP thanks the School of Mathematics and Statistics at the University of Sheffield for granting him study leave, which has enabled the research presented here. MAL gratefully acknowledges the Canada Research Chairs program and a Discovery Grant from the Natural Sciences and Engineering Research Council of Canada.

Author affiliations
Jonathan R. Potts: School of Mathematics and Statistics, University of Sheffield, Hicks Building, Hounsfield Road, Sheffield, S3 7RH, UK
Mark A. Lewis: Department of Mathematical and Statistical Sciences and Department of Biological Sciences, CAB632, University of Alberta, Edmonton, AB, T6G 2G1, Canada

Correspondence to Jonathan R. Potts.

Electronic supplementary material: Supplementary material 1 (mp4 1390 KB)

Potts, J.R., Lewis, M.A. Spatial Memory and Taxis-Driven Pattern Formation in Model Ecosystems. Bull Math Biol 81, 2725–2747 (2019). doi:10.1007/s11538-019-00626-9

Keywords: Advection–diffusion; Animal movement
Increased lignocellulosic inhibitor tolerance of Saccharomyces cerevisiae cell populations in early stationary phase

Venkatachalam Narayanan, Jenny Schelin, Marie Gorwa-Grauslund, Ed WJ van Niel & Magnus Carlquist

Production of second-generation bioethanol and other bulk chemicals by yeast fermentation requires cells that tolerate inhibitory lignocellulosic compounds at low pH. Saccharomyces cerevisiae displays high plasticity with regard to inhibitor tolerance, and adaptation of cell populations to process conditions is essential for reaching efficient and robust fermentations.

In this study, we assessed responses of isogenic yeast cell populations in different physiological states to combinations of acetic acid, vanillin and furfural at low pH. We found that cells in early stationary phase (ESP) exhibited significantly increased tolerance compared to cells in logarithmic phase, and had a similar ability to initiate growth in the presence of inhibitors as pre-adapted cells. The ESP cultures consisted of subpopulations with different buoyant cell densities, which were isolated by flotation and analysed separately. These so-called quiescent (Q) and non-quiescent (NQ) cells were found to possess similar abilities to initiate growth in the presence of lignocellulosic inhibitors at pH 3.7, and had similar viabilities under static conditions. Therefore, differentiation into Q-cells was not the cause of the increased tolerance of ESP cultures. Flow cytometry analysis of cell viability, intracellular pH and reactive oxygen species (ROS) levels revealed that tolerant cell populations had a characteristic response upon inhibitor perturbations. Growth in the presence of a combination of inhibitors at low pH correlated with pre-cultures having a high frequency of cells with low pHi and low ROS levels. Furthermore, only a subpopulation of ESP cultures was able to tolerate lignocellulosic inhibitors at low pH, while pre-adapted cell populations displayed an almost uniformly high tolerance to the adverse condition. This was in stark contrast to cell populations growing exponentially in non-inhibitory medium, which were uniformly sensitive to the inhibitors at low pH.

ESP cultures of S. cerevisiae were found to have high tolerance to lignocellulosic inhibitors at low pH, and were able to initiate growth to the same degree as cells that were pre-adapted to inhibitors at a slightly acidic pH. Carbon starvation may thus be a potential strategy to prepare cell populations for adjacent stressful environments, which may be beneficial from a process perspective for fermentation of non-detoxified lignocellulosic substrates at low pH. Furthermore, flow cytometry analysis of pHi and ROS level distributions in ESP cultures revealed responses that were characteristic of populations with high tolerance to lignocellulosic inhibitors. Measurement of population distribution responses as described herein may be applied to predict the outcome of environmental perturbations and can thus function as feedback for process control of yeast fitness during lignocellulosic fermentation.

Adverse impacts of climatic change and concerns over energy security could be abated by replacing petrochemicals with chemicals produced from lignocellulose, which is the most abundant renewable feedstock on the planet and is available from industrial and agricultural residues.
Intense research and development over decades have led to the onset of commercial-scale production of lignocellulosic bioethanol using the industrial workhorse Saccharomyces cerevisiae. Although many improvements can still be made in biomass pretreatment [1, 2], enzymatic hydrolysis [3] and inhibitor detoxification [4], the robustness of S. cerevisiae towards adverse process conditions remains a key engineering target to increase productivity, avoid loss of fermentable sugars and thereby reduce production costs [5]. An important hurdle to overcome for maintaining high cell activity is the negative effect of lignocellulosic inhibitors produced by the most common pretreatment methods; these include furaldehydes such as furfural and hydroxymethylfurfural (HMF), phenolics such as vanillin and 4-hydroxybenzoic acid, and weak organic acids such as acetic acid, formic acid and levulinic acid (see reviews [5,6,7]).

Cell tolerance to lignocellulosic inhibitors is a highly plastic phenotype and depends on the environment that the cell population has experienced before exposure. For example, pre-cultivation in lignocellulosic hydrolysate containing furfural and HMF leads to induced expression of genes coding for specific NADPH-dependent oxidoreductases, e.g. Adh6 [8], that reduce the aldehyde moiety into less inhibitory furfuryl alcohols, resulting in a shortened latency phase in the fermentation [9]. Tolerance to vanillin is similarly correlated with an increased reduction to the less toxic vanillyl alcohol [10]. Also the tolerance to acetic acid at low pH is increased by pre-cultivation in medium supplemented with acetic acid at slightly acidic pH [11]. The acid tolerance is partly caused by an induced expression of the HAA1 gene coding for a global transcription factor that activates multiple genes, including TPO1 and TPO2 coding for drug/H+-antiporters which export dissociated acetate from the cytoplasm [12, 13]. For these reasons, improved fermentation of lignocellulosic substrates can be reached by adapting cell populations through pre-exposure to moderate inhibitor levels in the pre-cultivation step [14, 15].

The level of cellular resistance to a specific stress is determined both by stress-specific and general mechanisms. For example, it was previously found that a slow growth rate correlates with increased tolerance towards a number of seemingly non-related stresses [16]. The extreme case is cells in stationary phase (SP), which are characterized by increased cell robustness to heat shock, osmotic stress, freeze–thaw stress and weak acid stress [17,18,19,20,21]. The higher robustness of SP-cells is often explained by activation of multiple cellular regulatory events upon nutrient starvation, including the environmental stress response (ESR), which leads to adjustment of cellular resources to promote survival in adjacent environments (see reviews [22, 23]). Based on this, it can be proposed that increased tolerance to lignocellulosic conditions may be reached without pre-exposure to inhibitors, for example, by allowing cells to reach SP by carbon starvation prior to the fermentation step.

The aim of the current study was to investigate correlations between the physiological state of yeast populations and their aptitude to tolerate combinations of lignocellulosic inhibitors (vanillin, furfural and acetic acid) at low pH. In particular, physiological responses of cells in SP, including the previously described quiescent (Q) and non-quiescent (NQ) cells [24,25,26], were investigated in detail.
Furthermore, flow cytometry (FCM) measurements of cell viability, intracellular pH (pHi) and reactive oxygen species (ROS) levels were applied to compare the responses of cell populations in early stationary phase (ESP) to those of cells in logarithmic phase (LP) and cells pre-adapted to lignocellulosic inhibitors.

Strains and media

Saccharomyces cerevisiae strains used in this study are listed in Table 1. They were stored at −80 °C in Yeast Peptone (YP) medium containing 10 g L−1 yeast extract, 20 g L−1 peptone and 20 g L−1 glucose, supplemented with 30% (v/v) glycerol, and maintained on YP agar plates containing 10 g L−1 yeast extract, 20 g L−1 peptone, 20 g L−1 glucose and 15 g L−1 agar. A chemically defined medium [27] with 20 g L−1 glucose, buffered to pH 3.7, 5.0 or 6.5 with 50 mM potassium hydrogen phthalate [28] and supplemented with or without 6 g L−1 acetic acid, 0.75 g L−1 furfural and 0.2 g L−1 vanillin, was used in all aerobic growth experiments. Vitamins, trace elements, furfural and vanillin were filter-sterilized to avoid changes in composition due to evaporation during autoclavation. Escherichia coli strain NEB5α (New England Biolabs) was recovered from a 25% glycerol stock stored at −80 °C and used for subcloning of plasmid DNA and further propagation. Luria–Bertani (LB) broth (5 g L−1 yeast extract, 10 g L−1 tryptone and 5 g L−1 NaCl, pH 7.5) was used for culturing E. coli, and 50 mg L−1 ampicillin was added to the LB medium when required. Media components were purchased from Sigma-Aldrich (Sweden), unless mentioned otherwise.

Table 1 S. cerevisiae strains used in this study

Construction of S. cerevisiae strain expressing pHluorin

Competent E. coli NEB5α cells were prepared using the RbCl method described in the subcloning notebook from Promega, which is adapted from the method described by [29]. The competent cells were transformed according to the supplier's instructions (New England Biolabs). Bacterial transformants were selected on solid LB plates (15 g L−1 agar), supplemented with ampicillin (50 mg L−1), for 16 h at 37 °C. Plasmid preparation from E. coli transformants was performed using the GeneJet™ Plasmid Miniprep kit (Thermo Scientific, Waltham, USA). S. cerevisiae CEN.PK 113-5D was grown in liquid YPD medium for 14–16 h at 30 °C and 180 rpm in a rotary shake incubator (New Brunswick, Enfield, CT, USA) when preparing the strain for transformation. It was transformed with the URA3-based 2µ episomal plasmid pYES-pACT1-pHluorin [30] using the high-efficiency LiAc method [31], and the engineered yeast strain (henceforth referred to as TMB3800) was selected on YNB-glucose plates (6.7 g L−1 Yeast Nitrogen Base without amino acids (Becton, Dickinson and Company, USA) supplemented with 20 g L−1 glucose and 15 g L−1 agar).

Aerobic batch cultivation in shake flasks

Cultures were grown in a rotary shake incubator at 180 rpm and 30 °C, and cell concentrations were determined by optical density (OD) at 620 nm (Spectrophotometer U-1800, Hitachi, Berkshire, UK). Seed cultures were cultivated from single colonies of S. cerevisiae strains (from solid media) in 5 mL defined medium at pH 6.5 in a 50-mL conical tube until late exponential phase. Cells from the seed culture were harvested by centrifugation at 3056g for 5 min at 4 °C (Eppendorf centrifuge 5810 R, USA), washed with saline solution (0.9% NaCl) and subsequently used for inoculation of the pre-culture at an initial OD620 of 0.5.
Cells from the pre-culture were harvested at different phases of cultivation and used to inoculate the final cultivation. Pre-cultivation and subsequent cultivations were performed in baffled shake flasks with a medium volume equivalent to 10% of the flask volume to maintain adequate aeration. All experiments were carried out in biological replicates (n = 2 or 3) and measurements were carried out in technical triplicate.

TMB3500 was grown for 24 h until ESP; subsequently, Q- and NQ-cells were separated by flotation [32]. For the flotation procedure, three different sterile colloidal solutions of non-toxic silanized silica particles in 0.9% NaCl, all formulated by QRAB (Alunda, Sweden) and produced by FertiPro N.V., Beernem, Belgium, were used. The densities of the solutions were adjusted to previously reported buoyant cell densities for Q- (1.14 g mL−1) and NQ-cells (1.10 g mL−1) [24] as follows: BactXtractor-L (BX-L; density, 1.06 g mL−1), BactXtractor-M (BX-M; density, 1.12 g mL−1) and BactXtractor-H (BX-H; density, 1.29 g mL−1). The final densities of BX-L and BX-M were reached after dilution of BX-H with sterile 0.9% NaCl and measured using a DMA46 density metre (Instrument AB Lambda, Stockholm, Sweden). The flotation media were stored at 4 °C.

Initially, cells from an ESP culture of TMB3500 were harvested by centrifugation and homogenously re-suspended in 3 mL BX-H in a 15-mL Falcon tube (Sarstedt, Nümbrecht, Germany). A discontinuous gradient was then created by the careful addition of 6 mL of BX-M followed by 2 mL of BX-L. The tube was centrifuged at 3056g for 60 min at 4 °C using a swing-out centrifuge (Sigma 4-15C, Qiagen, Sweden) to separate Q- and NQ-cells. The two resulting cell fractions, arising from differences in buoyant density (Q-cells at the lower interphase between the BX-H and BX-M layers, δ > 1.12; NQ-cells at the upper interphase between the BX-M and BX-L layers, δ > 1.06), were collected using a syringe and needle. Each cell fraction was subsequently washed with sterile 0.9% NaCl and centrifuged at 3056g for 5 min at 4 °C. Cells from each fraction were visualized in the microscope (Nikon Optiphot with Zeiss AxioCam MRm, Sweden) and were characterized further in subsequent growth analyses.

Analysis of responses of unsorted ESP-, Q- and NQ-cells to lignocellulosic inhibitors

TMB3500 was pre-cultured for 24 h to reach ESP, and cells were harvested by centrifugation at 3056g for 5 min at 4 °C. Unsorted ESP-cells, Q-cells and NQ-cells were inoculated at an OD620 of 0.5 in 200 μL of 15 different media (Table 2) with varied concentrations of inhibitors (furfural, vanillin, acetic acid) at pH 3.7 in 96-well microtiter plates covered with a transparent plastic film (Breathe-Easy, Diversified Biotech, USA) to prevent evaporation. Concentrations of the inhibitors were defined with a circumscribed central composite design (the ccdesign function) in Matlab (Release R2015a, The MathWorks, Inc., Natick, MA, USA). Growth was followed for 40 h by measuring OD620 with a Multiskan Ascent spectrophotometer (ThermoFisher Scientific, Sweden). Three-way ANOVAs (anovan) in Matlab were performed to assess the effects of individual inhibitors and their combinations on growth after 14 h with the different inocula (unsorted ESP-, Q- and NQ-cells) (Additional file 1: Tables S1–3). A principal component analysis (PCA) of the whole dataset, with ESP-, Q- and NQ-cells as loads and media (M1–M15) as scores, was performed using the pca function in Matlab.
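For readers without Matlab, the same design-and-analysis workflow can be approximated in Python. The sketch below is illustrative only (the study used Matlab's ccdesign, anovan and pca), and the file name and column names are hypothetical placeholders for the tabulated OD readings.

```python
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm
from sklearn.decomposition import PCA

# Hypothetical table: one row per well, with coded factor levels and OD at 14 h.
df = pd.read_csv('growth_14h.csv')  # columns: medium, inoculum, acetic, furfural, vanillin, od14

# Three-way ANOVA with all interaction terms (analogue of Matlab's anovan).
model = ols('od14 ~ C(acetic) * C(furfural) * C(vanillin)', data=df).fit()
print(anova_lm(model, typ=2))       # main and interaction effects on growth

# PCA with the three inocula as variables (loads) and the media as observations (scores).
wide = df.pivot_table(index='medium', columns='inoculum', values='od14')
pca = PCA(n_components=2)
scores = pca.fit_transform(wide.values)   # media in PC space
loads = pca.components_                   # one loading per inoculum
```

This layout, with media as observations and inocula as variables, mirrors the loading/score interpretation used for Fig. 4 of the yeast study.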
Lag times and maximum specific growth rates were calculated by fitting the raw data to the modified Gompertz growth equation [33], using the Solver function in Excel (Microsoft, 2013, USA) to minimize the sum of squared residuals.

Table 2 Inhibitor compositions of the media as defined by circumscribed central composite design

Flow cytometry analysis

A BD Accuri™ C6 flow cytometer equipped with a CSampler (Becton–Dickinson, NJ, USA) was used to measure viability [34] and ROS [35], as described previously. Quality control of the instrument was made with 6- and 8-peak fluorescent calibration beads. The fluidics was set to medium flow rate (35 µL s−1), the threshold was set to 50,000 on the forward scatter channel, and 20,000 or 100,000 cells were collected at a rate between 3000 and 6000 events s−1. For determination of cell viability, 1 × 107 cells mL−1 in phosphate-buffered saline (PBS) (8 g L−1 NaCl, 0.2 g L−1 KCl, 1.42 g L−1 Na2HPO4 and 0.24 g L−1 KH2PO4, pH 7.4) were stained with propidium iodide (PI) (1 µg mL−1) and incubated in the dark for 10 min. A blue laser (488 nm) was used for excitation, and PI emission was collected at 585/40 nm. For determination of ROS, 1 × 106 cells mL−1 in PBS were stained with dihydroethidium (DHE) (50 µg mL−1) and incubated in the dark for 20 min. DHE permeates into cells and is oxidized to ethidium when exposed to superoxide in a dose-dependent manner. Ethidium then intercalates with DNA and emits red fluorescence proportional to intracellular ROS [35, 36]. A blue laser (488 nm) was used for excitation, and DHE emission was collected at 585/40 nm. Cells in logarithmic phase grown in defined medium were used as live control, and cells treated with 70% ethanol for 20 min were used as positive control for analysis of both viability and ROS. Autofluorescence was measured for unstained exponentially growing cells (CEN.PK 113-7D).

A MoFlo XDP cell sorter (Beckman Coulter, USA) with physically separated laser lines was used for ratiometric flow cytometry analysis of pHluorin fluorescence, as described previously [37]. Calibration of the instrument was made with fluorescent calibration beads (SPHERO Ultra Rainbow fluorescent particles, 3.01 μm, Spherotech, USA). The threshold was set based on forward scatter obtained from the blue laser (488 nm), and 100,000 cells were collected at a rate of approximately 5000 events s−1. Excitation of pHluorin was made with a blue laser (488 nm) and a violet laser (405 nm), and the corresponding fluorescence emissions were collected with bandpass filters at 529/28 nm and 542/50 nm. The pH dependence of pHluorin was confirmed by measuring the ratio of fluorescence from the two excitation wavelengths, R405/488, in permeabilized cells in sodium phosphate buffer (0.2 M) at pH values ranging from 5.7 to 8. Permeabilization was made by incubating cells in PBS buffer (50 mM, pH 6.5) supplemented with digitonin (0.04 mM) for 15 min at room temperature on a turning table, as described previously [30].

Flow cytometry standard (FCS) data files were exported from the BD Accuri C6 software (BD Biosciences, USA) or saved directly from the Kaluza software (Beckman Coulter, USA) and analysed with FlowJo v10.1 (FlowJo, LLC, Ashland, OR, USA). For determination of the percentage of viable cells, the gate was defined based on the PI fluorescence (585/40 nm) of live and dead cell control samples.
The live control sample was exponentially growing cells in defined mineral medium, and the dead control was prepared by incubating cells in 70% ethanol for at least 30 min. An initial noise reduction was made for all samples by gating single cells based on the height and area of the forward scatter signal. The pHluorin-expressing population was gated from autofluorescence (measured with a CEN.PK 113-7D strain without expression of pHluorin) using a bivariate plot of emissions at 542/50 nm and 529/28 nm, with excitation by the violet laser (405 nm) and the blue laser (488 nm), respectively. The pHluorin excitation ratio, R (F405/F488), for each cell was subsequently calculated using the derived parameter function in FlowJo. R (F405/F488) for exponentially growing cells was used to differentiate cells with low and high pHi. The cut-off was defined as the value (R = 0.400) dividing the population into two fractions: the 1% lowest percentile (low pHi) and the 99% highest percentile (high pHi) of exponentially growing cells. High and low ROS levels were defined similarly as the fluorescence intensity (height) at 585/40 nm dividing the LP-cell population into the 1% lowest percentile (low ROS) and the 99% highest percentile (high ROS).

Tolerance to acetic acid is growth phase dependent

Acetic acid is one of the most important adversaries during fermentation of lignocellulosic substrates. Yeast cells display relatively high tolerance to acetic acid at low pH; however, the level of tolerance depends on several mechanisms that are differently regulated depending on additional extracellular conditions. To shed light on whether the acetic acid tolerance phenotype can be induced by carbon starvation instead of pre-cultivation in the presence of acetic acid, cells were cultivated in aerobic batch mode in a defined mineral medium and harvested at different growth phases, i.e. at log phase (LP) (8–12 h), early stationary phase (ESP) (18–24 h) and late stationary phase (48 h). In parallel, cells were also pre-cultivated in a medium with 6 g L−1 acetic acid at pH 5.0, since this condition was previously shown to induce the desired acid tolerance [11].

Supplementation of acetic acid in the pre-cultivation medium inhibited growth already at pH 5.0, as reflected by an extended diauxic phase as well as a reduced growth rate during glucose assimilation compared to the reference cultivation (Fig. 1a). With a pKa for acetic acid of 4.76, the undissociated fraction at pH 5.0 is \(1/(1+10^{\,\mathrm{pH}-\mathrm{p}K_\mathrm{a}}) \approx 0.37\), so the concentration of undissociated acetic acid experienced by the cells was approximately 2.2 g L−1. Although growth profiles differed substantially, the final biomass concentrations in the two pre-cultivations were the same with or without acetic acid in the medium. The cells of each of the pre-cultivations were re-inoculated in a new medium supplemented with 6 g L−1 acetic acid at pH 3.7.

Fig. 1 Yeast response to acetic acid and low pH. a Pre-cultivation of the S. cerevisiae TMB3500 strain in aerobic batch mode in defined mineral medium at pH 5.0 without (round) or with 6 g L−1 acetic acid (square). b Cell dry weight (full lines) and % viability (dotted lines) after 24 h aerobic batch cultivation in defined mineral medium at pH 3.7 with 6 g L−1 acetic acid, from pre-cultures at pH 5.0 with (blue) and without (grey) 6 g L−1 acetic acid. Error bars represent standard error of biological duplicates (n = 2)

The LP-cells not pre-exposed to acetic acid did not proliferate in the new medium (Fig. 1b).
The poor tolerance to acetic acid at low pH was also clear from the low viability after 24 h in the new medium (3–4% of the population). In contrast, pre-adapted cells that were pre-cultured for the same length of time (8–12 h) were able to generate a cell biomass of ca 2.5 ± 0.1 gdw L−1, and cell viability was around 90% after 24 h (Fig. 1b). The ESP-cells were able to initiate proliferation in the new medium whether or not they were pre-grown in the presence of acetic acid at pH 5.0, resulting in a high biomass titre (2.4 gdw L−1) and viability (87%) after 24 h. This demonstrates that ESP-cells indeed had an induced acetic acid tolerance. In late SP (48 h pre-cultivation), cells displayed a reduced ability to grow, thus demonstrating a lower tolerance to acetic acid at low pH. For this long pre-cultivation time, however, the biomass titre was substantially higher if cells were pre-adapted than if not (1.30 ± 0.65 gdw L−1 vs. 0.13 ± 0.01 gdw L−1). Survival was not correlated with growth for ESP- and late SP-cells, as shown by a high viability (>90%) even under static conditions (Fig. 1b).

Tolerance of ESP-cell subpopulations to combinations of lignocellulosic inhibitors

The conditional boundaries for ESP-cells to tolerate lignocellulosic inhibitors at pH 3.7 were further mapped by performing a series of cultivations in microtiter plates in media supplemented with different levels of acetic acid, vanillin and furfural according to a \(2^3\) factorial design (Table 2). In addition, we hypothesized that the observed increase in tolerance to acetic acid of ESP-cells was due to their differentiation into Q-cells, which were previously shown to possess increased tolerance to heat shock and oxidative stress compared to NQ-cells [24]. To investigate this further, Q- and NQ-cell subpopulations were separated, and both were analysed in parallel with the unsorted ESP-cells. Indeed, two cell fractions with different buoyant densities could be separated using flotation (Fig. 2), whereas cells in exponential phase could not, which is in accordance with previous studies [24, 25]. When analysed under the microscope, the Q-cells were round without buds, in contrast to the NQ-cells, which were a heterogeneous mixture of budding and non-budding cells (Fig. 2).

Fig. 2 Separation of S. cerevisiae TMB3500 in ESP using flotation. a Unsorted SP-cells. b Discontinuous density gradient after centrifugation. c Q-cells. d NQ-cells

The three cell populations (ESP-cells, Q-cells and NQ-cells) were each re-inoculated in 15 different medium compositions (M1–M15) and cell growth was monitored for 40 h. The concentration of glucose was kept low (20 g L−1) to avert osmotic stress on the yeast cells and to keep ethanol levels low, thereby removing these as significant factors on the growth of the populations. Growth was observed in all media except M6, M8, M10 and M14, although the final biomass generated differed substantially (Fig. 3). To distinguish the individual and combinatorial effects of inhibitors on growth, a three-way analysis of variance (ANOVA) was used separately on each dataset generated for each test (Additional file 1: Tables S1–3). The first measured time point (14 h) (Fig. 3a) was used as input for the ANOVA, since the largest effect of the different inocula was observed in the latency phase. From the ANOVAs, it was seen that acetic acid, furfural and vanillin each inhibited growth significantly (p < 0.001) for all inocula, although the effect of furfural was the smallest.
The synergistic effect of two inhibitors (acetic acid and vanillin, acetic acid and furfural, or vanillin and furfural) on growth was significant in all cases when using Q-cells as inoculum, indicating that this cell subpopulation required a longer latency period under the applied conditions (Additional file 1: Table S2). However, for unsorted ESP-cells and NQ-cells, a combinatorial effect was significant only for acetic acid and vanillin, and for none of the combinations with furfural (p > 0.01) (Additional file 1: Tables S1 and S3).

Fig. 3 Cell density after a 14 h and b 40 h cultivation in media M1–M15, with unsorted SP-cells (grey), Q-cells (blue) or NQ-cells (green) as inoculum. Error bars represent standard deviation of the mean for biological duplicates (n = 2), except for *M15 (n = 18)

A principal component analysis (PCA) of the three datasets combined was made to re-organize the information into new variables [principal components (PC)] that account for the majority of the variability in growth (Fig. 4). The input variables were set as the yeast responses in terms of biomass formed after 14 h for the unsorted ESP-cells, Q-cells and NQ-cells. PC1 explained 98% of the variance, with a high positive component loading for all subpopulations (unsorted ESP-cells, 0.62; Q-cells, 0.44; NQ-cells, 0.65), demonstrating that growth of the three inocula was similar in the different media, in agreement with the ANOVAs. A variation between the inocula was, however, seen in PC2 (2%), where Q-cells had a positive component loading (0.88), whereas the unsorted ESP-cells and NQ-cells were more similar to each other and had slightly negative loadings for biomass titre (unsorted ESP-cells, −0.42; NQ-cells, −0.22). The score plot showed that the largest influence of the different behaviour of Q-cells compared to ESP-cells and NQ-cells in both PC1 and PC2 came from growth in media M1, M3, M9 and M13, while the other media clustered together.

Fig. 4 PCA analysis of the factorial design experiment using unsorted SP-, Q- or NQ-cells as inoculum. Loading plot (squares) and score plot (triangles)

To further analyse potential differences between the three inocula, lag times, maximum growth rates and the final biomass in an intermediate inhibitory medium (3.5 g L−1 acetic acid, 1 g L−1 furfural and 1 g L−1 vanillin) were estimated by fitting the modified Gompertz equation (Eq. 1) [33] to the experimental data $$y = A\exp \left\{ {- \exp \left[ {\frac{{\mu _{\max } \times e}}{A}\left( {\lambda - t} \right) + 1} \right]} \right\}$$ where y is the logarithm of the relative population size [\(\ln (N/N_0)\)], A is the asymptote [\(\ln (N_\infty /N_0)\)], \(\mu _{\max }\) is the maximum specific growth rate (h−1), \(\lambda \) is the lag time (h), e is exp(1) and t is time (h). Modelling of growth responses for unsorted ESP-, Q- and NQ-cells revealed a small difference in lag time (18 and 7% longer for Q-cells than for unsorted ESP-cells and NQ-cells, respectively; p < 0.001) (Table 3; Additional file 2: Figure S1). The maximum specific growth rate and the asymptote, A, were similar for the different inocula, meaning that once growth started, the specific condition rather than the history of the population determined the growth rate and biomass yield.

Table 3 Fitting of the Gompertz equation to experimental data obtained for cultivation in medium 15 (n = 18)

Altogether, the different analyses pointed towards the conclusion that the shift into Q-cells did not determine the enhanced ability of ESP-cells to grow in the presence of inhibitors.
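In practice, the fit of Eq. (1) can be reproduced with any least-squares routine (the study used Excel's Solver). Below is a minimal Python sketch; the time series and OD values are hypothetical data from a single microtiter well.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, mu_max, lam):
    """Modified Gompertz model, Eq. (1): y = ln(N/N0)."""
    return A * np.exp(-np.exp(mu_max * np.e / A * (lam - t) + 1.0))

# Hypothetical sampling times (h) and OD620 readings from one well.
t_h = np.array([0, 4, 8, 12, 16, 20, 24, 30, 40], dtype=float)
od = np.array([0.50, 0.52, 0.60, 1.10, 2.40, 3.60, 4.10, 4.30, 4.30])
y = np.log(od / od[0])                       # relative population size

(A, mu_max, lam), _ = curve_fit(gompertz, t_h, y, p0=[2.0, 0.3, 8.0])
print(f"lag = {lam:.1f} h, mu_max = {mu_max:.2f} 1/h, A = {A:.2f}")
```

The fitted lag time \(\lambda \), maximum specific growth rate \(\mu _{\max }\) and asymptote A correspond to the quantities reported in Table 3.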
In fact, the trend was rather towards longer lag phases for Q-cells than for NQ-cells. On the other hand, since the lag time is determined by the number of viable cells at the start, it could also be that Q-cells were less tolerant to the inhibitors, resulting in an initial drop in viability. However, FCM analysis after incubation for 24 h in static media demonstrated that cell viability was similar for unsorted ESP-cells, Q- and NQ-cells (Fig. 5). For medium M14, cell viability was substantially lower than for the other static media, demonstrating that this specific combination of inhibitors was the most toxic.

Fig. 5 Cell viability after 24 h incubation in medium supplemented with different amounts of acetic acid, vanillin and furfural at pH 3.7. SP-cells (grey), Q-cells (blue) and NQ-cells (green). Error bars represent standard error of biological duplicates (n = 2)

Intracellular pH distribution responses to lignocellulosic inhibitors at low external pH

Determining viability might not be an appropriate means to distinguish the ability to initiate growth in the presence of lignocellulosic inhibitors at low pH, as the viability of cells in static media also remained very high. Instead, the number of cells that are capable of maintaining their pHi may be a better measure to predict the occurrence of growth when exposed to the harsh conditions. In a previous study, it was found that growth in the presence or absence of acetic acid at low extracellular pH correlated with the number of cells maintaining their pHi [38, 39]. To test this for ESP-cells, the response in pHi distribution in a CEN.PK strain over-expressing recombinant pHluorin was measured under lignocellulosic conditions by using ratiometric flow cytometry. Furthermore, measuring the response at the single-cell level could reveal any discrepancies in tolerance distribution within the different pre-culture populations. The CEN.PK background was chosen because it is a well-established model strain for physiological studies within the yeast community [40], and it allowed for easy introduction of the pHluorin reporter system using the URA3 marker. Further, it was previously established that induction of acetic acid tolerance at low pH is displayed both in TMB3500 and in the laboratory strain CEN.PK 113-7D [11], and is thus strain-independent.

The CEN.PK strain was pre-cultured with or without acetic acid at pH 5.0, harvested in LP or in ESP, and subsequently re-inoculated in defined mineral medium supplemented with 6 g L−1 acetic acid, 0.2 g L−1 vanillin and 0.75 g L−1 furfural at pH 4.5. Variation in pre-cultivation conditions indeed resulted in significant differences in growth profiles (Fig. 6, Additional file 3: Figure S2). LP-cells did not grow in the presence of inhibitors within the measured time interval of 24 h; they grew only in the control medium at pH 5.0 without supplementation of inhibitors (Additional file 4: Figure S3). Loss of viability could not explain the lack of growth of LP-cells, since viability was only slightly reduced, to about 85%, during the first hour, after which this level remained throughout the cultivation (Fig. 6). However, the number of LP-cells with a maintained physiological pHi was drastically reduced (Fig. 7a, e). Between the time of inoculation and stable acquisition of cells in the FCM analysis (2 min), 95% of the population had a reduction in the fluorescence ratio from 0.54 to 0.25 (Fig. 7a, d, e). This demonstrates that nearly the entire LP-cell population was sensitive to the inhibitory medium.
Interestingly, a subpopulation (34 ± 27%) with a transiently higher ratio was observed after 30 min, but after 4 h the frequency of cells with high pHi was close to zero (Fig. 7a, d, e).
Effect of pre-cultivation on subsequent cultivation of yeast (CEN.PK 113-5D expressing pHluorin, TMB3800) in inhibitory medium. OD620 (full lines) and viability (dotted lines) as a function of cultivation time for the following inocula: LP-cells (grey), pre-adapted cells (blue) and ESP-cells (green). Error bars represent standard error of biological duplicates (n = 2)
Evolution of pHi in different yeast cell inocula (CEN.PK 113-5D expressing pHluorin, TMB3800) upon exposure to a mixture of lignocellulosic inhibitors. Representative histograms from the following inocula: a LP-cells, b cells pre-adapted in medium supplemented with the inhibitors at pH 5, and c ESP-cells. The vertical line separates cells with high and low ratio (F405/F488) and was defined as the value (= 0.400) dividing the LP-cell inoculum (a) into its 1% lowest and 99% highest fractions. d Mean ratio (F405/F488) and e frequency of cells with high ratio (>0.400) for the three different inocula (LP-cells, grey bars; pre-adapted cells, blue bars; ESP-cells, green bars) measured over time. Error bars represent standard error of biological duplicates (n = 2). Ratio (F405/F488) is the ratio of pHluorin fluorescence emission collected at 542/50 and 533/30 nm originating from excitation at 405 and 488 nm, respectively. Ratiometric flow cytometry measurement was performed with a MoFlo cell sorter and data analysis was performed with FlowJo
Cells from the pre-adapted inoculum had a lower initial pHi than LP-cells in the medium without inhibitors (Fig. 7b, d, e). As cells were transferred to the medium with inhibitors at pH 4.5, a majority of the population (78 ± 1%) recovered to a higher pH and maintained it over the measured time period (Fig. 7b, d, e). The more uniform pHi response to the shift in environment and the low fraction of cells with low pHi indicate that inhibitor tolerance was relatively homogeneously distributed within the pre-adapted cell population. Finally, cell populations in ESP cultures displayed a high degree of heterogeneity, with a high frequency of cells with low pHi prior to inhibitor exposure (Fig. 7c–e). Upon transfer to the inhibitory medium, a distinct subpopulation had an increase in ratio to a level similar to that of the pre-adapted cells, while a majority of cells kept a low pHi over the 4-h incubation period. Despite this, cells with low pHi were still viable as measured with FCM analysis of PI-stained cells (Fig. 6). Altogether, this indicates that the ability of ESP populations to initiate growth under lignocellulosic conditions is only present in a fraction of the cells.
ROS level distribution responses to lignocellulosic inhibitors at low external pH
Low ROS levels have previously been associated with cell tolerance to multiple stress factors, e.g. exposure to furfural [41] or oxidative stress from hydrogen peroxide [42]. Therefore, we hypothesized that the ability to quench ROS contributed favourably to the acquired inhibitor tolerance observed herein for pre-adapted and ESP-cells. To test this, population responses upon exposure to the mixture of lignocellulosic inhibitors at low pH were analysed by FCM. It was found that all cultures consisted of cells with high or low ROS levels (Fig. 8), although the subpopulation distribution differed considerably (Fig. 8a–c, f). Mean ROS levels were ca.
twofold higher for LP-cells than for pre-adapted cells and ESP-cells (Fig. 8d). As cells were transferred to the inhibitory environment, a subpopulation of the pre-adapted inoculum had a dramatic reduction (ca. 3-fold) in ROS levels already within 2 min, whereas a slight increase was observed for LP-cells. ESP-cells displayed a higher degree of heterogeneity (Fig. 8e) and had a large cell fraction with low ROS levels (Fig. 8c, f) that was more stable than for the other two inocula over the measured time period. After 4 h, the differences between the inocula were levelled out, i.e. ROS levels for LP-cells were significantly reduced, while they were slightly increased for the pre-adapted cells (Fig. 8d).
Evolution of ROS in different yeast cell inocula (CEN.PK 113-7D) upon exposure to a mixture of lignocellulosic inhibitors. Representative histograms from the following inocula: a LP-cells, b cells pre-adapted in medium supplemented with the inhibitors at pH 5 and c ESP-cells. The vertical line separates cells with high and low ROS levels and was defined as the value (F585/40 = 52 × 10⁴ channel number) dividing the LP-cell inoculum (a) into its 1% lowest and 99% highest fractions. d Mean F585/40 and e coefficient of variation (CV) of F585/40 for the three different inocula (LP-cells, grey; pre-adapted cells, blue; ESP-cells, green) measured over time. f Frequency of cells with low ROS levels over time. Error bars represent standard error of biological duplicates (n = 2). F585/40 is the DHE fluorescence emission collected at 585/40 nm originating from excitation at 488 nm. Flow cytometry measurement was performed with a BD Accuri instrument and data analysis was performed with FlowJo. Histograms are from a representative experiment
ESP-cells displayed increased tolerance to lignocellulosic inhibitors at low pH
In this study, we demonstrated that early stationary phase cells have an increased tolerance to a combination of acetic acid, vanillin and furfural at low pH, as reflected in their immediate ability to initiate growth. The tolerance of ESP-cells was comparable to that obtained by pre-adaptation under moderately inhibitory conditions. This was in stark contrast with exponentially growing cells, which displayed a higher sensitivity towards all tested inhibitors. Allowing cells to enter ESP in the pre-cultivation step may therefore be beneficial from a process perspective, as it will shorten the latency phase of the fermentation. It is well established from previous studies that ESP-cells have high tolerance towards multiple stress factors [17, 19, 20]. Although the underlying mechanism behind tolerance is specific for each stress factor, cross-tolerance to different stressors is often observed and is generally ascribed to induction of the ESR upon nutrient depletion.
Inhibitor tolerance was similar for Q- and NQ-cells in ESP populations
SP-cells were previously found to consist of two subpopulations with distinct physiological states, i.e. the so-called Q- and NQ-cells [24]. The cell fractions can easily be separated from each other based on differences in buoyant cell density (δQ = 1.14 g mL−1 and δNQ = 1.10 g mL−1), for example, by density gradient centrifugation [24] or by flotation as described in the present study.
Q-cells consist of young mother cells and unbudded daughter cells and were previously shown to have high mitochondrial activity, low ROS levels, high levels of reserve carbohydrates (glycogen and trehalose), and a high ability to re-enter the cell cycle under nutrient-rich conditions. NQ-cells, on the other hand, are more heterogeneous and consist of cells with genomic instability, high levels of ROS and non-functional mitochondria, and display apoptotic characteristics. With regard to stress tolerance, Q-cells possess a higher ability to withstand heat shock and carbon starvation than NQ-cells [24, 26]. We therefore reasoned that the higher ability of ESP-cells to initiate growth in the presence of lignocellulosic inhibitors at low pH was due to differentiation into Q-cells. However, the ability of Q-cells to proliferate and remain viable was rather similar to that of NQ-cells over the whole range of tested conditions in our study. It follows that differentiation into Q-cells was not the underlying cause of the improved tolerance of ESP-cells. Engineering of Q/NQ distributions in the pre-cultivation step may therefore not be a way forward to minimize the lag phase of a lignocellulosic fermentation process. Q-cells actually had a slight prolongation of the lag phase upon inoculation into new medium, which may be because they are arrested in the G0 phase; once nutrients are provided, they require time for metabolic and structural rearrangements before entering the mitotic cell cycle. NQ-cells, on the other hand, are arrested in different phases of the cell cycle without clear preference for a specific phase, and did not require the same time before initiating proliferation. Furthermore, any negative effect of long-term starvation of NQ-cells, as observed previously [26], was not observed in ESP. It can be deduced from our results that cells in late SP would behave differently compared to ESP-cells.
Vanillin was biocidal in combination with acetic acid at low pH
Yeast was sensitive to all tested inhibitors, although the degree of inhibition differed depending on the physiological state of the population. Acetic acid and vanillin were most detrimental for cell fitness, while furfural was inhibitory to a lesser extent in the concentration range used in the present study. The latter was probably due to an inherently high capacity of yeast to reduce furfural to the corresponding non-inhibitory alcohol, since pre-adapted cells behaved similarly to ESP-cells despite having a manifold higher specific furfural reductase activity (Additional file 5: Figure S3). It cannot be ruled out that the mechanisms behind the acquired inhibitor tolerance were different between pre-adapted cells and ESP-cells, and that furfural detoxification contributed favourably to the former. Inhibition was in most cases static rather than cidal, except for the combination of high amounts of vanillin (1.84 g L−1) and moderate levels of acetic acid (3.5 g L−1). At these concentrations, both vanillin and acetic acid were static on their own despite the low pH (3.7). The static effect of vanillin has been described previously [43], but herein we demonstrated that it is biocidal in combination with acetic acid at low pH. The underlying reason for this is yet unknown, but it can be speculated that above a threshold concentration vanillin will inhibit carbon metabolism, resulting in an inability of the cell to efflux dissociated acetic acid and to keep pH homeostasis.
Hence, the combination of vanillin and acetic acid has a synergistic effect on growth inhibition. Furthermore, our results indicated that furfural and acetic acid had a lower synergistic effect on growth inhibition. This is in contrast to a previous study on the combinatorial effect of different inhibitors (acetic acid, furfural and p-hydroxybenzoic acid) in which it was found that furfural and acetic acid interacted synergistically on cell growth and ethanol production [44].
Tolerant cultures displayed characteristic population responses in pHi and ROS
Process control of yeast fitness distributions during fermentation requires identification of correlations between critical process parameters and cell properties that can be rapidly monitored at the single-cell level, such as with flow cytometry [45, 46]. Here we looked into pHi and ROS formation as two cell properties that correlate with inhibitor stress tolerance in yeast.
FCM measurement of intracellular pH
ESP-cells consist of two subpopulations with differences in intracellular pH, and addition of glucose in non-inhibitory medium results in recovery of physiological pH for the entire population [37]. By measuring responses in intracellular pH distribution as cells were transferred from the pre-culture to the inhibitory medium, we observed that only a subpopulation of ESP-cells was able to maintain their pHi. This implies that inhibitor tolerance of ESP cultures lies at the level of a subpopulation. A similar behaviour was previously observed for Zygosaccharomyces bailii, which was shown to have a 1000-fold higher fraction of cells tolerant to weak organic acids (e.g. benzoic acid, sorbic acid and acetic acid) in ESP than in exponentially growing cultures [18]. Tolerance to acetic acid has previously been found to correlate with the cell's pHi prior to exposure to the stress [47]. In our case, the average pHi of the ESP-cells was lower than that of LP-cells, which is in agreement with previous studies [30, 37]. Pre-adapted cells also had a lower pHi than LP-cells, and a majority of the population was able to recover physiological pH within 2 min. A low pHi may be beneficial due to a lower amount of dissociated acid in the cytoplasm at the moment the cells are exposed to the harsh lignocellulosic conditions. It is unclear how cells with initially low pHi are able to rapidly restore intracellular pH, as was the case for pre-adapted cells and a subpopulation of the ESP-cells (Fig. 7). Maintaining physiological pH is a requirement for a functional glycolysis [48], enabling ATP-driven proton export to keep pH homeostasis under acidic conditions (see review [49]). Rapid reduction in free inorganic phosphate due to the formation of fructose bisphosphate may play a role in bringing the pHi to neutral, as has been observed in Lactococcus lactis (see review [50]). Yet, there might be other specific mechanisms contributing to the reduction of acetic acid stress in combination with vanillin and furfural.
FCM measurement of ROS levels
A correlation between inhibitor tolerance and a high frequency of cells with low ROS levels was observed. This may be due to an increased capacity of inhibitor-tolerant populations to quench ROS formed spontaneously or as a consequence of inhibitor exposure [41]. ROS are formed continuously in mitochondria under aerobic conditions, and are generally at higher levels in exponentially growing cells than in SP-cells [51].
This may explain the observed discrepancies in inhibitor tolerance between pre-adapted cells, LP-cells and ESP-cells. The lower ROS levels of pre-adapted cells may be caused by inhibitor-specific mechanisms, for example, an increased ability to reduce furfural through an induced furfural reductase activity. Another contributing factor may be an increased level of glutathione, which scavenges ROS by non-enzymatic oxidation, and is involved in maintaining redox homeostasis in the cell via NADPH-dependent glutathione reductase (GLR1) [52]. It was previously shown that increasing intracellular glutathione levels by over-expressing the gene coding for γ-glutamylcysteine synthetase (GSH1) resulted in improved growth in non-detoxified spruce hydrolysate [53]. An alternative way to reduce ROS levels is to introduce a biosynthetic pathway to L-ascorbic acid (vitamin C), which functions as a scavenger of oxygen radicals [54]. Our results support reduction of ROS levels as a suitable target for improving tolerance to inhibitors.
Conclusions
In this study, we demonstrate that cells in early stationary phase have increased tolerance to lignocellulosic inhibitors at low pH. Thus, allowing cells to enter ESP by carbon starvation during pre-cultivation may be a useful strategy to improve productivity in batch processes that are based on actively growing cells as biocatalysts for bioconversions that are limited by high amounts of inhibitors and low pH. Furthermore, flow cytometry as a means to characterize population response profiles has been demonstrated to be a sophisticated tool for prediction of yeast behaviour. Herein, we found that the FCM-measured frequency of cells that recovered pHi and kept low ROS levels correlated with the ability of the yeast culture to initiate growth in harsh lignocellulosic conditions.
Abbreviations
LP: Log phase
ESP: Early stationary phase
SP: Stationary phase
NQ: Non-quiescent
pHi: Intracellular pH
ROS: Reactive oxygen species
ESR: Environmental stress response
FCM: Flow cytometry
Zhu JY, Pan XJ. Woody biomass pretreatment for cellulosic ethanol production: technology and energy consumption evaluation. Bioresource Technol. 2010;101:4992–5002. Limayem A, Ricke SC. Lignocellulosic biomass for bioethanol production: current perspectives, potential issues and future prospects. Prog Energy Combust Sci. 2012;38:449–67. Margeot A, Hahn-Hagerdal B, Edlund M, Slade R, Monot F. New improvements for lignocellulosic ethanol. Curr Opin Biotech. 2009;20:372–80. Chandel AK, Da Silva SS, Singh OV. Detoxification of lignocellulose hydrolysates: biochemical and metabolic engineering toward white biotechnology. Bioenerg Res. 2013;6:388–401. Piotrowski JS, Zhang Y, Sato T, Ong I, Keating D, Bates D, Landick R. Death by a thousand cuts: the challenges and diverse landscape of lignocellulosic hydrolysate inhibitors. Front Microbiol. 2014;5:90. Almeida JRM, Runquist D, Sànchez Nogué V, Lidén G, Gorwa-Grauslund MF. Stress-related challenges in pentose fermentation to ethanol by the yeast Saccharomyces cerevisiae. Biotechnol J. 2011;6:286–99. Caspeta L, Castillo T, Nielsen J. Modifying yeast tolerance to inhibitory conditions of ethanol production processes. Front Bioeng Biotechnol. 2015;3:184. Petersson A, Almeida JR, Modig T, Karhumaa K, Hahn-Hagerdal B, Gorwa-Grauslund MF, Lidén G. A 5-hydroxymethyl furfural reducing enzyme encoded by the Saccharomyces cerevisiae ADH6 gene conveys HMF tolerance. Yeast. 2006;23:455–64. Modig T, Almeida JR, Gorwa-Grauslund MF, Lidén G.
Variability of the response of Saccharomyces cerevisiae strains to lignocellulose hydrolysate. Biotechnol Bioeng. 2008;100:423–9. Shen Y, Li HX, Wang XN, Zhang XR, Hou J, Wang LF, et al. High vanillin tolerance of an evolved Saccharomyces cerevisiae strain owing to its enhanced vanillin reduction and antioxidative capacity. J Ind Microbiol Biotech. 2014;41:1637–45. Sànchez i Nogué V, Narayanan V, Gorwa-Grauslund M. Short-term adaptation improves the fermentation performance of Saccharomyces cerevisiae in the presence of acetic acid at low pH. Appl Microb Biotechnol. 2013;97:7517–25. Fernandes AR, Mira NP, Vargas RC, Canelhas I, Sá-Correia I. Saccharomyces cerevisiae adaptation to weak acids involves the transcription factor Haa1p and Haa1p-regulated genes. Biochem Bioph Res Co. 2005;337:95–103. Swinnen S, Henriques SF, Shrestha R, Ho PW, Sá-Correia I. Improvement of yeast tolerance to acetic acid through Haa1 transcription factor engineering: towards the underlying mechanisms. Microb Cell Fact. 2017;16:7. Nielsen F, Tomas-Pejo E, Olsson L, Wallberg O. Short-term adaptation during propagation improves the performance of xylose-fermenting Saccharomyces cerevisiae in simultaneous saccharification and co-fermentation. Biotechnol Biofuels. 2015;8:219. Narayanan V, Sànchez i Nogué V, van Niel EW, Gorwa-Grauslund MF. Adaptation to low pH and lignocellulosic inhibitors resulting in ethanolic fermentation and growth of Saccharomyces cerevisiae. AMB Express. 2016;6:59. Zakrzewska A, van Eikenhorst G, Burggraaff JE, Vis DJ, Hoefsloot H, Delneri D, et al. Genome-wide analysis of yeast stress survival and tolerance acquisition to analyze the central trade-off between growth rate and cellular robustness. Mol Biol Cell. 2011;22:4435–46. Lewis JG, Learmonth RP, Watson K. Role of growth phase and ethanol in freeze-thaw stress resistance of Saccharomyces cerevisiae. Appl Environ Microbiol. 1993;59:1065–71. Stratford M, Steels H, Nebe-von-Caron G, Avery SV, Novodvorska M, Archer DB. Population heterogeneity and dynamics in starter culture and lag phase adaptation of the spoilage yeast Zygosaccharomyces bailii to weak acid preservatives. Int J Food Microbiol. 2014;181:40–7. De Melo HF, Bonini BM, Thevelein J, Simões DA, Morais MA. Physiological and molecular analysis of the stress response of Saccharomyces cerevisiae imposed by strong inorganic acid with implication to industrial fermentations. J Appl Microbiol. 2010;109:116–27. Werner-Washburne M, Braun EL, Crawford ME, Peck VM. Stationary phase in Saccharomyces cerevisiae. Mol Microbiol. 1996;19:1159–66. Carlquist M, Fernandes RL, Helmark S, Heins AL, Lundin L, Sorensen SJ, et al. Physiological heterogeneities in microbial populations and implications for physical stress tolerance. Microb Cell Fact. 2012;11:94. Gasch A. The environmental stress response: a common yeast response to diverse environmental stresses. In: Topics in Current Genetics. 2003;2:11–70. Smets B, Ghillebert R, De Snijder P, Binda M, Swinnen E, De Virgilio C, Winderickx J. Life in the midst of scarcity: adaptations to nutrient availability in Saccharomyces cerevisiae. Curr Genet. 2010;56:1–32. Allen C, Buttner S, Aragon AD, Thomas JA, Meirelles O, Jaetao JE, et al. Isolation of quiescent and nonquiescent cells from yeast stationary-phase cultures. J Cell Biol. 2006;174:89–100. Davidson GS, Joe RM, Roy S, Meirelles O, Allen CP, Wilson MR, et al. The proteomics of quiescent and nonquiescent cell differentiation in yeast stationary-phase cultures. Mol Biol Cell. 2011;22:988–98.
Aragon AD, Rodriguez AL, Meirelles O, Roy S, Davidson GS, Tapia PH, et al. Characterization of differentiated quiescent and nonquiescent cells in yeast stationary-phase cultures. Mol Biol Cell. 2008;19:1271–80. Verduyn C, Postma E, Scheffers WA, Van Dijken JP. Effect of benzoic acid on metabolic fluxes in yeasts: a continuous-culture study on the regulation of respiration and alcoholic fermentation. Yeast. 1992;8:501–17. Hahn-Hägerdal B, Karhumaa K, Larsson CU, Gorwa-Grauslund M, Görgens J, van Zyl WH. Role of cultivation media in the development of yeast strains for large scale industrial use. Microb Cell Fact. 2005;4:31. Hanahan D. DNA cloning, volume 1. Glover D, editor. London: IRL Press; 1985. p. 109. Orij R, Postmus J, Ter Beek A, Brul S, Smits GJ. In vivo measurement of cytosolic and mitochondrial pH using a pH-sensitive GFP derivative in Saccharomyces cerevisiae reveals a relation between intracellular pH and growth. Microbiology. 2009;155:268–78. Gietz RD, Schiestl RH. High-efficiency yeast transformation using the LiAc/SS carrier DNA/PEG method. Nat Protoc. 2007;2:31–4. Parachin NS, Schelin J, Norling B, Radstrom P, Gorwa-Grauslund MF. Flotation as a tool for indirect DNA extraction from soil. Appl Microb Biotechnol. 2010;87:1927–33. Zwietering MH, Jongenburger I, Rombouts FM, van't Riet K. Modeling of the bacterial growth curve. Appl Environ Microbiol. 1990;56:1875–81. Deere D, Shen J, Vesey G, Bell P, Bissinger P, Veal D. Flow cytometry and cell sorting for yeast viability assessment and cell selection. Yeast. 1998;14:147–60. Landolfo S, Politi H, Angelozzi D, Mannazzu I. ROS accumulation and oxidative damage to cell structures in Saccharomyces cerevisiae wine strains during fermentation of high-sugar-containing medium. Biochim Biophys Acta. 2008;1780:892–8. Gomes A, Fernandes E, Lima JL. Fluorescence probes used for detection of reactive oxygen species. J Biochem Bioph Meth. 2005;65:45–80. Valkonen M, Mojzita D, Penttila M, Bencina M. Noninvasive high-throughput single-cell analysis of the intracellular pH of Saccharomyces cerevisiae by ratiometric flow cytometry. Appl Environ Microbiol. 2013;79:7179–87. Swinnen S, Fernandez-Nino M, Gonzalez-Ramos D, van Maris AJ, Nevoigt E. The fraction of cells that resume growth after acetic acid addition is a strain-dependent parameter of acetic acid tolerance in Saccharomyces cerevisiae. FEMS Yeast Res. 2014;14:642–53. Fernandez-Nino M, Marquina M, Swinnen S, Rodriguez-Porrata B, Nevoigt E, Arino J. The cytosolic pH of individual Saccharomyces cerevisiae cells is a key factor in acetic acid tolerance. Appl Environ Microbiol. 2015;81:7813–21. van Dijken JP, Bauer J, Brambilla L, Duboc P, Francois JM, Gancedo C, et al. An interlaboratory comparison of physiological and genetic properties of four Saccharomyces cerevisiae strains. Enzyme Microb Tech. 2000;26:706–14. Allen SA, Clark W, McCaffery JM, Cai Z, Lanctot A, Slininger PJ, et al. Furfural induces reactive oxygen species accumulation and cellular damage in Saccharomyces cerevisiae. Biotechnol Biofuels. 2010;3:2. Kim IS, Kim YS, Yoon HS. Glutathione reductase from Oryza sativa increases acquired tolerance to abiotic stresses in a genetically modified Saccharomyces cerevisiae strain. J Microbiol Biotechnol. 2012;22:1557–67. Fitzgerald DJ, Stratford M, Narbad A. Analysis of the inhibition of food spoilage yeasts by vanillin. Int J Food Microbiol. 2003;86:113–22. Palmqvist E, Grage H, Meinander NQ, Hahn-Hagerdal B.
Main and interaction effects of acetic acid, furfural, and p-hydroxybenzoic acid on growth and ethanol productivity of yeasts. Biotechnol Bioeng. 1999;63:46–55. Delvigne F, Zune Q, Lara AR, Al-Soud W, Sørensen SJ. Metabolic variability in bioprocessing: implications of microbial phenotypic heterogeneity. Trends Biotechnol. 2014;32:608–16. Lencastre Fernandes R, Nierychlo M, Lundin L, Pedersen AE, Puentes Tellez PE, Dutta A, et al. Experimental methods and modeling techniques for description of cell population heterogeneity. Biotechnol Adv. 2011;29:575–99. Stratford M, Steels H, Nebe-von-Caron G, Novodvorska M, Hayer K, Archer DB. Extreme resistance to weak-acid preservatives in the spoilage yeast Zygosaccharomyces bailii. Int J Food Microbiol. 2013;166:126–34. Pampulha ME, Loureiro-Dias MC. Activity of glycolytic enzymes of Saccharomyces cerevisiae in the presence of acetic acid. Appl Microb Biotechnol. 1990;34:375–80. Piper P, Calderon CO, Hatzixanthis K, Mollapour M. Weak acid adaptation: the stress response that confers yeasts with resistance to organic acid food preservatives. Microbiology. 2001;147:2635–42. Neves AR, Pool WA, Kok J, Kuipers OP, Santos H. Overview on sugar metabolism and its control in Lactococcus lactis – the input from in vivo NMR. FEMS Microbiol Rev. 2005;29:531–54. Drakulic T, Temple MD, Guido R, Jarolim S, Breitenbach M, Attfield PV, Dawes IW. Involvement of oxidative stress response genes in redox homeostasis, the level of reactive oxygen species, and ageing in Saccharomyces cerevisiae. FEMS Yeast Res. 2006;5:1215–28. Grant CM. Role of the glutathione/glutaredoxin and thioredoxin systems in yeast growth and response to stress conditions. Mol Microbiol. 2001;39:533–41. Ask M, Mapelli V, Hock H, Olsson L, Bettiga M. Engineering glutathione biosynthesis of Saccharomyces cerevisiae increases robustness to inhibitors in pretreated lignocellulosic materials. Microb Cell Fact. 2013;12:87. Branduardi P, Fossati T, Sauer M, Pagani R, Mattanovich D, Porro D. Biosynthesis of vitamin C by yeast leads to increased stress resistance. PLoS ONE. 2007;2:e1092.
VN, MGG and EvN participated in the design of the study. JS participated in the design of flotation experiments for the characterization of quiescent cells. VN and MC performed the experimental work and wrote the manuscript. MC conceived the study, and participated in the design and coordination of the study. All the authors read and approved the final manuscript.
We would like to thank Prof. Gertien Smits (University of Amsterdam, The Netherlands) for kindly providing us with the pHluorin plasmid. We would also like to thank Tim Berglund and Andreas Toytziaridis for technical assistance in strain cultivations, and Dr. Lisa Wasserstrom (Lund University) for construction of the pHluorin-expressing strain. The datasets supporting the conclusions of this article are included within the article. Strains constructed in the current study are available from the corresponding author on request. This work was financed by the Swedish Energy Agency (Energimyndigheten) (Grant Number P35350-1) and the Swedish Research Council FORMAS (Grant Number 229-2011-1052).
Division of Applied Microbiology, Department of Chemistry, Lund University, P.O. Box 124, SE 221 00, Lund, Sweden
Venkatachalam Narayanan, Jenny Schelin, Marie Gorwa-Grauslund, Ed WJ van Niel & Magnus Carlquist
Correspondence to Magnus Carlquist.
ANOVAs of the factorial design experiment using unsorted ESP-, Q- or NQ-cells as inoculum.
Fitting of the Gompertz equation to experimental data obtained for unsorted ESP-cells, Q-cells and NQ-cells.
Viability and growth of CEN.PK 113-7D (w/o pHluorin) in defined medium supplemented with lignocellulosic inhibitors.
Growth and fluorescence of CEN.PK 113-5D expressing pHluorin (TMB3800).
Specific furaldehyde reductase activity in crude cell extracts.
Narayanan, V., Schelin, J., Gorwa-Grauslund, M. et al. Increased lignocellulosic inhibitor tolerance of Saccharomyces cerevisiae cell populations in early stationary phase. Biotechnol Biofuels 10, 114 (2017). https://doi.org/10.1186/s13068-017-0794-0
Carbon starvation
Stress tolerance
Furfural
Quiescence
Population heterogeneity
Impact of vaccine prioritization strategies on mitigating COVID-19: an agent-based simulation study using an urban region in the United States
Hanisha Tatapudi1, Rachita Das2 & Tapas K. Das1
BMC Medical Research Methodology volume 21, Article number: 272 (2021)
Background
Approval of novel vaccines for COVID-19 had brought hope and expectations, but not without additional challenges. One central challenge was understanding how to appropriately prioritize the use of a limited supply of vaccines. This study examined the efficacy of the various vaccine prioritization strategies using the vaccination campaign underway in the U.S.
Methods
The study developed a granular agent-based simulation model for mimicking community spread of COVID-19 under various social interventions including full and partial closures, isolation and quarantine, use of face mask and contact tracing, and vaccination. The model was populated with parameters of disease natural history, as well as demographic and societal data for an urban community in the U.S. with 2.8 million residents. The model tracks daily numbers of infected, hospitalized, and deaths for all census age-groups. The model was calibrated using parameters for viral transmission and level of community circulation of individuals. Published data from the Florida COVID-19 dashboard was used to validate the model. Vaccination strategies were compared using a hypothesis test for pairwise comparisons.
Results
Three prioritization strategies were examined: a minor variant of CDC's recommendation, an age-stratified strategy, and a random strategy. The impact of vaccination was also contrasted with a no vaccination scenario. The study showed that the campaign against COVID-19 in the U.S. using vaccines developed by Pfizer/BioNTech and Moderna 1) reduced the cumulative number of infections by 10% and 2) helped the pandemic to subside below a small threshold of 100 daily new reported cases sooner by approximately a month when compared to no vaccination. A comparison of the prioritization strategies showed no significant difference in their impacts on pandemic mitigation.
Conclusions
The vaccines for COVID-19 were developed and approved much quicker than ever before. However, as per our model, the impact of vaccination on reducing cumulative infections was found to be limited (10%, as noted above). This limited impact is due to the explosive growth of infections that occurred prior to the start of vaccination, which significantly reduced the susceptible pool of the population for whom infection could be prevented. Hence, vaccination had a limited opportunity to reduce the cumulative number of infections. Another notable observation from our study is that instead of adhering strictly to a sequential prioritizing strategy, focus should perhaps be on distributing the vaccines among all eligible as quickly as possible, after providing for the most vulnerable. As much of the population worldwide is yet to be vaccinated, results from this study should aid public health decision makers in effectively allocating their limited vaccine supplies.
Background
SARS-CoV-2 and the resulting COVID-19 disease have been raging world-wide since early 2020, killing over 2.0 million globally and nearly 450,000 in the United States by the end of January 2021 [1]. A significant winter swell in cases was underway in the U.S. between November 2020 and January 2021 despite protective measures in place such as face mask usage, limited contact tracing, travel restrictions, social distancing practices, and partial community closures.
To combat this, many promising novel vaccines were developed, of which two (Pfizer/BioNTech and Moderna) were authorized for emergency use in mid-December 2020 by the U.S. Food and Drug Administration (USFDA) [2]. Data from initial trials of cohorts greater than 30,000 people showed that these vaccines, given in two doses, were safe and had ~ 95% effectiveness in preventing COVID-19 [3]. Vaccine deployment in the U.S. began soon after USFDA approval. Implementing an effective vaccination campaign was essential to dramatically reduce the infection, hospitalization, and death rates, but posed many unique challenges. Vaccine prioritization and allocation strategy was at the forefront of the challenges to effectively vaccinate communities. The strategy was influenced by a number of key factors: 1) limited initial vaccine supply in the months following release, 2) transmission and severity of COVID-19 varying by segment of the population, 3) vaccine approvals only for adults, and 4) acceptability and compliance in the community for two-dose vaccination [4]. The U.S. Centers for Disease Control and Prevention (CDC) released an outline prioritizing healthcare personnel, first responders, persons with high-risk medical conditions for COVID-19, and older adults > 65 years. These groups were given priority for vaccination in phase 1, when supply was limited. In phase 2 (supply increased to begin to meet demand) and phase 3 (supply was greater than demand), other population groups were vaccinated based on age and availability [5]. Vaccine allocation structures with basic similarities and some key differences were used by countries around the world. For example, after healthcare workers, France's vaccine allocation scheduled other general workers, regardless of age, who had been determined to be at high risk of contracting and spreading the virus due to contact with the general public. This included retail, school, transportation, and hospitality staff [6]. Such differences in vaccine prioritization strategies were untested at the time of the study and warranted modeling and examination. The goal of this paper was to investigate the impact of vaccination on the pandemic via outcome measures such as numbers of infected, hospitalized, and deaths in the months following December 15, 2020, when vaccination began in the U.S. Two specific objectives of our investigation were: 1) to assess the expected impact of the vaccination program on mitigating COVID-19, and 2) to inform public health officials on the comparative benefits, if any, of the different vaccine prioritization strategies. We conducted our investigation by using our agent-based (AB) simulation model for COVID-19 that was presented during the early phase of the pandemic [7]. For this study, we first extended calibration of our model until December 30, 2020 to ensure that our model appropriately tracked the explosive increase in cases that started with the onset of winter and the year-end holiday period in 2020. We then enhanced the AB simulation model by adding a framework for vaccination. This included: vaccination priorities for people based on attributes including profession and age, use of two different vaccines by Pfizer/BioNTech and Moderna with their contracted quantities and approximate delivery timelines, vaccine acceptance, transition period between each priority group, vaccination rate, and immunity growth for the vaccinated starting with the first dose.
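As an illustration of how such a vaccination framework's inputs could be organized in code, a minimal sketch is given below; all field names and the dose split between the two vaccines are assumptions made for illustration, not the study's exact values.

```python
# Illustrative sketch only: organizing the vaccination-framework inputs
# described above. Regional dose numbers below are assumed for illustration.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Vaccine:
    name: str
    doses_allotted: int     # doses assumed allotted to the modeled region
    dose_gap_days: int      # 21 for Pfizer/BioNTech, 28 for Moderna
    efficacy: float         # ~0.95 per trial data [3]

@dataclass
class Campaign:
    vaccines: List[Vaccine]
    acceptance: float = 0.60           # willingness to vaccinate [30, 31]
    group_transition_days: int = 30    # assumed gap between priority groups
    priority_groups: List[str] = field(default_factory=lambda: [
        "healthcare_and_nursing_home",
        "first_responders_educators_75plus",
        "ages_65_to_74",
        "ages_16_to_64",
    ])

campaign = Campaign(vaccines=[
    Vaccine("Pfizer/BioNTech", 1_690_000, 21, 0.95),  # assumed 2/3 of regional doses
    Vaccine("Moderna", 850_000, 28, 0.95),            # assumed 1/3 of regional doses
])
```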
As in [7], we implemented our calibrated AB model, augmented with vaccination, for Miami-Dade County of the U.S., with a 2.8 million population, which had been an epicenter of COVID-19 in the U.S. We conducted our investigation by implementing a number of prioritization strategies and obtaining the corresponding numbers of total infections, reported infections, hospitalizations, and deaths. The strategies implemented were 1) a minor variant of the CDC recommended strategy, 2) an age-stratified strategy, and 3) a random strategy. We also implemented a no vaccination case. These strategies are explained in a later section. We compared and contrasted the results to assess vaccination efficacy and the relative performances of the priority strategies. We made a number of key observations from the results, which we believe will help public health officials around the world to choose effective vaccine prioritization strategies to mitigate the negative impacts of COVID-19. On a global scale, equitable and ethical distribution of vaccines for all (low, medium, and high-income) countries is an important question. As the world leader in promoting global health, the World Health Organization (WHO) released an evidence-based framework for vaccine-specific recommendations [8]. WHO proposed vaccine prioritization for three potential scenarios of transmission: community transmission, sporadic cases or clusters of cases, and no cases. Each scenario has three stages and focuses on different risk groups. The COVID-19 pandemic resembles "community transmission." For this, the first stage focused on healthcare workers and older adults with the highest risk; the second stage continued the focus on older adults and people with comorbidities, sociodemographic groups, and educational staff; and the third stage focused on essential workers and social/employment groups unable to physically distance themselves. The National Academies of Sciences, Engineering, and Medicine (NASEM) developed a more comprehensive phased framework for equitable allocation of COVID-19 vaccine [9]. The first phase prioritized healthcare workers and first responders, people with high-risk comorbidities, and older adults in crowded living conditions; the second phase focused on K-12 school staff and child care workers, essential workers, people with moderate-risk comorbidities, people living in shelters, physically and mentally disabled people and staff that provide care, employment settings where social distancing was not possible, and remaining older adults; the third phase prioritized young adults, children, and workers; and the fourth phase included everyone else. No specific studies had been presented in the literature at the time of this research that evaluated the efficacy of the proposed vaccination priorities for mitigating COVID-19. A number of studies can be found in the literature on vaccination strategies for controlling outbreaks of other viruses. The work presented in [10] analyzes the effect of both the CDC-guided targeted vaccination strategy and a mass vaccination strategy for seasonal Influenza outbreaks in the U.S. The study found that a mass vaccination policy reaped the most benefits both in terms of cost and quality-adjusted life years (QALYs) lost. Authors in [11] use a genetic algorithm to find optimal vaccine distribution strategies that minimize illness and death for Influenza pandemics with age-specific attack rates similar to the 1957–1958 A(H2N2) Asian Influenza pandemic and the 1968–1969 A(H3N2) Hong Kong Influenza pandemic.
They considered coverage percentages under varying vaccine availability and developed an optimal vaccination approach that was 84% more effective than random vaccination. A study reported in [12] examined vaccination to prevent interpandemic Influenza for high-risk groups and children, and recommended concentrating on schoolchildren, who were most responsible for transmission, and then extending to high-risk groups. A compartmental model in [13] was used to develop optimal strategies to reduce the morbidity and mortality of the H1N1 pandemic. The study found that age-specific vaccination schedules had the most beneficial impact on mortality. It can be concluded from the above review of relevant literature that there is no 'one size fits all' strategy for vaccination to either prevent a pandemic outbreak or mitigate one. Virus epidemiology and corresponding disease characteristics, as well as the efficacy and supply of the vaccine, must be considered in developing an effective vaccination prioritization strategy. Our paper aims to address this need by presenting a detailed AB simulation modeling approach and using it to assess the efficacy of vaccine prioritization strategies for COVID-19. Published COVID-19 modeling approaches were either data-driven models, as in [14,15,16,17,18], or variants of SEIR-type compartmental models, as in [19,20,21,22,23]. Data-driven models are very well suited for understanding the past progression of a pandemic and also for estimating parameters characterizing virus epidemiology. However, these models offer limited ability to predict the future progression of a pandemic that could be dynamically evolving with regards to virus epidemiology, disease manifestations, and sociological conditions. Compartmental models, on the other hand, are aggregate in nature and do not adapt well to changing dynamics of disease transmission. An AB modeling approach is considered to be more suitable for a detailed accounting of individual attributes, specific disease natural history, and complex social interventions [24]. Hence we adopted the AB model approach to conduct our study.
Methods
The AB simulation-based methodology was particularized using data for Miami-Dade County of Florida, with 2.8 million population, an epicenter for COVID-19 spread in the South-Eastern United States. A step-by-step approach for building such a model for another region can be found in [7]. The methodology began by generating the individual people according to the U.S. census data that gives population attributes including age (see Table A1 in [7]) and occupational distribution (see Table A4 in [7]). Thereafter, it generated the households based on their composition characterized by the number of adults and children (see Table A2 in [7]). The model also generated, per census data, schools (see Table A3 in [7]) and the workplaces and other community locations (see Table A4 in [7]). Each person was assigned a workplace and household based on the numbers, sizes, and compositions of households, schools, and workplaces derived from census data (see Tables A2, A3, and A4 in [7] for distributions of households, schools, and workplaces, respectively). A daily (hour by hour) schedule was assigned to every individual, chosen from a set of alternative schedules based on their attributes. The schedules vary between weekdays and weekends and also depend on the prevailing social intervention orders (see Table A1 in Additional file 1).
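A minimal sketch (not the authors' code) of the census-driven population synthesis described above is shown below; the age-group and household-size distributions are illustrative placeholders.

```python
# Minimal sketch: build a synthetic population by sampling ages from a census
# age distribution and grouping people into households drawn from a
# household-size distribution. All shares below are illustrative placeholders.
import random

AGE_GROUPS = [(0, 14, 0.17), (15, 24, 0.12), (25, 54, 0.41),
              (55, 64, 0.13), (65, 100, 0.17)]      # (low, high, census share)
HH_SIZES = [(1, 0.28), (2, 0.32), (3, 0.17), (4, 0.14), (5, 0.09)]

def sample_age():
    low, high, _ = random.choices(AGE_GROUPS,
                                  weights=[w for _, _, w in AGE_GROUPS])[0]
    return random.randint(low, high)

def build_population(n_people):
    people, households, pid = [], [], 0
    while pid < n_people:
        size = random.choices([s for s, _ in HH_SIZES],
                              weights=[w for _, w in HH_SIZES])[0]
        members = []
        for _ in range(min(size, n_people - pid)):
            people.append({"id": pid, "age": sample_age(),
                           "household": len(households)})
            members.append(pid)
            pid += 1
        households.append(members)
    return people, households

people, households = build_population(10_000)
```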
The methodology incorporated means to implement a number of intervention orders including full and partial closures/reopening of schools and workplaces [7, 25], isolation and quarantine of infected individuals and household members, use of face mask, contact tracing of asymptomatic and pre-symptomatic individuals, and vaccination priorities. The timelines for interventions implemented in the model are summarized in Table 1.
Table 1 Social intervention order timeline for Miami-Dade County
The AB model also included a number of uncertainties such as 1) time-varying values of testing rates for symptomatic and asymptomatic, test sensitivity, and test result reporting delay (see Table A9 in [7]), 2) self-isolation compliance for symptomatic cases and quarantine compliance for susceptible household members (see Table A10 in [7]), 3) time-varying and age-specific probabilities of 'hospitalization among reported cases' and 'death among hospitalized' (see Tables A2 and A3 in Additional file 1), 4) mask usage compliance (100% compliance was assumed to reduce the transmission coefficient by 33% [26]), 5) contact tracing level (assumed to be at 15% based on [27, 28]), 6) percentage return to school (considered to be 50% based on [29]), 7) willingness to vaccinate (considered to be 60% based on survey results in [30, 31]), 8) variations in daily schedules during various phases of social interventions (see Table A5 in [7]), 9) percentage of asymptomatic among infected (assumed to be 35% [32]), and 10) vaccine efficacy (95% [3]). On the first day of the simulation, the model introduced a few initial infected cases in the community and began the social mixing process. Each day the model tracked the following for each person: 1) hourly movements and locations based on their daily schedules that depend on age, employment status, prevailing social intervention orders, and quarantine/isolation status; 2) hourly contacts with other susceptible and infected; 3) vaccination status and immunity; 4) force of infection accumulation; 5) start of infection; 6) visits/consultation with a doctor (if symptomatic and insured); 7) testing (if infected and visited/consulted a doctor, or asymptomatic chosen for testing either randomly or via contact tracing); 8) test reporting delay; 9) disease progression (if infected); 10) hospitalization (if infected and acutely ill); and 11) recovery or death (if infected). The AB model reports daily and cumulative values of actual infected, reported infections, hospitalized, and deaths, for each age category. A schematic diagram depicting the algorithmic sequence and parameter inputs for the AB simulation model is presented in Fig. 1.
Schematic of the AB model for mimicking COVID-19 spread under social interventions and vaccination in the U.S
For the susceptible, if vaccinated, since the exact immune response from the vaccine was not known at the time of the study, we assumed a linearly increasing partial immunity after they received the first dose, attaining full immunity 7 days after the second dose; we only considered vaccines made by Pfizer/BioNTech and Moderna, for which the second dose was administered 21 and 28 days after the first dose, respectively. The model updated the infection status of all individuals to account for new infections as well as disease progressions of infected individuals. A pseudo-code in Fig. A1 in Additional file 1 depicts the major elements and structure of the AB simulation program.
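The linear immunity ramp described above can be written compactly; the following is a minimal sketch of the assumed form, not the authors' code.

```python
# Minimal sketch of the assumed immunity ramp: protection grows linearly from
# the first dose and reaches full vaccine efficacy 7 days after the second dose.
def immunity(days_since_dose1: float,
             dose_gap_days: int = 21,   # 21 (Pfizer/BioNTech) or 28 (Moderna)
             efficacy: float = 0.95) -> float:
    """Fraction of full protection attained a given number of days after dose 1."""
    full_at = dose_gap_days + 7         # full immunity 7 days after dose 2
    if days_since_dose1 <= 0:
        return 0.0
    return efficacy * min(days_since_dose1 / full_at, 1.0)

# e.g. immunity(10) = 0.95 * 10 / 28 ≈ 0.34 on the Pfizer/BioNTech schedule
```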
For the infected, we considered the following epidemiological parameters: disease natural history with average lengths of latent, incubation, symptomatic, and recovery periods; distribution of infectiousness; percent asymptomatic; and fatality rate. The infected people were considered to follow a disease natural history as shown in Fig. 2, the parameters of which can be found in Table A6 of [7]. The model assumed that recovered cases became fully immune to further COVID-19 infections. However, since this assumption was not fully supported by data at the time of the study, people recovered from COVID-19 were also considered candidates for vaccination. The duration and intensity of infectiousness were guided by a lognormal density function (see Fig. 3). The function was truncated at the average length of the infectiousness period (which was considered to be 9.5 days). Asymptomatic cases were assumed to follow a similar infectiousness intensity profile but scaled by a factor Ck, as used in the force of infection calculation (1) (see Table A4 in Additional file 1).
Disease natural history of COVID-19 [7]
Lognormal distribution function for infectiousness profile of a COVID-19 case [7]
In what follows, we describe how we compute the force of infection and how it is used to determine the probability of infection of individuals. Each susceptible was assumed to ingest viral particles from each infected person they came in contact with during the day. The total amount of ingestion of viral particles for a susceptible i was measured as the force of infection (λi) using a modified version of the equation presented in [33]. The equation for force of infection (as presented in [33]) has three components to account for ingestions experienced from infected contacts at home, workplace/school/indoor community locations, and outdoor locations. As we had not found evidence in the literature of any significant COVID-19 transmissions at outdoor locations, we eliminated the third component and used only the first two elements of the equation, as shown in (1). The first component of (1) accounts for the daily force of infection experienced by a susceptible i from those infected at home, and the second component accounts for workplace/school/indoor community locations.
$${\lambda}_i={\sum}_{k\mid {h}_k={h}_i}\frac{I_k{\beta}_h\kappa \left(t-{\tau}_k\right){\rho}_k\left[1+{C}_k\left(\omega -1\right)\right]}{n_i^{\alpha }}+\sum_{j,k\mid {l}_k^j={l}_i^j}\frac{I_k{\beta}_p^j\kappa \left(t-{\tau}_k\right){\rho}_k\left[1+{C}_k\left(\omega -1\right)\right]}{m_i^j}$$
The definitions and values of the parameters of (1) can be found in Table A4 in Additional file 1. The daily force of infection was considered to accumulate. However, it was assumed that if a susceptible does not gather any additional force of infection (i.e., does not come in contact with any infected) for two consecutive days, the cumulative force of infection for the susceptible reduces to zero. At the end of each day, the cumulative value of λi was used to calculate the probability of infection for susceptible i as \(1-e^{-\lambda_i}\). This probability was used to classify a susceptible individual as infected in the simulation model. As observed in [7], though it was implemented for a specific region, our model is quite general in its usability for other urban regions with similar demography, societal characteristics, and intervention measures.
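To make the infection step concrete, the following is a minimal sketch (not the authors' code) of one location component of Eq. (1) and the daily infection draw; parameter names follow Eq. (1), and all values are illustrative.

```python
# Minimal sketch of Eq. (1) and the daily infection draw. Each infected contact
# k carries kappa = κ(t − τ_k) (infectiousness intensity) and rho = ρ_k; the
# asymptomatic flag plays the role of C_k, so [1 + C_k(ω − 1)] equals ω when
# the contact is asymptomatic and 1 otherwise. Values are illustrative.
import math, random

def location_force(contacts, beta, denom, omega=0.5):
    """One summand group of Eq. (1) for a single location (home: denom = n_i^α;
    workplace/school/community: denom = m_i^j)."""
    lam = 0.0
    for c in contacts:
        scale = omega if c["asymptomatic"] else 1.0
        lam += beta * c["kappa"] * c["rho"] * scale / denom
    return lam

def becomes_infected(total_lambda):
    """Daily infection draw with probability 1 − exp(−λ_i)."""
    return random.random() < 1.0 - math.exp(-total_lambda)

# Illustrative use: household of size 4 with two infected members
home = [{"kappa": 0.8, "rho": 1.0, "asymptomatic": False},
        {"kappa": 0.3, "rho": 1.0, "asymptomatic": True}]
lam_home = location_force(home, beta=0.6, denom=4 ** 0.8)  # α = 0.8 assumed
print(becomes_infected(lam_home))
```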
In our model, demographic inputs (age and household distribution, number of schools for various age groups, and number of workplaces of various types and sizes) were curated from both national and local census records. Social interventions vary from region to region, and the related parameters can be easily updated. Similarly, the data related to the epidemiology of COVID-19 were unlikely to significantly vary from one region to another, though some adjustments based on population demographics may be needed.
Model calibration
The AB model utilized a large number of demographic, epidemiological, and social intervention parameters. We kept almost all of the above parameters fixed at their respective chosen values and calibrated the model by changing values for only a few. The calibrated parameters include the transmission coefficients used in calculating the force of infection at home, work, school, and community places (βh and \({\beta}_p^j\)) (see Table A4 in Additional file 1). The choice of the values of the transmission coefficients was initially guided by [34] and thereafter adjusted at different points in time during the calibration period (until December 30, 2020). The only other parameters that were calibrated were the number of errands in the daily schedules under various intervention conditions and the percentage of workers in essential (e.g., healthcare, utility services, and grocery stores) and non-essential (e.g., offices and restaurants) workplaces who physically reported to work during different intervention periods (see Table A1 in Additional file 1). Calibration of the above parameters was done so that the daily cumulative numbers of reported infected cases from the AB simulation model closely matched the values published in the Florida COVID-19 dashboard until December 30, 2020. Figure 4 shows the cumulative values (with 95% confidence intervals) as well as the daily numbers of the reported infected cases, hospitalizations, and deaths as obtained from the simulation model. The dotted lines represent the actual numbers reported in the Florida COVID-19 dashboard for Miami-Dade County [35], the trend of which was closely followed by the numbers obtained from our AB simulation model. The actual numbers reported by the Florida COVID-19 dashboard showed large variations, which were due to reporting delays.
Validation graphs with cumulative and daily values of infected, reported, and deaths calibrated until Dec 30, 2020
Vaccine prioritization strategies
We used our AB model to examine the expected benefits of the ongoing vaccination in the U.S. using the limited supply of two types of vaccines developed by Pfizer/BioNTech and Moderna, which had emergency approval for distribution from the USFDA. We considered the number of vaccine doses that the two companies were contracted by the U.S. government to supply, which included the initial contracts for 100 million doses from each company and the more recent contract for an additional 100 million doses from Pfizer/BioNTech. This amounted to a total of 300 million doses, which could inoculate 150 million people with the two required doses. To our knowledge, the total supply was being apportioned among the states and the counties depending on the population. Florida had approximately 6.5% of the U.S. population, and Miami-Dade County had 13% of Florida's population.
Hence, we assumed that Miami-Dade County would receive approximately 2.54 million doses and would be able to vaccinate 1.27 million people out of the total 2.8 million population. We also assumed that the vaccine deliveries would occur in batches starting in late December 2020 and continuing till late June 2021. Our study goal was to first determine the extent of reduction in the number of infections, hospitalizations, and deaths that we could expect to realize from the vaccination process in comparison with if no vaccines were available. Thereafter, we conducted a comparative study between three different vaccination priority schemes to determine if the differences in outcomes (number of reported cases, hospitalized, and deaths) were statistically significant. The priority strategies that were examined are broadly described here; a more complete description is presented in Fig. 5. In the absence of a declared timeline for transition of eligibility from one priority group to the next, we assumed 30 days between transitions. This period was extended to allow all eligible and willing to be vaccinated when the phased vaccine supply fell short of the number of people in the eligible priority group. The first strategy that we implemented was a minor variant of the CDC recommended strategy: Priority 1: healthcare providers and nursing home residents; Priority 2: first responders, educators, and people of ages 75 and over; Priority 3: people of ages 65 to 74; Priority 4: people of ages 16 to 64. The CDC recommended strategy also included, in priority 3, people of ages 16 to 64 with specific health conditions. Since we did not track health conditions in our AB model, we limited our priority 3 to people of ages 65 and above only. The second strategy that we implemented was an age-stratified strategy: Priority 1: healthcare providers and nursing home residents; Priority 2: people of ages 65 and over; Priority 3: people of ages 55 to 64; Priority 4: people of ages 45 to 54; Priority 5: people of ages 16 to 44. The third strategy that we implemented was a random strategy: Priority 1: healthcare providers and nursing home residents; Priority 2: all people of ages 16 and over. People with prior COVID-19 history were not excluded, and 60% of the people were considered willing to vaccinate [30, 31].
Vaccine prioritization strategies examined using AB simulation model for COVID-19 in the U.S
Results
Figure 6 shows the plots indicating the growth over time in the number of actual reported vaccinations together with those from the three vaccination prioritization strategies that we implemented. The vaccination began on December 15, 2020. As evident from the figure, the vaccination growth of the CDC variant strategy aligned closely with the actual reported numbers. In the random strategy, the growth in vaccination occurred faster, as this strategy opened eligibility to all ages 16 and above sooner than the other strategies. The age-stratified strategy further staggered eligibility, and hence vaccination grew more slowly than under the CDC variant strategy. The flattening of the vaccination growth curves was representative of the limited vaccine supply that was considered in our simulation. The continued growth of the actual reported vaccinations (red dotted line) was indicative of the increase in vaccine supplies since the time of our model implementation in December 2020.
Cumulative values of actual reported and simulated vaccinations until July 31, 2021
Figure 7 shows the trends of the cumulative numbers of infected cases, reported cases, hospitalized, and deaths from the three vaccine prioritization strategies together with the no vaccination scenario until July 31, 2021 (the last day of simulation). Since the confidence intervals for the cumulative values of the strategies overlap significantly, for the purpose of comparison we chose to focus on the cumulative values on the last day of simulation. These are presented in Table 2, which summarizes the values and their 95% confidence intervals. The table also provides the time frame when the pandemic, according to the model, subsides, with the new reported cases falling below the threshold of 100. We note that the reported cases presented in Fig. 7 and Table 2 were obtained directly from the simulation model. However, the hospitalization and death numbers were calculated by applying the time-varying and age-specific probabilities (as presented in Tables A2 and A3 in Additional file 1) to the reported cases.
Impact of vaccination strategies on cumulative numbers of infected, reported, hospitalized, and deaths from COVID-19 until July 31, 2021
Table 2 Summary of expected cumulative values (with 95% confidence intervals) on July 31, 2021 obtained by the AB model for the vaccine prioritization strategies
Since the performance of the three vaccine prioritization strategies, as shown in Fig. 7, appeared to be similar, we conducted simple pairwise statistical comparisons of the numbers of reported cases (from column 3 of Table 2) using a hypothesis test. According to the test results, the variant of the CDC strategy produced a statistically significantly lower number of reported cases than no vaccination (p-value 0.0204). Similar results were found for the age-stratified and random strategies when compared to no vaccination. However, a comparison of the reported cases among the three vaccine prioritization strategies showed no statistically significant difference (p-values near 0.4). Pairwise comparison of the hospitalized and deaths (in columns 4 and 5 of Table 2) showed that the CDC variant strategy produced statistically significantly lower numbers compared to no vaccination (p-values 0.0014 and 0.0015, respectively). However, similar to the reported cases, the three vaccination strategies did not differ statistically among themselves in terms of hospitalizations and deaths. The numbers from Table 2 also indicate that the CDC variant strategy achieved a reduction of 9, 10, 9, and 11% for total infected, reported cases, hospitalized, and deaths, respectively, compared to the outcomes with no vaccination. Moreover, the CDC variant resulted in the pandemic subsiding below a small threshold of 100 new reported cases about a month sooner when compared with no vaccination. The CDC variant strategy also spared 5.6% of the population from being infected. The above seemingly low impact of vaccination may be attributed to the explosive growth of new reported cases that occurred in the winter months (Nov. 2020-Mar. 2021), which likely had significantly reduced the pool of susceptible people. Figure 8 offers a further visual comparison of the impact of vaccination strategies on the percentage of population infected and the total number of deaths. A few interesting observations can be made from the figure, as follows.
The random strategy yielded the lowest percentage of population infected (even lower than the CDC variant) as it made the age groups most active in social mixing eligible for vaccination sooner than all other strategies. For the same reason, however, the random strategy yielded more deaths than the CDC variant, as it caused a delay in the vaccination of the most vulnerable elderly population, who were not exclusively prioritized. Interestingly, the random strategy also performed better than the age-stratified strategy in both measures of infected and deaths.
Visualization of the impacts of vaccination strategies on the percentage of total population infected and the total numbers of deaths on July 31, 2021
We have developed a detailed agent-based simulation model for mimicking the spread of COVID-19 in an urban region (Miami-Dade County, Florida) of the U.S. The model was calibrated using transmission coefficients and parameters guiding the daily schedules of people, and was validated using the actual reported cases from the Florida COVID-19 dashboard until December 30, 2020 (see Fig. 4). On this validated model, we incorporated the vaccination process that started in the U.S. on December 15, 2020 using two different vaccines developed by Pfizer/BioNTech and Moderna. Based on the government contracts at the time of our study, we assumed availability of an estimated 2.54 million doses for Miami-Dade County to inoculate 1.27 million people (among the total population of 2.8 million) on a 2-dose regimen. Model results indicated that the use of the available vaccines reduced the spread of the virus and helped the pandemic to subside below a small threshold of 100 daily new cases by mid-May 2021, approximately a month sooner than if no vaccines were available. Also, the vaccination was shown to reduce the number of infections by approximately 10% compared to no vaccination, which translates to sparing 5.6% of the total population from being infected. We note that, even though the vaccines were developed and approved for human use at a much faster rate than ever accomplished before, the accelerated growth of the infections, especially with the onset of the winter in the northern hemisphere, reduced the expected benefits of vaccination. Another noteworthy finding of this study was that there were no statistical differences between the numbers of reported cases resulting from the different vaccination prioritization strategies that were tested. This information should give more latitude to decision makers in urban regions across the world for distribution of the limited supply of COVID-19 vaccine. Our results suggested that instead of adhering strictly to a sequential prioritizing strategy, focus should be on distributing the vaccines among all eligible as quickly as possible. Though our AB model is well suited to study future progression of COVID-19 and other similar respiratory type viruses, it has some limitations. The simulation model is highly granular, which, while a strength, presents the challenge of appropriately estimating the wide array of its input parameters. Though we have attempted to address this challenge by conducting a sensitivity analysis on some of the critical parameters, such as levels of face mask usage, contact tracing, societal closure, and school reopening, this analysis could be extended to many other parameters.
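As noted above, hospitalizations and deaths were not tracked directly in the simulation but were derived by applying time-varying, age-specific probabilities (Tables A2 and A3 in Additional file 1) to the reported cases. A minimal sketch of that post-processing step is given below; the month/age keys and all probability values are placeholders, not the calibrated values from the paper.

```python
# Reported cases per (month, age group), as produced by the AB simulation.
reported = {
    ("2020-12", "0-15"): 1800, ("2020-12", "16-64"): 9400, ("2020-12", "65+"): 2100,
    ("2021-01", "0-15"): 2500, ("2021-01", "16-64"): 12000, ("2021-01", "65+"): 2600,
}

# Illustrative time-varying, age-specific probabilities (placeholders).
p_hosp = {
    ("2020-12", "0-15"): 0.01, ("2020-12", "16-64"): 0.04, ("2020-12", "65+"): 0.18,
    ("2021-01", "0-15"): 0.01, ("2021-01", "16-64"): 0.04, ("2021-01", "65+"): 0.16,
}
p_death_given_hosp = {
    ("2020-12", "0-15"): 0.002, ("2020-12", "16-64"): 0.05, ("2020-12", "65+"): 0.25,
    ("2021-01", "0-15"): 0.002, ("2021-01", "16-64"): 0.05, ("2021-01", "65+"): 0.22,
}

# Expected hospitalizations, then deaths among the hospitalized, follow by
# multiplying through, mirroring how Tables A2 and A3 are applied.
hospitalized = {k: reported[k] * p_hosp[k] for k in reported}
deaths = {k: hospitalized[k] * p_death_given_hosp[k] for k in hospitalized}

print(f"total hospitalized: {sum(hospitalized.values()):.0f}")
print(f"total deaths:       {sum(deaths.values()):.0f}")
```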
As mentioned under vaccination strategies, our model did not include health conditions that were relevant to COVID-19 (like pulmonary disease, obesity, heart problems) as attributes for people. Hence, we were not able to implement one element of the CDC recommended prioritization strategy, which recommends people aged 16–64 years with underlying medical conditions to be considered in priority 3 (see Fig. 5). Also, we did not consider any vaccine wastage due to complexities associated with refrigeration, distribution, and human error. We also assumed that vaccination of all priority groups occurred uniformly over the eligibility periods considered, which may not reflect the reality. Also, at the time of the study, there was little available literature on the rate of immunity growth each day from the two-dose vaccines; therefore, we assumed a linear growth starting with the first dose and culminating (full immunity) 7 days after the second dose. Moreover, the model did not consider the emergence of new strains of the virus as the pandemic progressed, nor the lower level of immunity offered by the vaccine against breakthrough infections. Finally, changes in the virus behavior might impact the comparative outcomes of different vaccine prioritization strategies, as presented in this paper. At the time of the publication of our study, vaccine availability is still limited in many parts of the world. Our results will provide useful information for healthcare policy makers in judiciously allocating their COVID-19 vaccine supply among their population. We also believe that our findings on vaccine prioritization strategies will serve as a resource for the decision makers for future outbreaks of similar respiratory viruses. Finally, as only a limited number of studies examining vaccine prioritization strategies have appeared in the open literature, our research makes a significant addition.
AB: Agent-based
SEIR: Susceptible – exposed – infected – recovered/removed
SARS-CoV-2: Severe acute respiratory syndrome coronavirus 2
USFDA: United States Food and Drug Administration
NASEM: National Academy of Sciences, Engineering, and Medicine
QALY: Quality-adjusted life years
A. Jordan et al., "Coronavirus in the U.S.: Latest Map and Case Count," NYtimes.com, Jan. 25, 2021. Accessed: 25 Jan 2021. [Online]. Available: https://www.nytimes.com/interactive/2020/us/coronavirus-us-cases.html
"COVID-19 Vaccines," U.S. Food & Drug Administration, Jan. 12, 2021. Accessed: 25 Jan 2021. [Online]. Available: https://www.fda.gov/emergency-preparedness-and-response/coronavirus-disease-2019-covid-19/pfizer-biontech-covid-19-vaccine
"Pfizer and BioNTech conclude Phase 3 study of COVID-19 vaccine candidate, meeting all primary efficacy endpoints," Pfizer, Nov. 18, 2020. Accessed: 25 Jan 2021. [Online]. Available: https://www.pfizer.com/news/press-release/press-release-detail/pfizer-and-biontech-conclude-phase-3-study-covid-19-vaccine
Buckner J, Chowell G, Springborn M. Optimal dynamic prioritization of scarce COVID-19 vaccines. medRxiv. 2020. https://doi.org/10.1101/2020.09.22.20199174.
"How CDC Is Making COVID-19 Vaccine Recommendations," CDC, Dec. 30, 2020. Accessed: 25 Jan 2021. [Online]. Available: https://www.cdc.gov/coronavirus/2019-ncov/vaccines/recommendations-process.html
L. Roope, P. Clarke, and R. Duch, "Who should get the coronavirus vaccine first? France and the UK have different answers," The Conversation, Nov. 16, 2020. Accessed: 25 Jan 2021. [Online].
Available: https://theconversation.com/who-should-get-the-coronavirus-vaccine-first-france-and-the-uk-have-different-answers-149875
Tatapudi H, Das R, Das T. Impact assessment of full and partial stay-at-home orders, face mask usage, and contact tracing: An agent-based simulation study of COVID-19 for an urban region. Glob Epidemiol. 2020;2:100036. https://doi.org/10.1016/j.gloepi.2020.100036.
"Evidence to recommendations for COVID-19 vaccines: Evidence framework," WHO, Dec. 10, 2020. Accessed: 25 Jan 2021. [Online]. Available: https://www.who.int/publications/i/item/WHO-2019-nCoV-SAGE-Framework-Evidence-2020-1
"Framework for equitable allocation of COVID-19 vaccine." The National Academy Press. Accessed: 25 Jan 2021. [Online]. Available: https://www.nap.edu/read/25917/chapter/1
Clements K, Chancellor J, Kristin N, DeLong K, Thompson D. Cost-effectiveness of a recommendation of universal mass vaccination for seasonal influenza in the United States. Value Health. 2011;14:800–11. https://doi.org/10.1016/j.jval.2011.03.005.
Patel R, Longini I, Halloran E. Finding optimal vaccination strategies for pandemic influenza using genetic algorithms. J Theor Biol. 2005;234:201–12. https://doi.org/10.1016/j.jtbi.2004.11.032.
Longini I, Halloran E. Strategy for distribution of influenza vaccine to high-risk groups and children. Am J Epidemiol. 2005;161:303–6. https://doi.org/10.1093/aje/kwi053.
Knipl D, Röst G. Modelling the strategies for age specific vaccination scheduling during influenza pandemic outbreaks. Math Biosci Eng. 2011;8(11):123–39. https://doi.org/10.3934/mbe.2011.8.123.
Barmparis G, Tsironis G. Estimating the infection horizon of COVID-19 in eight countries with a data-driven approach. Chaos Solitons Fractals. 2020;135:109842. https://doi.org/10.1016/j.chaos.2020.109842.
Candido D, Claro I, Jesus J, Marciel de Souza W, Moreira F, Dellicour S. Evolution and epidemic spread of SARS-CoV-2 in Brazil. Science. 2020;369(6508):1255–60. https://doi.org/10.1126/science.abd2161.
Chintalapudi N, Battineni G, Amenta F. COVID-19 virus outbreak forecasting of registered and recovered cases after sixty day lockdown in Italy: a data driven model approach. J Microbiol Immunol Infect. 2020;53(3):396–403. https://doi.org/10.1016/j.jmii.2020.04.004.
Fang Y, Nie Y, Penny M. Transmission dynamics of the COVID-19 outbreak and effectiveness of government interventions: A data-driven analysis. J Med Virol. 2020;92(6):645–59. https://doi.org/10.1002/jmv.25750.
Yang S, Cao P, Du P, Wu Z, Zhang Z, Yang L. Early estimation of the case fatality rate of COVID-19 in mainland China: a data-driven analysis. Ann Transl Med. 2020;8(4):128. https://doi.org/10.21037/atm.2020.02.66.
Aleta A, Martin-Corral D, Piontti A, Ajelli M, Litvinova M, Chinazzi M. Modeling the impact of social distancing, testing, contact tracing and household quarantine on second-wave scenarios of the COVID-19 epidemic. medRxiv. 2020. https://doi.org/10.1101/2020.05.06.20092841.
Hou C, Chen J, Zhou Y, Hua L, Yuan J, He S. The effectiveness of quarantine of Wuhan city against the Corona Virus Disease 2019 (COVID-19): A well-mixed SEIR model analysis. J Med Virol. 2020;92(7):841–8. https://doi.org/10.1002/jmv.25827.
Peng L, Yang W, Zhang D, Zhuge C, Hong L. Epidemic analysis of COVID-19 in China by dynamical modeling. medRxiv. 2020. https://doi.org/10.1101/2020.02.16.20023465.
Yang Z, Zeng Z, Wang K, Wong S, Liang W, Zanin M. Modified SEIR and AI prediction of the epidemics trend of COVID-19 in China under public health interventions. J Thorac Dis. 2020;12(3):165–74.
https://doi.org/10.21037/jtd.2020.02.64.
Zhang Y, Jiang B, Yuan J, Tao Y. The impact of social distancing and epicenter lockdown on the COVID-19 epidemic in mainland China: A data-driven SEIQR model study. medRxiv. 2020. https://doi.org/10.1101/2020.03.04.20031187.
Chao D, Halloran E, Obenchain V, Longini I. FluTE, a publicly available stochastic influenza epidemic simulation model. PLoS Comput Biol. 2010;6(1):e1000656. https://doi.org/10.1371/journal.pcbi.1000656.
Tatapudi H, Das TK. Impact of school reopening on pandemic spread: a case study using an agent-based model for COVID-19. Infect Dis Model. 2021;6:839–47. https://doi.org/10.1016/j.idm.2021.06.007.
Chu DK, et al. Physical distancing, face masks, and eye protection to prevent person-to-person transmission of SARS-CoV-2 and COVID-19: a systematic review and meta-analysis. Lancet. 2020;395(10242):1973–87. https://doi.org/10.1016/S0140-6736(20)31142-9.
Vazquez C. Coronavirus in Miami-Dade: Contact tracing failures and talk of how to spend federal money. Local10, Jul. 27, 2020. https://www.local10.com/news/local/2020/07/27/coronavirus-in-miami-dade-contact-tracing-failures-and-talk-of-how-to-spend-federal-money/ (accessed 30 Jul 2021).
"Contact Tracing Is Failing in Many States. Here's Why," NY Times, Jul. 31, 2020. https://www.nytimes.com/2020/07/31/health/covid-contact-tracing-tests.html (accessed 30 Jul 2021).
Miami-Dade County Public Schools, "A guide to the reopening of Miami-Dade County public schools," Reopen smart return safe, Aug. 2020. http://pdfs.dadeschools.net/reopening/Reopen%20SMART%20Return%20SAFE%20-%20A%20Guide%20to%20the%20Reopening%20of%20Miami-Dade%20County%20Public%20Schools.pdf (accessed 03 Aug 2021).
M. Van Beusekom, "Survey: COVID vaccine willingness waned since April," Center for Infectious Disease Research and Policy, Dec. 30, 2020. Accessed: Jan. 28, 2021. [Online]. Available: https://www.cidrap.umn.edu/news-perspective/2020/12/survey-covid-vaccine-willingness-waned-april
L. Saad, "U.S. Readiness to Get COVID-19 Vaccine Steadies at 65%," Gallup News, Jan. 12, 2021. Accessed: Jan. 28, 2021. [Online]. Available: https://news.gallup.com/poll/328415/readiness-covid-vaccine-steadies.aspx
CDC, "COVID-19 Pandemic Planning Scenarios," Centers for Disease Control and Prevention. https://www.cdc.gov/coronavirus/2019-ncov/hcp/planning-scenarios-archive/planning-scenarios-2020-05-20.pdf (accessed 05 May 2021).
Ferguson NM, et al. Strategies for containing an emerging influenza pandemic in Southeast Asia. Nature. 2005;437(7056):209–14. https://doi.org/10.1038/nature04017.
Ferguson N, Cummings D, Fraser C, Cajka J, Cooley P, Burke D. Strategies for mitigating an influenza pandemic. Nature. 2006;442(7101):448–52. https://doi.org/10.1038/nature04795.
"Florida's COVID-19 Data and Surveillance Dashboard," Florida Department of Health, Division of Disease Control and Health Protection.
Department of Industrial and Management Systems Engineering, University of South Florida, Tampa, Florida, USA
Hanisha Tatapudi & Tapas K. Das
Miller School of Medicine, University of Miami, Miami, Florida, USA
Rachita Das
Hanisha Tatapudi: Conceived and designed the model, Selection of model input parameters and data gathering, Coding and testing of the model, Design and perform the experiments, Output analysis and review, Manuscript preparation and review.
Rachita Das: Selection of model input parameters and data gathering, Output analysis and review, Manuscript preparation and review.
Tapas K. Das: Conceived and designed the model, Selection of model input parameters and data gathering, Coding and testing of the model, Design and perform the experiments, Output analysis and review, Manuscript preparation and review.
The author(s) read and approved the final manuscript.
Correspondence to Hanisha Tatapudi.
Relevant guidelines and regulations were practiced where applicable in the gathering and handling of human data.
Additional file 1: Table A1. Social mixing parameters used in model calibration. Table A2. Time-varying and age-specific probability of hospitalization among reported cases from March – October 2020. Table A3. Time-varying and age-specific probability of death among hospitalized cases from March – October 2020. Table A4. Parameters of the force of infection (equation (1)). Figure A1. Pseudo-code for agent-based simulation model of COVID-19 with implementation of two-dose vaccines.
Tatapudi, H., Das, R. & Das, T.K. Impact of vaccine prioritization strategies on mitigating COVID-19: an agent-based simulation study using an urban region in the United States. BMC Med Res Methodol 21, 272 (2021). https://doi.org/10.1186/s12874-021-01458-9
Keywords: Vaccination strategies, Agent-based simulation model, Vaccination prioritization
An algorithm to generate possible binary grid patterns

This is my first question in this forum. I hope you will help me. I was asked by a colleague to solve this puzzle. The puzzle goes like this: I have an m x n grid where each cell can be in either of two states: on or off. Initially the cells are in the off state. I would like to have an algorithm that takes an integer input, l, and the grid, and gives me all the possible grid patterns where each pattern must have at least one off-block, where an off-block is a block of l contiguous cells all in the off state. There should not be a cell in an off state either before or after an off-block. An off-block can occur not only in the vertical and horizontal orientations but also in the diagonal orientations. The values of m and n can be as large as 32, and l is typically between 3 and 10; it cannot be greater than either m or n.

Let's assume we have a 4 x 4 grid, let l be 3, 0 denote the off state and 1 denote the on state.

Some valid patterns

|---|---|---|---|
| 0 | 0 | 1 | 0 |
| 0 | 1 | 0 | 0 |
| 0 | 0 | 0 | 1 |
| 1 | 1 | 0 | 0 |
|---|---|---|---|
has 3 off-blocks

|---|---|---|---|
| 1 | 0 | 1 | 0 |
| 1 | 0 | 1 | 1 |
| 1 | 1 | 0 | 1 |
| 1 | 1 | 0 | 0 |
|---|---|---|---|
has 1 off-block

Some invalid patterns

|---|---|---|---|
| 0 | 0 | 1 | 0 |
| 0 | 0 | 0 | 1 |
| 0 | 0 | 0 | 1 |
| 1 | 1 | 0 | 0 |
|---|---|---|---|
The main diagonal contains 4 (more than 3) contiguous cells in the off state.

|---|---|---|---|
| 1 | 0 | 1 | 0 |
| 1 | 0 | 1 | 1 |
| 1 | 1 | 0 | 1 |
| 1 | 1 | 0 | 1 |
|---|---|---|---|
Must contain at least one off-block.

My approaches

These are what I have come up with so far:

First approach: Brute force. My first approach was to list all the possible grid patterns and check each one of them to see if it satisfies the constraints. I immediately realized that this is not scalable. For instance, for a 4 x 4 grid I will have 2^16 possible grid patterns.

Second approach: After each off-block, turn on the next cell. This procedure is repeated both in the horizontal and the vertical orientations. Take for instance a 6 x 6 grid, and let l be 3. Initially, this is the grid:

|---|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 0 | 0 |
| 0 | 0 | 0 | 0 | 0 | 0 |
| 0 | 0 | 0 | 0 | 0 | 0 |
| 0 | 0 | 0 | 0 | 0 | 0 |
| 0 | 0 | 0 | 0 | 0 | 0 |
| 0 | 0 | 0 | 0 | 0 | 0 |
|---|---|---|---|---|---|

Starting from the first row, turn on the cell after each off-block along the horizontal orientation of each row and we get the following pattern:

|---|---|---|---|---|---|
| 0 | 0 | 0 | 1 | 0 | 0 |
| 0 | 0 | 0 | 1 | 0 | 0 |
| 0 | 0 | 0 | 1 | 0 | 0 |
| 0 | 0 | 0 | 1 | 0 | 0 |
| 0 | 0 | 0 | 1 | 0 | 0 |
| 0 | 0 | 0 | 1 | 0 | 0 |
|---|---|---|---|---|---|

Starting from the first column of the above grid, turn on the cell after each off-block along the vertical orientation of each column and we get the following pattern:

|---|---|---|---|---|---|
| 0 | 0 | 0 | 1 | 0 | 0 |
| 0 | 0 | 0 | 1 | 0 | 0 |
| 0 | 0 | 0 | 1 | 0 | 0 |
| 1 | 1 | 1 | 1 | 1 | 1 |
| 0 | 0 | 0 | 1 | 0 | 0 |
| 0 | 0 | 0 | 1 | 0 | 0 |
|---|---|---|---|---|---|

With this approach I can satisfy the first constraint. In order to get the other patterns I need to turn on the remaining off cells. The only option that I see is to adapt a brute-force search to deal with the remaining cells. Moreover, turning a cell on may permit turning a neighbouring cell off, resulting in a valid new pattern. For example, from the following pattern, where l is 2,

|---|---|---|---|
| 0 | 0 | 1 | 0 |
| 0 | 0 | 1 | 0 |
| 1 | 1 | 1 | 1 |
| 0 | 0 | 1 | 0 |
|---|---|---|---|

we can get a valid new pattern by turning on the cell at coordinate (1,1) and turning off the cell at coordinate (2,2).

Therefore, even with the second approach I cannot guarantee to produce all the possible patterns. I hope the above description is self-contained enough to understand the problem.

Haileyesus

Welcome to CS.SE!
What's the context where you ran across this problem? I encourage you to cite the source of the problem. Also, what approaches have you considered? What did you try, and where did you get stuck? We're happy to help you understand the concepts but just solving exercises for you is unlikely to achieve that. You might find this page helpful in improving your question. – D.W.♦ Mar 15 '17 at 20:04

When you say "number of contiguous cells", do you mean "number of contiguous cells that are on"? Finally... do you want to output a list of all such patterns? Or just output a count of the number of such patterns? Also, how large will typical values of n, m, l be? Can you edit your question to address this feedback? – D.W.♦ Mar 15 '17 at 20:06

@D.W. I meant at most l contiguous off cells. Yes, I want the list of patterns as the output. Typically, the values of n and m can be up to 32, but I could also have a larger grid. – Haileyesus Mar 15 '17 at 20:11

A straightforward recursive procedure would do the trick very efficiently. – Yuval Filmus Mar 15 '17 at 20:24

How large will typical values of l be? What's the source where you saw this problem? Please edit the question to incorporate all information into the question. We want questions to be self-contained, so people don't have to read the comments to understand what you are asking. – D.W.♦ Mar 15 '17 at 20:26

You want the set of grids that contains at least one contiguous off-block. There is a simple iterative algorithm for this. Enumerate the set of all locations and all orientations for the off-block. For each such placement, fill in the grid with that off-block, and fill in the two squares on either end of the off-block per your first constraint; then the rest of the squares can be filled in arbitrarily, so enumerate all possibilities for them. There will be exponentially many such filled-in grids, so this procedure will be computationally infeasible for all but the smallest grids. Moreover, every procedure that correctly solves this problem will be computationally infeasible: the output (list of valid solutions) is exponentially long, so it will take exponentially long just to construct that output. No amount of cleverness or algorithmic tricks will let you avoid this. It is an inherent consequence of the problem statement. The problem is simply not solvable (within any reasonable amount of time), except for small grids.

D.W.♦

Initially, this is what I had in mind but, as you have noticed, this is not computationally feasible. I have implemented the solution suggested by @Yuval Filmus but running the program gave me a grid with all the cells set to on except the bottom right corner. – Haileyesus Mar 16 '17 at 20:52

@Haileyesus, I edited my answer. See the last paragraph. – D.W.♦ Mar 16 '17 at 20:55

Thank you for the clarification. You are right: since there are exponentially many possible patterns, printing each of them is infeasible. – Haileyesus Mar 16 '17 at 20:57

@Haileyesus I'm afraid you haven't implemented my algorithm properly. My implementation produces many answers. – Yuval Filmus Mar 16 '17 at 22:14

@D.W. If I understand the OP's constraints correctly, then they disallow long off-blocks. It's not enough that one off-block has just the correct length, since there could be other off-blocks. Also, your algorithm enumerates some solutions more than once. – Yuval Filmus Mar 16 '17 at 22:18
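The placement-enumeration step of the answer above can be sketched as follows. This is our own Python illustration, not the answerer's code; the final filtering for duplicates and for overly long runs elsewhere in the grid (the issues raised in the comments) is deliberately left out.

```python
def off_block_placements(m, n, ell):
    """Yield (block, caps) for every placement of an off-block of length ell:
    `block` is the list of cells forced off, `caps` the in-grid cells just
    before and after the block that must be forced on."""
    for di, dj in [(0, 1), (1, 0), (1, 1), (1, -1)]:  # four orientations
        for i in range(m):
            for j in range(n):
                block = [(i + k * di, j + k * dj) for k in range(ell)]
                if not all(0 <= x < m and 0 <= y < n for x, y in block):
                    continue
                caps = [(i - di, j - dj), (i + ell * di, j + ell * dj)]
                caps = [(x, y) for x, y in caps if 0 <= x < m and 0 <= y < n]
                yield block, caps

# For each placement, the block cells are set off, the cap cells on, and the
# remaining m*n - len(block) - len(caps) cells enumerated arbitrarily
# (e.g. with itertools.product), which is where the exponential blow-up lives.
print(sum(1 for _ in off_block_placements(4, 4, 3)))  # placements in a 4x4 grid
```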
For an off-on matrix $M$, let $\ell(M)$ be the maximum contiguous run (horizontally, vertically, or diagonally, in both diagonal orientations) of off cells. You seem to be asking for the list of all $n\times m$ off-on matrices $M$ with $\ell(M) = \ell$. One way to generate this list is using the following recursive procedure, which gets as input a partially filled off-on matrix $M$: If $M$ is completely filled, output it if $\ell(M) = \ell$. Otherwise, locate the first unfilled cell in $M$, set it on, and run the procedure recursively. Set the same cell to off, and check whether this introduces a contiguous run of off cells of length $\ell+1$. If it doesn't, run the procedure recursively. Implementing the second step requires an ordering of the cells in the matrix. One possibility is to order the cells top to bottom and left to right. You run the entire procedure with the empty matrix as input, and then it prints all legal matrices. In practice I expect this procedure to be quite efficient, since (1) it seems that most of the time a choice which doesn't lead to an immediate run corresponds to a partially filled matrix which can be completed to a legal completely filled matrix, and (2) it seems that most matrices with $\ell(M) \leq \ell$ actually satisfy $\ell(M) = \ell$. You can test this empirically by comparing the number of generated matrices to the number of recursive calls (divided by $nm$).

I have a few doubts. What do you mean by a 'completely filled' matrix? Is it a matrix with all 0s or all 1s or a matrix that satisfies the constraints? – Haileyesus Mar 16 '17 at 18:02

A matrix with all offs and ons. – Yuval Filmus Mar 16 '17 at 18:02

If l < min(m, n), then a complete matrix in the first step will never satisfy the condition "L(M) == l"; hence, no output to print. Moreover, the next steps will make the matrix always incomplete. – Haileyesus Mar 16 '17 at 18:12

I'm not sure what you mean. You can program this algorithm and see that it works. I forgot to mention that you have to start it with the empty matrix. – Yuval Filmus Mar 16 '17 at 18:14

Yes, that's basically it, up to some simple optimization. – Yuval Filmus Mar 16 '17 at 18:21
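The recursive procedure in this answer translates almost directly into code. Below is a Python sketch written from the answer's outline, so treat it as an illustration rather than the answerer's own implementation; cells are filled top to bottom, left to right, and a 0 is pruned as soon as it lies on an off-run longer than $\ell$ among already-filled cells.

```python
DIRS = [(0, 1), (1, 0), (1, 1), (1, -1)]  # horizontal, vertical, two diagonals

def longest_off_run(grid, m, n):
    """Longest run of 0s in any of the four orientations."""
    best = 0
    for di, dj in DIRS:
        for i in range(m):
            for j in range(n):
                pi, pj = i - di, j - dj
                # Count only runs that start at (i, j).
                if grid[i][j] != 0 or (0 <= pi < m and 0 <= pj < n and grid[pi][pj] == 0):
                    continue
                run, x, y = 0, i, j
                while 0 <= x < m and 0 <= y < n and grid[x][y] == 0:
                    run, x, y = run + 1, x + di, y + dj
                best = max(best, run)
    return best

def run_through(grid, m, n, i, j, di, dj):
    """Length of the off-run through cell (i, j) along (di, dj); unfilled
    cells (None) terminate the run, so only filled 0s are counted."""
    run = 1
    for s in (1, -1):
        x, y = i + s * di, j + s * dj
        while 0 <= x < m and 0 <= y < n and grid[x][y] == 0:
            run, x, y = run + 1, x + s * di, y + s * dj
    return run

def all_patterns(m, n, ell):
    """Yield every m-by-n 0/1 grid whose longest off-run is exactly ell."""
    grid = [[None] * n for _ in range(m)]

    def fill(pos):
        if pos == m * n:
            if longest_off_run(grid, m, n) == ell:
                yield [row[:] for row in grid]
            return
        i, j = divmod(pos, n)
        grid[i][j] = 1          # try 'on'
        yield from fill(pos + 1)
        grid[i][j] = 0          # try 'off', pruning runs of length ell + 1
        if all(run_through(grid, m, n, i, j, di, dj) <= ell for di, dj in DIRS):
            yield from fill(pos + 1)
        grid[i][j] = None

    yield from fill(0)

print(sum(1 for _ in all_patterns(4, 4, 3)))  # number of valid 4x4 patterns
```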
The variational principle in quantum mechanics, lecture 6

This entry was posted on Thursday, May 5th, 2011 at 1:10 pm and is filed under teaching. (An open science weblog focussed on quantum information theory, condensed matter physics, and mathematical physics. These lecture notes can be found in pdf form here.) We follow, in part, the paper arXiv:1005.5284; for the material on fermionic Gaussian states we follow the paper quant-ph/0404180 closely.

In this lecture we'll describe a general strategy for approximately solving the many-body problem introduced in the previous lecture. The variational method is one such approximation; perturbation theory is another. The strategy of the variational principle is to use a problem we can solve to approximate a problem we can't. More precisely, suppose we want to solve a hard system with a given Hamiltonian; our plan of attack is to approximate it with a different "trial Hamiltonian" which has the same general "flavor" as the actual Hamiltonian, but which (in contrast) is actually solvable.

Quantum spin systems are simplified models that arise as approximations of systems of electrons moving in the presence of a regular array of binding atoms (see, e.g., Auerbach (1994), chapter 3, for an example derivation). The degrees of freedom of a quantum spin system are, as the name suggests, quantum spins, localised in a regular array. A convenient basis for a single spin-1/2 degree of freedom is provided by the eigenstates of the spin-z operator, and a general hamiltonian for a quantum spin system is a sum of terms, each of which acts nontrivially only on a small number of neighbouring spins. In this example we only consider an array of spin-1/2 degrees of freedom arranged in a regular one-dimensional lattice.

The model we study here is the transverse Ising model, written in terms of the Pauli sigma matrices. The first summation describes an interaction between neighbouring spins that encourages quantum spins to align along the spin-z axis; the second describes a transverse magnetic field which encourages the spins to line up along the spin-x axis. There is an obvious competition between these two terms. Indeed, the interplay between the two terms is sufficiently complex that the model exhibits a great deal of interesting physics, including a quantum phase transition. (The transverse Ising model is actually exactly solvable using a sophisticated map to a fermionic system, but we'll pretend we don't know this.) Since the energy of the system generically tends to infinity with the system size, it is convenient to focus, rather, on the energy density. Thus, if we want to understand such a model as the system becomes large, we must use another method than brute-force diagonalisation.

The idea behind mean-field theory is simple: we take as a variational class one that neglects all quantum correlations between particles and apply the variational method. It is fairly reasonable, although not a priori correct (why not?), to assume that the state minimising the energy density is itself translation invariant. However, the class of product states has the considerable downside that no member exhibits any spatial correlations: the expectation value of a product of observables at two different sites factorises. Exploiting translation invariance to drop the site subscripts on the Pauli sigma matrices, the expectation value of the energy becomes a function of a small number of variational parameters subject to a normalisation constraint. In one region of field strengths the resulting stationarity condition admits symmetry-broken extrema; outside this region there are only two extrema. At the boundary the energy density behaves nonanalytically, and this signifies the presence of a quantum phase transition. In this case the variational principle is known as Hartree-Fock theory; our treatment of the Helium atom in lecture 2 could be seen as an application of mean-field theory in an embryonic form. (Exercises: calculate the corresponding magnetisation for the mean-field solution we've derived; carry out a similar analysis for the antiferromagnetic Heisenberg model; first assume that the mean-field solution is translation invariant, then try relaxing this assumption by positing that the solution is only 2-periodic. What value do you get for the energy density in this case?)

We now turn to fermions. We consider a second-quantised lattice setting, where the fermion creation and annihilation operators are given by a finite set; you can think of an annihilation operator as removing a fermion from a given single-particle state. With periodic boundary conditions the hamiltonian describes fermions hopping on a ring with repulsive interactions between neighbouring sites; in this limit the model is translation invariant. Rather than expressing everything in terms of the non-hermitian creation and annihilation operators, it is convenient to introduce the hermitian Majorana fermion operators, analogous to the bosonic position and momentum operators. From the anticommutation relations it follows that the Majorana operators square to the identity and pairwise anticommute.

Grassmann numbers are built using an n-dimensional complex vector space: consider a basis and form products of basis elements. At the outset all we know is how to add or subtract these elements, i.e., there is no product operation defined on the vector space; we supply a product by defining it on basis elements and extending it by linearity to arbitrary elements. No other product relations are imposed. Indeed, it is possible to find 2^n linearly independent elements in total generated by the above relations for n generators. Suppose now that an operator is some linear combination of products of Majorana fermion operators; we can naturally associate a Grassmann number to such an operator by replacing Majorana operators with Grassmann generators. (Warning: this is a map of linear spaces only; the product operation is not preserved by this operation.)

Definition 1. A quantum state of fermionic modes is Gaussian if and only if its density operator has a Gaussian Grassmann representation, i.e., a Gaussian weight determined by some antisymmetric matrix. This matrix is called the correlation matrix of the state. The correlation matrix completely characterises the state via Wick's theorem, because the expectation value of any higher-order monomial of fermion operators may be computed using a formula involving the Pfaffian of the submatrix of the correlation matrix with the indicated rows and columns. Any real antisymmetric matrix can be converted into a block-diagonal form by an appropriate choice of rotation; the absolute values of the resulting block coefficients are the Williamson eigenvalues. In order to describe a legal quantum state, the Williamson eigenvalues of the correlation matrix must all lie in [-1, 1]; if the state is pure then they are all equal to 1 (see, e.g., Nielsen and Chuang (2000)). It follows that any Gaussian state may be transformed, via an appropriate rotation, into a product form. The gaussian or quasi-free fermion states are thus morally analogous to the product states we studied above (the analogy is that in both cases a system whose state is product/gaussian may be thought of as not interacting).

We finally come to the formulation of generalised Hartree-Fock theory. Let's now apply the variational principle using as our variational class the set of all Gaussian states, both mixed and pure. This is greatly simplified by noticing that the expectation value of the energy depends only on the correlation matrix. Notice what a huge simplification this is: to specify our state we need only specify the numbers defining the upper-triangular portion of the correlation matrix, and the energy is a function purely of these numbers. Generalised Hartree-Fock theory is then to carry out the minimisation. Additionally, symmetries may allow us to compute the objective function efficiently. This minimisation is far from trivial for arbitrary hamiltonians, and we must take recourse, in general, to numerical gradient-descent methods. However, we have made a huge saving because this problem can at least be stored in a computer's memory for large systems, in contrast to the situation where non-Gaussian states are considered.

A reader remarks: "First, I suppose in eq. (28) one of the H_t is meant to be H_s. Also, I want to address the question on the first example, of why the translation invariance is not a priori reasonable: the limit N -> infinity would still look like leaving j = 0 as an open end to the wave function."
Advances in Mathematics of Communications
August 2014, 8(3): 281-296. doi: 10.3934/amc.2014.8.281

On the dual code of points and generators on the Hermitian variety $\mathcal{H}(2n+1,q^{2})$

M. De Boeck and P. Vandendriessche

UGent, Department of Mathematics, Krijgslaan 281-S22, 9000 Gent, Flanders, Belgium

Received March 2013 Published August 2014

We study the dual linear code of points and generators on a non-singular Hermitian variety $\mathcal{H}(2n+1,q^2)$. We improve the earlier results for $n=2$, we solve the minimum distance problem for general $n$, we classify the $n$ smallest types of code words and we characterize the small weight code words as being a linear combination of these $n$ types.

Keywords: Dual code, small weight code words, Hermitian varieties.

Mathematics Subject Classification: Primary: 51E20, 51E22; Secondary: 05B25, 94B05.

Citation: M. De Boeck, P. Vandendriessche. On the dual code of points and generators on the Hermitian variety $\mathcal{H}(2n+1,q^{2})$. Advances in Mathematics of Communications, 2014, 8 (3) : 281-296. doi: 10.3934/amc.2014.8.281
Advances in Mathematics of Communications, 2016, 10 (2) : 393-399. doi: 10.3934/amc.2016013 Laura Luzzi, Ghaya Rekaya-Ben Othman, Jean-Claude Belfiore. Algebraic reduction for the Golden Code. Advances in Mathematics of Communications, 2012, 6 (1) : 1-26. doi: 10.3934/amc.2012.6.1 Irene Márquez-Corbella, Edgar Martínez-Moro, Emilio Suárez-Canedo. On the ideal associated to a linear code. Advances in Mathematics of Communications, 2016, 10 (2) : 229-254. doi: 10.3934/amc.2016003 Serhii Dyshko. On extendability of additive code isometries. Advances in Mathematics of Communications, 2016, 10 (1) : 45-52. doi: 10.3934/amc.2016.10.45 Sihuang Hu, Gabriele Nebe. There is no $[24,12,9]$ doubly-even self-dual code over $\mathbb F_4$. Advances in Mathematics of Communications, 2016, 10 (3) : 583-588. doi: 10.3934/amc.2016027 Olof Heden. The partial order of perfect codes associated to a perfect code. Advances in Mathematics of Communications, 2007, 1 (4) : 399-412. doi: 10.3934/amc.2007.1.399 Selim Esedoḡlu, Fadil Santosa. Error estimates for a bar code reconstruction method. Discrete & Continuous Dynamical Systems - B, 2012, 17 (6) : 1889-1902. doi: 10.3934/dcdsb.2012.17.1889 Martino Borello, Francesca Dalla Volta, Gabriele Nebe. The automorphism group of a self-dual $[72,36,16]$ code does not contain $\mathcal S_3$, $\mathcal A_4$ or $D_8$. Advances in Mathematics of Communications, 2013, 7 (4) : 503-510. doi: 10.3934/amc.2013.7.503 M. Delgado Pineda, E. A. Galperin, P. Jiménez Guerra. MAPLE code of the cubic algorithm for multiobjective optimization with box constraints. Numerical Algebra, Control & Optimization, 2013, 3 (3) : 407-424. doi: 10.3934/naco.2013.3.407 Andrew Klapper, Andrew Mertz. The two covering radius of the two error correcting BCH code. Advances in Mathematics of Communications, 2009, 3 (1) : 83-95. doi: 10.3934/amc.2009.3.83 José Gómez-Torrecillas, F. J. Lobillo, Gabriel Navarro. Information--bit error rate and false positives in an MDS code. Advances in Mathematics of Communications, 2015, 9 (2) : 149-168. doi: 10.3934/amc.2015.9.149 Jorge P. Arpasi. On the non-Abelian group code capacity of memoryless channels. Advances in Mathematics of Communications, 2019, 0 (0) : 0-0. doi: 10.3934/amc.2020058 Michael Kiermaier, Johannes Zwanzger. A $\mathbb Z$4-linear code of high minimum Lee distance derived from a hyperoval. Advances in Mathematics of Communications, 2011, 5 (2) : 275-286. doi: 10.3934/amc.2011.5.275 Anna-Lena Horlemann-Trautmann, Kyle Marshall. New criteria for MRD and Gabidulin codes and some Rank-Metric code constructions. Advances in Mathematics of Communications, 2017, 11 (3) : 533-548. doi: 10.3934/amc.2017042 Jianying Fang. 5-SEEDs from the lifted Golay code of length 24 over Z4. Advances in Mathematics of Communications, 2017, 11 (1) : 259-266. doi: 10.3934/amc.2017017 Pedro Branco. A post-quantum UC-commitment scheme in the global random oracle model from code-based assumptions. Advances in Mathematics of Communications, 2019, 0 (0) : 0-0. doi: 10.3934/amc.2020046 Bram van Asch, Frans Martens. Lee weight enumerators of self-dual codes and theta functions. Advances in Mathematics of Communications, 2008, 2 (4) : 393-402. doi: 10.3934/amc.2008.2.393 Daniel Heinlein, Michael Kiermaier, Sascha Kurz, Alfred Wassermann. A subspace code of size $ \bf{333} $ in the setting of a binary $ \bf{q} $-analog of the Fano plane. Advances in Mathematics of Communications, 2019, 13 (3) : 457-475. doi: 10.3934/amc.2019029 M. De Boeck P. Vandendriessche
CommonCrawl
Towards distribution-based control of social networks

Dave McKenney & Tony White

Complex networks are found in many domains and the control of these networks is a research topic that continues to draw increasing attention. This paper proposes a method of network control that attempts to maintain a specified target distribution of the network state. In contrast to many existing network control research works, which focus exclusively on structural analysis of the network, this paper also accounts for user actions/behaviours within the network control problem. This paper proposes and makes use of a novel distribution-based control method. The control approach is applied within a simulation of the real-valued voter model, which could have applications in problems such as the avoidance of consensus or extremism. The network control problem under consideration is investigated using various theoretical network types, including scale free, random, and small world. It is argued that a distribution-based control approach may be more appropriate for several types of social control problems, in which the exact state of the system is of less interest than the overall system behaviour. The preliminary results presented in this paper demonstrate that a standard reinforcement learning approach is capable of learning a control signal selection policy to prevent the network state distribution from straying far from a specified target distribution. In summary, the results presented in this paper demonstrate the feasibility of a distribution-based control solution within the simulated problem. Additionally, several interesting questions arise from these results and are discussed as potential future work.

Complex networks, including social, communication, and even financial networks, are constantly increasing in prevalence. As the behaviour of these networked systems can have important consequences, interest in the development of methods to control them to either achieve a goal state or avoid undesirable states is also increasing. A significant amount of research has been dedicated to the structural analysis of complex networks, especially how structure relates to what is known as the full state controllability of a system [1]. Much of this existing work, though, has ignored the behavioural aspect of these systems and has only answered questions relating to the identification of control structures. However, it has been observed that ignoring behaviour can lead to naive control solutions [2]. In addition to this, the control target within these works is generally a single point within the vector space model of the system state. In many cases, especially scenarios involving crowding, flocking and consensus, the overall state and behaviour of the system is of more interest than reaching some exact network state specification. This work proposes distribution-based control as a method to control these types of systems, where we are interested in both the identification of a control node set (structural) and the generation of control signals (behavioural) to maintain some state distribution within the network. Using a distribution as a control target, we can define a more general control target within the system when compared to a point-based approach. For example, a normal distribution is used as a target in the work presented here, which could simultaneously address the problems of avoiding consensus and extremism of opinion within a social network.
Another possible example would be the use of an exponential/gamma distribution of some influence measure on a social system to limit the percentage of the population that is capable of significantly influencing a large portion of others, which could help to limit the rate of change of the system's state. The essential requirement of distribution-based control is the ability to measure distance between distributions within a particular time interval. This paper makes several contributions. First, we introduce the problem of distribution-based network control, which is an evolution of the more general network control problem (NCP) formalized by [3]. Second, we describe the general architecture for a distribution-based control system. Third, we present preliminary results demonstrating the application of distribution-based control within the real-valued voter model, which has been used previously in network control research. These findings demonstrate the feasibility of using a basic learning technique (reinforcement learning) to develop successful control strategies for the distribution-based control problem. The results also bring to light several interesting questions related to network control in general, which are identified as areas for future work.

The remainder of this paper is outlined as follows. "Related work" discusses existing research within the network control domain, especially that which is related to the full state controllability of networked systems, and identifies particular deficiencies within the existing research that this work aims to address. It also briefly discusses a previously defined problem based on the real-valued voter model, which is related to the problem studied in this work. "Distribution-based control" describes the general architecture of a distribution-based control system and discusses the formulation of the distribution control problem that is used within this work. The experimental model that we attempt to control and the learning algorithm used to develop a control policy are outlined in "Experimental model" and "Learning a control policy", respectively. Results demonstrating the efficacy of the learned control policies across three different types of theoretical networks are included in "Results". Finally, the paper concludes with a discussion of future work directions and a summary of the main conclusions in "Future work" and "Conclusions".

Related work

Network control

There is a significant amount of existing research relating to the control of complex networks. A large proportion of this work relates to the analysis of network controllability from the perspective of full state controllability. A system, such as a complex network, is said to be fully state controllable if it is possible to move the system from any initial state \(\mathbf{x}\) to any other possible state \(\mathbf{y}\) in finite time [4]. The work of [1] provided an in-depth analysis of the full state controllability of linear time-invariant systems, proposing algorithms for the identification of a minimal set of control nodes. Using the structural controllability formulation of [5], [6] built upon the work of [1] by identifying structural properties that require additional control inputs. The work of [1], which was limited to directed networks, has also been generalized by [7] to produce an algorithm to identify a minimal set of control nodes within networks with arbitrary structure.
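To ground the notion of full state controllability that these works build on, the standard Kalman rank condition for linear time-invariant dynamics \(\dot{\mathbf{x}} = A\mathbf{x} + B\mathbf{u}\) can be checked in a few lines. This sketch is illustrative only and is not taken from the cited works; the toy matrices are our own example.

```python
import numpy as np

def is_fully_controllable(A, B):
    """Kalman rank test: x' = Ax + Bu is fully state controllable
    iff the matrix [B, AB, ..., A^(n-1)B] has rank n."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks)) == n

# Toy example: a directed 3-node chain driven at its first node.
A = np.array([[0., 0., 0.],
              [1., 0., 0.],
              [0., 1., 0.]])
B = np.array([[1.], [0.], [0.]])
print(is_fully_controllable(A, B))   # True: one input suffices for a chain
```

This rank condition is what the structural controllability results above relax: structural methods ask whether the condition can hold for some choice of non-zero entry values, given only the network's wiring.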
One of the main criticisms of these structural control theory works is that they do not account for individual dynamics within the system. As indicated by [8], this means that applying the structural control framework to any system in which individual dynamics are required to satisfactorily model the system would produce spurious, naive or misleading results. This problem had also been previously recognized by the work of [2], which found that including any individual dynamics within the system results in a network being controllable with only a single control input. Another criticism of work relating to full state controllability is that, in many scenarios, the requirement of full state controllability is unnecessarily strong. This is true in many network control problems, where the goal may not be to move the system between any two arbitrary states, but instead to avoid the system moving into one of a set of undesired states. As full state controllability only requires that the system can be moved between two states in finite time, there are several limitations from a practical perspective as well. The first of these limitations is that, in some applications, the system may need to be moved to/from some state in a limited amount of time. In addition to this, full state controllability does not account for potentially negative and catastrophic states that may be encountered when moving between any two states, which may have significant impacts on the performance of a control system in practice. Finally, much of the existing network control research focuses on the structural problem of identifying which nodes to use as controllers within the network. Significantly less work has been devoted to developing algorithms for the selection of the control signals that will be used as inputs to these controllers to achieve network control. Recent work, such as that of [9] and [3], has simultaneously considered the problem of control agent selection along with the generation of control signals to achieve control of complex network systems. Including the behavioural aspect of control within these works has demonstrated that control architectures selected using algorithms proposed in previous structural analysis work do not necessarily produce the most effective controllers. In fact, [9] found that using controller sets generated using the maximum matching principle of [1] produced inferior results when compared to several other control node selection heuristics.

The real-valued θ-consensus avoidance problem

As mentioned previously, the work presented here is an extension of the more general NCP formalized by [3]. As such, the distribution-based problem addressed here, which attempts to control the state distribution of a real-valued voter model simulation (details in "Experimental model"), is also inspired by a similar NCP problem used in [9] called the real-valued θ-consensus avoidance problem (θ-CAPRV). If the state of a node v at time t in the real-valued voter model simulation is represented by s(v, t), and the set of all nodes in the network is denoted by V, a controller for the θ-CAPRV problem attempts to maximize the utility function, U, in Eq. 1.

$$\begin{aligned} U_{\theta\text{-CAP}_{\mathrm{RV}}}(t) = \left\{ \begin{array}{l} 1,\quad \text{if } \frac{\left|\sum_{v\in V}s(v,t)\right|}{|V|} < \theta_{G}\\ 0,\quad \text{otherwise} \end{array}\right. \end{aligned}$$
In other words, the controller attempts to avoid moving into a state where there is a disproportionate amount of positive or negative values in the system. In this work, we are dealing with a distribution target and are attempting to maintain both an average value and some level of spread across the states of the nodes within the system. This average and spread are determined by the mean and standard deviation of the specified target distribution.

Distribution-based control

Within the work discussed in "Related work", the state of the network is generally represented by some vector capturing the state of each agent within the network. For example, within the work of [1] the goal of a control system would be to move the network state from one specific state vector to another (i.e. micro state control). In many scenarios, especially those involving flocking and crowding, we may be more interested in some overall property of the state of the system (i.e. macro state control). To address these scenarios, we propose the use of distribution-based control. The following subsection describes the components present in a general distribution-based control system. Following this, we formulate a general distribution control problem that is used in the experimental analysis presented later in this paper.

There are several mandatory and optional components involved in a distribution-based control system. Figure 1 shows the basic components and information flow present in such a system. A list of these components and a short description of each is included below.

[Figure 1: General components and information flow within a distribution-based control system]

Network: The network connecting entities within the system.
Sensor nodes: A set of nodes within the network which provides input regarding the network state to the controller.
Control nodes: A set of control nodes within the network, the state of which can be set at each time step. This set of nodes represents the interface which is used by the control system to affect the network state.
Target distribution: The defined ideal distribution of the system. In general, the control system attempts to keep the system's state distribution close to this target.
State distribution: A measure of the current distribution of the state, composed of the state value, s(v, t), of each sensor node within the system. Both the state and target distributions can be represented by either a parameterized distribution (e.g. a normal distribution with specific mean and variance) or discretized to form a histogram.
Rate of change analysis (optional): In the case of parameterized distributions, it is also possible to estimate the rate of change of the state distribution parameters over time (e.g. through the use of alpha-beta or Kalman filtering). This estimate can allow the 'velocity' of the system to be quantified, which could improve the performance of a control system by producing more accurate prediction of the future state of the system.
Controller: The controller is responsible for taking the state information as input and producing as output the control signals for each of the control nodes within the network. Within this work, reinforcement learning (see [10] for a thorough introduction) is used to generate a policy of signal selection based on the state distribution parameters.

Comparing distributions

As with control systems working with state vectors, to achieve distribution-based control, we require a method for comparing distributions.
A method for distribution comparison is required for a number of reasons, such as determining if the controller has failed (i.e. in failure avoidance problems, see "Failure avoidance control problem"), determining a system's velocity by comparing distributions at different time points, or determining how far away the system is from some other state. While there are a number of methods for comparing distributions, such as the Kullback-Leibler divergence and the total variation distance, we use the Hellinger distance here for the following reasons:

It can be calculated easily on both continuous and discrete distribution types.
It is bounded between 0 and 1.
It fulfils the properties of a metric.

In the continuous case, the Hellinger distance between two probability measures P and Q can be calculated as in Eq. 2, where f and g are the probability density functions of P and Q, respectively.

$$\begin{aligned} H^2(P,Q)&=\frac{1}{2}\int \left( \sqrt{f(x)} - \sqrt{g(x)} \right)^2 \,\mathrm{d}x \nonumber \\ H^2(P,Q)&= 1 - \int \sqrt{f(x)g(x)} \,\mathrm{d}x \nonumber \\ H(P,Q)&=\sqrt{1 - \int \sqrt{f(x)g(x)} \,\mathrm{d}x} \end{aligned}$$

For two discrete probability distributions P and Q defined over a common domain of k points, the Hellinger distance can be computed using Eq. 3.

$$\begin{aligned} H(P,Q) = \frac{1}{\sqrt{2}}\sqrt{\sum_{i=1}^{k}\left( \sqrt{p_i}-\sqrt{q_i} \right)^2} \end{aligned}$$

Failure avoidance control problem

Distribution-based control should be applicable to any utility-based network control problem. As the Hellinger distance, or any other distance measure used, allows the current state distribution to be quantitatively compared to some target, the utility of the system in relation to this distance can be measured at any time. Within this work, we focus solely on a failure avoidance type of problem, in which the control system attempts to keep the distance between the target state and measured state below some threshold value for as long as possible. A well known example of this problem from the domain of reinforcement learning is the pole-balancing problem, but more recently, the work of [9] and [3] has applied a failure avoidance approach to the problem of consensus avoidance in networks. In addition to consensus avoidance, there are a number of interesting failure avoidance problems that could be considered within social systems. For example, we may want to prevent the overall opinion or state in a system from changing too quickly (as we saw in 2008 during the financial crisis), which may lead to panic (this is also applicable in economic systems). From an advertising perspective, we may want to avoid having the interest in a product or idea (as measured by mentions per unit time, for example) drop below some threshold. We may also want to prevent the disparity in some state value from growing too large between members of a system or group to minimize resentment, jealousy, spitefulness or general conflict. To formulate a failure avoidance control problem using distributions, we must specify target values for distribution parameters and select a threshold value for the Hellinger distance between the measured state distribution and the target distribution. The control system must attempt to maintain the state distribution such that this Hellinger threshold is not exceeded. If the Hellinger threshold is exceeded at any point, the controller is said to have failed in controlling the network. In this work, we define the target distribution to be \(\mathcal {N}(0.0,\,0.05)\).
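To make these distance computations concrete, here is a minimal sketch (Python/NumPy; our own illustration, not code from the paper) of Eq. 3 for discretized distributions, together with the standard closed form that Eq. 2 reduces to when both distributions are univariate normals, which is convenient for the parameterized \(\mathcal {N}(0.0,\,0.05)\) target used here:

```python
import numpy as np

def hellinger_discrete(p, q):
    """Eq. 3: Hellinger distance between two discrete distributions
    defined over a common k-point domain (p and q each sum to 1)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sqrt(np.sum((np.sqrt(p) - np.sqrt(q)) ** 2) / 2.0)

def hellinger_normals(m1, s1, m2, s2):
    """Standard closed form of Eq. 2 for N(m1, s1^2) vs N(m2, s2^2)."""
    v = s1 ** 2 + s2 ** 2
    return np.sqrt(1.0 - np.sqrt(2.0 * s1 * s2 / v)
                   * np.exp(-0.25 * (m1 - m2) ** 2 / v))

# Failure check against the N(0.0, 0.05) target with threshold 0.1.
state = np.clip(np.random.normal(0.0, 0.06, 100), -1, 1)   # illustrative
failed = hellinger_normals(state.mean(), state.std(), 0.0, 0.05) >= 0.1
```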
These values were selected because this type of distribution could be applied to various types of social problems where we wish for values to be centred around some state with a specific amount of variance. For example, this type of distribution could represent both problems of avoiding consensus and extremism within a system, as the state cannot converge to a single value, cannot bifurcate to extreme values, and cannot move significantly from the original mean.

Controllability analysis

Within this work we are considering a failure avoidance problem inside a stochastic system. Assuming a model of this stochastic system is available, it may be possible to estimate the controllability of the system from a distribution-based control perspective. Ideally, a method similar to those developed within structural controllability research, which would be capable of estimating the overall controllability of a system when considering both the structural and behavioural components of a controller, is desired. By repeatedly simulating the model from a starting state (or many starting states) and measuring the Hellinger distance between subsequent states at different time steps, we can produce an estimate of the distribution of Hellinger distance values between the initial and resulting states for a time interval. This distribution can be treated as a measurement of the speed with which the system tends to change. As an example, consider Fig. 2, which shows the distribution of Hellinger distance values over a single time step within a simulation of the model used in this work when no control is applied. In this case, it would be unreasonable to expect to control the modelled system to a Hellinger threshold value of less than 0.03, as this threshold is exceeded in a single step in almost 10% of cases. Controlling the system to a threshold of 0.1 or 0.05, however, may be possible depending on how consistently the Hellinger distance moves over multiple steps and how significant an effect the control system can exert on the system's distribution. Further development of this type of analysis could provide a tool for distribution-based control similar to those developed for full state controllability, allowing predictions to be made regarding the potential for effective control within a system.

[Figure 2: Example distribution of the single-step Hellinger distance values under the real-valued voter model without control]

Experimental model

To investigate the ability to control a system using the proposed distribution-based control approach, we formulate a problem using the real-valued voter model within the NCP framework described by [3]. The Network Control Problem definition requires the following components to be defined: a network, a diffusion model, a control system, and an objective function. The following subsections define each of the required components of the network control problem, including the objective function, which uses a target distribution and the Hellinger distance measure to determine whether the system is still in an acceptable state.

The network is represented by a graph G = (V, E), where V is the set of nodes within the system and the set E represents the edges connecting nodes. The control problem here is evaluated across three different theoretical network types. For each network type, 10 randomly generated networks of 100 agents each were considered. In all networks, each agent also included a link to itself.
In addition to this, it is also ensured that each network consists of a single connected component. Each network type, along with the parameters used in the generative models, is described below. In the case of the random and small world networks, parameter values were selected to produce an average degree similar to those found in the scale free networks.

Random network: Each possible link between a pair of nodes, i and j, is included within the network with a probability p = 0.031 to produce an Erdős–Rényi random graph.
Scale free network: Links are formed between nodes based on the preferential attachment model described by [11].
Small world network: The small world networks were generated using the model of [12], with an average degree of 4 and a β value of 0.25.

The specification of the network control problem from [3] defines the diffusion model with two parts: the sharing strategy and the learning strategy. In this work, we use a real-valued voter model to form the diffusion model. The voter model is commonly used to model the change of opinion within a group of networked individuals and has been investigated in other network control research (e.g. [9]). Within this work, we consider a real-valued voter model in which each node's state is represented by a single value, bounded between -1.0 and 1.0. Left uncontrolled, the voter model converges toward a single value over time. The equations representing the two strategies which define the real-valued voter model are included in Eqs. 4, 5 and 6. The sharing strategy (Sh(v)) of a node v within this model has each agent send its current state value (s(v, t)) to all of its neighbours, including itself, at each time step. Each shared piece of data a node v receives at a time step is stored in that node's information set I. The learning strategy (L(v, I)) for this model requires that, at each time step, every agent v move its state by an amount, step (0.01 is used as a constant here), toward one of its randomly selected neighbours' shared state values from the previous time step, as determined by Eq. 6.

$$\begin{aligned} \text{Sh}(v) = s(v,t) \end{aligned}$$

$$\begin{aligned} L(v,I) = s(v,t) + (\text{step} \times \text{sign}) \end{aligned}$$

$$\begin{aligned} \text{sign} = \left\{ \begin{matrix} -1\quad \text{with probability } \frac{|\{s(u,t)<s(v,t)\,|\,s(u,t)\in I(v,t)\}|}{|I(v,t)|} \\ \phantom{-}1\quad \text{with probability } \frac{|\{s(u,t)>s(v,t)\,|\,s(u,t)\in I(v,t)\}|}{|I(v,t)|} \\ \phantom{-}0\quad \text{otherwise} \end{matrix}\right. \end{aligned}$$

The configuration of the control system specifies the set of nodes that the controller can set the state of to affect the overall network state. The results presented here consider many different possible configurations across the modelled networks. One of the main parameters of the configuration that is varied is the number of controllers, where we use either 3, 5 or 10 control nodes within the network. The set of controllers is determined using the FAR heuristic, as described by [9] and outlined in Algorithm 1 (a sketch is given below). Starting from a seed node that is either included as input or selected randomly, this heuristic iteratively selects the next node such that it is the one with the largest shortest path to the current controller set. This has the effect of distributing the control nodes within the network in a way that maximizes the 'farness' between them. The controller behaviour is learned using a reinforcement learning approach, as described in "Learning a control policy".
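The pieces of the experimental model described above can be sketched compactly. The following is our own illustration (Python with networkx and NumPy), not the authors' code: the network generators (the Barabási-Albert attachment count m = 2 is an assumption, chosen to roughly match the average degree of the other classes), one synchronous update of Eqs. 4-6, and the FAR controller-selection heuristic of Algorithm 1.

```python
import random
import networkx as nx
import numpy as np

def make_network(kind, n=100, seed=None):
    """Generate one connected network of the given class; parameters
    follow the text, except the assumed m=2 for preferential attachment."""
    rng = random.Random(seed)
    while True:
        s = rng.randrange(2**31)
        if kind == "random":
            g = nx.erdos_renyi_graph(n, 0.031, seed=s)
        elif kind == "scale_free":
            g = nx.barabasi_albert_graph(n, 2, seed=s)
        else:  # "small_world": average degree 4, beta = 0.25
            g = nx.watts_strogatz_graph(n, 4, 0.25, seed=s)
        if nx.is_connected(g):            # the paper requires one component
            g.add_edges_from((v, v) for v in g.nodes)   # self-links
            return g

def voter_step(g, state, step=0.01, rng=random):
    """One synchronous update of Eqs. 4-6: every node moves `step` toward
    the previous-step value of one uniformly chosen neighbour."""
    new = state.copy()
    for v in g.nodes:
        u = rng.choice(list(g.neighbors(v)))            # includes v itself
        new[v] = np.clip(state[v] + step * np.sign(state[u] - state[v]),
                         -1.0, 1.0)
    return new

def far_controllers(g, k, seed_node=None, rng=random):
    """FAR heuristic (Algorithm 1): repeatedly add the node whose
    shortest-path distance to the current controller set is largest."""
    chosen = [seed_node if seed_node is not None
              else rng.choice(list(g.nodes))]
    while len(chosen) < k:
        dist = nx.multi_source_dijkstra_path_length(g, set(chosen))
        chosen.append(max((v for v in g.nodes if v not in chosen),
                          key=lambda v: dist[v]))
    return chosen
```

Repeatedly applying voter_step from matched initial states and recording the Hellinger distance between the resulting state distributions is also exactly how the single-step 'velocity' estimate of Fig. 2 in "Controllability analysis" can be produced.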
As explained further in "Learning a control policy", to allow for more efficient execution of the learning and simulation process, a single signal (state value) is injected into all control nodes at each time step. By forcing the same signal to be used as input to each controller, the action space of the problem is made constant instead of growing exponentially relative to the number of controllers used. The value of the inserted signal is selected from a list consisting of values between -0.5 and 0.5 in 0.05 increments, allowing the controller to select from states within 10 standard deviations of the mean of the target distribution. This range was selected to ensure that the controller would be able to move the system in any direction that would be logically desirable.

Objective function

Within this work, we apply a failure avoidance approach within the distribution-based control problem. This requires both a target distribution and a Hellinger distance threshold to be specified. The target distribution we use here is a normal distribution with a mean of 0.0 and a standard deviation of 0.05. As explained in "Failure avoidance control problem", this type of distribution could be applicable for a number of different types of social network control problems. With this target distribution and a specified Hellinger distance threshold, Hmax, the utility function of the overall network state can be defined using Eq. 7, where T and S represent the target and state distribution, respectively.

$$\begin{aligned} U(t) = \left\{ \begin{array}{l} 1,\quad \text{if } H(T, S) < H_{\text{max}} \\ 0,\quad \text{otherwise} \end{array}\right. \end{aligned}$$

The goal of the controller, then, is to maximize the utility over time. In other words, the controller must keep the distribution of the network's state within Hmax distance of the specified target distribution. In the results presented here, the maximum length of a simulation is set at 50,000 steps, at which point it is said that the controller has successfully controlled the system.

Learning a control policy

To learn the control signal to insert into the network at any time step, we use reinforcement learning. More precisely, we use a gradient-descent SARSA [13] algorithm with a CMAC tiling [14] for function approximation of the real-valued distribution parameters. These are both commonly used solutions within the reinforcement learning domain. As was mentioned previously, the same signal is inserted into each controller to limit the size of the action space, which would otherwise grow exponentially with the number of controllers. As a comparison, using the single signal approach results in a constant-sized action space of 21 actions, regardless of the number of control nodes, while the separate signal approach leads to an action space size of 9261 for three controllers and 4,084,101 for five controllers. The action set consisted of state values in the range of -0.5 to 0.5 in increments of 0.05. The state space for the problem was represented by the differences between the state and target means and standard deviations. For each combination of network and controller set, up to 250 episodes were simulated for learning purposes, each starting from a randomly generated state within a Hellinger distance of 0.01 of the target distribution and ending if the distance between the state and target distribution exceeded the specified Hellinger threshold.
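A minimal sketch of the learner just described follows. This is our own illustration; the tile-coding resolution, bounds and step sizes are assumptions, since the paper does not report them.

```python
import numpy as np

ACTIONS = np.round(np.arange(-0.5, 0.51, 0.05), 2)    # the 21 control signals

class TileCoder:
    """Minimal CMAC: several offset grid tilings over the 2-D state
    (mean error, std error)."""
    def __init__(self, n_tilings=8, bins=10, low=(-0.3, -0.1), high=(0.3, 0.1)):
        self.n_tilings, self.bins = n_tilings, bins
        self.low = np.array(low, float)
        self.width = (np.array(high, float) - self.low) / bins

    def features(self, state):
        """One active tile index per tiling."""
        idx = []
        for t in range(self.n_tilings):
            shifted = (np.asarray(state, float) - self.low
                       + self.width * t / self.n_tilings)
            i, j = np.clip((shifted // self.width).astype(int),
                           0, self.bins - 1)
            idx.append((t * self.bins + i) * self.bins + j)
        return idx

class SarsaAgent:
    """Gradient-descent SARSA with a linear (binary tile) approximator."""
    def __init__(self, coder, alpha=0.1, gamma=1.0):
        self.coder, self.gamma = coder, gamma
        self.alpha = alpha / coder.n_tilings           # per-tile step size
        self.w = np.zeros((len(ACTIONS), coder.n_tilings * coder.bins ** 2))

    def q(self, feats, a):
        return self.w[a, feats].sum()

    def act(self, feats, temp):
        """Boltzmann exploration; the temperature is halved every 25
        training episodes in the schedule described below."""
        qs = np.array([self.q(feats, a) for a in range(len(ACTIONS))])
        p = np.exp((qs - qs.max()) / temp)
        return int(np.random.choice(len(ACTIONS), p=p / p.sum()))

    def update(self, feats, a, reward, next_feats, next_a, terminal):
        target = (reward if terminal
                  else reward + self.gamma * self.q(next_feats, next_a))
        self.w[a, feats] += self.alpha * (target - self.q(feats, a))
```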
Throughout training, the action policy was made progressively more greedy, which is necessary in many control applications due to the poor performance that can result from the selection of random actions. More specifically, a Boltzmann exploration policy was used, with the temperature parameter of the Boltzmann distribution being halved after every 25 training episodes and reaching a final value of 0.0001 after 250 episodes. Based on preliminary experiments, low temperature values were necessary to ensure that the selection of random actions was not detrimental to the controller's performance. If the controller was capable of controlling the network for 50,000 steps in ten consecutive episodes, training was terminated early. Otherwise, all 250 episodes were used for learning the control policy. After training was completed, the learned control policy was evaluated over a set of 250 episodes starting from pre-computed initial states, each of which was within a Hellinger distance of 0.01 of the target distribution. The same episodes are used to evaluate each network and control set combination to provide a consistent set of test scenarios. In all cases, the action selection policy used during this evaluation procedure was strictly greedy.

Results

To compare the performance of the learned control policy, we first require a baseline for comparison. To develop an estimate of the stability of the modelled system in the absence of intelligent control, we simulated the model within each network using three unintelligent control methods:

Null: No control signals are used.
Random: A single control signal randomly selected in the range of -0.5 to 0.5 is inserted into each control node at each time step.
Distribution: A single control signal is sampled from the target distribution and inserted into each control node at each time step.

Table 1 shows the mean and standard deviation of the steps to failure across each of the three network types using these control methods. It should be noted that, for brevity, only the case of five control nodes and a Hellinger threshold of 0.1 is included here, as results were similar in other cases.

[Table 1: Average steps to failure over three network types using three unintelligent control strategies]

The data in this table demonstrate that in scenarios without control, or with only unintelligent control, the state distribution quickly moves away from the target distribution and the Hellinger distance threshold is exceeded. Due to the low mean steps to failure in Table 1, it should be no surprise that the percent of test cases that reached the 50,000 step success point was 0.0% for all three of these control strategies. Figures 3, 4 and 5 show the average percent of all control tests that reached 50,000 rounds for each network type using a learned policy and 3, 5 and 10 controllers, respectively. These results include each possible set of controllers that can be selected from each network instantiation using the FAR heuristic and their success in controlling each of the 250 test scenarios at each Hellinger threshold (resulting in a total of approximately 4.5 million data points). These figures demonstrate that the basic learning strategy described in "Learning a control policy" can produce an intelligent control policy that greatly increases the stability of the network system.
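Putting the pieces together, one evaluation episode can be sketched as follows. This reuses voter_step, hellinger_normals, TileCoder/SarsaAgent and ACTIONS from the earlier sketches and is again our own illustration; the SARSA update between steps is omitted for brevity, and the initial state here is only approximately within a Hellinger distance of 0.01 of the target.

```python
import random
import numpy as np

def run_episode(g, agent, controllers, h_max=0.1, max_steps=50000,
                temp=None, rng=random):
    """One episode of the failure-avoidance task. temp=None gives the
    strictly greedy policy used during evaluation."""
    state = np.clip(np.random.normal(0.0, 0.05, g.number_of_nodes()), -1, 1)
    for t in range(max_steps):
        mean, std = state.mean(), state.std()
        if hellinger_normals(mean, std, 0.0, 0.05) >= h_max:
            return t                                   # controller failed
        feats = agent.coder.features((mean - 0.0, std - 0.05))
        if temp is None:                               # greedy evaluation
            a = int(np.argmax([agent.q(feats, i)
                               for i in range(len(ACTIONS))]))
        else:                                          # Boltzmann training
            a = agent.act(feats, temp)
        state[controllers] = ACTIONS[a]                # inject one signal
        state = voter_step(g, state, rng=rng)
    return max_steps                                   # success
```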
[Figure 3: Average percent of tests successfully controlled for varying network and H threshold values (three controllers)]

[Figure 4: Average percent of tests successfully controlled for varying network and H threshold values (five controllers)]

[Figure 5: Average percent of tests successfully controlled for varying network and H threshold values (10 controllers)]

In scenarios with three or five controllers and a Hellinger threshold of 0.1, approximately 90% or more of the test scenarios are successfully controlled to the 50,000 round termination point. The case of ten controllers has a lower probability of success, which is likely caused by the lack of precision in a single signal control approach. Essentially, with ten controllers the system may be moved too far in one direction, causing the system to fail. Another interesting result from these figures is that scale free networks appear to be more difficult to control than random or small world networks (this is more pronounced in the case of three controllers). Indeed, the summary presented for the three controller cases in Table 2 shows that the difference between the network classes was statistically significant under a two-tailed t-test (α = 0.05) under almost all Hellinger threshold and network combinations. While the data for the five and ten controller cases are not included here, the scale free networks generally showed a significant difference in all cases where some degree of control was possible.

[Table 2: Statistically significant difference in average number of successful tests for network class combinations (three controllers, two-tailed t-test with α = 0.05, X = Significant)]

Future work should attempt to determine whether this difference in control performance is due to scale free networks being inherently more difficult to control, or due to the fact that larger variation in node properties in scale free networks requires more specific control sets to be selected (again, these results are aggregated across each possible set of controllers). While the previous discussion compared the control performance across three network types, the box-and-whisker plots in Figs. 6 and 7 show the variance in control performance across the different scale free and random network instantiations, respectively. A number of interesting points can be taken from these figures. First, while the boxes in the plots show that most of the networks are generally clustered in a fairly tight range, the minimum value in the scale free plot represents an extreme outlier, which indicates that one of these scale free network instantiations is significantly harder to control than the others. While the variance is not quite as high in the random network case, there is still a nearly 30% difference in control success between the best and worst random network under a Hellinger threshold of 0.07. Secondly, in both network types, there appears to be a phase shift between threshold levels of 0.07 and 0.06 where the overall controllability seems to move from 'possible to control' to 'nearly impossible to control'. Finally, the variance in controllability seems to be lowest in both the upper and lower threshold cases. The increase in variance among the middle threshold values further indicates the difference in controllability that may be observed across networks of the same class and could even represent a difference in the 'phase shift' threshold value for the different network instantiations.
[Figure 6: Average percent of tests successfully controlled for varying H threshold values on scale free networks with three controllers (whiskers represent minimum and maximum)]

[Figure 7: Average percent of tests successfully controlled for varying H threshold values on random networks with three controllers (whiskers represent minimum and maximum)]

As networks from each class are generated using the same production algorithm, they should have similar structural properties, and thus similar expected controllability. The differences in success rates in some cases, however, are shown to be more than 40%. When the single-step H distributions (see "Controllability analysis" for a brief explanation of what these distributions represent) for the best/worst scale free network were compared, the values were found using maximum likelihood estimation to be \(\mathcal {N}(0.0179,\,0.0061)\) and \(\mathcal {N}(0.0174,\,0.0061)\), respectively. The percent difference between the means of these distributions is only 2.8%, which would not be expected to cause an overall difference in control success probability of greater than 40%. When this analysis was extended to include 10 steps, the distribution parameter estimates changed to \(\mathcal {N}(0.168,\,0.025)\) and \(\mathcal {N}(0.160,\,0.025)\), which still only represents a 4.9% difference. In addition to the small difference in distribution parameters, of the two networks, the one with the largest 'velocity' of state change is the one that has higher control performance. The cause of the difference in control success, then, should not be exclusively that the model behaves differently without control within these networks, but must involve a difference in how control signals move throughout the networks. Again, future work should attempt to determine what is different between these networks that leads to such disparity in controller success. If certain network or control set properties can be identified as causing this disparity, then improved network stability could be achieved through either improved controller selection or modifications to the network structure. These network and control set properties could be determined by identifying correlations between control success and various network properties that differ between the networks and control sets (e.g. average path lengths, shortest paths, centrality).

Future work

There are a number of different areas in which this work will be expanded in the future. First, as mentioned in the previous section, the current results raise some interesting questions relating to network controllability. The results demonstrated that some networks, even those created using the same generative model, seem to be easier to control than others. Comparing different properties of these networks could help determine what type of properties result in networks that are more or less difficult to control. Algorithms from previous structural control analysis research could be applied to these same networks to determine if they predict the same increase in control difficulty. This comparison could either support or refute existing criticisms of the structural control analysis approach. Specifically, this could provide evidence to help determine whether ignoring the behavioural aspect of control leads to inaccurate conclusions regarding the practical controllability of networks. In addition to comparing the overall controllability of different networks, the control sets that can be selected within a network could also be compared.
Data produced through simulation of control systems using different control sets could help determine what properties are present/missing in successful/unsuccessful controller sets. Analysis of these data could lead to improved algorithms and heuristics for the selection of control nodes within a network control system. Finally, the controllability analysis briefly discussed in "Controllability analysis" could be a useful tool in theoretically analysing networks and controllers. The current state of this analysis work only considers the expected distance the state distribution can move in some specified number of steps. Including a theoretical measure representing the ability of a control system to affect this distribution, however, could allow for a probabilistic analysis to determine the expectation of the system's controllability. This type of analysis could be used to compare possible controllers or possible network changes which could be implemented to form systems that are easier to control or less likely to fail.

Conclusions

This paper introduced the problem of distribution-based control as an alternative to existing approaches to complex network control, which typically address the problem of full state controllability. When applying distribution-based control, we are no longer concerned with the exact state of the set of nodes within a system, but instead are attempting to maintain some distribution of state. This is important when considering many types of social network control problems, especially those involving crowding, opinion and influence. Within these types of problems, we are generally more concerned with the overall behaviour of the system or an aggregate measure (i.e. the distribution) of the state than an exact specification of the opinion or level of influence of each system participant. This paper has also continued the effort to investigate the behavioural component of network control, which has not previously been investigated in as much depth as the structural component. To investigate the use of distribution-based control, a control system was implemented to prevent the distribution of state values in a real-valued voter model simulation from straying away from a specified mean and standard deviation. The experimental results demonstrated that it was possible to learn a control signal selection policy to successfully maintain the desired network state distribution in a large percentage of cases, especially when compared to the 100% failure rate realized without intelligent control. These results also identified a number of important questions that should be addressed in future work, which were summarized in "Future work".

References

[1] Liu Y-Y, Slotine J-J, Barabási A-L. Controllability of complex networks. Nature. 2011;473(7346):167-73.
[2] Cowan NJ, Chastain EJ, Vilhena DA, Freudenberg JS, Bergstrom CT. Nodal dynamics, not degree distributions, determine the structural controllability of complex networks. PLoS ONE. 2012;7(6):e38398.
[3] Runka A, White T. Towards intelligent control of influence diffusion in social networks. Soc Netw Anal Min. 2015;5(1):9.
[4] Paraskevopoulos P. Modern control engineering. Boca Raton: CRC Press; 2001.
[5] Lin CT. Structural controllability. IEEE Trans Autom Control. 1974;19(3):201-8.
[6] Ruths J, Ruths D. Control profiles of complex networks. Science. 2014;343(6177):1373-6.
[7] Yuan Z, Zhao C, Di Z, Wang W-X, Lai Y-C. Exact controllability of complex networks. Nat Commun. 2013;4:2447.
[8] Zhao C, Wang W-X, Liu Y-Y, Slotine J-J. Intrinsic dynamics induce global symmetry in network controllability. Sci Rep. 2015;5:8422.
[9] Runka A. On the control of opinion in social networks. Ph.D. thesis, Carleton University, 2016.
[10] Sutton R, Barto A. Reinforcement learning: an introduction. Cambridge: MIT Press; 1998.
[11] Barabási A, Albert R. Emergence of scaling in random networks. Science. 1999;286(5439):509.
[12] Watts D, Strogatz S. Collective dynamics of 'small-world' networks. Nature. 1998;393(6684):440-2.
[13] Rummery GA, Niranjan M. On-line Q-learning using connectionist systems. Technical report, University of Cambridge, Department of Engineering, 1994.
[14] Albus JS. A new approach to manipulator control: The cerebellar model articulation controller (CMAC). J Dyn Syst Meas Control. 1975;97(3):220-7.

Authors' contributions

Both authors contributed equally to the ideas and content within this paper. The original draft of this paper was prepared by DM, with edits and improvements suggested by TW. Both authors read and approved the final manuscript.

Availability of data and materials

The data (networks, simulation output, etc.) are not currently stored on a publicly available repository, but the authors are willing to move the data to a publicly available location before publication.

Author information

School of Computer Science, Carleton University, 5302 Herzberg Laboratories, 1125 Colonel By Drive, Ottawa, ON, K1S5B6, Canada
Dave McKenney & Tony White
Correspondence to Dave McKenney.

Citation: McKenney, D., White, T. Towards distribution-based control of social networks. Comput Soc Netw 5, 3 (2018). https://doi.org/10.1186/s40649-018-0052-z
CommonCrawl
The OEIS Foundation is supported by donations from users of the OEIS and by a grant from the Simons Foundation. Thanks to everyone who made a donation during our annual appeal! To see the list of donors, or make a donation, see the OEIS Foundation home page.

(Greetings from The On-Line Encyclopedia of Integer Sequences!)

FAQ for the On-Line Encyclopedia of Integer Sequences

Last updated Jun 07 2008. I need to update this page to refer to the new OEIS at oeis.org - N. J. A. Sloane, Jan 15 2011.

Q: What is the purpose of the OEIS?
Q: How do I cite the OEIS in a paper?
Q: Can you give some examples of successful applications of the OEIS?
Q: Some advice for new users and new contributors?
Q: OEIS Summer Rules, Jun 02 2008, but still valid today!
Q: How many terms do I need to look up a sequence?
Q: How do I find which sequences cross-reference a particular sequence?
Q: I looked up a really basic sequence (the number of abelian groups of order n, in fact), and I was surprised to find that it wasn't there. How come?
Q: I've heard about the Motzkin numbers, but I don't know the beginning of the sequence, so how can I find them?
Q: I was trying to find the entry for the Fibonacci numbers, but when I entered 2, 3, 5, 8, 13 I got too many matching sequences.
Q: How do I find all the sequences that mention my name?
Q: How can I get hold of all the sequences submitted by John Smith that mention "lattice", so I can analyze them on my computer?
Q: Sometimes a sequence will say "n^2 + 1 is prime", sometimes it says "Primes of the form n^2 + 1". What's the difference?
Q: Where can I find an explanation of all the different keywords, like "core" and "nice"?
Q: What does the "offset" mean?
Q: What does the keyword "more" mean?
Q: What does the keyword "base" mean?
Q: Can you give an example of a "dumb" sequence?
Q: Why are there sequences with keyword "dead"?
Q: Which sequences should I submit to the OEIS?
Q: If I just make up a definition of a sequence, should I submit it?
Q: What should I do before submitting a new sequence?
Q: How long should I spend on a submission to the OEIS?
Q: What "Subject" line should I use when submitting a new sequence or comment?
Q: I am sending in a comment on one of the sequences. Should I send a copy to the SeqFan mailing list?
Q: What kinds of number may appear in a sequence? Can a sequence include numbers that begin with 0?
Q: A comma appeared in the middle of a number in my sequence - why?
Q: Should one specify links to the Index when submitting sequences?
Q: What are some of the reasons why sequences are rejected?
Q: How can I find out if the sequence I submitted was accepted or rejected?
Q: What notation should I use in equations in the comment and formula lines?
Q: Is it OK to use LaTex notation in equations?
Q: Should I use juxtaposition to denote multiplication, or should I put a "*" or "." between the things being multiplied?
Q: How should I indicate summations?
Q: Should I denote infinity by "oo"?
Q: What does "lgdegf" mean in a reply from Superseeker?
Q: What is the best format to use when sending in more terms for an existing sequence?
Q: Where can I find an explanation of the internal format used in the database? The %I, %S, etc. lines?
Q: I notice that sometimes you give more than three lines of terms for a sequence. What's the story?
Q: What's the preferred way to enumerate array elements to get an OEIS sequence?
Q: How should an array or triangle be included with the submission?
Q: Some of the entries have figures or other files associated with them. What's your policy on this?
Q: Questions concerning decimal expansion of constants.
Q: How many new sequences and comments come in each day? How big is the database?
Q: What can I do to help?
Q: You asked people to help edit sequences with keyword "uned". What exactly do you need done?
Q: I am emailing you a corrected version of a sequence. Should I say "Edited by ...", "Extended by ...", or not sign it at all?
Q: How can one obtain a file of all the sequences (stripped of formatting) for running tests?
Q: Why are there no mirror sites?
Q: Can one subscribe to the comment mail queue?

There is already a lot of information about the Encyclopedia on other pages, and some of these answers will just be pointers to the appropriate page.

Q: What is the purpose of the OEIS?
A: The main purpose is to allow mathematicians or other scientists to find out if some sequence that turns up in their research has ever been seen before. If it has, they may find that the problem they're working on has already been solved, or partially solved, by someone else. Or they may find that the sequence showed up in some other situation, which may show them an unexpected relationship between their problem and something else. Another purpose is to have an easily accessible database of important, but difficult to compute, sequences. For example, if you're testing some conjecture about Mersenne primes, you can look up the ones that are known (see A000043), rather than spending years recomputing them. For more information on this point, see the Demo files and the Welcome page.

Q: How do I cite the OEIS in a paper?
A: Click here. That is a link to a section of the Welcome page, which has a lot more information about the database.

Q: Can you give some examples of successful applications of the OEIS?
A: See the list of papers that have acknowledged help from the database; also the comments from readers on the last page of the Demo files.

Q: Some advice for new users and new contributors?
A: (From Jonathan Post (jvospost3(AT)gmail.com), Jun 07 2008) New OEIS contributors should be strongly encouraged to use the OEIS webcam to see the breadth and variety of those seqs deemed "nice." It is the equivalent of listening to a radio station that plays the greatest classical or jazz or rock songs (and for that auditory matter using the "listen" feature), or walking through a museum of unusually beautiful Mathematics (and, for that visual matter, encouraging the use of the "graph" feature). Another positive message to new OEIS contributors, to enhance that sense of community, and to provide a balance against "making stuff up" without reference to what is known and had been judged interesting, is, beyond looking for duplicates, looking for triples and n-tuples of sequences that are implicitly related, and making that relationship explicit, perhaps by showing that these are different rows or columns or diagonals of the same previously unshown array. Or by making the analogy: sequence A is to sequence B as sequence C is to sequence D. Or by using the transforms available on some sequence previously not so transformed. The goal is, not to merely reward ("nice") or punish ("less" or "probation" or silent deletion) those contributors externally, but by enhancing their intrinsic motivation to make a contribution more likely to be appreciated.

Q: OEIS Summer Rules, Jun 02 2008, but still valid today!
A:
Do send important new sequences and comments, especially sequences that are in publications or web sites.
Don't send sequences that you made up, just because they are not in the OEIS - submissions that seem arbitrary will be silently deleted.
Extensions of existing sequences are welcomed, also b-files, also corrections of bad errors.
Neil Sloane

Q: How many terms do I need to look up a sequence?
Usually you should enter about 6 terms, starting with the second term. Leave off the first term or two, because people may disagree about where the sequence begins. Don't enter too many terms, because you may have more terms than are in the OEIS right now! But one regular user says: Don't hesitate to try the OEIS even if you only have a few terms. For example, the two terms 2, 1729 identify the "taxicab numbers" A011541. For a better example, the single number 15170835645 identifies uniquely A003825. For more information about looking up sequences, see the hints file. A: Look up the A-number as a word (rather than as a sequence number). For example, to find all the sequences that mention the Narayana-Zidek-Capell numbers A002083, enter A002083 into the search window. If you only want to see sequence A002083, enter id:A002083 . You could also enter Narayana Zidek into the search window. Q: I looked up a really basic sequence (the number of abelian groups of order n, in fact), and I was surprised to find that it wasn't there. How come? A: You probably miscalculated one of the terms! It is there: A000688. After 35 years most of the basic sequences are in the database now. When this happens, please recheck your calculations before submitting the sequence. It is also possible that you have included an initial term that most people don't include (perhaps starting a number-theoretic sequence at n=0 rather than n=1). Of course it is entirely possible that your sequence really isn't in the database, in which case please submit it! A: (1) Enter Motzkin numbers into the lookup page. Or (2) use the Index to the OEIS. Incidentally, the Motzkin numbers are A001006. A: (January 2006: this problem should now be fixed. The following is the old answer to this question, which is still helpful.) Use the Index to the OEIS. If you are in the right ballpark, the replies will be sorted with the "core" sequences first, followed by the "nice" sequences, and your sequence should be the first one that is listed. But for common beginnings, like 2, 3, 5, 7 or 2, 3, 5, 8 there are a lot of possible continuations, and you should give more terms if you have them. Otherwise, try the Index. Or go to the Welcome page, and download the section of the database containing sequences around the one you are interested in (in the lexicographic order), and then you can browse. But beware, those files are all quite large. Incidentally, the Fibonacci numbers are A000045. A: Enter your name in the search page. To find just those where your name is on the Author line, enter author:Smith for example. To find just those where your name is on the Extension line, enter extension:Smith for example. A: Simply enter author:Smith lattice in the search page. A: A description like "n^2 + 1 is prime" means that the sequence gives a list of the values of n such that n^2 + 1 is prime, that is, 1, 2, 4, 6, 10, 14, ... (A005574). On the other hand, "Primes of the form n^2 + 1" means that the sequence gives a list of the actual primes, 2, 5, 17, 37, 101, 197, 257, ... (A002496). It is easy to get them confused! (But in this case it is easy to tell the difference: 4 is not a prime.) A: In the page that describes the format used in replies from the database. Note that the keywords "huge" and "done" are no longer used. A: It tells you the subscript of the first term in the sequence. The Fibonacci numbers (A000045) are traditionally denoted by F(0) = 0, F(1) = 1, F(2) = 1, F(3) = 2, F(4) = 3, ... This sequence starts with F(0), so the offset is 0. 
The prime numbers (A000040) are prime(1) = 2, prime(2) = 3, prime(3) = 5, ...; and the offset is 1. More generally, the convention in the OEIS is that the first entry in a list has index 1 (rather than 0) so the offset for a list is 1. A062361 gives the number of triangular regions in a regular n-gon when all the diagonals are drawn. This only makes sense for n >=3, so the offset is 3. If the sequence gives the decimal expansion of a constant, the offset is the number of digits before the decimal point. For example, the speed of light is 299792458 (m/sec), giving the sequence 2,9,9,7,9,2,4,5,8 (A003678), with offset 9. The decimal expansion of Pi, 3.14159265358979..., gives the sequence 3,1,4,1,5,9,2,6,5,3,5,8,9,7,9,... (A000796), with offset 1. For a number less than 1, the best thing is to give just the sequence of digits after the decimal point, so that the offset is 0. E.g. 1/sqrt(2) = 0.7071067811865475... gives the sequence 7,0,7,1,0,6,7,8,1,1,8,6,5,4,7,5,... (A020759), with offset 0. The precise definition: if the sequence is a,b,c,d,... and the offset is t, then the number is .abcd... * 10^t. If the number is a.bcd... * 10^t where a is a single digit, then the sequence is a,b,c,d,... with offset t+1. If the constant is negative, say so, but give the decimal expansion of the absolute value. Always give the actual value in a %e line. For example %e A020759 1/sqrt(2) = 0.7071067811865475... And don't forget the keyword "cons". I found and corrected a huge number of wrong offsets for decimal expansions. Probably many more remain. That's why it is important to give the actual beginning of the decimal expansion in a %e line. In the internal format for a sequence the offset line (the %O line) contains two numbers. The first is the offset as just defined. The second gives the position of first entry greater than or equal to 2 in magnitude in the sequence (or 1 if no such entry exists), starting counting at 1. The second offset is used to determine the position of the sequence in the lexicographic order in the database. For further examples, see the "offset" section of the internal format page. A: This is a sequence where more terms are needed. Ideally the entry gives enough terms to fill three lines on your screen, like this: %S A027614 1,1,3,14,80,468,2268,10224,313632,9849600,21954240,8894136960, %T A027614 105857556480,20609598562560,650835095904000,80028503341516800, %U A027614 5018759207362252800,503681435808239001600,56090762228110443724800 %S A000004 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, %T A000004 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, %U A000004 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 If you can extend a sequence which has fewer terms than that, please do so, even if the keyword "more" is missing. See also What is the format to use when sending in more terms? You can set the WebCam to browse the sequences that need extending, or use the main look-up page to search for keyword:more. A: A sequence where the definition depends on which base we are using. Palindromes (numbers which are unchanged if the order of the digits is reversed, A002113) are a classic example. A: The number of pages in the n-th volume of the Harry Potter series of children's books. This was actually submitted! It was rejected for many reasons, one of which is that it is not well-defined (English edition or American? Hard-cover or paperback?). A: Usually they indicate a published sequence that was wrong. 
The point is that if someone sees the erroneous sequence in a book and looks it up in the OEIS, the entry will point them to the right sequence. A: Any that actually show up in your research. If you looked up a sequence in the OEIS and were disappointed that it wasn't there, you should probably submit it so the next person who looks for it will find it. Also, most sequences that appear in published papers should be in the OEIS. That way, if someone sees a sequence in a paper and looks it up, they'll be able to find out if anything new has been learned about the sequence since the paper was published. Even if a published sequence happens to be wrong, it should be in the OEIS, with a cross-reference to the correct sequence. A: Probably not. It's very easy to define new sequences, but most of them are unlikely to be useful to anyone. There are exceptions. If it is a really beautiful sequence, go ahead and submit it! See the EKG sequence (A064413) for an example of a made-up sequence of great interest. A: Check for connections with sequences already in the database. For example, plot the sequence (or its log) to check for clues that it could be formed by intertwining two known sequences. For triangles or tables, check the columns, row sums, skew diagonals etc. to see if they are already in the database. If so, mention this in the submission. Check your sequence with Superseeker Study the first differences of your sequence to see if they provide any clues. Make sure you give the correct offset (see above)! If there is a relation to a geometric or combinatorial problem, describe this: it's valuable information. And finally, if possible, please give a formula! A: After you've computed enough terms to fill 3 lines, or as many terms as you can, you should probably spend at least an hour writing and checking the definition, comments, references, cross-references, etc. If you don't think the sequence is worth spending an hour of your time on, then it's probably not worth having in the OEIS. Take the time to make sure that everything is accurate and explained clearly, so that someone who hasn't seen the sequence before will understand how it's defined and why it's interesting or important. Include any references that you know about. Remember the advice given in the submit new sequence page: IMPORTANT: Thousands of people use the sequence database every day. Please take great care that the terms you send are absolutely correct. The standards are those of a mathematics reference work. A: If you are sending these by email, rather than using the submit new sequence or comment web page, please use: Subject: NEW SEQ for unnumbered new sequences, Subject: PRE-NUMBERED for pre-numbered new sequences, Subject: COMMENT for comments on existing sequences, Subject: EDITED for a sequence that you have completely edited. It makes the editing process easier if there is only one type of comment in a message. That is, please don't mix "Comments" and "New Sequences" (or "Edited Sequences") in a single email! A: No, that's not necessary. A: No! (except for 0 itself) Numbers in sequences must be positive or negative integers (or 0). Acceptable numbers are 1, 5, 101, 0, -25, etc. Unacceptable numbers are: 020, 7.125, 0.725E03, 3/4, 22?, 123...6 6^6^6^6. But see also the section in the hints file on dealing with fractions and real numbers. A: Because you split a number across two lines! My formatting programs assume numbers in a submission are separated by commas, spaces or newlines. 
If you mouse a sequence into the window, the mouse may introduce newlines, like this: 8,12,20,24,32,36,48,52,60,68,80,84,96,100,112,120,128,140,144,152,168,172,19 2,200... That would cause the number 192 to appear as 19,2 A: You certainly may if you know what you are doing! There are plenty of examples in the database. Just include the link in a links window on the submission page. Here is a typical link, to the index entries dealing with the Goldbach conjecture. But when you type it, please type "greater than", "less than", and double-quotes as single characters. Don't type "less than" as "ampersand l t semicolon" as I had to to make this line visible! <a href="Sindx_Go.html#Goldbach">Index entries</a> for sequences related to Goldbach conjecture</a>. At some later time you should then send me a list of updates to the index itself. A: Common reasons are: sequence is an incorrect version of a sequence already in the database sequence is not well defined definition of sequence involves an arbitrary but large integer or an arbitrary real parameter sequence is too contrived or artificial sequence is too short sequence contains numbers involving decimal points, up-arrows, or other unacceptable characters (see above) Example: primes that contain the digits 2003. The "2003" is an arbitrary and large parameter. Example: numbers n such that the digits of n appear in all powers n^s with s = 1 through 20: 0, 1, 5, 6, 10, 25, 50, 60, 76, 100, 101, ... The parameter 20 here is arbitrary. If we replaced it by 38, say, the sequence would change. Example: digits of n appear in n^2, n^3 and prime(n): 976, 5903, 10513, 68793, 94682, ... Too contrived! Example: even numbers that are not the sum of two primes: 2, ... Too short! Assuming the Goldbach conjecture is true, there are no more terms. On the other hand, there are several legitimate sequences based on the conjecture, e.g. A002372. See also the Index entries for sequences related to Goldbach conjecture. Example: superfactorials, 1, 4, 6^6^6^6^6^6, 24^24^...^24 (with 24 copies of 24), ... - contains unacceptable numbers. On the other hand, number sequences that have actually appeared on quizzes or tests are welcomed. One of the reasons for the OEIS's existence is to help people with such tests! Wait a few days, then use the Lookup page. Or, go to the Welcome page, and scroll down to the "Recent additions" section. A: Try to use notation that's as close to standard mathematical notation as is possible using ASCII text. Don't use notation that's specific to Maple, Mathematica, PARI, or some other computer language, except in %p, %t, or %o lines. In particular, the arguments of functions should be enclosed in parentheses, not brackets; most function names should not be capitalized. Here are some common examples: to mean: binomial(n,k) the binomial coefficient n-choose-k sigma(n) the sum of the divisors of n, A000203(n) phi(n) the Euler phi (or totient) function of n, A000010(n) mu(n) the Moebius function of n, A008683(n) pi(n) the number of primes <= n, A000720(n) prime(n) the n-th prime, A000040(n) omega(n) the number of distinct prime divisors of n, A001221(n) bigomega(n) the number of prime divisors of n, with multiplicity, A001222(n) sqrt(x) the (positive) square root of x floor(x) the largest integer <= x ceiling(x) the smallest integer >= x round(x) the integer closest to x. In most cases, it's a good idea to explain what function you mean, since some of these function names aren't standardized, or are used for other things. 
For example, "pi(n+1)" might also mean the number pi multiplied by n+1. Use cross-references to other sequences to help with the explanations.
Q: Is it OK to use LaTeX or Maple notation in equations?
A: No, please don't! Use notation that can be understood by humans. Say (1+x)/(1-x) not $\frac{(1+x)}{(1-x)}$. Say n^2/2 not 1/2n^2. Say A/(B*C*D) not A/B/C/D. Say sigma not $\sigma$.
A: Either juxtaposition or an asterisk (*) is OK. Don't use a dot to denote multiplication.
A: There are many acceptable ways to do this. All of the following examples are OK.
sum_{i=1..n} i^2 + i
sum_i=1..n (i^2+i)
sum_{d|n} d^3
sum_{2 <= p <= n, p prime} p^2
Other styles are also acceptable, as long as they are clear!
A: Generally it is better to say "infinity" explicitly.
A: It stands for "logarithmic derivative", which is nothing more than the derivative of the log of a function:
lgdegf f(x) = d(log f(x))/dx = f'(x)/f(x)
The replies from Superseeker are sometimes hard to read, I admit! For example, suppose it says:
SUGGESTION: LISTTOALGEQ FOUND ONE OR MORE ALGEBRAIC EQUATIONS SATISFIED BY THE GEN. FN. WARNING: THESE MAY BE ONLY APPROXIMATIONS!
Equation(s) and type(s) are: [8 - 12 a(n) + 6 a(n)^2 - a(n)^3, lgdegf]
What this means is the following. Let f(x) be the generating function for your sequence, and let a(x) be the logarithmic derivative of f(x). Then Superseeker has found that a(x) may satisfy the equation
8 - 12 a(x) + 6 a(x)^2 - a(x)^3 = 0
So you solve that for a(x), which is f'(x)/f(x), and then try to solve for f(x). (In this example the left-hand side factors as (2 - a(x))^3, so a(x) = 2; then f'(x)/f(x) = 2 gives f(x) = C*e^(2x).) Like I said, this takes a bit of getting used to! But it can be very helpful.
A: Please put the full sequence (all the terms, not just the new ones) in the first window of the Contribute new seq. or comment web page. In the "Comments" window, put something like "More terms". Or if you have found a mistake, say "Corrected and extended". If it is a signed sequence (that is, contains negative numbers), just give the signed version in the top window, and don't bother to include the sequence of absolute values. The updating programs will take care of that.
A: See the internal format page.
A: There is no definite limit on the number of terms that are given in the database. The sequence of numbers of meanders (A005316), for example, is exceptionally long. This is because the sequence is interesting, the terms are fairly difficult to compute, and so it seems worthwhile giving as many as possible. The editing programs will normally truncate the sequence to three lines (roughly 180 to 210 characters including the separating commas, depending on the program), but they can be overruled. If you feel the sequence is important enough to justify this, please add a note to that effect in the Comments. If fewer than three lines' worth are given, feel free to compute some more terms!
A: In the case of an infinite square array
a11 a12 a13 ...
a21 a22 a23 ...
a31 a32 a33 ...
...
this normally gets read by antidiagonals and recorded as the sequence a11 a12 a21 a13 a22 a31 ... (pick whichever seems nicer - or use both). See A003987 for an example.
A: Put it in the "Example" (or %e) lines. For instance (A079297):
%I A079297
%S A079297 1,2,6,3,9,15,4,12,20,28,5,15,25,35,45,6,18,30,42,54,66,7,21,35,49,63,
%T A079297 77,91,8,24,40,56,72,88,104,120,9,27,45,63,81,99,117,135,153,10,30,50,
%U A079297 70,90,110,130,150,170,190,11,33,55,77,99,121,143,165,187,209,231,12
%N A079297 Triangle read by rows: the k-th column is an arithmetic progression with difference 2k-1, and the top entry is the hexagonal number k*(2*k-1) (A000384).
%C A079297 The n-th row consists of the odd multiples of n from n*1 to n*(2n-1).
%D A079297 R. Honsberger, Ingenuity in Math., Random House, 1970, p. 88.
%F A079297 a(n,k) = n(2k-1) for 1<=k<=n. n-th row adds to n^3.
%e A079297 Triangle begins:
%e A079297 1
%e A079297 2 6
%e A079297 3 9 15
%e A079297 4 12 20 28
%e A079297 5 15 25 35 45
%e A079297 6 18 30 42 54 66
%K A079297 nonn,tabl,easy
%O A079297 1,2
%A A079297 njas, Mar 04 2003
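If you generate such %e lines by program, a small Python sketch along the following lines works (illustrative only; it simply applies the formula a(n,k) = n*(2k-1) from the %F line above):

def row(n):
    # Row n of A079297: the odd multiples of n, i.e. n*(2k-1) for k = 1..n.
    return [n * (2 * k - 1) for k in range(1, n + 1)]

print("%e A079297 Triangle begins:")
for n in range(1, 7):
    print("%e A079297 " + " ".join(str(t) for t in row(n)))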
A: As long as the files are not too big and serve a useful purpose they are welcomed. Examples of such files are:
a plain text file giving additional terms for an important sequence, when there are too many to fit on the %S, %T, %U lines (e.g. A000019)
a gif file illustrating the things being enumerated by the initial terms of a sequence (e.g. A003035, A000012)
a plain text file giving a computer program for producing an important sequence, when this is too big to fit in the %p, %t or %o lines (e.g. A005132)
a postscript file plotting the initial terms of an important sequence, but only when this shows some unusual features (again see A005132). In general such plots are unnecessary.
The reason for including these files on the OEIS web site is that it is hoped that this is more permanent than people's individual home pages.
The OEIS contains many sequences giving the decimal expansions of important constants. For example, the decimal expansion of Pi, 3.14159265358979..., gives the sequence 3,1,4,1,5,9,2,6,5,3,5,8,9,7,9,... (A000796), with offset 1.
Q: What does the "offset" mean for a constant?
A: See above.
Q: How do you handle negative numbers?
A: If the constant is negative, say so, but give the decimal expansion of the absolute value. Reminder: Always give the actual value of the constant in a %e line. For example
%e A020759 1/sqrt(2) = 0.7071067811865475...
And don't forget the keyword "cons" (don't use "base").
Q: Do you also want the sequence formed by the continued fraction expansion of the constant?
A: Yes! This should be a separate sequence, with keyword "cofr", and there should be cross-references (%Y lines) linking the two sequences.
A: The number of new sequences arriving has remained fairly constant at about 10000 per year (i.e. about 30 a day) for several years. The number of comments keeps increasing, and at present averages between 30 and 60 a day. Web traffic on all my web pages averages about 600000 page-downloads per month. The number of sequences in the OEIS is posted on the main Lookup page and is constantly updated. As of January 2006 it is around 120000. The database (just the entries for the sequences, not the illustrations) occupies about 100 megabytes.
Look for sequences in papers in the LANL e-print archive (in all categories, not just math.), the Electronic Journal of Combinatorics, Seminaire Lotharingien de Combinatoire, SIAM Journal on Discrete Math. [unfortunately you or your library need a subscription for this journal], etc. These and many other free electronic resources are listed in the Directory of Open Access Journals and in the Electronic Library / Journals section of EMIS. There are a large number of other journals available on the Web that often contain sequences, but for which you (or your institution) need a subscription. The European Journal of Combinatorics and the Journal of Mathematical Chemistry are just two of dozens that could be listed here. If you have access to a good library, you should be able to search these journals for new sequences or new references for existing sequences.
Look for sequences with keyword "uned" or "obsc" and edit them!
Look out for broken links in the OEIS. (Xenu is the best tool for doing this systematically.) When you find a bad link, be a detective and track down the new URL.
Extend some of the sequences that need more terms: set the WebCam to browse the sequences that need extending.
See the list of future projects.
A: It is not easy to give a precise answer. The best thing would be to look at some well-written entries in the OEIS (here are some picked at random off the WebCam: A001316, A055545, A052402, A007308). Then look at the two web pages that describe the internal format and the standard or beautified format used in the replies from the lookup service. This is how things should be! Now look at some entries with keyword "uned". (Search here.) Usually you will see that many things are wrong! The description is obscure, or the entries are obviously wrong, or the English is hopeless, etc. To help, make a copy of the sequence in the internal format, edit it (very carefully) and email it to me ([email protected]) using subject line "EDITED A012345" (say). Sequences that need a lot of work have keyword "uned", but there are many others that could be improved. If you don't see anything wrong, send me email saying sequence A012345 (say) looks fine and doesn't need further editing.
A: If you just added more terms, say something like:
%E A060031 More terms from Larry Reeves (larryr(AT)acm.org), Jan 01 2003
%E A070171 Corrected and extended by Ralf Stephan (ralf(AT)ark.in-berlin.de), Feb 02 2002
If you made some nontrivial changes, say something like:
%E A067581 Edited by Dean Hickerson (dean(AT)math.ucdavis.edu), Mar 03 2002
%E A069841 Edited and extended by Robert G. Wilson v (rgwv(AT)rgwv.com), Jun 04 2002
%F A083741 G.f.: 3*(x-2)*ln(1-x)-5*x+x^2. - Vladeta Jovovic (vladeta(AT)Eunet.yu), Jul 06 2003
%F A006721 a(n) is asymptotic to C^n with C=1.226....... - Benoit Cloitre (abcloitre(AT)wanadoo.fr), Aug 07 2002
Reasons for doing this:
So you get credit for the changes.
So you can be contacted in case there is an error!
To indicate that someone else besides the author has looked at the sequence and approved it.
To indicate that it has been changed on that date.
A: Go to the Welcome page, and scroll down. After a while you will see a link to a stripped-down and gzipped version of the database.
A: This is essentially a one-man operation, and it is hard enough to maintain one site.
A: Not at present. But unless I am traveling, comments usually get processed within a few days.
Thanks to all the sequence fans who suggested questions and answers for this page.
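If you download the stripped, gzipped file mentioned above for running tests, a minimal search sketch in Python might look like this; it assumes (so check your copy of the file) that data lines have the form A000045 ,0,1,1,2,3,5,8,... and that header lines start with #:

import gzip

def search(terms, path="stripped.gz"):
    # Look for the given terms as a consecutive run in any sequence.
    needle = "," + ",".join(str(t) for t in terms) + ","
    hits = []
    with gzip.open(path, "rt") as f:
        for line in f:
            if line.startswith("#"):
                continue  # skip header/comment lines
            anum, _, digits = line.partition(" ")
            if needle in digits:
                hits.append(anum)
    return hits

print(search([1, 2, 4, 6, 10, 14]))  # should include A005574, mentioned above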
Lifang Yu1, Yiyuan Yang2, Zhaohong Li ORCID: orcid.org/0000-0001-6463-76663, Zhenzhen Zhang1 & Gang Cao4 During the process of authenticating the integrity of digital videos, double compression is an important evidence. High-efficiency video coding (HEVC) is the latest coding standard proposed in 2013 and has shown superior performance over its predecessors. It brings in several novel syntactic units, such as coding tree unit (CTU), prediction unit (PU), and transform unit (TU). Few methods have been reported to detect HEVC double compression by utilizing such new characteristics. In this paper, a novel scheme based on TU is proposed to detect double compression of HEVC videos. The histogram of each TU partition type in the first I/P frame of all GOPs is calculated. The feature set is fed into SVM to classify single and double compressed videos. Experimental results have demonstrated the effectiveness of TU-based feature (accuracy in [0.84, 0.97]) that bears very low dimension (10-D). And it is also shown that TU-based feature set can be combined with other feature sets to boost their performance. Nowadays, video editing software can be easily accessed. At the same time, video editing algorithms have been proposed by researchers [1]. The editing of videos becomes easier and easier. On the one hand, video editing helps people to achieve astounding visual effect in some situations, e.g., movies and video advertisements. On the other hand, editing videos with malicious purpose may lead to potentially serious moral, ethical, and legal consequences if tampered videos are considered as evidences. Hence, it is important to verify the integrity and authenticity of videos. Double compression is a distinction of modified videos. Video tampering is commonly fulfilled in non-compressed domain. The original video will first be decompressed to frames, and the manipulation is performed directly on frames. Then, the tampered frames are recompressed [2]. High-efficiency video coding (HEVC) is the latest generation of video coding standard prepared by the Joint collaborative Team on Video Coding [3]. Although research works on double compression detection are prosperous on video standards preceding HEVC [4,5,6,7,8,9,10], e.g., MPEG, H.264, the research of double compression detection on HEVC needs more attention. Huang et al. [11] utilize co-occurrence matrixes of discrete cosine transform (DCT) coefficients to detect double compressed HEVC videos when the quantization parameters (QP) change in the process of the second compression. In 2016, our group proposed a feature set to detect double compression of HEVC with the same QP after double compression [12]. The prediction unit (PU) partitioning type, which is a specific syntactic unit owned by HEVC, was firstly utilized and showed a promising detection capability. Xu et al. [13] use the sequence of number of prediction unit of its prediction mode (SN-PUPM) to conduct double compression detection of HEVC videos in the situation that the GOP size changes after double compression. When considering transferring through the network, adopting bitrate as the ultimate compression parameter is more suitable because of the limit of band width. In this situation, videos are compressed at a certain bitrate. After they are tampered, they will be recompressed at another bitrate, which can be equal or not equal to the bitrate used in the first compression. 
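This single-versus-double pipeline is easy to script. The Python sketch below uses ffmpeg's libx265 encoder purely as an illustration; the experiments reported later in this paper use the HM reference codec, and all file names, frame sizes, and bitrates here are placeholders:

import subprocess

def encode(yuv, out, bitrate, size="176x144"):
    # Raw YUV input needs its geometry and pixel format stated explicitly.
    subprocess.run(["ffmpeg", "-y", "-f", "rawvideo", "-pix_fmt", "yuv420p",
                    "-s", size, "-i", yuv, "-c:v", "libx265",
                    "-b:v", bitrate, out], check=True)

def decode(bitstream, yuv):
    # Decode a compressed bitstream back to raw YUV frames.
    subprocess.run(["ffmpeg", "-y", "-i", bitstream, "-f", "rawvideo",
                    "-pix_fmt", "yuv420p", yuv], check=True)

encode("clip.yuv", "single.hevc", "200k")   # single compression at B2
encode("clip.yuv", "first.hevc", "100k")    # first compression at B1
decode("first.hevc", "first.yuv")           # decompression (tampering would happen here)
encode("first.yuv", "double.hevc", "200k")  # recompression at B2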
In the process of video encoding, QP is assigned by the rate-distortion control process and it may change after double compression. There has been some research works engaged in HEVC double compression detection with designated bitrate. Li et al. [14] proposed a 164-D (dimension) feature set based on co-occurrence matrix of PU types and DCT coefficients of I frame. Liang et al. [15] proposed a 25-D feature set based on the histogram of PU partition types of P frame. In this work, we focus on the situation with designated bitrate and detecting HEVC double compression with different bitrates. We investigate transform unit (TU) partition type of single and double compressed videos and employ the histogram of the TU partition type in the first I and P frame of every group of pictures (GOP). As stated in Section 5, our proposed TU-based feature is effective in detecting double HEVC compression. Moreover, when combined with our TU-based feature set, the detection performance of some other feature sets will be boosted as shown in experiments. The rest of this paper is organized as follows. In Section 2, we introduce backgrounds simply. Then, in Section 3, the feature set adopted in this paper is stated in details. Experimental results together with a comparison with a former work are shown in Section 4 to verify the effectiveness of our feature set. Finally, discussion and conclusions are made in Section 5. Background of transforming unit (TU) Compared to other members in the family of video coding standards, HEVC has the highest feasibility. More than half of the average bitrate savings of HEVC relative to its predecessor H.264|MPEG-4 AVC can be attributed to its increased flexibility of block partitioning for prediction and transform coding [16]. In recent years, the industry of standard- and high-definition (HD) video has been developed vigorously. One feature of large-sized images is that the area of smooth regions is larger. An encoding with larger blocks can greatly improve the encoding efficiency. Considering the characteristics of HD videos, the coding tree unit (CTU) is introduced in H.265/HEVC. For a luma CTU with L × L luma samples, L can be chosen from the values 16, 32, and 64. As shown in Fig. 1a, an image can be divided into non-overlapping CTUs. Each CTU will be further divided into coding units (CU) using quadtree (as shown in Fig. 1b, c. A CTU can be also directly used as a CU. Therefore, the size of the CU is a variable, and the largest one is 64 × 64 luma samples, while the smallest one is 8 × 8 luma samples. On the one hand, a CU of a large size enables great increasing in the coding efficiency of a smooth region. On the other hand, small CUs are good at dealing with local image details, which makes the prediction of complex areas more accurate. Examples of the partitioning of a picture into CTUs (a) and the partitioning of a 64 × 64 coding tree unit (CTU) into coding units (CUs) (b). The partitioning can be illustrated by a quadtree (c), where the numbers indicates the coding order of CUs Transform unit (TU) is the basic unit of transformation and quantization operation, and its size is also flexible. A CU is recursively divided into TUs based on a quadtree approach. The minimum size of a TU is 4 × 4, and the maximum size is 64 × 64. TUs with larger sizes concentrate energy better. And TUs with smaller sizes save more image details. 
This flexible partitioning structure allows the residual energy to be fully compressed in the transform domain, which further improves the coding gain. In fact, a TU with the size of 64 × 64 implies that it must be further divided into four 32 × 32 TUs, because the largest size of the DCT (discrete cosine transform) operation is 32 × 32. In this paper, the sizes of TUs are regarded as falling into five types, namely 4 × 4, 8 × 8, 16 × 16, 32 × 32, and 64 × 64, regardless of implicit further division.
Methods—TU partition type features
In this section, we introduce our proposed scheme in detail. First, we introduce the feature set based on the TU partition type. Then, we analyze the effectiveness of the proposed feature set.
Feature extraction method
In this subsection, we examine the unique coding structure of HEVC. More specifically, the TU partition type of the first I/P frame in each GOP is investigated, and the histogram of TU partition types in the first I/P frame of all GOPs is used as the feature for video classification. We take only the luminance component of TUs into account during the feature extraction process. The process of feature extraction can be divided into three steps, and its flow chart is shown in Fig. 2. Firstly, we extract the TU partition type of video frames using the video analysis software Gitl_HEVC_Analyzer [17]. Secondly, we mark the TU partition type of video frames. Based on blocks with sizes of 8 × 8, the TU partition types are marked. Table 1 shows the labels for the TU partition types. Figure 3 shows an example of marking TU partition types with their labels in a 64 × 64 CU. Thirdly, we count up the histogram of TU partition types. The number of each TU partition type in the first I/P frame of each GOP is counted, denoted by Fi = {fi,0, fi,1, fi,2, fi,3, fi,4} (i = 1, ⋯, M), where M is the number of GOPs in the video sequence, and the second index of f is the same as the label shown in Table 1. Each Fi records the number of 8 × 8 blocks corresponding to the five TU partition types in the i-th GOP. The average value of the Fi is calculated as:
$$ F=\frac{\sum_{i=1}^M F_i}{M} \quad (1) $$
Fig. 2 The procedure of feature extraction
Table 1 Labels for the TU partition types
Fig. 3 Example of marking TU partition types with their labels in a 64 × 64 CU
F is a 5-D vector, which can be denoted as F = {f0, f1, f2, f3, f4}. Then, each element of F is normalized as:
$$ h_k=\frac{f_k}{\sum_{j=0}^4 f_j} \quad (2) $$
where k ∈ {0, ⋯, 4}. The histogram of TU partition types can be denoted as H = {h0, h1, h2, h3, h4}. The histogram of TU partition types of the first I frame in all GOPs is called TU-I in the rest of this paper, and that of the first P frame is called TU-P. TU-I is a 5-D vector, and so is TU-P. Then, in total, we have a 10-D vector corresponding to the TU partition type, which will be referred to as TU-IP in the following text. The TU-IP feature describes the holistic distribution of the TU partition types in the first I/P frame of all GOPs, and it is a characteristic unique to HEVC.
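As a compact sketch of these three steps (illustrative Python, not the authors' code; the per-block label maps are assumed to have been exported from the analyzer already, with labels 0-4 as in Table 1):

import numpy as np

def tu_histogram(label_maps):
    # label_maps: one 2-D integer array per GOP, giving the TU partition
    # type (label 0..4 of Table 1) of every 8x8 block of the chosen frame.
    F = np.zeros(5)
    for m in label_maps:
        F += np.bincount(np.ravel(m), minlength=5)
    F /= len(label_maps)   # Eq. (1): average the counts over the M GOPs
    return F / F.sum()     # Eq. (2): normalize to the histogram H

# The 10-D TU-IP feature concatenates the I-frame and P-frame histograms:
# tu_ip = np.concatenate([tu_histogram(i_maps), tu_histogram(p_maps)])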
Effectiveness of TU-based features
In this subsection, we analyze the effectiveness of the TU-based features. Figure 4 shows the TU partition type in the first I frame and the first P frame of a GOP in a single compressed video and a double compressed one. The black solid boxes represent the CU partitioning. If there are blue lines inside a black solid box, it means the CU is partitioned into several TUs recursively using a quadtree; otherwise, the TU bears the same size as the CU. From Fig. 4, we can see that there is an obvious difference in the TU partition type in the first I and P frames of a GOP between a single compressed video and its corresponding double one.
Fig. 4 a–d Example of TU partition type in the first I frame and the first P frame of a GOP in a single compressed video and its corresponding double compressed video
Figure 5 shows the average number of each TU partition type in the first I/P frame of all GOPs of a single compressed video and its corresponding double compressed one. The average number of each TU partition type is the parameter F in Eq. (1). From the figure, we can observe an obvious difference between the TU partition types of the single and double compressed videos. For example, there is a big difference between the blue and red bars when the TU size is 4 × 4, and this is also true for the green and purple bars when the TU size is 8 × 8.
Fig. 5 The average number of each TU partition type in the first I/P frame of all GOPs of single (200 k) and double (100 k–200 k) compressed videos. '200k_I' ('200k_P') means the I (P) frame of a single compressed video with bitrate 200 k. '100_200k_I' ('100_200k_P') means the I (P) frame of a double compressed video with bitrate 100 k in the first compression process and 200 k in the second compression process. The video for testing is the first part of the video named "akiyo" in the QCIF database. Information on the video database can be found in Section 4
Experimental setup
In the experiments, we use 17 uncompressed YUV sequences in QCIF format [18], whose resolution is 176 × 144, as initial videos. We also use 18 CIF format [19] videos with a resolution of 352 × 288. To increase the size of the video database, each video is divided into non-overlapping fragments with lengths of 100 frames, and a total of 36 QCIF video fragments and 43 CIF video fragments are obtained. To obtain the single compressed video database, we compress the video fragments to HEVC format at bitrate B2. To obtain the double compressed video database, we compress the video fragments to HEVC format at bitrate B1; then, after decompressing, we recompress them at bitrate B2. The bitrate group (B1 − B2) is set to the following values: {(100 − 200), (100 − 300), (100 − 400), (200 − 300), (200 − 400)} Kbps. In all these encoding and decoding processes, we utilize the codec HM10.0 [20]. In the process of encoding, the frame rate, intra period, and GOP size are the three main parameters, and they are set to 30, 4, and 4, respectively. Thus, the total number of first P frames in a video fragment is 25, and that of first I frames is also 25. The detection results are expressed in terms of the detection probability P_accuracy. When the cardinalities of the positive set and the negative set are the same, which is the scenario in our experiments, P_accuracy is defined as follows:
$$ P_{\mathrm{accuracy}}=\frac{P_{\mathrm{TP}}+P_{\mathrm{TN}}}{2} \quad (3) $$
where P_TP and P_TN are the rates of true positives and true negatives, respectively. All classifiers presented in this paper were constructed by using libSVM [21] with the polynomial kernel k(x, y) = (γ x^T y + coef0)^d, γ > 0. coef0 and d are set to the default values 0 and 3, respectively. The classifier of DCT136 [10] is the only exception, where the classifier is libSVM with the RBF kernel as in the original paper. Before training the libSVM on the training set, the values of the penalization parameter C and the kernel parameter γ need to be set. These hyper-parameters balance the complexity and accuracy of the classifier.
The hyper-parameter C penalizes the error on the training set. Higher values of C produce classifiers that are more accurate on the training set but also more complex, with possibly worse generalization. On the other hand, a smaller value of C produces simpler classifiers with worse accuracy on the training set but hopefully with better generalization. The role of the kernel parameter γ is similar to that of C: higher values of γ make the classifier more pliable but likely prone to over-fitting the data, while lower values of γ have the opposite effect. The values of C and γ should be chosen to give the classifier the ability to generalize. The standard approach is to estimate the error on unknown samples using cross-validation on the training set over a fixed grid of values and then select the values corresponding to the lowest error. In this paper, we used five-fold cross-validation with the multiplicative grid
$$ C\in\left\{2^i \mid i\in\{-2,-1.5,-1,\cdots,3,3.5,4\}\right\},\qquad \gamma\in\left\{2^i \mid i\in\{-4,-3.5,-3,\cdots,3,3.5,4\}\right\}. $$
The C and γ pair that attains the highest accuracy with the smallest C is chosen and denoted (C0, γ0). We randomly split the video set into training and testing sets. Then, (C0, γ0) is used to train the model on the training set, and the predicted labels of the testing set are obtained based on the trained model. This process is executed 100 times, and statistics of the detection accuracy will be shown in the tables and/or figures. We use four kinds of statistics, that is, the mean value (MEAN), standard deviation (STDEV), median value (MED), and median absolute deviation (MAD). For the QCIF dataset, 30 single compressed video fragments and their corresponding double compressed video fragments are used for training, and the remaining 6 single compressed video fragments and their corresponding double compressed video fragments are used for testing. For the CIF dataset, the sizes of the training and testing sets are 36 and 7, respectively.
Evaluating the effectiveness of TU features
TU-IP features are formed from the TU partition types of the first I/P frame in all GOPs; they are specific to HEVC. In this subsection, we will show the effectiveness of the TU-IP features in detecting double compression of HEVC. Table 2 shows the detection accuracy of separating the double compressed video fragments from their corresponding single compressed ones. It can be seen that TU-I or TU-P individually shows effectiveness in detecting HEVC double compression. For TU-I, the MEAN ranges from 0.70 to 0.92, and for TU-P, it ranges from 0.63 to 0.96. The MEAN of TU lies in [0.74, 0.97]. The MED of TU-I and TU-P lies in [0.67, 0.92] and [0.67, 1], respectively, and that of TU lies in [0.75, 1]. From Figs. 6 and 7, we can easily observe that TU outperforms TU-I and TU-P in both the mean value and the median value of the detection accuracy of HEVC double compression. Merging TU-I and TU-P together shows conspicuous superiority over either of them. When evaluating the detection capability of TU-I and TU-P, we should note that the dimension of TU-I or TU-P is only 5-D and that of TU is only 10-D. The dimension of features based on TU is low compared to other features for HEVC double compression detection, e.g., the DCT136 [10] and HPP [13] feature sets.
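The protocol above maps directly onto scikit-learn's libSVM wrapper; the sketch below is illustrative only (the feature-loading step and names are placeholders, and scikit-learn is a stand-in for the libSVM binary actually used in the paper):

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

X, y = load_features()  # hypothetical loader: TU-IP vectors, labels 0 = single, 1 = double

param_grid = {
    "C": [2.0 ** i for i in np.arange(-2, 4.25, 0.5)],      # 2^-2 .. 2^4
    "gamma": [2.0 ** i for i in np.arange(-4, 4.25, 0.5)],  # 2^-4 .. 2^4
}
svc = SVC(kernel="poly", degree=3, coef0=0)  # k(x,y) = (gamma * x^T y + coef0)^d
search = GridSearchCV(svc, param_grid, cv=5).fit(X, y)  # five-fold CV on the training set
print(search.best_params_)  # plays the role of (C0, gamma0)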
Table 2 Detection accuracy of the double compressed video fragments from its corresponding single compressed ones using TU-I, TU-P, and TU features on QCIF video dataset
Fig. 6 The mean value (MEAN) of detection accuracy of TU-I, TU-P, and TU
Fig. 7 The median value (MED) of detection accuracy of TU-I, TU-P, and TU
From the accuracy of TU-P in Table 2, we can also find that, when B2 is fixed, a lower B1 results in significantly better detection performance, and when B1 is fixed, a higher B2 leads, by and large, to a boost in detection accuracy. This phenomenon also holds for TU-I and TU. It can be interpreted from the following aspects. Firstly, when B2 is fixed, the single compressed video dataset is fixed and the double compressed video dataset changes. A lower B1 may lead to a larger quantization step in the process of rate-distortion control. A larger quantization step means more image details are lost in the first compression, which makes the double compression easier to detect. Secondly, when B1 is fixed, the quantity of image details lost in the first compression is fixed, but both the single and the double compressed video datasets change with respect to B2. A higher B2 may lead to a smaller quantization step in the recompression process, so the traces of the image details lost in the first compression may be retained better. Thus, a higher B2 may lead to a boost in detection accuracy. Nevertheless, both the single and the double compressed video databases vary, and the performance gain from a higher B2 when B1 is fixed is not as large as that from a lower B1 when B2 is fixed.
To make it easy for interested readers to reproduce the experimental results, the parameters used for training libSVM for the feature sets TU-I, TU-P, and TU-IP are shown in Table 3.
Table 3 The parameters used for training libSVM for feature set TU-I, TU-P, and TU-IP on QCIF video dataset
Combination with other feature sets
The TU feature is extracted from the TU partition type, which is a syntactic structure of HEVC. In this subsection, we show that the TU feature set can be combined with other feature sets to boost their performance in detecting double HEVC compression. Table 4 shows the detection accuracy of HPP [15] and HPP combined with TU (TUHPP) on the QCIF video dataset. Firstly, the mean value of detection accuracy (MEAN) of HPP ranges from 0.91 to 0.99, and that of TUHPP ranges from 0.95 to 1. Figures 8 and 9 show the MEAN and MED of HPP and TUHPP on the QCIF dataset. It is obvious that the combination of the TU feature and HPP outperforms HPP under every double compression bitrate group. Table 5 shows the detection accuracy of TU-IP, HPP, and HPP combined with TU (TUHPP) on the CIF video dataset, and we can also observe that TUHPP outperforms HPP. The TU feature is extracted from the TU partition type, while the HPP feature is extracted from the PU partition type. These two kinds of features can be a good supplement to each other.
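As a concrete reading of "combined": the paper does not spell out the fusion rule, but the natural construction is to concatenate the normalized feature vectors before feeding them to the SVM. A hypothetical sketch (all variable names are placeholders):

import numpy as np
# tu_ip: the 10-D TU-IP vector; hpp, dct136: features from [15] and [11].
# Plain concatenation is an assumption, not a detail stated in the paper.
tuhpp = np.concatenate([tu_ip, hpp])
tudct136 = np.concatenate([tu_ip, dct136])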
Table 4 Detection accuracy of the double compressed video fragments from its corresponding single compressed ones using HPP feature and TUHPP feature on QCIF video dataset
Fig. 8 The mean value (MEAN) of detection accuracy of HPP [14] and TUHPP
Fig. 9 The median value (MED) of detection accuracy of HPP [14] and TUHPP
Table 5 Detection accuracy of the double compressed video fragments from its corresponding single compressed ones using TU-IP, HPP feature, and TUHPP feature on CIF video dataset
Table 6 shows the detection accuracy of DCT136 [11] and DCT136 combined with TU (TUDCT136). Firstly, the mean value of detection accuracy (MEAN) of DCT136 ranges from 0.74 to 0.95, and that of TUDCT136 ranges from 0.92 to 1. Secondly, the MED of DCT136 lies in [0.75, 0.97], and that of TUDCT136 lies in [0.92, 1]. Meanwhile, we can observe from Figs. 10 and 11 that the combination of the TU feature and DCT136 outperforms DCT136 under every double compression bitrate group. The TU feature is extracted from the TU partition type, which is a syntactic structure of HEVC. DCT136 is extracted from the DCT coefficients of a video, which are the content data of a video. The combination of these two kinds of features can boost the performance of each other.
Table 6 Detection accuracy of the double compressed video fragments from its corresponding single compressed ones using DCT136 [11] feature and TUDCT136 feature on QCIF video dataset
Fig. 10 The mean value (MEAN) of detection accuracy of DCT136 [10] and TUDCT136
Fig. 11 The median value (MED) of detection accuracy of DCT136 [10] and TUDCT136
To make it easy for interested readers to reproduce the experimental results, the parameters used for training libSVM for the feature sets HPP [15], TUHPP, DCT136 [11], and TUDCT136 on the QCIF video dataset are shown in Table 7, and the parameters for TU-IP, HPP, and TUHPP on the CIF dataset are shown in Table 8.
Table 7 The parameters used for training libSVM for feature set HPP [15], TUHPP, DCT136 [11], and TUDCT136 on QCIF video dataset
Table 8 The parameters used for training libSVM for feature set HPP [15], TUHPP, DCT136 [11], and TUDCT136 on CIF video dataset
Discussion and conclusions
In this paper, we propose a new method to detect HEVC double compression under different bitrates. The distinguishing feature set is composed of the histogram of TU partition types. It is first reported in this paper to employ a TU partition type-based feature to detect HEVC double compression, and the experimental results suggest its efficiency. The detection method proposed in ref. [11] employed the characteristic that DCT coefficients are changed during recompression. It is a traditional kind of method used for double compression detection in video coding standards preceding HEVC. Nevertheless, HEVC brings in unique syntax units, such as CTU, PU, and TU. To the best of our knowledge, only the PU number was investigated in refs. [12,13,14,15] for detecting HEVC double compression. The effectiveness of the PU number in detecting HEVC double compression motivates us to explore another unique feature, i.e., the TU. When using the histogram of TU partition types as features for classification, the accuracy is above 0.84, even reaching 0.97 when distinguishing singly compressed videos with bitrate 400 Kbps from double compressed videos with bitrates (100 − 400) Kbps. The reason is that the TU partition type is controlled by the rate-distortion optimization process, and it is sensitive to bitrate. Different bitrates may result in different TU partition types. Our TU features are based on the syntactic structure of HEVC. When combined with DCT coefficient-based features (e.g., ref. [11]), they can boost the original performance. That mainly lies in the fact that they capture characteristics of videos from different aspects. When combined with features from other kinds of syntactic structures of HEVC (e.g., HPP [14]), they can also improve the original performance.
Other than borrowing ideas from double compression in other video standards and develop new methods that are universal to all/several kinds of video standards, researchers can also put some efforts on digging methods from the unique characteristic of HEVC. Apart from the PU number used in ref. [12,13,14,15] and TU partition types employed in this paper, there are many other interesting and promising characteristics of HEVC, such as the inter and intra prediction mode of PU and the merge type of PU in P frames. Our future work will be focused on utilizing these unique characteristics of HEVC to detect double compression. Meanwhile, the application of emerging new techniques in related area may boost the development of HEVC double compression detection [22, 23]. We can provide the data. CTU: Coding tree unit DCT: Discrete cosine transform HEVC: High-efficiency video coding Prediction unit QP: Quantization parameters SN-PUPM: Prediction unit of its prediction mode Transform unit X. Guo, X. Cao, X. Chen, Y. Ma, in Proc. of IEEE Conference on Computer Vision and Pattern Recognition. Video editing with temporal, spatial and appearance consistency (2013), pp. 2283–2290 P. Bestagini, K.M. Fontani, S. Milani, M. Barni, A. Piva, M. Tagliasacchi, K.S. Tubaro, in Proc. of Asian-Pacific Signal and Information Processing Association. An Overview on Video Forensics (2012), pp. 1229–1233 G.J. Sullivan, J. Ohm, W.J. Han, T. Wiegand, Overview of the high efficiency video coding (HEVC) standard. IEEE Trans. Circuits Syst. Video Technol. 22(12), 1649–1668 (2012) W. Chen, Y.Q. Shi, in Digital Watermarking. Detection of double MPEG compression based on first digit statistics (Springer, Berlin, 2008), pp. 16–30 W. Luo, M. Wu, J. Huang, in Proc. of SPIE. MPEG recompression detection based on block artifacts, vol 6819 (2008), pp. 68190–68112 X. Jiang, W. Wang, T. Sun, Y.Q. Shi, S. Wang, Detection of double compression in MPEG-4 videos based on Markov statistics. IEEE Signal Processing Lett. 20(5), 447–450 (2013) W. Wang, X. Jiang, S. Wang, T. Sun, in Proc. of Visual Communications and Image Processing. Estimation of the primary quantization parameter in MPEG videos (2013) D. Liao, R. Yang, H. Liu, "Double H.264/AVC compression detection using quantized nonzero AC coefficients", in Proc. of SPIE - The International Society for Optical Engineering, vol. 7880, 2, pp. 78800-78810, 2011 J. Hou, Z. Zhang, Y. Zhang, J. Ye, Y.Q. Shi, Detecting multiple H.264/AVC compressions with the same quantization parameters. IET Inf. Secur. 11(3) (2016). https://doi.org/10.1049/iet-ifs.2015.0361 X. Jiang, P. He, T. Sun, F. Xie, S. Wang, Detection of double compression with the same coding parameters based on quality degradation mechanism analysis. IEEE Trans. Inf. Forensics Secur. 13(1), 170–185 (2018) M. Huang, R. Wang, J. Xu, D. Xu, Q. Li, in Proc. of International Workshop on Digital Watermarking. Detection of double compression for HEVC videos based on the co-occurrence matrix of DCT coefficients (2015), pp. 61–71 R. Jia, Z. Li, Z. Zhang, D. Li, "Double HEVC compression detection with the same QPs based on the PU number," in Proc. Of 3rd Annual International Conference on Information Technology and Applications, Vol. 02010, 7, pp. 1–4, 2016 Q. Xu, T. Sun, X. Jiang, Y. Dong, in Proc. of International Workshop on Digital Watermarking. HEVC double compression detection based on SN-PUPM feature (2017), pp. 3–17 Z.-H. Li, R.-S. Jia, Z.-Z. Zhang, X.-Y. Liang, J.-W. Wang, in Proc. ITM Web Conf. 
Double HEVC compression detection with different bitrates based on co-occurrence matrix of PU types and DCT coefficients, vol 12 (2017), p. 01020 X. Liang, Z. Li, Y. Yang, Z. Zhang, Y. Zhang, Detection of double compression for HEVC videos with fake bitrate. IEEE Access 6, 53243–53253 (2018) V. Sze, M. Budagavi, G.J. Sullivan, High efficiency video coding (HEVC). Integrated Circuit and Systems, Algorithms and Architectures, (Springer, 2014), pp. 1–375 https://github.com/lheric/GitlHEVCAnalyzer, Accessed 2 Aug 2017 http://www.media.xiph.org/video/derf/, Accessed 2 Aug 2015 http://www.trace.eas.asu.edu/yuv/index.html, Accessed 2 Aug 2015 http://download.csdn.net/download/amymayadi/7903385, Accessed 2 Aug 2015 C.C. Chang, C.J. Lin, in ACM Transactions on Intelligent Systems and Technology. LIBSVM: a library for support vector machines, vol 2 (2011), pp. 1–27. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm (Accessed 14 Dec 2017) C. Yan, L. Li, C. Zhang, B. Liu, Y. Zhang, Q. Dai, Cross-modality bridging and knowledge transferring for image understanding. IEEE Trans. Multimedia, pp. 1–1 (2019) C. Yan, H. Xie, J. Chen, Z. Zha, X. Hao, Y. Zhang, Q. Dai, A fast Uyghur text detector for complex background images. IEEE Trans. Multimedia 20(12), 3389–3398 (2018).
The authors would like to thank the editor and anonymous reviewers for their helpful comments and valuable suggestions. The work is funded by the National Natural Science Foundation of China (No. 61702034, No. 61401408) and Beijing municipal education commission project (No. KM201510015010).
Department of Information Engineering, Beijing Institute of Graphic Communication, Beijing, 102600, China: Lifang Yu & Zhenzhen Zhang
School of Computer Science and Engineering, Beihang University, Beijing, 100083, China: Yiyuan Yang
School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing, 100044, China: Zhaohong Li
School of Computer Science, Communication University of China, Beijing, 100024, China: Gang Cao
All authors take part in the discussion of the work described in this paper. ZL, ZZ, and LY conceived and designed the experiments. ZZ performed the experiments. ZL, ZZ, and LY analyzed the data. LY, ZZ, and GC wrote the paper. All authors read and approved the final version of the manuscript.
Correspondence to Zhaohong Li.
Yu, L., Yang, Y., Li, Z. et al. HEVC double compression detection under different bitrates based on TU partition type. J Image Video Proc. 2019, 67 (2019) doi:10.1186/s13640-019-0468-x
Accepted: 16 May 2019
Ring
A set $R$ on which two binary algebraic operations are defined: addition and multiplication, the set being an Abelian group (the additive group of the ring) with respect to addition, and the multiplication is related to the addition by the distributive laws:
$$a(b+c)=ab+ac,\quad(b+c)a=ba+ca,$$
where $a,b,c\in R$. In general no restriction is imposed on multiplication, that is, $R$ is a magma (called the multiplicative system of the ring) with respect to multiplication.
A non-empty subset $A\subset R$ is called a subring of $R$ if $A$ itself is a ring with respect to the operations defined on $R$, that is, $A$ must be a subgroup of the additive group of $R$ and a subgroupoid of the multiplicative groupoid of this ring. Clearly, the ring itself and the zero subring consisting of just the zero element are subrings of a given ring.
The (set-theoretic) intersection of subrings of a ring is a subring. The join of a family of subrings $A_\alpha$, $\alpha\in I$, of a ring $R$ is the intersection of all subrings that contain all $A_\alpha$. The set of all subrings of a given ring is a lattice, $S(R)$, with respect to the operations of intersection and join of subrings. The set of ideals (cf. Ideal) of this ring forms a sublattice of $S(R)$.
Concerning the various directions in the theory of rings, see Rings and algebras; Associative rings and algebras; Non-associative rings and algebras.
In many contexts it is tacitly assumed that the ring contains a unit element, denoted by $1$, and subrings are taken to be subrings with the same unit. In this case the set of ideals is not a sublattice of $S(R)$.
How to Cite This Entry: Ring. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Ring&oldid=37344
This article was adapted from an original article by O.A. Ivanova (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
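As a quick illustration of the distributive laws, one can verify them by brute force in the finite ring Z/6Z (integers modulo 6); a minimal Python sketch:

# Check both distributive laws in Z/6Z = {0,...,5} with arithmetic mod 6.
n = 6
R = range(n)
ok = all((a * (b + c)) % n == (a * b + a * c) % n and
         ((b + c) * a) % n == (b * a + c * a) % n
         for a in R for b in R for c in R)
print(ok)  # True: Z/6Z satisfies a(b+c) = ab+ac and (b+c)a = ba+ca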
November 2017, 22(9): 3369-3378. doi: 10.3934/dcdsb.2017141
Large time behavior in the logistic Keller-Segel model via maximal Sobolev regularity
Xinru Cao 1,2
Institute of Mathematical Sciences, Renmin University, Beijing 100872, China
Institut für Mathematik, Universität Paderborn, 33098 Paderborn, Germany
* Corresponding author
Received October 2016 Revised January 2017 Published April 2017
Fund Project: XC is supported by the Research Funds of Renmin University of China (15XNLF21), and by China Postdoctoral Science Foundation (2016M591319)
The fully parabolic Keller-Segel system with logistic source
$\begin{equation} \left\{ \begin{array}{ll} \displaystyle u_t=\Delta u-\chi\nabla\cdot(u\nabla v)+\kappa u-\mu u^2, &(x,t)\in \Omega\times (0,T),\\ \displaystyle \tau v_t=\Delta v-v+u, &(x,t)\in\Omega\times (0,T), \end{array} \right. \quad (\star) \end{equation}$
is considered in a bounded domain $\Omega\subset\mathbb{R}^N$ ($N≥ 1$) under Neumann boundary conditions, where $κ∈\mathbb{R}$, $μ>0$, $χ>0$ and $τ>0$. It is shown that if the ratio $\frac{χ}{μ}$ is sufficiently small, then any global classical solution $(u, v)$ converges to the spatially homogeneous steady state $(\frac{κ_+}{μ}, \frac{κ_+}{μ})$ in the large time limit. Here we use an approach based on maximal Sobolev regularity and thus remove the restrictions $τ=1$ and the convexity of $\Omega$ required in [17].
Keywords: Chemotaxis, asymptotic behavior, stability.
Mathematics Subject Classification: Primary: 35B40, 35K45.
Citation: Xinru Cao. Large time behavior in the logistic Keller-Segel model via maximal Sobolev regularity. Discrete & Continuous Dynamical Systems - B, 2017, 22 (9) : 3369-3378. doi: 10.3934/dcdsb.2017141
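The claimed limit can be seen concretely on spatially homogeneous states, for which the diffusion and chemotaxis terms in (★) vanish and the system reduces to the ODEs u' = κu − μu², τv' = u − v; for κ > 0 and positive initial data these converge to (κ/μ, κ/μ). A small illustrative Python check (parameter values made up for the example):

kappa, mu, tau = 1.0, 0.5, 2.0     # illustrative values only
u, v = 0.1, 0.0                    # positive initial mass
dt = 1e-3
for _ in range(int(50 / dt)):      # explicit Euler up to t = 50
    u += dt * (kappa * u - mu * u * u)
    v += dt * (u - v) / tau
print(u, v, kappa / mu)            # both components approach kappa/mu = 2.0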
Winkler, Stabilization in a two-species chemotaxis system with a logistic source, Nonlinearity, 25 (2012), 1413-1425. doi: 10.1088/0951-7715/25/5/1413. Google Scholar K. Osaki, T. Tsujikawa, A. Yagi and M. Mimura, Exponential attractor for a chemotaxis-growth system of equations, Nonlinear Anal., 51 (2002), 119-144. doi: 10.1016/S0362-546X(01)00815-X. Google Scholar C. Stinner, J. I. Tello and M. Winkler, Competitive exclusion in a two-spcies chemotaxis model, J. Math. Biology, 68 (2014), 1607-1626. doi: 10.1007/s00285-013-0681-7. Google Scholar M. Winkler, Boundedness in the higher-dimensional parabolic-parabolic chemotaxis system with logistic source, Comm. Partial Differential Equations, 35 (2010), 1516-1537. doi: 10.1080/03605300903473426. Google Scholar M. Winkler, Blow-up on a higher-dimensional chemotaxis system deapite logistic growth restriction, J. Math. Anal. Appl., 384 (2011), 261-272. doi: 10.1016/j.jmaa.2011.05.057. Google Scholar M. Winkler, Aggregation vs. global diffusive behavior in the higher-dimensional Keller-Segel model, J. Differential Equations, 248 (2010), 2889-2905. doi: 10.1016/j.jde.2010.02.008. Google Scholar M. Winkler, How far can chemotatic cross-diffusion enforce exceeding carrying capacities?, J. Nonlinear Sci., 24 (2014), 809-855. doi: 10.1007/s00332-014-9205-x. Google Scholar M. Winkler, Global asymptotic stability of constant equilibria in a fully parabolic chemotaxis system with logistic dampening, J. Differential Equations, 257 (2014), 1056-1077. doi: 10.1016/j.jde.2014.04.023. Google Scholar C. Yang, X. Cao, Z. Jiang and S. Zheng, Boundedness in a quasilinear fully parabolic Keller-Segel system of higher dimension with logistic source, J. Math. Anal. Appl., 430 (2015), 585-591. doi: 10.1016/j.jmaa.2015.04.093. Google Scholar Yuanyuan Liu, Youshan Tao. Asymptotic behavior in a chemotaxis-growth system with nonlinear production of signals. Discrete & Continuous Dynamical Systems - B, 2017, 22 (2) : 465-475. doi: 10.3934/dcdsb.2017021 Kentarou Fujie. Global asymptotic stability in a chemotaxis-growth model for tumor invasion. Discrete & Continuous Dynamical Systems - S, 2018, 0 (0) : 203-209. doi: 10.3934/dcdss.2020011 Marco Di Francesco, Alexander Lorz, Peter A. Markowich. Chemotaxis-fluid coupled model for swimming bacteria with nonlinear diffusion: Global existence and asymptotic behavior. Discrete & Continuous Dynamical Systems - A, 2010, 28 (4) : 1437-1453. doi: 10.3934/dcds.2010.28.1437 Telma Silva, Adélia Sequeira, Rafael F. Santos, Jorge Tiago. Existence, uniqueness, stability and asymptotic behavior of solutions for a mathematical model of atherosclerosis. Discrete & Continuous Dynamical Systems - S, 2016, 9 (1) : 343-362. doi: 10.3934/dcdss.2016.9.343 Francesca Romana Guarguaglini, Corrado Mascia, Roberto Natalini, Magali Ribot. Stability of constant states and qualitative behavior of solutions to a one dimensional hyperbolic model of chemotaxis. Discrete & Continuous Dynamical Systems - B, 2009, 12 (1) : 39-76. doi: 10.3934/dcdsb.2009.12.39 Masaaki Mizukami. Boundedness and asymptotic stability in a two-species chemotaxis-competition model with signal-dependent sensitivity. Discrete & Continuous Dynamical Systems - B, 2017, 22 (6) : 2301-2319. doi: 10.3934/dcdsb.2017097 Tobias Black. Global existence and asymptotic stability in a competitive two-species chemotaxis system with two signals. Discrete & Continuous Dynamical Systems - B, 2017, 22 (4) : 1253-1272. doi: 10.3934/dcdsb.2017061 Masaaki Mizukami. 
Electric power transmission Bulk movement of electrical energy from a generating site to an electrical substation For other uses, see Electric transmission (disambiguation). Five-hundred kilovolt (500 kV) Three-phase electric power Transmission Lines at Grand Coulee Dam; four circuits are shown; two additional circuits are obscured by trees on the far right; the entire 7079 MW nameplate generation capacity of the dam is accommodated by these six circuits. Electric power transmission is the bulk movement of electrical energy from a generating site, such as a power plant, to an electrical substation. The interconnected lines that facilitate this movement form a transmission network. This is distinct from the local wiring between high-voltage substations and customers, which is typically referred to as electric power distribution. The combined transmission and distribution network is part of electricity delivery, known as the electrical grid. Efficient long-distance transmission of electric power requires high voltages. This reduces the losses produced by strong currents. Transmission lines use either alternating current (HVAC) or direct current (HVDC). The voltage level is changed with transformers. The voltage is stepped up for transmission, then reduced for local distribution. A wide area synchronous grid, known as an "interconnection" in North America, directly connects generators delivering AC power with the same relative frequency to many consumers. North America has four major interconnections: Western, Eastern, Quebec and Texas. One grid connects most of continental Europe. Historically, transmission and distribution lines were often owned by the same company, but starting in the 1990s, many countries liberalized the regulation of the electricity market in ways that led to separate companies handling transmission and distribution.[1] Most North American transmission lines are high-voltage three-phase AC, although single phase AC is sometimes used in railway electrification systems. DC technology is used for greater efficiency over longer distances (typically hundreds of miles). HVDC technology is also used in submarine power cables (typically longer than 30 miles (50 km)), and in the interchange of power between grids that are not mutually synchronized. HVDC links stabilize power distribution networks where sudden new loads, or blackouts, in one part of a network might otherwise result in synchronization problems and cascading failures. Diagram of an electric power system; transmission system is in blue Electricity is transmitted at high voltages to reduce the energy loss that occurs over long distances. Power is usually transmitted through overhead power lines. Underground power transmission has a significantly higher installation cost and greater operational limitations, but lowers maintenance costs. Underground transmission is more common in urban areas or environmentally sensitive locations. Electrical energy must typically be generated at the same rate at which it is consumed. A sophisticated control system is required to ensure that power generation closely matches demand. If demand exceeds supply, the imbalance can cause generation plant(s) and transmission equipment to automatically disconnect or shut down to prevent damage. In the worst case, this may lead to a cascading series of shutdowns and a major regional blackout. The US Northeast faced blackouts of 1965, 1977, 2003, and major blackouts in other US regions in 1996 and 2011. 
Electric transmission networks are interconnected into regional, national, and even continent-wide networks to reduce the risk of such a failure by providing multiple redundant, alternative routes for power to flow should such shutdowns occur. Transmission companies determine the maximum reliable capacity of each line (ordinarily less than its physical or thermal limit) to ensure that spare capacity is available in the event of a failure in another part of the network.

Main article: Overhead power line

Five-hundred kilovolt (500 kV) three-phase transmission tower in Washington State; line is "bundled" 3-ways.
Four-circuit, two-voltage power transmission line; "bundled" 2-ways.
A typical ACSR: the conductor consists of seven strands of steel surrounded by four layers of aluminium.
Three abreast electrical pylons in Webster, Texas.

High-voltage overhead conductors are not covered by insulation. The conductor material is nearly always an aluminum alloy, formed of several strands and possibly reinforced with steel strands. Copper was sometimes used for overhead transmission, but aluminum is lighter, yields only marginally reduced performance, and costs much less. Overhead conductors are supplied by several companies. Conductor material and shapes are regularly improved to increase capacity. Conductor sizes range from 12 mm2 (#6 American wire gauge) to 750 mm2 (1,590,000 circular mils area), with varying resistance and current-carrying capacity. For large conductors (more than a few centimetres in diameter), much of the current flow is concentrated near the surface due to the skin effect. The center of the conductor carries little current but contributes weight and cost. Thus, multiple parallel cables (called bundle conductors) are used for higher capacity. Bundle conductors are used at high voltages to reduce energy loss caused by corona discharge.

Today, transmission-level voltages are usually 110 kV and above. Lower voltages, such as 66 kV and 33 kV, are usually considered subtransmission voltages, but are occasionally used on long lines with light loads. Voltages less than 33 kV are usually used for distribution. Voltages above 765 kV are considered extra high voltage and require different designs.

Overhead transmission wires depend on air for insulation, requiring that lines maintain minimum clearances. Adverse weather conditions, such as high winds and low temperatures, can interrupt transmission. Wind speeds as low as 23 knots (43 km/h) can permit conductors to encroach on operating clearances, resulting in a flashover and loss of supply.[2] Oscillatory motion of the physical line is termed conductor gallop or flutter depending on the frequency and amplitude of oscillation.
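To put a number on the skin effect mentioned above: for a good conductor the skin depth can be estimated as delta = sqrt(rho / (pi f mu)). The short Python sketch below evaluates this under assumed handbook resistivity values for aluminum and copper at 20 °C and unit relative permeability; it is an illustration, not a design calculation.

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability (H/m)

def skin_depth(resistivity: float, frequency: float, mu_r: float = 1.0) -> float:
    """Skin depth (m) of a good conductor: sqrt(rho / (pi * f * mu))."""
    return math.sqrt(resistivity / (math.pi * frequency * mu_r * MU0))

# Assumed resistivities at 20 C (ohm * m); values are typical handbook figures.
for name, rho in (("aluminum", 2.82e-8), ("copper", 1.68e-8)):
    for f in (50.0, 60.0):
        print(f"{name:8s} at {f:2.0f} Hz: skin depth ~ {1000 * skin_depth(rho, f):.1f} mm")
```

At 60 Hz the result is roughly 11 mm for aluminum, which is why strands deep inside a conductor more than a few centimetres across carry little current, and why steel can be used for the core (as in ACSR) without much penalty.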
Main article: Undergrounding

Electric power can be transmitted by underground power cables. Underground cables take up no right-of-way, have lower visibility, and are less affected by weather. However, cables must be insulated. Cable and excavation costs are much higher than overhead construction. Faults in buried transmission lines take longer to locate and repair.

In some metropolitan areas, cables are enclosed by metal pipe and insulated with dielectric fluid (usually an oil) that is either static or circulated via pumps. If an electric fault damages the pipe and leaks dielectric, liquid nitrogen is used to freeze portions of the pipe to enable draining and repair. This extends the repair period and increases costs. The temperature of the pipe and its surroundings is monitored throughout the repair period.[3][4][5]

Underground lines are limited by their thermal capacity, which permits less overload or re-rating than overhead lines. Long underground AC cables have significant capacitance, which reduces their ability to provide useful power beyond 50 miles (80 kilometres). DC cables are not limited in length by their capacitance.

Main article: History of electric power transmission

New York City streets in 1890. Besides telegraph lines, multiple electric lines were required for each class of device requiring different voltages.

Commercial electric power was initially transmitted at the same voltage used by lighting and mechanical loads. This restricted the distance between generating plant and loads. In 1882, DC voltage could not easily be increased for long-distance transmission. Different classes of loads (for example, lighting, fixed motors, and traction/railway systems) required different voltages, and so used different generators and circuits.[6][7] Thus, generators were sited near their loads, a practice that later became known as distributed generation using large numbers of small generators.[8]

Transmission of alternating current (AC) became possible after Lucien Gaulard and John Dixon Gibbs built what they called the secondary generator, an early transformer provided with a 1:1 turn ratio and open magnetic circuit, in 1881. The first long distance AC line was 34 kilometres (21 miles) long, built for the 1884 International Exhibition of Electricity in Turin, Italy. It was powered by a 2 kV, 130 Hz Siemens & Halske alternator and featured several Gaulard transformers with primary windings connected in series, which fed incandescent lamps. The system proved the feasibility of AC electric power transmission over long distances.[7]

The first commercial AC distribution system entered service in 1885 in via dei Cerchi, Rome, Italy, for public lighting. It was powered by two Siemens & Halske alternators rated 30 hp (22 kW), 2 kV at 120 Hz and used 19 km of cables and 200 parallel-connected 2 kV to 20 V step-down transformers provided with a closed magnetic circuit, one for each lamp. A few months later it was followed by the first British AC system, serving Grosvenor Gallery. It also featured Siemens alternators and 2.4 kV to 100 V step-down transformers – one per user – with shunt-connected primaries.[9]

Working for Westinghouse, William Stanley Jr. spent his time recovering from illness in Great Barrington installing what is considered the world's first practical AC transformer system.

Working to improve what he considered an impractical Gaulard-Gibbs design, electrical engineer William Stanley, Jr. developed the first practical series AC transformer in 1885.[10] Working with the support of George Westinghouse, in 1886 he demonstrated a transformer-based AC lighting system in Great Barrington, Massachusetts. It was powered by a steam engine-driven 500 V Siemens generator. Voltage was stepped down to 100 volts using the Stanley transformer to power incandescent lamps at 23 businesses over 4,000 feet (1,200 m).[11] This practical demonstration of a transformer and alternating current lighting system led Westinghouse to begin installing AC systems later that year.[10]

In 1888 the first designs for an AC motor appeared. These were induction motors running on polyphase current, independently invented by Galileo Ferraris and Nikola Tesla. Westinghouse licensed Tesla's design.
Practical three-phase motors were designed by Mikhail Dolivo-Dobrovolsky and Charles Eugene Lancelot Brown.[12] Widespread use of such motors was delayed many years by development problems and the scarcity of polyphase power systems needed to power them.[13][14]

In the late 1880s and early 1890s smaller electric companies merged into larger corporations such as Ganz and AEG in Europe and General Electric and Westinghouse Electric in the US. These companies developed AC systems, but the technical difference between direct and alternating current systems required a much longer technical merger.[15] Alternating current's economies of scale with large generating plants and long-distance transmission slowly added the ability to link all the loads. These included single phase AC systems, poly-phase AC systems, low voltage incandescent lighting, high-voltage arc lighting, and existing DC motors in factories and street cars. In what became a universal system, these technological differences were temporarily bridged via the rotary converters and motor-generators that allowed the legacy systems to connect to the AC grid.[15][16] These stopgaps were slowly replaced as older systems were retired or upgraded.

Westinghouse alternating current polyphase generators on display at the 1893 World's Fair in Chicago, part of their "Tesla Poly-phase System". Such polyphase innovations revolutionized transmission.

The first transmission of single-phase alternating current using high voltage came in Oregon in 1890 when power was delivered from a hydroelectric plant at Willamette Falls to the city of Portland 14 miles (23 km) down river.[17] The first transmission of three-phase alternating current using high voltage took place in 1891 during the international electricity exhibition in Frankfurt. A 15 kV transmission line, approximately 175 km long, connected Lauffen on the Neckar and Frankfurt.[9][18]

Transmission voltages increased throughout the 20th century. By 1914, fifty-five transmission systems operating at more than 70 kV were in service. The highest voltage then used was 150 kV.[19]

Interconnecting multiple generating plants over a wide area reduced costs. The most efficient plants could be used to supply varying loads during the day. Reliability was improved and capital costs were reduced, because stand-by generating capacity could be shared over many more customers and a wider area. Remote and low-cost sources of energy, such as hydroelectric power or mine-mouth coal, could be exploited to further lower costs.[6][9]

The 20th century's rapid industrialization made electrical transmission lines and grids critical infrastructure. Interconnection of local generation plants and small distribution networks was spurred by World War I, when large electrical generating plants were built by governments to power munitions factories.[20]

Bulk transmission

A transmission substation decreases the voltage of incoming electricity, allowing it to connect from long-distance high-voltage transmission to local lower voltage distribution. It also reroutes power to other transmission lines that serve local markets. This is the PacifiCorp Hale Substation, Orem, Utah, USA.

These networks use components such as power lines, cables, circuit breakers, switches and transformers. The transmission network is usually administered on a regional basis by an entity such as a regional transmission organization or transmission system operator.[21]

Transmission efficiency is improved at higher voltage and lower current.
The reduced current reduces heating losses. Joule's law states that energy losses are proportional to the square of the current. Thus, reducing the current by a factor of two lowers the energy lost to conductor resistance by a factor of four for any given size of conductor.

The optimum size of a conductor for a given voltage and current can be estimated by Kelvin's law for conductor size, which states that size is optimal when the annual cost of energy wasted in resistance is equal to the annual capital charges of providing the conductor. At times of lower interest rates and low commodity costs, Kelvin's law indicates that thicker wires are optimal. Otherwise, thinner conductors are indicated. Since power lines are designed for long-term use, Kelvin's law is used in conjunction with long-term estimates of the price of copper and aluminum as well as interest rates.

Higher voltage is achieved in AC circuits by using a step-up transformer. HVDC systems require relatively costly conversion equipment that may be economically justified for particular projects such as submarine cables and longer distance high capacity point-to-point transmission. HVDC is necessary for sending energy between unsynchronized grids.

A transmission grid is a network of power stations, transmission lines, and substations. Energy is usually transmitted within a grid with three-phase AC. Single-phase AC is used only for distribution to end users since it is not usable for large polyphase induction motors. In the 19th century, two-phase transmission was used but required either four wires or three wires with unequal currents. Higher order phase systems require more than three wires, but deliver little or no benefit.

The synchronous grids of Europe

While the price of generating capacity is high, energy demand is variable, making it often cheaper to import needed power than to generate it locally. Because loads often rise and fall together across large areas, power often comes from distant sources. Because of the economic benefits of load sharing, wide area transmission grids may span countries and even continents. Interconnections between producers and consumers enable power to flow even if some links are inoperative.

The slowly varying portion of demand is known as the base load and is generally served by large facilities with constant operating costs, termed firm power. Such facilities are nuclear, coal or hydroelectric, while other energy sources such as concentrated solar thermal and geothermal power have the potential to provide firm power. Renewable energy sources, such as solar photovoltaics, wind, wave, and tidal, are, due to their intermittency, not considered to be firm. The remaining or "peak" power demand is supplied by peaking power plants, which are typically smaller, faster-responding, and higher cost sources, such as combined cycle or combustion turbine plants typically fueled by natural gas.

Long-distance transmission (hundreds of kilometers) is cheap and efficient, with costs of US$0.005–0.02 per kWh (compared to annual averaged large producer costs of US$0.01–0.025 per kWh, retail rates upwards of US$0.10 per kWh, and multiples of retail for instantaneous suppliers at unpredicted high demand moments).[22] New York often buys over 1000 MW of low-cost hydropower from Canada.[23] Local sources (even if more expensive and infrequently used) can protect the power supply from weather and other disasters that can disconnect distant suppliers.
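As a back-of-envelope illustration of the Joule's-law scaling described above, the sketch below compares the fraction of power lost in a three-phase line at two voltages. The line length, per-kilometre resistance and power factor are assumed values chosen only to make the arithmetic concrete.

```python
def loss_fraction(power_w: float, voltage_v: float, r_ohm: float, pf: float = 1.0) -> float:
    """Fraction of transmitted power lost as heat in a three-phase line.

    Per-phase current is I = P / (sqrt(3) * V * pf), and total conductor loss
    is 3 * I**2 * R, which simplifies to P * R / (V**2 * pf**2).
    """
    return power_w * r_ohm / (voltage_v ** 2 * pf ** 2)

P = 1000e6                 # 1000 MW delivered
R = 0.03 * 160             # assumed 0.03 ohm/km over a 160 km line
for kv in (345, 765):
    print(f"{kv} kV: {100 * loss_fraction(P, kv * 1e3, R):.2f} % lost")
```

With these assumptions the 345 kV case loses about 4 % and the 765 kV case under 1 %, consistent with the example figures given below; doubling the voltage always cuts the resistive loss by a factor of four for the same conductors.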
A high-power electrical transmission tower, 230 kV, double-circuit, also double-bundled.

Hydro and wind sources cannot be moved closer to big cities, and solar costs are lowest in remote areas where local power needs are nominal. Connection costs can determine whether any particular renewable alternative is economically realistic. Costs can be prohibitive for transmission lines, but high capacity, long distance super grid transmission network costs could be recovered with modest usage fees.

Grid input

At power stations, power is produced at a relatively low voltage between about 2.3 kV and 30 kV, depending on the size of the unit. The voltage is then stepped up by the power station transformer to a higher voltage (115 kV to 765 kV AC) for transmission.

In the United States, power transmission is, variously, 230 kV to 500 kV, with less than 230 kV or more than 500 kV as exceptions. The Western Interconnection has two primary interchange voltages: 500 kV AC at 60 Hz, and ±500 kV (1,000 kV net) DC from North to South (Columbia River to Southern California) and Northeast to Southwest (Utah to Southern California). The 287.5 kV (Hoover Dam to Los Angeles line, via Victorville) and 345 kV (Arizona Public Service (APS) line) are local standards, both of which were implemented before 500 kV became practical.

Transmitting electricity at high voltage reduces the fraction of energy lost to Joule heating, which varies by conductor type, the current, and the transmission distance. For example, a 100 mi (160 km) span at 765 kV carrying 1000 MW of power can have losses of 0.5% to 1.1%. A 345 kV line carrying the same load across the same distance has losses of 4.2%.[24] For a given amount of power, a higher voltage reduces the current and thus the resistive losses. For example, raising the voltage by a factor of 10 reduces the current by a corresponding factor of 10 and therefore the $I^2R$ losses by a factor of 100, provided the same sized conductors are used in both cases. Even if the conductor size (cross-sectional area) is decreased ten-fold to match the lower current, the $I^2R$ losses are still reduced ten-fold.

Long-distance transmission is typically done with overhead lines at voltages of 115 to 1,200 kV. At higher voltages, where more than 2,000 kV exists between conductor and ground, corona discharge losses are so large that they can offset the lower resistive losses in the line conductors. Measures to reduce corona losses include larger conductor diameter, hollow cores[25] or conductor bundles.

Factors that affect resistance and thus loss include temperature, spiraling, and the skin effect. Resistance increases with temperature. Spiraling, which refers to the way stranded conductors spiral about the center, also contributes to increases in conductor resistance. The skin effect causes the effective resistance to increase at higher AC frequencies. Corona and resistive losses can be estimated using a mathematical model.[26]

US transmission and distribution losses were estimated at 6.6% in 1997,[27] 6.5% in 2007[27] and 5% from 2013 to 2019.[28] In general, losses are estimated from the discrepancy between power produced (as reported by power plants) and power sold; the difference constitutes transmission and distribution losses, assuming no utility theft occurs.

As of 1980, the longest cost-effective distance for DC transmission was 7,000 kilometres (4,300 miles).
For AC it was 4,000 kilometres (2,500 miles), though US transmission lines are substantially shorter.[22]

In any AC line, conductor inductance and capacitance can be significant. Currents that flow solely in reaction to these properties (which together with the resistance define the impedance) constitute reactive power flow, which transmits no power to the load. These reactive currents, however, cause extra heating losses. The ratio of real power transmitted to the load to apparent power (the product of a circuit's voltage and current, without reference to phase angle) is the power factor. As reactive current increases, reactive power increases and power factor decreases. For transmission systems with low power factor, losses are higher than for systems with high power factor. Utilities add capacitor banks, reactors and other components (such as phase-shifters, static VAR compensators, and flexible AC transmission systems, FACTS) throughout the system to help compensate for the reactive power flow, reduce the losses in power transmission and stabilize system voltages. These measures are collectively called 'reactive support'.

Current flowing through transmission lines induces a magnetic field that surrounds the lines of each phase and affects the inductance of the surrounding conductors of other phases. The conductors' mutual inductance is partially dependent on the physical orientation of the lines with respect to each other. Three-phase lines are conventionally strung with phases separated vertically. The mutual inductance seen by a conductor of the phase in the middle of the other two phases is different from the inductance seen on the top/bottom. Unbalanced inductance among the three conductors is problematic because it may force the middle line to carry a disproportionate amount of the total power transmitted. Similarly, an unbalanced load may occur if one line is consistently closest to the ground and operates at a lower impedance. Because of this phenomenon, conductors must be periodically transposed along the line so that each phase sees equal time in each relative position to balance out the mutual inductance seen by all three phases. To accomplish this, line position is swapped at specially designed transposition towers at regular intervals along the line using various transposition schemes.
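The power-factor arithmetic above is easy to make concrete. The sketch below, with arbitrary example numbers, computes apparent and reactive power from real power and power factor, and shows how conductor heating grows as the power factor falls.

```python
import math

def power_triangle(p_mw: float, pf: float):
    """Return (apparent power in MVA, reactive power in MVAr) for real power p_mw."""
    s = p_mw / pf
    q = math.sqrt(s ** 2 - p_mw ** 2)
    return s, q

# Line current scales with apparent power, so I^2 R losses scale as 1/pf^2
# for the same delivered real power.
for pf in (1.0, 0.95, 0.8):
    s, q = power_triangle(100.0, pf)
    print(f"pf {pf:.2f}: S = {s:6.1f} MVA, Q = {q:5.1f} MVAr, relative loss x{1 / pf ** 2:.2f}")
```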
Subtransmission

A 115 kV subtransmission line in the Philippines, along with 20 kV distribution lines and a street light, all mounted on a wood subtransmission pole. 115 kV H-frame transmission tower.

Subtransmission runs at relatively lower voltages. It is uneconomical to connect all distribution substations to the high main transmission voltage, because that equipment is larger and more expensive. Typically, only larger substations connect with this high voltage. Voltage is stepped down before the current is sent to smaller substations. Subtransmission circuits are usually arranged in loops so that a single line failure does not stop service to many customers for more than a short time. Loops can be "normally closed", where loss of one circuit should result in no interruption, or "normally open", where substations can switch to a backup supply. While subtransmission circuits are usually carried on overhead lines, in urban areas buried cable may be used. The lower-voltage subtransmission lines use less right-of-way and simpler structures; undergrounding is less difficult.

No fixed cutoff separates subtransmission and transmission, or subtransmission and distribution. Their voltage ranges overlap. Voltages of 69 kV, 115 kV, and 138 kV are often used for subtransmission in North America. As power systems evolved, voltages formerly used for transmission were used for subtransmission, and subtransmission voltages became distribution voltages. Like transmission, subtransmission moves relatively large amounts of power, and like distribution, subtransmission covers an area instead of just point-to-point.[29]

Transmission grid exit

Substation transformers reduce the voltage to a lower level for distribution to loads. This distribution is accomplished with a combination of sub-transmission (33 to 132 kV) and distribution (3.3 to 25 kV). Finally, at the point of use, the energy is transformed to low voltage.

Advantage of high-voltage transmission

See also: Ideal transformer

High-voltage power transmission allows for lesser resistive losses over long distances. This efficiency delivers a larger proportion of the generated power to the loads.

Electrical grid without a transformer. Electrical grid with a transformer.

In a simplified model, the grid delivers electricity from an ideal voltage source with voltage $V$, delivering a power $P_V$, to a single point of consumption, modelled by a resistance $R$, when the wires are long enough to have a significant resistance $R_C$.

If the resistances are in series with no intervening transformer, the circuit acts as a voltage divider, because the same current $I = \frac{V}{R+R_C}$ runs through the wire resistance and the powered device. As a consequence, the useful power (at the point of consumption) is:

$$P_R = V_2 \times I = V\frac{R}{R+R_C} \times \frac{V}{R+R_C} = \frac{R}{R+R_C} \times \frac{V^2}{R+R_C} = \frac{R}{R+R_C}\, P_V$$

Should an ideal transformer convert high-voltage, low-current electricity into low-voltage, high-current electricity with a voltage ratio of $a$ (i.e., the voltage is divided by $a$ and the current is multiplied by $a$ in the secondary branch, compared to the primary branch), then the circuit is again equivalent to a voltage divider, but the wires now have an apparent resistance of only $R_C/a^2$. The useful power is then:

$$P_R = V_2 \times I_2 = \frac{a^2 R \times V^2}{(a^2 R + R_C)^2} = \frac{a^2 R}{a^2 R + R_C}\, P_V = \frac{R}{R + R_C/a^2}\, P_V$$

For $a > 1$ (i.e. conversion of high voltage to low voltage near the consumption point), a larger fraction of the generator's power is transmitted to the consumption point and a lesser fraction is lost to Joule heating.
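The divider formulas above can be checked numerically. The sketch below evaluates P_R/P_V with and without an ideal step-down transformer at the load; the resistance values are arbitrary and chosen only for illustration.

```python
def delivered_fraction(r_load: float, r_wire: float, a: float = 1.0) -> float:
    """P_R / P_V from the divider analysis: R / (R + R_C / a^2).

    a is the transformer voltage ratio; a = 1 means no transformer.
    """
    return r_load / (r_load + r_wire / a ** 2)

R, R_C = 10.0, 10.0  # assumed load and total wire resistance (ohms)
for a in (1.0, 10.0):
    print(f"a = {a:4.1f}: {100 * delivered_fraction(R, R_C, a):5.1f} % of source power reaches the load")
```

With equal load and wire resistances, a direct connection delivers only 50 % of the source power, while a 10:1 system delivers about 99 %, which is the whole argument for high-voltage transmission in miniature.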
Main article: Performance and modelling of AC transmission

"Black box" model for transmission line.

The terminal characteristics of the transmission line are the voltage and current at the sending (S) and receiving (R) ends. The transmission line can be modeled as a "black box", and a 2 by 2 transmission matrix is used to model its behavior, as follows:

$$\begin{bmatrix} V_S \\ I_S \end{bmatrix} = \begin{bmatrix} A & B \\ C & D \end{bmatrix} \begin{bmatrix} V_R \\ I_R \end{bmatrix}$$

The line is assumed to be a reciprocal, symmetrical network, meaning that the receiving and sending labels can be switched with no consequence. The transmission matrix T has the properties:

$$\det(T) = AD - BC = 1, \qquad A = D$$

The parameters A, B, C, and D differ depending on how the desired model handles the line's resistance (R), inductance (L), capacitance (C), and shunt (parallel, leak) conductance G. The four main models are the short line approximation, the medium line approximation, the long line approximation (with distributed parameters), and the lossless line. In such models, a capital letter such as R refers to the total quantity summed over the line and a lowercase letter such as c refers to the per-unit-length quantity.

Lossless line

The lossless line approximation is the least accurate; it is typically used on short lines where the inductance is much greater than the resistance. For this approximation, the voltage and current are identical at the sending and receiving ends.

Voltage on sending and receiving ends for lossless line.

The characteristic impedance is purely real, which means resistive for that impedance, and it is often called surge impedance. When a lossless line is terminated by its surge impedance, the voltage does not drop. Though the phase angles of voltage and current are rotated, the magnitudes of voltage and current remain constant along the line. For a load greater than the surge impedance loading (SIL), the voltage drops from the sending end and the line "consumes" VARs; for a load less than the SIL, the voltage increases from the sending end, and the line "generates" VARs.

Short line

The short line approximation is normally used for lines shorter than 80 km (50 mi). There, only a series impedance Z is considered, while C and G are ignored. The final result is that A = D = 1 per unit, B = Z Ohms, and C = 0. The associated transition matrix for this approximation is therefore:

$$\begin{bmatrix} V_S \\ I_S \end{bmatrix} = \begin{bmatrix} 1 & Z \\ 0 & 1 \end{bmatrix} \begin{bmatrix} V_R \\ I_R \end{bmatrix}$$

Medium line

The medium line approximation is used for lines running between 80 and 250 km (50 and 155 mi). The series impedance and the shunt (current leak) conductance are considered, placing half of the shunt conductance at each end of the line. This circuit is often referred to as a "nominal π (pi)" circuit because of the shape (π) that is taken on when leak conductance is placed on both sides of the circuit diagram. The analysis of the medium line produces:

$$A = D = 1 + \frac{GZ}{2} \text{ per unit}, \qquad B = Z\ \Omega, \qquad C = G\left(1 + \frac{GZ}{4}\right)\ S$$

Counterintuitive behaviors of medium-length transmission lines:
voltage rise at no load or small current (Ferranti effect)
receiving-end current can exceed sending-end current
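As a worked example of the two-port formalism, the sketch below builds the nominal-π ABCD constants for a hypothetical medium-length line (the per-kilometre impedance and admittance are assumed values) and propagates an assumed receiving-end load back to the sending end. Here the shunt admittance plays the role of G in the formulas above.

```python
# A minimal nominal-pi (medium line) ABCD calculation; all line data are assumed.
length_km = 150.0
Z = (0.05 + 0.40j) * length_km   # total series impedance (ohms)
G = 3e-6j * length_km            # total shunt admittance (siemens)

A = D = 1 + G * Z / 2
B = Z
C = G * (1 + G * Z / 4)

V_r = 132e3 / 3 ** 0.5           # receiving-end phase voltage (V)
I_r = 40e6 / (3 * V_r)           # per-phase current for a 40 MW unity-pf load (A)

V_s = A * V_r + B * I_r          # sending-end voltage
I_s = C * V_r + D * I_r          # sending-end current

print(f"|Vs| = {abs(V_s) / 1e3:.1f} kV per phase, |Is| = {abs(I_s):.0f} A")
print(f"reciprocity check, A*D - B*C = {A * D - B * C:.6f}")  # ~1 for a valid model
```

Setting I_r to zero in the same script reproduces the Ferranti effect listed above: the open-circuit receiving-end voltage exceeds the sending-end voltage.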
Long line

The long line model is used when a higher degree of accuracy is needed or when the line under consideration is more than 250 km (160 mi) long. Series resistance and shunt conductance are considered to be distributed parameters, such that each differential length of the line has a corresponding differential series impedance and shunt admittance. The following result can be applied at any point along the transmission line, where $\gamma$ is the propagation constant:

$$A = D = \cosh(\gamma x) \text{ per unit}, \qquad B = Z_c \sinh(\gamma x)\ \Omega, \qquad C = \frac{1}{Z_c}\sinh(\gamma x)\ S$$

To find the voltage and current at the end of the long line, $x$ should be replaced with $l$ (the line length) in all parameters of the transmission matrix. This model applies the telegrapher's equations.

High-voltage direct current

Main article: High-voltage direct current

High-voltage direct current (HVDC) is used to transmit large amounts of power over long distances or for interconnections between asynchronous grids. When electrical energy is transmitted over very long distances, the power lost in AC transmission becomes appreciable and it is less expensive to use direct current instead. For a long transmission line, these lower losses (and reduced construction cost of a DC line) can offset the cost of the required converter stations at each end.

HVDC is used for long submarine cables where AC cannot be used because of cable capacitance.[30] In these cases special high-voltage cables are used. Submarine HVDC systems are often used to interconnect the electricity grids of islands, for example, between Great Britain and continental Europe, between Great Britain and Ireland, between Tasmania and the Australian mainland, between the North and South Islands of New Zealand, between New Jersey and New York City, and between New Jersey and Long Island. Submarine connections up to 600 kilometres (370 mi) in length have been deployed.[31]

HVDC links can be used to control grid problems. The power transmitted by an AC line increases as the phase angle between the source-end and destination-end voltages increases, but too large a phase angle allows the systems at either end to fall out of step. Since the power flow in a DC link is controlled independently of the phases of the AC networks that it connects, this phase angle limit does not exist, and a DC link is always able to transfer its full rated power. A DC link therefore stabilizes the AC grid at either end, since power flow and phase angle can then be controlled independently.

As an example, to adjust the flow of AC power on a hypothetical line between Seattle and Boston would require adjustment of the relative phase of the two regional electrical grids. This is an everyday occurrence in AC systems, but one that can become disrupted when AC system components fail and place unexpected loads on the grid. With an HVDC line instead, such an interconnection would:
Convert AC in Seattle into HVDC;
Use HVDC for the 3,000 miles (4,800 km) of cross-country transmission; and
Convert the HVDC to locally synchronized AC in Boston (and possibly in other cooperating cities along the transmission route).
Such a system could be less prone to failure if parts of it were suddenly shut down. One example of a long DC transmission line is the Pacific DC Intertie located in the Western United States.
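The phase-angle behavior described above can be illustrated with the standard lossless-line approximation P = (V_S V_R / X) sin δ. The sketch below uses assumed values for voltage and series reactance; it is meant only to show the shape of the power-angle curve, not to model a real line.

```python
import math

def transfer_mw(v_s_kv: float, v_r_kv: float, x_ohm: float, delta_deg: float) -> float:
    """Approximate real power (MW) over a lossless AC line: P = Vs*Vr*sin(delta)/X."""
    return v_s_kv * v_r_kv * math.sin(math.radians(delta_deg)) / x_ohm

# Assumed 345 kV system with 100 ohm total series reactance
for delta in (10, 30, 60, 90, 120):
    print(f"delta = {delta:3d} deg: P ~ {transfer_mw(345, 345, 100, delta):7.1f} MW")
```

Power peaks at 90 degrees and falls beyond it, which is why operating near that angle risks instability; a DC link, whose flow is set by its converters rather than by a phase angle, does not have this limit.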
The amount of power that can be sent over a transmission line varies with the length of the line. The heating of short line conductors due to line losses sets a thermal limit. If too much current is drawn, conductors may sag too close to the ground, or conductors and equipment may overheat. For intermediate-length lines on the order of 100 kilometres (62 miles), the limit is set by the voltage drop in the line. For longer AC lines, system stability becomes the limiting factor. Approximately, the power flowing over an AC line is proportional to the sine of the phase angle between the voltages at the sending and receiving ends. This angle varies depending on system loading. It is undesirable for the angle to approach 90 degrees, as the power flow falls off beyond that point while resistive losses remain. The product of line length and maximum load is approximately proportional to the square of the system voltage. Series capacitors or phase-shifting transformers are used on long lines to improve stability. HVDC lines are restricted only by thermal and voltage drop limits, since the phase angle is not material.

Understanding the temperature distribution along the cable route became possible with the introduction of distributed temperature sensing (DTS) systems that measure temperatures all along the cable. Without them, maximum current was typically set as a compromise between understanding of operation conditions and risk minimization. This monitoring solution uses passive optical fibers as temperature sensors, either inside a high-voltage cable or externally mounted on the cable insulation. For overhead cables the fiber is integrated into the core of a phase wire. The integrated Dynamic Cable Rating (DCR)/Real Time Thermal Rating (RTTR) solution makes it possible to run the network to its maximum. Furthermore, it allows the operator to predict the behavior of the transmission system to reflect major changes to its initial operating conditions.

To ensure safe and predictable operation, system components such as generators, switches, circuit breakers and loads are controlled. The voltage, power, frequency, load factor, and reliability capabilities of the transmission system are designed to provide cost effective performance. The transmission system provides for base load and peak load capability, with margins for safety and fault tolerance. Peak load times vary by region largely due to the industry mix. In hot and cold climates home air conditioning and heating loads affect the overall load. They are typically highest in the late afternoon in the hottest part of the year and in mid-mornings and mid-evenings in the coldest part of the year. Power requirements vary by season and time of day. Distribution system designs always take the base load and the peak load into consideration.

The transmission system usually does not have a large buffering capability to match loads with generation. Thus generation has to be kept matched to the load, to prevent overloading generation equipment.

Multiple sources and loads can be connected to the transmission system and they must be controlled to provide orderly transfer of power. In centralized power generation, only local control of generation is necessary. This involves synchronization of the generation units. In distributed power generation the generators are geographically distributed and the process to bring them online and offline must be carefully controlled. The load control signals can either be sent on separate lines or on the power lines themselves. Voltage and frequency can be used as signaling mechanisms to balance the loads.
In voltage signaling, voltage is varied to increase generation. The power added by any system increases as the line voltage decreases. This arrangement is stable in principle. Voltage-based regulation is complex to use in mesh networks, since the individual components and setpoints would need to be reconfigured every time a new generator is added to the mesh.

In frequency signaling, the generating units match the frequency of the power transmission system. In droop speed control, if the frequency decreases, the power is increased. (The drop in line frequency is an indication that the increased load is causing the generators to slow down.)

Wind turbines, vehicle-to-grid, virtual power plants, and other locally distributed storage and generation systems can interact with the grid to improve system operation. Internationally, a slow move from a centralized to a decentralized power system has taken place. The main draw of locally distributed generation systems is that they reduce transmission losses by leading to consumption of electricity closer to where it was produced.[32]

Failure protection

Under excess load conditions, the system can be designed to fail incrementally rather than all at once. Brownouts occur when power supplied drops below the demand. Blackouts occur when the grid fails completely. Rolling blackouts (also called load shedding) are intentionally engineered electrical power outages, used to distribute insufficient power to various loads in turn.

Grid operators require reliable communications to manage the grid and associated generation and distribution facilities. Fault-sensing protective relays at each end of the line must communicate to monitor the flow of power so that faulted conductors or equipment can be quickly de-energized and the balance of the system restored. Protection of the transmission line from short circuits and other faults is usually so critical that common carrier telecommunications are insufficiently reliable, while in some remote areas no common carrier is available. Communication systems associated with a transmission project may use:
Power-line communication
Optical fibers
Rarely, and for short distances, pilot-wires are strung along the transmission line path.
Leased circuits from common carriers are not preferred since availability is not under control of the operator.

Transmission lines can be used to carry data: this is called power-line carrier, or power-line communication (PLC). PLC signals can be easily received with a radio in the long wave range.

High-voltage pylons carrying additional optical fibre cable in Kenya.

Optical fibers can be included in the stranded conductors of a transmission line, in the overhead shield wires. These cables are known as optical ground wire (OPGW). Sometimes a standalone cable is used, all-dielectric self-supporting (ADSS) cable, attached to the transmission line cross arms.

Some jurisdictions, such as Minnesota, prohibit energy transmission companies from selling surplus communication bandwidth or acting as a telecommunications common carrier. Where the regulatory structure permits, the utility can sell capacity in extra dark fibers to a common carrier.
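Returning to the frequency-signaling scheme described above, droop speed control can be sketched in a few lines. The 5 % droop setting and the unit ratings below are assumed, illustrative values; real governors add deadbands, limits and dynamics.

```python
def droop_response_mw(rated_mw: float, f_nom: float, f_meas: float, droop: float = 0.05) -> float:
    """Steady-state output change under proportional frequency droop.

    A droop of 0.05 means a 5 % frequency deviation commands a 100 % output
    change; output rises when frequency falls below nominal.
    """
    return -rated_mw * (f_meas - f_nom) / (droop * f_nom)

# Two units with equal droop settings share a load pickup in proportion to
# their ratings after system frequency sags to 59.9 Hz on a 60 Hz grid.
for rated in (500.0, 250.0):
    print(f"{rated:5.0f} MW unit picks up {droop_response_mw(rated, 60.0, 59.9):+.1f} MW")
```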
Main article: Electricity market

Electricity transmission is generally considered to be a natural monopoly, but one that is not inherently linked to generation.[33][34][35] Many countries regulate transmission separately from generation.

Spain was the first country to establish a regional transmission organization. In that country, transmission operations and electricity markets are separate. The transmission system operator is Red Eléctrica de España (REE) and the wholesale electricity market operator is Operador del Mercado Ibérico de Energía – Polo Español, S.A. (OMEL). Spain's transmission system is interconnected with those of France, Portugal, and Morocco.

The establishment of RTOs in the United States was spurred by the FERC's Order 888, Promoting Wholesale Competition Through Open Access Non-discriminatory Transmission Services by Public Utilities; Recovery of Stranded Costs by Public Utilities and Transmitting Utilities, issued in 1996.[36] In the United States and parts of Canada, electric transmission companies operate independently of generation companies, but in the Southern United States vertical integration is intact. In regions of separation, transmission owners and generation owners continue to interact with each other as market participants with voting rights within their RTO. RTOs in the United States are regulated by the Federal Energy Regulatory Commission.

Merchant transmission projects in the United States include the Cross Sound Cable from Shoreham, New York to New Haven, Connecticut, the Neptune RTS Transmission Line from Sayreville, New Jersey to New Bridge, New York, and Path 15 in California. Additional projects are in development or have been proposed throughout the United States, including the Lake Erie Connector, an underwater transmission line proposed by ITC Holdings Corp., connecting Ontario to load serving entities in the PJM Interconnection region.[37]

Australia has one unregulated or market interconnector, Basslink, between Tasmania and Victoria. Two DC links originally implemented as market interconnectors, Directlink and Murraylink, were converted to regulated interconnectors.[38]

A major barrier to wider adoption of merchant transmission is the difficulty in identifying who benefits from the facility so that the beneficiaries pay the toll. Also, it is difficult for a merchant transmission line to compete when the alternative transmission lines are subsidized by utilities with a monopolized and regulated rate base.[39] In the United States, the FERC's Order 1000, issued in 2011, attempted to reduce barriers to third party investment and creation of merchant transmission lines where a public policy need is found.[40]

Transmission costs

The cost of high voltage transmission is comparatively low, compared to all other costs constituting consumer electricity bills. In the UK, transmission costs are about 0.2 p per kWh compared to a delivered domestic price of around 10 p per kWh.[41] The level of capital expenditure in the electric power T&D equipment market was estimated to be $128.9 bn in 2011.[42]

Main article: Electromagnetic radiation and health

Mainstream scientific evidence suggests that low-power, low-frequency electromagnetic radiation associated with household currents and high transmission power lines does not constitute a short- or long-term health hazard. Some studies failed to find any link between living near power lines and developing any sickness or diseases, such as cancer. A 1997 study reported no increased risk of cancer or illness from living near a transmission line.[43] Other studies, however, reported statistical correlations between various diseases and living or working near power lines.
No adverse health effects have been substantiated for people not living close to power lines.[44]

The New York State Public Service Commission conducted a study[45] to evaluate potential health effects of electric fields. The study measured the electric field strength at the edge of an existing right-of-way on a 765 kV transmission line. The field strength was 1.6 kV/m, and became the interim maximum strength standard for new transmission lines in New York State. The opinion also limited the voltage of new transmission lines built in New York to 345 kV. On September 11, 1990, after a similar study of magnetic field strengths, the NYSPSC issued their Interim Policy Statement on Magnetic Fields. This policy established a magnetic field standard of 200 mG at the edge of the right-of-way using the winter-normal conductor rating. As a comparison with everyday items, a hair dryer or electric blanket produces a 100 mG – 500 mG magnetic field.[46][47]

Applications for a new transmission line typically include an analysis of electric and magnetic field levels at the edge of rights-of-way. Public utility commissions typically do not comment on health impacts.

Biological effects have been established for acute high level exposure to magnetic fields above 100 µT (1 G) (1,000 mG). In a residential setting, one study reported "limited evidence of carcinogenicity in humans and less than sufficient evidence for carcinogenicity in experimental animals", in particular, childhood leukemia, associated with average exposure to residential power-frequency magnetic fields above 0.3 µT (3 mG) to 0.4 µT (4 mG). These levels exceed average residential power-frequency magnetic fields in homes, which are about 0.07 µT (0.7 mG) in Europe and 0.11 µT (1.1 mG) in North America.[48][49] The Earth's natural geomagnetic field strength varies over the surface of the planet between 0.035 mT and 0.07 mT (35 µT – 70 µT or 350 mG – 700 mG), while the international standard for continuous exposure is set at 40 mT (400,000 mG or 400 G) for the general public.[48]

Tree growth regulators and herbicides may be used in transmission line rights-of-way,[50] which may have health effects.

Policy by country

The Federal Energy Regulatory Commission (FERC) is the primary regulatory agency of electric power transmission and wholesale electricity sales within the United States. FERC was originally established by Congress in 1920 as the Federal Power Commission and has since undergone multiple name and responsibility modifications. Electric power distribution and the retail sale of power is under state jurisdiction.

Order No. 888

Order No. 888 was adopted by FERC on April 24, 1996. It was "designed to remove impediments to competition in the wholesale bulk power marketplace and to bring more efficient, lower cost power to the Nation's electricity consumers. The legal and policy cornerstone of these rules is to remedy undue discrimination in access to the monopoly owned transmission wires that control whether and to whom electricity can be transported in interstate commerce."[51] The Order required all public utilities that own, control, or operate facilities used for transmitting electric energy in interstate commerce to have open access, non-discriminatory transmission tariffs. These tariffs allow any electricity generator to utilize existing power lines to transmit the power that they generate.
The Order also permits public utilities to recover the costs associated with providing their power lines as an open access service.[51][52]

Energy Policy Act of 2005

The Energy Policy Act of 2005 (EPAct) expanded federal authority to regulate power transmission. EPAct gave FERC significant new responsibilities, including enforcement of electric transmission reliability standards and the establishment of rate incentives to encourage investment in electricity transmission.[53]

Historically, local governments exercised authority over the grid and maintained significant disincentives to actions that would benefit states other than their own. Localities with cheap electricity have a disincentive to encourage making interstate commerce in electricity trading easier, since other regions would be able to compete for that energy and drive up rates. For example, some regulators in Maine refused to address congestion problems because the congestion protects Maine rates.[54] Further, local constituencies can block or slow permitting by pointing to visual impacts, environmental, and health concerns. In the US, generation is growing four times faster than transmission, but transmission upgrades require the coordination of multiple jurisdictions, complex permitting, and cooperation between a significant portion of the many companies that collectively own the grid. The US national security interest in improving transmission was reflected in the EPAct, which gave the Department of Energy the authority to approve transmission if states refused to act.[55]

Specialized transmission

Grids for railways

Main article: Traction power network

In some countries where electric locomotives or electric multiple units run on low frequency AC power, separate single phase traction power networks are operated by the railways. Prime examples are countries such as Austria, Germany and Switzerland that utilize AC technology based on 16 2/3 Hz. Norway and Sweden also use this frequency but use conversion from the 50 Hz public supply; Sweden has a 16 2/3 Hz traction grid, but only for part of the system.

Superconducting cables

High-temperature superconductors (HTS) promise to revolutionize power distribution by providing lossless transmission. The development of superconductors with transition temperatures higher than the boiling point of liquid nitrogen has made the concept of superconducting power lines commercially feasible, at least for high-load applications.[56] It has been estimated that waste would be halved using this method, since the necessary refrigeration equipment would consume about half the power saved by the elimination of resistive losses. Companies such as Consolidated Edison and American Superconductor began commercial production of such systems in 2007.[57] Superconducting cables are particularly suited to high load density areas such as the business district of large cities, where purchase of an easement for cables is costly.[58]

HTS transmission lines[59]
Location | Length (km) | Voltage (kV) | Capacity (GW) | Date
Carrollton, Georgia | | | | 2000
Albany, New York[60] | 0.35 | 34.5 | 0.048 | 2006
Holbrook, Long Island[61] | 0.6 | 138 | 0.574 | 2008
Tres Amigas | | | 5 | Proposed 2013
Manhattan: Project Hydra | | | | Proposed 2014
Essen, Germany[62][63] | 1 | 10 | 0.04 | 2014

Single-wire earth return

Main article: Single-wire earth return

Single-wire earth return (SWER) or single-wire ground return is a single-wire transmission line for supplying single-phase electrical power to remote areas at low cost.
It is principally used for rural electrification, but also finds use for larger isolated loads such as water pumps. Single-wire earth return is also used for HVDC over submarine power cables.

Wireless power transmission

Main article: Wireless power transfer

Both Nikola Tesla and Hidetsugu Yagi attempted to devise systems for large scale wireless power transmission in the late 1800s and early 1900s, without commercial success.

In November 2009, LaserMotive won the NASA 2009 Power Beaming Challenge by powering a cable climber 1 km vertically using a ground-based laser transmitter. The system produced up to 1 kW of power at the receiver end. In August 2010, NASA contracted with private companies to pursue the design of laser power beaming systems to power low earth orbit satellites and to launch rockets using laser power beams.

Wireless power transmission has been studied for transmission of power from solar power satellites to the earth. A high power array of microwave or laser transmitters would beam power to a rectenna. Major engineering and economic challenges face any solar power satellite project.

The Federal government of the United States stated that the power grid is susceptible to cyber-warfare.[64][65] The United States Department of Homeland Security works with industry to identify vulnerabilities and to help industry enhance the security of control system networks.[66] In June 2019, Russia conceded that it was "possible" its electrical grid is under cyber-attack by the United States.[67] The New York Times reported that American hackers from the United States Cyber Command planted malware potentially capable of disrupting the Russian electrical grid.[68]

Highest capacity system: 12 GW Zhundong–Wannan (准东-皖南) ±1100 kV HVDC.[69][70]
Highest transmission voltage (AC):
planned: 1.20 MV (Ultra-High Voltage) on the Wardha-Aurangabad line (India), under construction; initially will operate at 400 kV.[71]
worldwide: 1.15 MV (Ultra-High Voltage) on the Ekibastuz-Kokshetau line (Kazakhstan)
Largest double-circuit transmission: Kita-Iwaki Powerline (Japan).
Highest towers: Yangtze River Crossing (China) (height: 345 m or 1,132 ft)
Longest power line: Inga-Shaba (Democratic Republic of Congo) (length: 1,700 kilometres or 1,056 miles)
Longest span of power line: 5,376 m (17,638 ft) at Ameralik Span (Greenland, Denmark)
Longest submarine cables:
North Sea Link (Norway/United Kingdom), length of submarine cable: 720 kilometres or 447 miles
NorNed, North Sea (Norway/Netherlands), length of submarine cable: 580 kilometres or 360 miles
Basslink, Bass Strait (Australia), length of submarine cable: 290 kilometres or 180 miles; total length: 370.1 kilometres or 230 miles
Baltic Cable, Baltic Sea (Germany/Sweden), length of submarine cable: 238 kilometres or 148 miles; HVDC length: 250 kilometres or 155 miles; total length: 262 kilometres or 163 miles
Longest underground cables:
Murraylink, Riverland/Sunraysia (Australia), length of underground cable: 170 kilometres or 106 miles

Energy portal
Dynamic demand (electric power)
Demand response
List of energy storage power plants
Traction power network
Backfeeding
Conductor marking lights
Double-circuit transmission line
Electromagnetic Transients Program (EMTP)
Flexible AC transmission system (FACTS)
Geomagnetically induced current (GIC)
Grid-tied electrical system
List of high-voltage underground and submarine cables
Load profile
National Grid (disambiguation)
Power-line communications (PLC)
Power system simulation
Radio frequency power transmission
Wheeling (electric power transmission)

^ "A Primer on Electric Utilities, Deregulation, and Restructuring of U.S. Electricity Markets" (PDF). United States Department of Energy Federal Energy Management Program (FEMP). May 2002. Archived (PDF) from the original on October 9, 2022. Retrieved October 30, 2018.
^ Hans Dieter Betz, Ulrich Schumann, Pierre Laroche (2009). Lightning: Principles, Instruments and Applications. Springer, pp. 202–203. ISBN 978-1-4020-9078-3. Retrieved on 13 May 2009.
^ Banerjee, Neela (September 16, 2001). "AFTER THE ATTACKS: THE WORKERS; Con Edison Crews Improvise as They Rewire a Truncated System". The New York Times.
^ "INVESTIGATION OF THE SEPTEMBER 2013 ELECTRIC OUTAGE OF A PORTION OF METRO-NORTH RAILROAD'S NEW HAVEN LINE". documents.dps.ny.gov. 2014. Retrieved December 29, 2019.
^ NYSPSC case no. 13-E-0529.
^ a b Thomas P. Hughes (1993). Networks of Power: Electrification in Western Society, 1880–1930. Baltimore: Johns Hopkins University Press. pp. 119–122. ISBN 0-8018-4614-5.
^ a b Guarnieri, M. (2013). "The Beginning of Electric Energy Transmission: Part One". IEEE Industrial Electronics Magazine. 7 (1): 57–60. doi:10.1109/MIE.2012.2236484. S2CID 45909123.
^ "Electricity Transmission: A primer" (PDF). National Council on Electricity Policy. Archived (PDF) from the original on October 9, 2022. Retrieved September 17, 2019.
^ a b c Guarnieri, M. (2013). "The Beginning of Electric Energy Transmission: Part Two". IEEE Industrial Electronics Magazine. 7 (2): 52–59. doi:10.1109/MIE.2013.2256297. S2CID 42790906.
^ a b "Great Barrington Experiment". edisontechcenter.org.
^ "William Stanley - Engineering and Technology History Wiki". ethw.org. August 8, 2017.
^ Arnold Heertje, Mark Perlman. Evolving Technology and Market Structure: Studies in Schumpeterian Economics, page 138.
^ Carlson, W. Bernard (2013). Tesla: Inventor of the Electrical Age. Princeton University Press. ISBN 1-4008-4655-2, page 130.
^ Jonnes, Jill (2004). Empires of Light: Edison, Tesla, Westinghouse, and the Race to Electrify the World.
Random House Trade Paperbacks. ISBN 978-0-375-75884-3, page 161. ^ a b Parke Hughes, Thomas (1993). Networks of Power: Electrification in Western Society, 1880-1930. JHU Press. pp. 120–121. ^ Garud, Raghu; Kumaraswamy, Arun; Langlois, Richard (2009). Managing in the Modular Age: Architectures, Networks, and Organizations. John Wiley & Sons. p. 249. ISBN 9781405141949. ^ Argersinger, R.E. (1915). "Electric Transmission of Power". General Electric Review. XVIII: 454. ^ Kiessling F, Nefzger P, Nolasco JF, Kaintzyk U. (2003). Overhead power lines. Springer, Berlin, Heidelberg, New York, p. 5 ^ Bureau of Census data reprinted in Hughes, pp. 282–283 ^ Hughes, pp. 293–295 ^ "Distribution Substations - Michigan Technological University" (PDF). Archived (PDF) from the original on October 9, 2022. Retrieved April 20, 2019. ^ a b Paris, L.; Zini, G.; Valtorta, M.; Manzoni, G.; Invernizzi, A.; De Franco, N.; Vian, A. (1984). "Present Limits of Very Long Distance Transmission Systems" (PDF). CIGRE International Conference on Large High Voltage Electric Systems, 1984 Session, 29 August – 6 September. Global Energy Network Institute. Retrieved March 29, 2011. 4.98 MB ^ "NYISO Zone Maps". New York Independent System Operator. Archived from the original on December 2, 2018. Retrieved January 10, 2014. ^ "Transmission Facts, page 4" (PDF). American Electric Power. Archived from the original (PDF) on June 4, 2011. ^ California Public Utilities Commission Corona and induced currents ^ Curt Harting (October 24, 2010). "AC Transmission Line Losses". Stanford University. Retrieved June 10, 2019. ^ a b "Where can I find data on electricity transmission and distribution losses?". Frequently Asked Questions – Electricity. U.S. Energy Information Administration. November 19, 2009. Archived from the original on December 12, 2012. Retrieved March 29, 2011. ^ "How much electricity is lost in electricity transmission and distribution in the United States?". Frequently Asked Questions – Electricity. U.S. Energy Information Administration. January 9, 2019. Retrieved February 27, 2019. ^ Donald G. Fink and H. Wayne Beaty. (2007), Standard Handbook for Electrical Engineers (15th Edition). McGraw-Hill. ISBN 978-0-07-144146-9 section 18.5 ^ Donald G. Fink, H. Wayne Beatty, Standard Handbook for Electrical Engineers 11th Edition, McGraw Hill, 1978, ISBN 0-07-020974-X, pages 15-57 and 15-58 ^ Guarnieri, M. (2013). "The Alternating Evolution of DC Power Transmission". IEEE Industrial Electronics Magazine. 7 (3): 60–63. doi:10.1109/MIE.2013.2272238. S2CID 23610440. ^ "The Bumpy Road to Energy Deregulation". EnPowered. March 28, 2016. Archived from the original on April 7, 2017. Retrieved April 6, 2017. ^ Schmalensee, Richard (November 12, 2021). "Strengths and weaknesses of traditional arrangements for electricity supply". Handbook on Electricity Markets. Edward Elgar Publishing. p. 16. doi:10.4337/9781788979955.00008. ISBN 9781788979955. S2CID 244796440. ^ Raghuvir Srinivasan (August 15, 2004). "Power transmission business is a natural monopoly". The Hindu Business Line. The Hindu. Retrieved January 31, 2008. ^ Lynne Kiesling (August 18, 2003). "Rethink the Natural Monopoly Justification of Electricity Regulation". Reason Foundation. Archived from the original on February 13, 2008. Retrieved January 31, 2008. ^ "FERC: Landmark Orders - Order No. 888". www.ferc.gov. Archived from the original on December 19, 2016. Retrieved December 7, 2016. ^ "How ITC Holdings plans to connect PJM demand with Ontario's rich renewables". 
Utility Dive. December 8, 2014. ^ "NEMMCO Power System Planning". web.archive.org. July 18, 2008. Archived from the original on July 18, 2008. Retrieved November 14, 2022. ^ Fiona Woolf (February 2003). Global Transmission Expansion. Pennwell Books. pp. 226, 247. ISBN 0-87814-862-0. ^ "FERC: Industries - Order No. 1000 - Transmission Planning and Cost Allocation". www.ferc.gov. Archived from the original on October 30, 2018. Retrieved October 30, 2018. ^ What is the cost per kWh of bulk transmission / National Grid in the UK (note this excludes distribution costs) ^ "The Electric Power Transmission & Distribution (T&D) Equipment Market 2011–2021". Archived from the original on June 18, 2011. Retrieved June 4, 2011. ^ Power Lines and Cancer Archived April 17, 2011, at the Wayback Machine, The Health Report / ABC Science - Broadcast on 7 June 1997 (Australian Broadcasting Corporation) ^ "WHO | Electromagnetic fields and public health". web.archive.org. December 24, 2007. Retrieved November 14, 2022. ^ Opinion No. 78-13 (issued June 19, 1978) ^ "EMF Report for the CHPE". TRC. March 2010. pp. 1–4. Retrieved November 9, 2018. ^ "Electric and Magnetic Field Strengths" (PDF). Transpower New Zealand Ltd. p. 2. Archived (PDF) from the original on October 9, 2022. Retrieved November 9, 2018. ^ a b "Electromagnetic fields and public health". Fact sheet No. 322. World Health Organization. June 2007. Archived from the original on July 1, 2007. Retrieved January 23, 2008. ^ "Electric and Magnetic Fields Associated with the Use of Power" (PDF). National Institute of Environmental Health Sciences. June 2002. Archived (PDF) from the original on October 9, 2022. Retrieved January 29, 2008. ^ "Transmission Vegetation Management NERC Standard FAC-003-2 Technical Reference Page 14/50" (PDF). nerc.com. Archived (PDF) from the original on October 9, 2022. ^ a b "Order No. 888". United States of America Federal Energy Regulatory Commission. ^ Order No. 888, FERC. "Promoting Wholesale Competition Through Open Access Non-discriminatory Transmission Services by Public Utilities; Recovery of Stranded Costs by Public Utilities and Transmitting Utilities". Archived from the original on December 19, 2016. Retrieved December 7, 2016. ^ Energy Policy Act of 2005 Fact Sheet (PDF). FERC Washington, D.C. August 8, 2006. Archived from the original (PDF) on December 20, 2016. Retrieved December 7, 2016. ^ Brown, Matthew H.; Sedano, Richard P. (2004). Electricity transmission : a primer (PDF). Denver, Colorado: National Council on Electricity Policy. p. 32 (page 41 in .pdf). ISBN 1-58024-352-5. Archived from the original (PDF) on July 30, 2009. Retrieved May 29, 2022. ^ Wald, Matthew (August 27, 2008). "Wind Energy Bumps into Power Grid's Limits". The New York Times: A1. Retrieved December 12, 2008. ^ Jacob Oestergaard; et al. (2001). "Energy losses of superconducting power transmission cables in the grid" (PDF). IEEE Transactions on Applied Superconductivity. 11 (1): 2375. Bibcode:2001ITAS...11.2375O. doi:10.1109/77.920339. S2CID 55086502. Archived (PDF) from the original on October 9, 2022. ^ "Superconducting power line to shore up New York grid". New Scientist. May 22, 2007. ^ "Superconducting cables will be used to supply electricity to consumers". Archived from the original on July 14, 2014. Retrieved June 12, 2014. ^ "Superconductivity's First Century". Archived from the original on August 12, 2012. Retrieved August 9, 2012.
^ "HTS Transmission Cable". www.superpower-inc.com. ^ "IBM100 - High-Temperature Superconductors". www-03.ibm.com. August 10, 2017. ^ Patel, 03/01/2012 | Sonal (March 1, 2012). "High-Temperature Superconductor Technology Stepped Up". POWER Magazine. ^ "Operation of longest superconducting cable worldwide started". phys.org. ^ Shiels, Maggie (April 9, 2009). "Spies 'infiltrate US power grid'". BBC News. ^ "Hackers reportedly have embedded code in power grid". CNN. April 9, 2009. ^ Holland, Steve; Mikkelsen, Randall (April 8, 2009). "UPDATE 2-US concerned power grid vulnerable to cyber-attack". Reuters. ^ "US and Russia clash over power grid 'hack attacks". BBC News. June 18, 2019. ^ Greenberg, Andy (June 18, 2019). "How Not To Prevent a Cyberwar With Russia". Wired. ^ "Development of UHV Transmission and Insulation Technology in China" (PDF). Archived (PDF) from the original on October 9, 2022. ^ "准东-皖南±1100千伏特高压直流输电工程竣工投运". xj.xinhuanet.com. Archived from the original on September 30, 2019. ^ "India Steps It Up". Transmission & Distribution World. January 2013. Grigsby, L. L., et al. The Electric Power Engineering Handbook. USA: CRC Press. (2001). ISBN 0-8493-8578-4 Hughes, Thomas P., Networks of Power: Electrification in Western Society 1880–1930, The Johns Hopkins University Press, Baltimore 1983 ISBN 0-8018-2873-2, an excellent overview of development during the first 50 years of commercial electric power Reilly, Helen (2008). Connecting the Country – New Zealand's National Grid 1886–2007. Wellington: Steele Roberts. pp. 376 pages. ISBN 978-1-877448-40-9. Pansini, Anthony J, E.E., P.E. undergrounding electric lines. USA Hayden Book Co, 1978. ISBN 0-8104-0827-9 Westinghouse Electric Corporation, "Electric power transmission patents; Tesla polyphase system". (Transmission of power; polyphase system; Tesla patents) The Physics of Everyday Stuff - Transmission Lines Wikimedia Commons has media related to Electric power transmission. Look up grid electricity in Wiktionary, the free dictionary. 
CommonCrawl
June 2018, Volume 108, Issue 6, pp 1383–1405

Quiver elliptic W-algebras

Taro Kimura, Vasily Pestun

We define an elliptic generalization of W-algebras associated with an arbitrary quiver, using our construction (Kimura and Pestun in Quiver W-algebras, 2015. arXiv:1512.08533 [hep-th]) with six-dimensional gauge theory.

Keywords: Supersymmetric gauge theories · Conformal field theories · W-algebras · Quantum groups · Quiver · Instanton

Mathematics Subject Classification: 81T60 · 81R10 · 14D21 · 81R50

The work of T.K. was supported in part by Keio Gijuku Academic Development Funds, JSPS Grant-in-Aid for Scientific Research (No. JP17K18090), the MEXT-Supported Program for the Strategic Research Foundation at Private Universities "Topological Science" (No. S1511006), JSPS Grant-in-Aid for Scientific Research on Innovative Areas "Topological Materials Science" (No. JP15H05855), and "Discrete Geometric Analysis for Materials Design" (No. JP17H06462). VP acknowledges grant RFBR 15-01-04217 and RFBR 16-02-01021. The research of VP on this project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (QUASIFT grant agreement 677368).

Appendix A: Proof of trace formula (3.19)

In this Appendix we prove the equivalence (3.19) using the coherent state basis.

Coherent state basis

The argument in this part is essentially parallel to the textbook [56]. For the oscillator algebra generated by \((t, \partial _{t})\) with \([\partial _{t}, t]= 1\), we consider the coherent state basis in the Fock space
$$\begin{aligned} \left| n \right\rangle = \frac{t^n}{\sqrt{n!}} \left| 0 \right\rangle \,, \qquad \left\langle n \right| = \left\langle 0 \right| \frac{\partial ^n}{\sqrt{n!}} \,, \qquad |z) = e^{zt} \left| 0 \right\rangle \,, \qquad (z| = \left\langle 0 \right| e^{z^* \partial } \end{aligned}$$ (A.1)
The normalization is
$$\begin{aligned} \left\langle n|m \right\rangle = \delta _{n,m} \,, \qquad (z|w) = e^{z^* w} \,. \end{aligned}$$
The states in (A.1) are eigenstates of the filling number operator \(t \partial \left| n \right\rangle = n \left| n \right\rangle \) and the annihilation/creation operators \(\partial |z) = z |z)\), \((z| t = (z| z^*\). Notice that the operator \(a^{t \partial _t}\) acts on the states |z) and (z| as
$$\begin{aligned} a^{t \partial _t} |z) = |az) \,, \qquad (z| a^{t \partial _t} = (a^* z| \,. \end{aligned}$$
The identity operator can be expressed in terms of the coherent state basis:
$$\begin{aligned} \mathbb {1} = \frac{1}{\pi } \int d^2 z \, |z) e^{-|z|^2} (z| \end{aligned}$$
with
$$\begin{aligned} \left\langle n | \mathbb {1} | m \right\rangle = \delta _{n,m} \,, \end{aligned}$$
so that the trace of an operator is
$$\begin{aligned} {{\mathrm{Tr}}}\mathcal {O} = \frac{1}{\pi } \int d^2 z \, e^{-|z|^2} (z| \mathcal {O} |z) \,. \end{aligned}$$
Then we find [57]
$$\begin{aligned} {{\mathrm{Tr}}}\left[ a^{t \partial } e^{b t} e^{c \partial } \right] = \frac{1}{1-a} \exp \left( \frac{abc}{1-a} \right) \end{aligned}$$
from
$$\begin{aligned} \frac{1}{\pi } \int d^2 z \, e^{-|z|^2} (z| a^{t\partial _t} e^{bt} e^{c\partial _t} |z) = \frac{1}{\pi } \int d^2 z \, e^{-(1-a)|z|^2 + ab z^* + cz} \,, \end{aligned}$$
where the remaining Gaussian integral is evaluated with \(\frac{1}{\pi } \int d^2 z \, e^{-\lambda |z|^2 + u z^* + v z} = \frac{1}{\lambda } e^{uv/\lambda }\) (valid for \(\operatorname {Re}\lambda > 0\)), here with \(\lambda = 1-a\), \(u = ab\), \(v = c\).

Torus correlation function

Let us compute the torus correlation function (3.18). The product of the 5d screening currents is given by
$$\begin{aligned} S_{i,x}^\text {5d} S_{j,x'}^\text {5d} = \exp \left( - \sum _{m=1}^\infty \frac{1}{m} \frac{1-q_1^m}{1-q_2^{-m}} c_{ji}^{[m]} \frac{x'^m}{x^m} \right) : S_{i,x}^\text {5d} S_{j,x'}^\text {5d} : \,.
\end{aligned}$$
Then we compute the trace part
$$\begin{aligned}&{{\mathrm{Tr}}}\left[ p^{L_0} : S_{i,x}^\text {5d} S_{j,x'}^\text {5d} : \right] \nonumber \\&= {{\mathrm{Tr}}}\Bigg [ \left( \prod _{i' \in \Gamma _0} \prod _{n = 1}^\infty p^{n t_{i',n} \partial _{i',n}} \right) \exp \left( \sum _{n=1}^\infty (1-q_1^n) \left( x^n t_{i,n} + x'^n t_{j,n}\right) \right) \nonumber \\&\qquad \times \exp \left( \sum _{n=1}^\infty - \frac{1}{n(1-q_2^{-n})} \left( x^{-n} c_{ki}^{[n]} \partial _{k,n} + x'^{-n} c_{lj}^{[n]} \partial _{l,n} \right) \right) \Bigg ] \nonumber \\&= \exp \left( \sum _{n=1}^\infty \left( - \frac{1-q_1^n}{n(1-q_2^{-n})} \frac{p^n}{1-p^n} c_{ji}^{[n]} \frac{x'^n}{x^n} + \frac{1-q_1^{-n}}{n(1-q_2^{n})} \frac{1}{1-p^{-n}} c_{ji}^{[-n]} \frac{x^{n}}{x'^{n}} \right) \right) \nonumber \\&\qquad \times \text {const} \end{aligned}$$ (A.10)
where we have used the formulas (A.7) and (2.20), and the constant term does not contain x nor \(x'\). Thus we obtain the torus correlator
$$\begin{aligned} {{\mathrm{Tr}}}\left[ p^{L_0} S_{i,x}^\text {5d} S_{j,x'}^\text {5d} \right] = \exp \left( - \sum _{n \ne 0} \frac{1-q_1^n}{n(1-q_2^{-n})(1-p^n)} c_{ji}^{[n]} \frac{x'^n}{x^n} \right) \,. \end{aligned}$$
This is equivalent to (3.17), and proves the relation (3.19).
Kimura, T., Pestun, V.: Quiver W-algebras (2015). arXiv:1512.08533 [hep-th] Seiberg, N., Witten, E.: Monopoles, duality and chiral symmetry breaking in \(N=2\) supersymmetric QCD. Nucl. Phys. B431, 484–550 (1994). arXiv:hep-th/9408099 [hep-th] Seiberg, N., Witten, E.: Electric-magnetic duality, monopole condensation, and confinement in \(N=2\) supersymmetric Yang–Mills theory. Nucl. Phys. B426, 19–52 (1994). arXiv:hep-th/9407087 [hep-th]. [Erratum: Nucl. Phys. B430, 485 (1994)] Gorsky, A., Krichever, I., Marshakov, A., Mironov, A., Morozov, A.: Integrability and Seiberg-Witten exact solution. Phys. Lett. B355, 466–474 (1995). arXiv:hep-th/9505035 [hep-th] Martinec, E.J., Warner, N.P.: Integrable systems and supersymmetric gauge theory. Nucl. Phys. B459, 97–112 (1996). arXiv:hep-th/9509161 [hep-th] Donagi, R., Witten, E.: Supersymmetric Yang–Mills theory and integrable systems. Nucl. Phys. B460, 299–334 (1996). arXiv:hep-th/9510101 [hep-th] Seiberg, N., Witten, E.: Gauge dynamics and compactification to three-dimensions. In: The mathematical beauty of physics: a memorial volume for Claude Itzykson, vol. 24 of Advanced Series in Mathematical Physics, pp. 333–366. World Scientific (1997). arXiv:hep-th/9607163 [hep-th] Gorsky, A., Marshakov, A., Mironov, A., Morozov, A.: \(\cal{N}=2\) supersymmetric QCD and integrable spin chains: Rational case \(N_F < 2 N_c\). Phys. Lett. B380, 75–80 (1996). arXiv:hep-th/9603140 [hep-th] Nekrasov, N.: Five dimensional gauge theories and relativistic integrable systems. Nucl. Phys. B531, 323–344 (1998). arXiv:hep-th/9609219 [hep-th] Nekrasov, N., Pestun, V.: Seiberg–Witten geometry of four dimensional \(N=2\) quiver gauge theories (2012). arXiv:1211.2240 [hep-th] Moore, G.W., Nekrasov, N., Shatashvili, S.: Integrating over Higgs branches. Commun. Math. Phys. 209, 97–121 (2000). arXiv:hep-th/9712241 Nekrasov, N.A.: Seiberg-Witten prepotential from instanton counting. Adv. Theor. Math. Phys. 7, 831–864 (2004).
arXiv:hep-th/0206161 Nekrasov, N., Shatashvili, S.: Bethe Ansatz and supersymmetric vacua. AIP Conf. Proc. 1134, 154–169 (2009) Nekrasov, N.A., Shatashvili, S.L.: Quantization of integrable systems and four dimensional gauge theories. In: XVIth International Congress on Mathematical Physics, pp. 265–289 (2009). arXiv:0908.4052 [hep-th] Frenkel, E., Reshetikhin, N.: The \(q\)-characters of representations of quantum affine algebras and deformations of \(\cal{W}\)-algebras. In: Recent Developments in Quantum Affine Algebras and Related Topics, vol. 248 of Contemporary Mathematics, pp. 163–205. American Mathematical Society (1999). arXiv:math/9810055 [math.QA] Nekrasov, N., Pestun, V., Shatashvili, S.: Quantum Geometry and Quiver Gauge Theories. Commun. Math. Phys. 357, 519–567 (2018). https://doi.org/10.1007/s00220-017-3071-y Frenkel, E., Hernandez, D.: Baxters relations and spectra of quantum integrable models. Duke Math. J. 164(12), 2407–2460 (2015). arXiv:1308.3444 [math.QA] Nekrasov, N.: BPS/CFT correspondence: non-perturbative Dyson-Schwinger equations and \(qq\)-characters. JHEP 1603, 181 (2016). arXiv:1512.05388 [hep-th] Frenkel, E., Reshetikhin, N.: Deformations of \(\cal{W}\)-algebras associated to simple Lie algebras. Commun. Math. Phys. 197, 1–32 (1998). arXiv:q-alg/9708006 [math.QA] Shiraishi, J., Kubo, H., Awata, H., Odake, S.: A Quantum deformation of the Virasoro algebra and the Macdonald symmetric functions. Lett. Math. Phys. 38, 33–51 (1996). arXiv:q-alg/9507034 Frenkel, E., Reshetikhin, N.: Quantum affine algebras and deformations of the Virasoro and \(\mathscr {W}\)-algebras. Commun. Math. Phys. 178, 237–264 (1996). arXiv:q-alg/9505025 Kim, H.-C.: Line defects and 5d instanton partition functions. JHEP 1603, 199 (2016). arXiv:1601.06841 [hep-th] Awata, H., Yamada, Y.: Five-dimensional AGT conjecture and the deformed Virasoro algebra. JHEP 1001, 125 (2010). arXiv:0910.4431 [hep-th] Alday, L.F., Gaiotto, D., Tachikawa, Y.: Liouville correlation functions from four-dimensional gauge theories. Lett. Math. Phys. 91, 167–197 (2010). arXiv:0906.3219 [hep-th] Bao, L., Pomoni, E., Taki, M., Yagi, F.: M5-branes, toric diagrams and gauge theory duality. JHEP 1204, 105 (2012). arXiv:1112.5228 [hep-th] Aganagic, M., Klemm, A., Marino, M., Vafa, C.: The topological vertex. Commun. Math. Phys. 254, 425–478 (2005). arXiv:hep-th/0305132 Maulik, D., Nekrasov, N., Okounkov, A., Pandharipande, R.: Gromov–Witten theory and Donaldson–Thomas theory, I. Compos. Math. 142, 1263–1285 (2006). arXiv:math/0312059 [math.AG] Maulik, D., Nekrasov, N., Okounkov, A., Pandharipande, R.: Gromov–Witten theory and Donaldson–Thomas theory, II. Compos. Math. 142, 1286–1304 (2006). arXiv:math/0406092 [math.AG] Iqbal, A., Kozçaz, C., Vafa, C.: The refined topological vertex. JHEP 0910, 069 (2009). arXiv:hep-th/0701156 Mironov, A., Morozov, A., Runov, B., Zenkevich, Y., Zotov, A.: Spectral dualities in XXZ spin chains and five dimensional gauge theories. JHEP 1312, 034 (2013). arXiv:1307.1502 [hep-th] Iqbal, A., Kozçaz, C., Yau, S.-T.: Elliptic Virasoro conformal blocks (2015). arXiv:1511.00458 [hep-th] Nieri, F.: An elliptic Virasoro symmetry in 6d. Lett. Math. Phys. 107, 2147–2187 (2017). arXiv:1511.00574 [hep-th] Mironov, A., Morozov, A., Zenkevich, Y.: On elementary proof of AGT relations from six dimensions. Phys. Lett. B756, 208–211 (2016). arXiv:1512.06701 [hep-th] Mironov, A., Morozov, A., Zenkevich, Y.: Spectral duality in elliptic systems, six-dimensional gauge theories and topological strings. JHEP 1605, 121 (2016). arXiv:1603.00304 [hep-th] Mironov, A., Morozov, A., Zenkevich, Y.: Ding-Iohara-Miki symmetry of network matrix models. Phys. Lett. B762, 196–208 (2016). arXiv:1603.05467 [hep-th] Awata, H., Kanno, H., Matsumoto, T., Mironov, A., Morozov, A., Morozov, A., Ohkubo, Y., Zenkevich, Y.: Explicit examples of DIM constraints for network matrix models. JHEP 1607, 103 (2016). arXiv:1604.08366 [hep-th] Tan, M.-C.: An M-theoretic derivation of a 5d and 6d AGT correspondence, and relativistic and elliptized integrable systems. JHEP 1312, 031 (2013). arXiv:1309.4775 [hep-th] Koroteev, P., Sciarappa, A.: Quantum hydrodynamics from large-\(n\) supersymmetric gauge theories. Lett. Math. Phys. (2017). arXiv:1510.00972 [hep-th] Koroteev, P., Sciarappa, A.: On Elliptic Algebras and Large-\(n\) Supersymmetric Gauge Theories. J. Math. Phys. 57, 112302 (2016). arXiv:1601.08238 [hep-th] Tan, M.-C.: Higher AGT correspondences, W-algebras, and higher quantum geometric langlands duality from M-theory (2016). arXiv:1607.08330 [hep-th] Hollowood, T.J., Iqbal, A., Vafa, C.: Matrix models, geometric engineering and elliptic genera. JHEP 0803, 069 (2008). arXiv:hep-th/0310272 [hep-th] Haghighat, B., Iqbal, A., Kozçaz, C., Lockhart, G., Vafa, C.: M-strings. Commun. Math. Phys. 334, 779–842 (2015). arXiv:1305.6322 [hep-th] Benini, F., Eager, R., Hori, K., Tachikawa, Y.: Elliptic genera of 2d \({\cal{N}}\) = 2 gauge theories. Commun. Math. Phys. 333, 1241–1286 (2015). arXiv:1308.4896 [hep-th] Aganagic, M., Okounkov, A.: Elliptic stable envelope (2016). arXiv:1604.00423 [math.AG] Saito, Y.: Elliptic Ding-Iohara algebra and the free field realization of the elliptic Macdonald operator. Pub. Res. Inst. Math. Sci. 50, 411–455 (2014). arXiv:1301.4912 [math.QA] Gadde, A., Gukov, S.: 2d index and surface operators. JHEP 1403, 080 (2014). arXiv:1305.0266 [hep-th] Marshakov, A., Nekrasov, N.: Extended Seiberg–Witten theory and integrable hierarchy. JHEP 0701, 104 (2007). arXiv:hep-th/0612019 Clavelli, L., Shapiro, J.A.: Pomeron factorization in general dual models. Nucl. Phys. B57, 490–535 (1973) Aganagic, M., Haouzi, N.: ADE little string theory on a Riemann surface (and triality) (2015).
arXiv:1506.04183 [hep-th] Aganagic, M., Haouzi, N., Kozçaz, C., Shakirov, S.: Gauge/Liouville triality (2013). arXiv:1309.1687 [hep-th] Aganagic, M., Haouzi, N., Shakirov, S.: \(A_n\)-triality (2014). arXiv:1403.3657 [hep-th] Feigin, B., Frenkel, E.: Quantum \(\mathscr {W}\)-algebras and elliptic algebras. Commun. Math. Phys. 178, 653–678 (1996). arXiv:q-alg/9508009 [q-alg] Farghly, R.M., Konno, H., Oshima, K.: Elliptic algebra \(U_{q,p}(\widehat{\mathfrak{g}})\) and quantum \(Z\)-algebras. Algebras Represent. Theory 18, 103–135 (2015). arXiv:1404.1738 [math.QA] Bourgine, J.-E., Matsuo, Y., Zhang, H.: Holomorphic field realization of SH\(^c\) and quantum geometry of quiver gauge theories. JHEP 1604, 167 (2016). arXiv:1512.02492 [hep-th] Bourgine, J.-E., Fukuda, M., Matsuo, Y., Zhang, H., Zhu, R.-D.: Coherent states in quantum \(\cal{W}_{1+\infty }\) algebra and qq-character for 5d super Yang-Mills. PTEP 2016, 123B05 (2016). arXiv:1606.08020 [hep-th] Green, M.B., Schwarz, J.H., Witten, E.: Superstring Theory. Cambridge University Press, Cambridge (1987) Yamada, Y.: Introduction to Conformal Field Theory. Baifukan, Tokyo (2006). (in Japanese)
© Springer Science+Business Media B.V., part of Springer Nature 2018
1. Keio University, Tokyo, Japan 2. IHES, Bures-sur-Yvette, France
Kimura, T. & Pestun, V. Lett Math Phys (2018) 108: 1383. https://doi.org/10.1007/s11005-018-1073-0
Revised 04 January 2018
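Remark on Appendix A. As a quick cross-check of the trace formula (A.7) (a sketch using only ingredients already present in (A.1): the matrix elements \(\langle n | t^k \partial ^k | n \rangle = n!/(n-k)!\) and the geometric series \(\sum _{n \ge k} \binom{n}{k} a^n = a^k/(1-a)^{k+1}\) for \(|a|<1\)), the trace can also be computed directly in the number basis, without coherent states:
$$\begin{aligned} {{\mathrm{Tr}}}\left[ a^{t \partial } e^{b t} e^{c \partial } \right] = \sum _{n=0}^\infty a^n \sum _{k=0}^n \frac{(bc)^k}{k!} \binom{n}{k} = \sum _{k=0}^\infty \frac{(bc)^k}{k!} \frac{a^k}{(1-a)^{k+1}} = \frac{1}{1-a} \exp \left( \frac{abc}{1-a} \right) \,, \end{aligned}$$
in agreement with the coherent state computation.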
CommonCrawl
Hölder estimates of mild solutions for nonlocal SPDEs

Rongrong Tian1, Liang Ding2, Jinlong Wei3 (ORCID: orcid.org/0000-0002-9182-4587) & Suyi Zheng3

We consider nonlocal PDEs driven by additive white noises on \({\mathbb{R}}^{d}\). For \(L^{q}\) integrable coefficients, we derive the existence and uniqueness, as well as Hölder continuity, of mild solutions. Precisely speaking, the unique mild solution is almost surely Hölder continuous with Hölder index \(0<\theta <(1/2-d/(q \alpha))(1\wedge \alpha)\). Moreover, we show that any order \(\gamma \) (\(< q\)) moment of the Hölder norm of u on every bounded domain of \({\mathbb{R}}_{+}\times {\mathbb{R}}^{d}\) is finite.

Let \((\varOmega,{\mathcal{F}},\{{\mathcal{F}}_{t}\}_{t\geq 0},{\mathbb{P}})\) be a filtered probability space that satisfies the usual hypotheses of completeness and right continuity. \(\{W_{t}\}_{t\geq 0}\) is a one-dimensional standard Wiener process on \((\varOmega,{\mathcal{F}},\{{\mathcal{F}}_{t}\}_{t\geq 0},{\mathbb{P}})\). In this paper, we are concerned with the Hölder estimates of mild solutions for the following nonlocal stochastic partial differential equations (SPDEs for short):
$$\begin{aligned} \textstyle\begin{cases} du(t,x)+(-\Delta)^{\frac{\alpha }{2}} u(t,x)\,dt=h(t,x)\,dt+f(t,x)\,dW_{t},\quad t>0, x\in {\mathbb{R}}^{d}, \\ u(t,x)|_{t=0}=0,\quad x\in \mathbb{R}^{d}, \end{cases}\displaystyle \end{aligned}$$
where \(\alpha \in (0,2]\), \((-\Delta)^{\frac{\alpha }{2}}\) is the fractional Laplacian on \({\mathbb{R}}^{d}\). When \(\alpha =2\), these SPDEs have been studied widely. \(W^{k,2}\)-theory was well established by Pardoux [15] and Rozovskii [16]. A more general \(W^{2,q}\)-theory was established by Krylov [9,10,11] for \(2\leq q<\infty \). Krylov's result was then generalized by Denis, Matoussi, and Stoica [5] for \(q=\infty \) to nonlinear SPDEs. There are also some Hölder estimates for solutions of (1.1) when \(\alpha =2\). When \(f(t,\cdot)\) belongs to \(L^{q}\) with large enough q (or \(q=\infty \)), h vanishes, and \({\mathbb{R}}^{d}\) is replaced by a bounded domain (with smooth boundary), the space and time Hölder estimates have been discussed by Kuksin, Nadirashvili, and Piatnitski [12, 13]. This result was further developed by Kim [8] for general Hölder estimates for generalized solutions with \(L_{p}(L_{q})\) coefficients. Using a different philosophy, Hsu, Wang, and Wang [6] discussed (1.1) with \(\alpha =2\) for general f (u dependent). By applying a stochastic De Giorgi iteration technique, they established the Hölder estimates for weak solutions on \([T,2T]\times {\mathbb{R}}^{d}\) (\(T>0\)). Recently, by using the heat kernel estimate technique, Wei, Duan, and Lv [18] also derived the Hölder estimates for stochastic transport-diffusion equations driven by Lévy noises. When \(\alpha \in (0,2)\), Chang and Lee [4] and Kim and Kim [7] studied the \(L^{q}\) (\(2\leq q<\infty\)) theory for SPDE (1.1). When f is bounded and measurable, by constructing stochastic BMO and Morrey–Campanato spaces, Lv et al. [14] established the BMO and Hölder estimates for solutions. However, as far as we know, there have been very few papers dealing with the Hölder estimates for (1.1) with \(L^{q}\) coefficients. In this paper, we will fill this gap and derive the Hölder estimates for mild solutions. Here the mild solution of (1.1) is defined as follows. Let \(\alpha \in (0,2)\) and \(P_{t}\) denote the forward heat semigroup generated by the negative fractional Laplacian \(-(-\Delta)^{\alpha /2}\).
Suppose that u is given by $$\begin{aligned} u(t,x)= \int _{0}^{t}P_{t-r}h(r,\cdot) (x)\,dr+ \int _{0}^{t}P_{t-r}f(r, \cdot) (x) \,dW_{r}. \end{aligned}$$ We call u a mild solution of (1.1) if \(u \in L^{\infty }_{\mathrm{loc}}([0, \infty);L^{\infty }({\mathbb{R}}^{d};L^{2}(\varOmega)))\) which is \({\mathcal{F}}_{t}\)-adapted and as a family of \(L^{2}(\varOmega,{\mathcal{F}}, {\mathbb{P}})\)-valued random variables is continuous. Let \(p(t,x,y)\) be the transition density of symmetric α-stable process, then \(p(t,x,y)=p(t,x-y)\) and $$\begin{aligned} P_{t}\varphi (x)= \int _{{\mathbb{R}}^{d}}p(t,x-y)\varphi (y)\,dy,\quad { \varphi \in L^{q}\bigl({\mathbb{R}}^{d}\bigr), q>1.} \end{aligned}$$ Moreover, \(p(t,\cdot)\) is smooth for \(t>0\) and from [4, Lemma 2.2] (also see [3, 19, 20]), we have the following estimates: $$\begin{aligned}& \begin{aligned} &p(t,x-y)\approx \frac{t}{ \vert y-x \vert ^{d+\alpha }} \wedge t^{-\frac{d}{ \alpha }}, \\ & \bigl\vert \nabla _{x}p(t,x-y) \bigr\vert \approx \vert y-x \vert \biggl(\frac{t}{ \vert y-x \vert ^{d+2+ \alpha }} \wedge t^{-\frac{d+2}{\alpha }} \biggr). \end{aligned} \end{aligned}$$ For every \(t>0\), by the scaling property, then \(p(t,x-y)=t^{-\frac{d}{ \alpha }}p(1,(x-y)t^{-1/\alpha })\), which implies $$\begin{aligned} \bigl\vert \partial _{t}p(t,x-y) \bigr\vert \leq C \biggl(\frac{1}{ \vert y-x \vert ^{d+\alpha }} \wedge t ^{-\frac{d+\alpha }{\alpha }} \biggr). \end{aligned}$$ Our main result is the following. Let us consider the nonlocal SPDE (1.1) associated with \(\alpha \in (0,2)\). We suppose that \(q>2d/\alpha \vee 2\), \(f,h\in L ^{\infty }_{\mathrm{loc}}({\mathbb{R}}_{+};L^{q}({\mathbb{R}}^{d}\times \varOmega))\) which are \({\mathcal{F}}_{t}\)-adapted. Let us set \(\vartheta =(1/2-d/(q \alpha))(1\wedge \alpha)\). Then there is a mild solution u of (1.1) and \(u\in L^{\infty }_{\mathrm{loc}}({\mathbb{R}}_{+};L^{\infty }( {\mathbb{R}}^{d}; L^{q}(\varOmega)))\). In addition, if \(q>4(d+1)/(1\wedge \alpha)\), \(u\in {\mathcal{C}}^{ \vartheta -}([0,t]\times {\mathbb{R}}^{d};L^{q}(\varOmega))\cap L^{q-}( \varOmega;{\mathcal{C}}^{\vartheta -}_{\mathrm{loc}}({\mathbb{R}}_{+}\times {\mathbb{R}}^{d}))\) for every \(0< t<\infty \). 
Moreover, for every \(t>0\), every \(0<\theta <\vartheta \), every bounded domain \(Q\subset {\mathbb{R}}^{d}\), every \(0<\gamma <q\), there exist two positive constants \(C(q,\alpha,d,\theta,t)\) and \(C(q,\alpha,d,\gamma,\theta,t,Q)\) (independent of h and f) such that
$$\begin{aligned} \Vert u \Vert _{{\mathcal{C}}^{\theta }([0,t]\times {\mathbb{R}}^{d};L^{q}(\varOmega))} \leq & C(q,\alpha,d,\theta,t) \bigl[ \Vert h \Vert _{L^{\infty }([0,t];L^{q}({\mathbb{R}}^{d}\times \varOmega))} \\ & {}+ \Vert f \Vert _{L^{\infty }([0,t];L^{q}({\mathbb{R}}^{d}\times \varOmega))} \bigr] \end{aligned}$$
and
$$\begin{aligned} {\mathbb{E}} \Vert u \Vert ^{\gamma }_{{\mathcal{C}}^{\theta }([0,t]\times \overline{Q})} \leq & C(q,\alpha,d,\gamma,\theta,t,Q) \bigl[ \Vert h \Vert _{L^{\infty }([0,t];L^{q}({\mathbb{R}}^{d}\times \varOmega))} \\ & {} + \Vert f \Vert _{L^{\infty }([0,t];L^{q}({\mathbb{R}}^{d}\times \varOmega))} \bigr]^{\gamma }, \end{aligned}$$
where
$$\begin{aligned} {\mathcal{C}}^{\vartheta -}\bigl([0,t]\times {\mathbb{R}}^{d};L^{q}(\varOmega)\bigr):=\bigcap_{0< \theta < \vartheta }{\mathcal{C}}^{\theta } \bigl([0,t]\times {\mathbb{R}}^{d};L^{q}(\varOmega)\bigr) \end{aligned}$$
and
$$\begin{aligned} L^{q-}\bigl(\varOmega;{\mathcal{C}}^{\vartheta -}_{\mathrm{loc}} \bigl({\mathbb{R}}_{+}\times {\mathbb{R}}^{d}\bigr)\bigr):= \bigcap_{1\leq p< q}L^{p}\biggl(\varOmega; \bigcap_{0< \theta < \vartheta }{\mathcal{C}}^{\theta }_{\mathrm{loc}} \bigl({\mathbb{R}}_{+}\times {\mathbb{R}}^{d}\bigr)\biggr). \end{aligned}$$
(i) In [4] Chang and Lee discussed (1.1); under the assumptions that \(h\in H^{k}_{q}(T,{\mathbb{R}}^{d})\), \(f\in H^{k+\frac{\alpha }{2}+\delta }_{q}(T,{\mathbb{R}}^{d})\) (\(T>0\) is a given real number, \(0<\delta <\alpha /2\)), they established the \(H^{k+\alpha }_{q}\) theory of solutions on \({\mathbb{R}}^{d}\). As a direct consequence, if \(k=0\) and \(q\alpha >d\), the Hölder estimate for solutions in the space variable satisfies
$$\begin{aligned} {\mathbb{E}} \int ^{t}_{0} \bigl\Vert u(s) \bigr\Vert ^{\gamma }_{{\mathcal{C}}^{\theta }({\mathbb{R}}^{d})}\,ds < \infty, \quad t>0, \end{aligned}$$
where θ is given by the Sobolev imbedding theorem. In contrast to [4], here \(L^{\infty }(L^{q})\) integrability in the space and time variables is enough to ensure the Hölder continuity of solutions in both space and time.
(ii) Our main idea comes from [13]. In [13], Kuksin, Nadirashvili, and Piatnitski studied (1.1) with \(\alpha =2\) on a bounded domain. By estimating the tail probability, they obtained the space and time Hölder estimates. Here, we study (1.1) on \({\mathbb{R}}^{d}\) with \(\alpha \in (0,2)\). By using the techniques developed in [13], we obtain the space and time Hölder estimates on every bounded domain.
This paper is organized as follows. In Sect. 2, we present some useful lemmas, and Sect. 3 is devoted to giving the proof details. \(a\wedge b=\min \{a,b\}\), \(a\vee b=\max \{a,b\}\). \({\mathbb{R}}_{+}=\{r\in {\mathbb{R}}, r\geq 0\}\). The letter C will mean a positive constant whose values may change in different places. For a parameter or a function ϱ, \(C(\varrho)\) means that the constant depends only on ϱ. \({\mathbb{N}}\) is the set of natural numbers, and \({\mathbb{Z}}\) denotes the set of integers. Let \(Q\subset {\mathbb{R}}^{k}\) (\(k\in {\mathbb{N}}\)) be a bounded domain.
For \(0<\theta <1\), we define \({\mathcal{C}}^{\theta }(\overline{Q})\) to be the set of all continuous functions u on Q such that
$$\begin{aligned} \Vert u \Vert _{{\mathcal{C}}^{\theta }(\overline{Q})}:=\sup_{x\in \overline{Q}} \bigl\vert u(x) \bigr\vert +\sup_{x,y\in \overline{Q}, x\neq y}\frac{ \vert u(x)-u(y) \vert }{ \vert x-y \vert ^{\theta }}< \infty. \end{aligned}$$

Useful lemmas

Lemma 2.1 Let \(\rho _{0}\in L^{q}({\mathbb{R}}^{d}\times \varOmega)\). Consider the Cauchy problem
$$\begin{aligned} \partial _{t}\rho (t,x)+(-\Delta)^{\frac{\alpha }{2}} \rho (t,x)=0,\quad t>0, x\in {\mathbb{R}}^{d}, \rho (t,x)|_{t=0}= \rho _{0}(x). \end{aligned}$$
Then, for any \(0<\beta <1\), the unique mild solution (given by (1.3) if one replaces φ by \(\rho _{0}\)) of (2.1) meets the following estimates:
$$\begin{aligned} \bigl\Vert \rho (t) \bigr\Vert _{{\mathcal{C}}^{\beta }({\mathbb{R}}^{d})} \leq Ct^{-\frac{\beta }{\alpha }-\frac{d}{q\alpha }} \Vert \rho _{0} \Vert _{L^{q}({\mathbb{R}}^{d})}, \quad {\mathbb{P}}\textit{-a.s. } \omega \in \varOmega, \end{aligned}$$
and
$$\begin{aligned} \bigl\vert \rho (t+\delta,x)-\rho (t,x) \bigr\vert \leq Ct^{-\beta -\frac{d}{q\alpha }} \delta ^{\beta } \Vert \rho _{0} \Vert _{L^{q}({\mathbb{R}}^{d})}, \quad {\mathbb{P}}\textit{-a.s. } \omega \in \varOmega. \end{aligned}$$
Proof. Obviously, the unique mild solution ρ of (2.1) can be represented by (1.3) if one replaces φ by \(\rho _{0}\). Hence, for any \(t>0\),
$$\begin{aligned} \bigl\vert \rho (t,x) \bigr\vert \leq & \int _{{\mathbb{R}}^{d}} \bigl\vert p(t,x-y)\rho _{0}(y) \bigr\vert \,dy \\ \leq & \biggl[ \int _{{\mathbb{R}}^{d}} \bigl\vert p(t,y) \bigr\vert ^{\frac{q}{q-1}} \,dy \biggr]^{\frac{q-1}{q}} \Vert \rho _{0} \Vert _{L^{q}({\mathbb{R}}^{d})}, \quad {\mathbb{P}}\textit{-a.s. } \omega \in \varOmega. \end{aligned}$$
According to (1.4), then
$$\begin{aligned} \int _{{\mathbb{R}}^{d}} \bigl\vert p(t,y) \bigr\vert ^{\frac{q}{q-1}} \,dy \leq & C \biggl[ \int _{ \vert y \vert \geq t^{\frac{1}{\alpha }}} \biggl\vert \frac{t}{ \vert y \vert ^{d+\alpha }}\wedge t^{-\frac{d}{\alpha }} \biggr\vert ^{\frac{q}{q-1}}\,dy + \int _{ \vert y \vert < t^{\frac{1}{\alpha }}} \biggl\vert \frac{t}{ \vert y \vert ^{d+\alpha }}\wedge t^{-\frac{d}{\alpha }} \biggr\vert ^{\frac{q}{q-1}}\,dy \biggr] \\ \leq & C \biggl[ \int _{ \vert y \vert \geq t^{\frac{1}{\alpha }}} \biggl\vert \frac{t}{ \vert y \vert ^{d+\alpha }} \biggr\vert ^{\frac{q}{q-1}}\,dy + \int _{ \vert y \vert < t^{\frac{1}{\alpha }}}\bigl(t^{-\frac{d}{\alpha }}\bigr)^{\frac{q}{q-1}}\,dy \biggr] \\ =& C \biggl[t^{\frac{q}{q-1}} \int _{ \vert y \vert \geq t^{\frac{1}{\alpha }}} \vert y \vert ^{-\frac{q(d+\alpha)}{q-1}}\,dy +t^{-\frac{dq}{(q-1)\alpha }} \int _{ \vert y \vert < t^{\frac{1}{\alpha }}}\,dy \biggr] \\ \leq & C \bigl[t^{\frac{q}{q-1}} \vert y \vert ^{-\frac{q(d+\alpha)}{q-1}+d}|_{y=t^{\frac{1}{\alpha }}} +t^{-\frac{dq}{(q-1)\alpha }} \vert y \vert ^{d}|_{y=t^{\frac{1}{\alpha }}} \bigr] \\ =& Ct^{-\frac{d}{(q-1)\alpha }}. \end{aligned}$$
Combining (2.4) and (2.5), then
$$\begin{aligned} \bigl\vert \rho (t,x) \bigr\vert \leq \int _{{\mathbb{R}}^{d}} \bigl\vert p(t,x-y)\rho _{0}(y) \bigr\vert \,dy\leq Ct^{-\frac{d}{q\alpha }} \Vert \rho _{0} \Vert _{L^{q}({\mathbb{R}}^{d})}, \quad {\mathbb{P}}\textit{-a.s. } \omega \in \varOmega. \end{aligned}$$
Let us calculate ∇ρ and \(\partial _{t}\rho \).
For \(1\leq i\leq d\), we compute
$$\begin{aligned} \partial _{x_{i}}\rho (t,x)= \int _{{\mathbb{R}}^{d}}\partial _{x_{i}}p(t,x-y) \rho _{0}(y)\,dy, \end{aligned}$$
which implies that
$$\begin{aligned} \bigl\vert \nabla \rho (t,x) \bigr\vert \leq \biggl[ \int _{{\mathbb{R}}^{d}} \bigl\vert \nabla p(t,y) \bigr\vert ^{\frac{q}{q-1}}\,dy \biggr]^{\frac{q-1}{q}} \Vert \rho _{0} \Vert _{L^{q}({\mathbb{R}}^{d})}, \quad {\mathbb{P}}\textit{-a.s. } \omega \in \varOmega. \end{aligned}$$
By virtue of (1.4), calculations analogous to (2.5) imply that
$$\begin{aligned}& \int _{{\mathbb{R}}^{d}} \bigl\vert \nabla p(t,y) \bigr\vert ^{\frac{q}{q-1}}\,dy \\& \quad \leq C \biggl[ \int _{ \vert y \vert \geq t^{\frac{1}{\alpha }}} \biggl\vert \vert y \vert \biggl(\frac{t}{ \vert y \vert ^{d+2+\alpha }} \wedge t^{-\frac{d+2}{\alpha }} \biggr) \biggr\vert ^{\frac{q}{q-1}} \,dy + \int _{ \vert y \vert < t^{\frac{1}{\alpha }}} \biggl\vert \vert y \vert \biggl(\frac{t}{ \vert y \vert ^{d+2+\alpha }} \wedge t^{-\frac{d+2}{\alpha }} \biggr) \biggr\vert ^{\frac{q}{q-1}} \,dy \biggr] \\& \quad \leq C \biggl[ \int _{ \vert y \vert \geq t^{\frac{1}{\alpha }}}t^{\frac{q}{q-1}} \vert y \vert ^{-\frac{q(d+1+\alpha)}{q-1}} \,dy + \int _{ \vert y \vert < t^{\frac{1}{\alpha }}} \vert y \vert ^{\frac{q}{q-1}}t^{-\frac{q(d+2)}{(q-1)\alpha }} \,dy \biggr] \\& \quad \leq C \bigl[t^{\frac{q}{q-1}} \vert y \vert ^{-\frac{q(d+1+\alpha)}{q-1}+d}\big|_{y=t^{\frac{1}{\alpha }}} +t^{-\frac{(d+2)q}{(q-1)\alpha }} \vert y \vert ^{d+\frac{q}{q-1}}\big|_{y=t^{\frac{1}{\alpha }}} \bigr] \\& \quad = Ct^{-\frac{d+q}{(q-1)\alpha }}. \end{aligned}$$
Therefore, one arrives at
$$\begin{aligned} \bigl\vert \nabla \rho (t,x) \bigr\vert \leq Ct^{-\frac{d}{q\alpha }-\frac{1}{\alpha }} \Vert \rho _{0} \Vert _{L^{q}({\mathbb{R}}^{d})}, \quad {\mathbb{P}}\textit{-a.s. } \omega \in \varOmega. \end{aligned}$$
Applying the interpolation inequality
$$\begin{aligned} \bigl\Vert \rho (t) \bigr\Vert _{{\mathcal{C}}^{\beta }({\mathbb{R}}^{d})} \leq C \bigl\Vert \rho (t) \bigr\Vert _{L^{\infty }({\mathbb{R}}^{d})}^{1-\beta } \bigl\Vert \rho (t) \bigr\Vert ^{\beta }_{{\mathcal{C}}^{1}({\mathbb{R}}^{d})} \end{aligned}$$
to (2.6) and (2.7), (2.2) holds true. Repeating the above calculations, and by virtue of (1.5), one derives that
$$\begin{aligned} \bigl\vert \partial _{t}\rho (t,x) \bigr\vert \leq Ct^{-\frac{d}{q\alpha }-1} \Vert \rho _{0} \Vert _{L^{q}({\mathbb{R}}^{d})}, \quad {\mathbb{P}}\textit{-a.s. } \omega \in \varOmega. \end{aligned}$$
Applying the interpolation inequality (2.8) to (2.6) and (2.9), for every \(t_{2}>t_{1}>0\), we get
$$\begin{aligned} \bigl\Vert \rho (\cdot,x) \bigr\Vert _{{\mathcal{C}}^{\beta }([t_{1},t_{2}])} \leq Ct_{1}^{-\beta -\frac{d}{q\alpha }} \Vert \rho _{0} \Vert _{L^{q}({\mathbb{R}}^{d})}, \quad {\mathbb{P}}\textit{-a.s. } \omega \in \varOmega. \end{aligned}$$
From (2.10), inequality (2.3) follows, and we finish the proof. □

Lemma 2.2 (Minkowski inequality [17]) Assume that \((S_{1}, {\mathcal{F}}_{1},\mu _{1})\) and \((S_{2}, {\mathcal{F}}_{2},\mu _{2})\) are two measure spaces and that \(G: S_{1} \times S_{2} \rightarrow {\mathbb{R}}\) is measurable. For given real numbers \(1\leq p_{1}\leq p_{2}\), we also assume that \(G\in L^{p_{1}}(S_{1};L^{p_{2}}(S_{2}))\).
Then \(G\in L^{p_{2}}(S_{2};L^{p_{1}}(S_{1}))\) and
$$\begin{aligned}& \biggl[ \int _{S_{2}} \biggl( \int _{S_{1}} \bigl\vert G(x,y) \bigr\vert ^{p_{1}}\mu _{1}(dx) \biggr)^{\frac{p_{2}}{p_{1}}}\mu _{2}(dy) \biggr]^{\frac{1}{p_{2}}} \\& \quad \leq \biggl[ \int _{S_{1}} \biggl( \int _{S_{2}} \bigl\vert G(x,y) \bigr\vert ^{p_{2}}\mu _{2}(dy) \biggr)^{\frac{p_{1}}{p_{2}}}\mu _{1}(dx) \biggr]^{\frac{1}{p_{1}}}. \end{aligned}$$
The next lemmas will play an important role in estimating stochastic integrals.

Lemma 2.3 (Interpolation inequality) Suppose that \(1\leq p_{1}< p_{2}\leq \infty \). Let E be a Banach space and F be a linear operator from \(L^{p_{1}}(\varOmega;E)+L^{p_{2}}(\varOmega;E)\) into the space \(L^{p_{1}}(\varOmega)+L^{p_{2}}(\varOmega)\). If F is bounded from \(L^{p_{1}}(\varOmega;E)\) into \(L^{p_{1}}(\varOmega)\) and also bounded from \(L^{p_{2}}(\varOmega;E)\) into \(L^{p_{2}}(\varOmega)\), then F is bounded from \(L^{p_{3}}(\varOmega;E)\) into \(L^{p_{3}}(\varOmega)\) for every \(p_{1}\leq p_{3}\leq p_{2}\).
Proof. When \(E={\mathbb{R}}\), this result can be recovered from the Marcinkiewicz interpolation theorem [1, Theorem 2.58]. For a general Banach space E, it suffices to replace \(\delta _{u}(\tau)\) [1, pp. 56–57] by \(\delta _{\|u\|_{E}}(\tau)\), and then the lemma is proved. □

Lemma 2.4 Let \({\mathcal{F}}\) be given in the introduction, and let g be an \({\mathcal{F}}\times {\mathcal{B}}({\mathbb{R}}_{+})\times {\mathcal{B}}({\mathbb{R}}_{+})\times {\mathcal{B}}({\mathbb{R}}^{d})\)-measurable function. Suppose that \(\{M_{t}(x)\}_{t\geq 0}\) is a Brownian type integral of the form
$$ M_{t}(x)= \int _{0}^{t}g(t,r,x)\,dW_{r}, g(\cdot,r,\cdot) \quad \textit{is } {\mathcal{F}}_{r}\textit{-measurable}. $$
Suppose that \(q\geq 2\) and
$$\begin{aligned} {\mathbb{E}} \biggl[ \int ^{t}_{0} \bigl\vert g(t,r,x) \bigr\vert ^{2}\,dr \biggr]^{\frac{q}{2}}< \infty \quad \textit{for almost every } x\in {\mathbb{R}}^{d}. \end{aligned}$$
Then there exists a positive constant \(C(q)>0\), which is independent of x, such that for each \(t\geq 0\),
$$\begin{aligned} {\mathbb{E}}\bigl[ \bigl\vert M_{t}(x) \bigr\vert ^{q}\bigr] \leq C(q){\mathbb{E}} \biggl[ \int ^{t}_{0} \bigl\vert g(t,r,x) \bigr\vert ^{2}\,dr \biggr]^{\frac{q}{2}}. \end{aligned}$$
Proof. First, we assume that g has the following form:
$$\begin{aligned} g(t,r,x)=\sum_{j=1}^{k}g_{j}(t,x)1_{(t_{j-1},t_{j}]}(r), \end{aligned}$$
where \(k\in {\mathbb{N}}\), \(g_{j}\) are \((\varOmega \times {\mathbb{R}}_{+}\times {\mathbb{R}}^{d};{\mathcal{F}}_{t_{j-1}}\times {\mathcal{B}}({\mathbb{R}}_{+})\times {\mathcal{B}}({\mathbb{R}}^{d}))\)-measurable, and \(0=t_{0}< t_{1}< t_{2}<\cdots <t_{k}=t\). For \(q=2\), by using the Itô isometry, we obtain
$$\begin{aligned} {\mathbb{E}} \bigl\vert M_{t}(x) \bigr\vert ^{2}={\mathbb{E}} \Biggl\vert \sum_{j=1}^{k} (W_{t_{j}}-W_{t_{j-1}})g_{j}(t,x) \Biggr\vert ^{2}={\mathbb{E}} \int _{0}^{t} \bigl\vert g(t,r,x) \bigr\vert ^{2}\,dr. \end{aligned}$$
For \(q=4\), according to Burkholder's inequality [2, Theorem 4.4.21], we also have
$$\begin{aligned} &{\mathbb{E}} \bigl\vert M_{t}(x) \bigr\vert ^{4}={\mathbb{E}} \Biggl\vert \sum_{j=1}^{k} (W_{t_{j}}-W_{t_{j-1}})g_{j}(t,x) \Biggr\vert ^{4} \leq C{\mathbb{E}} \biggl[ \int _{0}^{t} \bigl\vert g(t,r,x) \bigr\vert ^{2}\,dr \biggr]^{2}.
\end{aligned}$$
From (2.15) and (2.16), for every \(t>0\), the linear operator
$$\begin{aligned} F: g \rightarrow \int _{0}^{t}g(t,r,x)\,dW_{r} \end{aligned}$$
is bounded from \(L^{2}(\varOmega;L^{2}(0,t))\) into \(L^{2}(\varOmega)\) and also bounded from \(L^{4}(\varOmega;L^{2}(0,t))\) into \(L^{4}(\varOmega)\). According to Lemma 2.3, F is bounded from \(L^{q}(\varOmega;L^{2}(0,t))\) into \(L^{q}(\varOmega)\) for every \(2\leq q\leq 4\), i.e., (2.13) holds true if g has the form (2.14). Observing that the functions which meet condition (2.13) can be approximated by the step functions, we thus complete the proof for \(q\in [2,4]\). Analogously, for every even number q and every step function of the form (2.14), one can prove that the analogue of (2.16) holds. In view of Lemma 2.3, one derives inequality (2.13) for every \(q>4\). Then, by an approximating argument, we complete the proof. □
When \(g(t,r,x)\) is a deterministic function, the Marcinkiewicz interpolation inequality is not needed. Indeed, an \(L^{q}\) type interpolation inequality is enough.

Lemma 2.5 ([13, Lemma 4]) Let a function g satisfy the estimate
$$\begin{aligned} \mathop{\operatorname{osc}}_{J}g=\sup_{x,y\in J} \bigl\vert g(x)-g(y) \bigr\vert \leq \kappa _{n} \end{aligned}$$
in any small cube J which is a mesh of the grid \(2^{-n}{\mathbb{Z}}^{d+1}\), i.e., in any \(J=2^{-n}j+[0,2^{-n}]^{d+1}\), where \(j\in {\mathbb{Z}}^{d+1}\). Then, for any \(\Delta \in {\mathbb{R}}^{d+1}\), one has
$$\begin{aligned} \bigl\vert g(y+\Delta)-g(y) \bigr\vert \leq 2\kappa _{[\log _{2}(1/ \vert \Delta \vert )]}, \end{aligned}$$
where \([\cdot ]\) stands for the integer part.

Proof of Theorem 1.1

The existence result follows by using the explicit formula
$$\begin{aligned} u(t,x)= \int _{0}^{t} \int _{{\mathbb{R}}^{d}}p(t-r,x-y)h(r,y)\,dy\,dr+ \int _{0}^{t} \int _{{\mathbb{R}}^{d}}p(t-r,x-y)f(r,y)\,dy\,dW_{r}, \end{aligned}$$
where \(p(t,x-y)\) fulfills (1.4) and (1.5). By this obvious representation, to prove u is a mild solution, we need to show \(u \in L^{\infty }_{\mathrm{loc}}({\mathbb{R}}_{+};L^{\infty }({\mathbb{R}}^{d};L^{2}(\varOmega)))\). Now let us verify that \(u \in L^{\infty }_{\mathrm{loc}}({\mathbb{R}}_{+};L^{\infty }({\mathbb{R}}^{d};L^{q}(\varOmega)))\). If one uses Lemma 2.4 for given \(q\geq 2\), then
$$\begin{aligned} {\mathbb{E}} \bigl\vert u(t,x) \bigr\vert ^{q} \leq & C(q){\mathbb{E}} \biggl\vert \int _{0}^{t} \int _{{\mathbb{R}}^{d}}p(t-r,x-z)h(r,z)\,dz \,dr \biggr\vert ^{q} \\ &{}+C(q){\mathbb{E}} \biggl[ \int _{0}^{t} \biggl\vert \int _{{\mathbb{R}}^{d}}p(t-r,x-z)f(r,z)\,dz \biggr\vert ^{2} \,dr \biggr]^{\frac{q}{2}}.
\end{aligned}$$ With the aid of Lemma 2.2 and the Hölder inequality, we arrive at $$\begin{aligned} {\mathbb{E}} \bigl\vert u(t,x) \bigr\vert ^{q} \leq &C(q) \biggl\vert \int _{0}^{t} \int _{{\mathbb{R}}^{d}}p(t-r,x-z) \bigl[{\mathbb{E}} \bigl\vert h(r,z) \bigr\vert ^{q} \bigr]^{\frac{1}{q}}\,dz\,dr \biggr\vert ^{q} \\ &{}+C(q){\mathbb{E}} \biggl[ \int _{0}^{t} \biggl\vert \int _{{\mathbb{R}}^{d}} p(t-r,x-z) \bigl[{\mathbb{E}} \bigl\vert f(r,z) \bigr\vert ^{q} \bigr]^{\frac{1}{q}}\,dz \biggr\vert ^{2}\,dr \biggr]^{ \frac{q}{2}} \\ \leq &C(q) \biggl\vert \int _{0}^{t} \biggl[ \int _{{\mathbb{R}}^{d}} \bigl\vert p(r,y) \bigr\vert ^{ \frac{q}{q-1}} \,dy \biggr]^{\frac{q-1}{q}}\,dr \biggr\vert ^{q} \sup _{0\leq r\leq t} {\mathbb{E}} \int _{{\mathbb{R}}^{d}} \bigl\vert h(r,z) \bigr\vert ^{q}\,dz \\ &{}+C(q) \biggl\vert \int _{0}^{t} \biggl[ \int _{{\mathbb{R}}^{d}} \bigl\vert p(r,y) \bigr\vert ^{ \frac{q}{q-1}} \,dy \biggr]^{\frac{2(q-1)}{q}}\,dr \biggr\vert ^{\frac{q}{2}} \sup _{0\leq r\leq t}{\mathbb{E}} \int _{{\mathbb{R}}^{d}} \bigl\vert f(r,z) \bigr\vert ^{q} \,dz. \end{aligned}$$ By using inequality (2.5), one derives $$\begin{aligned} {\mathbb{E}} \bigl\vert u(t,x) \bigr\vert ^{q} \leq & C(q) \biggl\vert \int _{0}^{t}r^{-\frac{d}{q \alpha }}\,dr \biggr\vert ^{q} \Vert h \Vert _{L^{\infty }([0,t];L^{q}({\mathbb{R}}^{d} \times \varOmega))}^{q} \\ &{}+C(q) \biggl\vert \int _{0}^{t} r^{-\frac{2d}{q\alpha }}\,dr \biggr\vert ^{\frac{q}{2}} \Vert f \Vert _{L^{\infty }([0,t];L^{q}({\mathbb{R}}^{d}\times \varOmega))}^{q}. \end{aligned}$$ Observing that \(q\alpha >2d\), therefore $$\begin{aligned} {\mathbb{E}} \bigl\vert u(t,x) \bigr\vert ^{q}\leq C(q) \bigl(1+t^{q-\frac{d}{\alpha }}\bigr) \bigl[ \Vert h \Vert _{L^{\infty }([0,t];L^{q}({\mathbb{R}}^{d}\times \varOmega))}^{q} + \Vert f \Vert _{L^{\infty }([0,t];L^{q}({\mathbb{R}}^{d}\times \varOmega))}^{q} \bigr]. \end{aligned}$$ Now let us consider point-wise estimates for mild solutions of (1.1). By the scaling transformations on space and time variables, to prove (1.6) and (1.7) are true for u, it is sufficient to show that u meets (1.6) and (1.7) on \([0,1]\times {\mathbb{R}}^{d}\) and \([0,1]^{d+1}\), respectively. Initially, let us check (1.6). For every \(x_{1},x_{2}\in {\mathbb{R}}^{d}\), \(t>0\), $$\begin{aligned} u(t,x_{1})-u(t,x_{2}) =& \int _{0}^{t} \bigl[P_{t-r}h(r,x_{1})-P_{t-r}h(r,x _{2})\bigr]\,dr \\ &{}+ \int _{0}^{t} \bigl[P_{t-r}f(r,x_{1})-P_{t-r}f(r,x_{2}) \bigr]\,dW_{r}. \end{aligned}$$ According to (2.13), one derives that $$\begin{aligned} {\mathbb{E}} \bigl\vert u(t,x_{1})-u(t,x_{2}) \bigr\vert ^{q} \leq & C(q) \biggl\{ {\mathbb{E}} \biggl\vert \int _{0}^{t} \bigl[P_{t-r}h(r,x_{1})-P_{t-r}h(r,x_{2}) \bigr]\,dr \biggr\vert ^{q} \\ &{}+{\mathbb{E}} \biggl\vert \int _{0}^{t} \bigl[P_{t-r}f(r,x_{1})-P_{t-r}f(r,x_{2}) \bigr]^{2}\,dr \biggr\vert ^{\frac{q}{2}} \biggr\} . \end{aligned}$$ Let \(0<\beta <(1/2-d/(q\alpha))(1\wedge \alpha)\) be a real number. 
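(Note that this range of β makes the singular time integrals appearing below convergent; a quick check, written out for \(\alpha \leq 1\), where \(1\wedge \alpha =\alpha \) and hence \(\beta <\alpha /2-d/q\):
$$\begin{aligned} \frac{2\beta }{\alpha }+\frac{2d}{q\alpha }< \frac{2}{\alpha } \biggl(\frac{\alpha }{2}-\frac{d}{q} \biggr)+\frac{2d}{q\alpha }=1 \quad \text{and therefore}\quad \int _{0}^{t}r^{-\frac{2\beta }{\alpha }-\frac{2d}{q\alpha }}\,dr= \frac{t^{1-\frac{2\beta }{\alpha }-\frac{2d}{q\alpha }}}{1-\frac{2\beta }{\alpha }-\frac{2d}{q\alpha }}< \infty, \end{aligned}$$
while \(\beta /\alpha +d/(q\alpha)<1\) follows a fortiori. For \(\alpha >1\), one uses \(\beta <1/2-d/(q\alpha)\) together with \(2d/(q\alpha)<1\), which gives \(2\beta /\alpha +2d/(q\alpha)<1/\alpha +(2d/(q\alpha))(1-1/\alpha)<1\).)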
In view of Lemma 2.1 (2.2) and Lemma 2.2 (2.11), from (3.2), one concludes that $$\begin{aligned}& {\mathbb{E}} \bigl\vert u(t,x_{1})-u(t,x_{2}) \bigr\vert ^{q} \\& \quad \leq C(q) \vert x_{1}-x_{2} \vert ^{q\beta } \Vert h \Vert _{L^{\infty }([0,t];L^{q}( {\mathbb{R}}^{d}\times \varOmega))}^{q} \biggl\vert \int _{0}^{t} r^{-\frac{ \beta }{\alpha }-\frac{d}{q\alpha }}\,dr \biggr\vert ^{q} \\& \qquad {} +C(q) \vert x_{1}-x_{2} \vert ^{q\beta } \Vert f \Vert _{L^{\infty }([0,t];L^{q}( {\mathbb{R}}^{d}\times \varOmega))}^{q} \biggl\vert \int _{0}^{t}r^{-\frac{2 \beta }{\alpha }-\frac{2d}{{q}\alpha }}\,dr \biggr\vert ^{\frac{{q}}{2}} \\& \quad \leq C({q},\alpha,d,\beta,t) \vert x_{1}-x_{2} \vert ^{q\beta } \bigl[ \Vert h \Vert _{L^{\infty }([0,t];L^{q}({\mathbb{R}}^{d}\times \varOmega))}^{q}+ \Vert f \Vert _{L^{\infty }([0,t];L^{q}({\mathbb{R}}^{d}\times \varOmega))}^{q} \bigr]. \end{aligned}$$ Similarly, for every \(t>0\), \(\delta >0\), one can define $$\begin{aligned}& u(t+\delta,x)-u(t,x) \\& \quad = \int _{0}^{t+\delta }P_{t+\delta -r}h(r,x)\,dr - \int _{0}^{t}P_{t-r}h(r,x)\,dr \\& \qquad {} + \int _{0}^{t+\delta }P_{t+\delta -r}f(r,x) \,dW_{r}- \int _{0} ^{t}P_{t-r}f(r,x) \,dW_{r} \\& \quad = \int _{t}^{t+\delta }P_{t+\delta -r}h(r,x)\,dr + \int _{0}^{t} \bigl[P _{t+\delta -r}h(r,x)-P_{t-r}h(r,x) \bigr]\,dr \\& \qquad {} + \int _{t}^{t+\delta }P_{t+\delta -r}f(r,x) \,dW_{r}+ \int _{0} ^{t} \bigl[P_{t+\delta -r}f(r,x)-P_{t-r}f(r,x) \bigr]\,dW_{r} \\& \quad =:J_{1}+J_{2}+J_{3}+J_{4}. \end{aligned}$$ Let us estimate \(J_{1},\ldots ,J_{4}\). To calculate \(J_{1}\), we use (2.5) to get $$\begin{aligned} {\mathbb{E}} \vert J_{1} \vert ^{q} \leq & C(q) \Vert h \Vert _{L^{\infty }([0,t+\delta ];L ^{q}({\mathbb{R}}^{d}\times \varOmega))}^{q} \biggl\vert \int _{t}^{t+\delta } \biggl[ \int _{{\mathbb{R}}^{d}} \bigl\vert p(t+\delta -r,y) \bigr\vert ^{\frac{q}{q-1}}\,dy \biggr]^{\frac{(q-1)}{q}}\,dr \biggr\vert ^{q} \\ \leq & C(q) \Vert h \Vert _{L^{\infty }([0,t+\delta ];L^{q}({\mathbb{R}}^{d} \times \varOmega))}^{q} \biggl\vert \int _{0}^{\delta }r^{-\frac{d}{q\alpha }}\,dr \biggr\vert ^{q} \\ \leq &C(q,\alpha,d) \Vert h \Vert _{L^{\infty }([0,t+\delta ];L^{q}({\mathbb{R}} ^{d}\times \varOmega))}^{q} \delta ^{q-\frac{d}{\alpha }}. \end{aligned}$$ An analogue calculation also implies that $$\begin{aligned} {\mathbb{E}} \vert J_{3} \vert ^{q} \leq & C(q) \Vert f \Vert _{L^{\infty }([0,t+\delta ];L ^{q}({\mathbb{R}}^{d}\times \varOmega))}^{q} \biggl\vert \int _{t}^{t+\delta } \biggl[ \int _{{\mathbb{R}}^{d}} \bigl\vert p(t+\delta -r,y) \bigr\vert ^{\frac{q}{q-1}}\,dy \biggr]^{\frac{2(q-1)}{q}}\,dr \biggr\vert ^{\frac{q}{2}} \\ \leq &C(q) \Vert f \Vert _{L^{\infty }([0,t+\delta ];L^{q}({\mathbb{R}}^{d} \times \varOmega))}^{q} \biggl\vert \int _{0}^{\delta }r^{-\frac{2d}{q\alpha }}\,dr \biggr\vert ^{\frac{q}{2}} \\ \leq &C(q,\alpha,d) \Vert f \Vert _{L^{\infty }([0,t+\delta ];L^{q}({\mathbb{R}} ^{d}\times \varOmega))}^{q}\delta ^{\frac{q}{2} -\frac{d}{\alpha }}. 
\end{aligned}$$ For \(J_{2}\), we use Lemma 2.1 (2.3), one concludes that $$\begin{aligned} {\mathbb{E}} \vert J_{2} \vert ^{q} \leq & {\mathbb{E}} \biggl\vert \int _{0}^{t} \bigl\vert P_{t+ \delta -r}h(r,x)-P_{t-r}h(r,x) \bigr\vert \,dr \biggr\vert ^{q} \\ \leq & \Vert h \Vert _{L^{\infty }([0,t+\delta ];L^{q}({\mathbb{R}}^{d}\times \varOmega))}^{q}\delta ^{\beta q} \biggl\vert \int _{0}^{t}(t-r)^{-\beta -\frac{d}{q \alpha }}\,dr \biggr\vert ^{q} \\ \leq & C(q,\alpha,d,\beta,t) \Vert h \Vert _{L^{\infty }([0,t+\delta ];L ^{q}({\mathbb{R}}^{d}\times \varOmega))}^{q} \delta ^{\beta q}. \end{aligned}$$ Similarly, one gains $$\begin{aligned} {\mathbb{E}} \vert J_{4} \vert ^{q} \leq & {\mathbb{E}} \biggl\vert \int _{0}^{t} \bigl\vert P_{t+ \delta -r}f(r,x)-P_{t-r}f(r,x) \bigr\vert ^{2}\,dr \biggr\vert ^{\frac{q}{2}} \\ \leq & C(q) \Vert f \Vert _{L^{\infty }([0,t+\delta ];L^{q}({\mathbb{R}}^{d} \times \varOmega))}^{q}\delta ^{\beta q} \biggl\vert \int _{0}^{t}(t-r)^{-2 \beta -\frac{2d}{q\alpha }}\,dr \biggr\vert ^{\frac{q}{2}} \\ \leq & C(q,\alpha,d,\beta,t) \Vert f \Vert _{L^{\infty }([0,t+\delta ];L ^{q}({\mathbb{R}}^{d}\times \varOmega))}^{q} \delta ^{\beta q}. \end{aligned}$$ Combining (3.4)–(3.7), one arrives at $$\begin{aligned}& {\mathbb{E}} \bigl\vert u(t+\delta,x)-u(t,x) \bigr\vert ^{q} \\& \quad \leq C(q,\alpha,d,\beta,t) \bigl[ \Vert h \Vert _{L^{\infty }([0,t+\delta ];L ^{q}({\mathbb{R}}^{d}\times \varOmega))}^{q} + \Vert f \Vert _{L^{\infty }([0,t+ \delta ];L^{q}({\mathbb{R}}^{d}\times \varOmega))}^{q} \bigr] \bigl[ \delta ^{q-\frac{d}{\alpha }}+\delta ^{\beta q} \bigr], \end{aligned}$$ $$\begin{aligned}& {\mathbb{E}} \bigl\vert u(t+\delta,x)-u(t,x) \bigr\vert ^{q} \\& \quad \leq C(q,\alpha,d,\beta,t) \bigl[ \Vert h \Vert _{L^{\infty }([0,t+\delta ];L ^{q}({\mathbb{R}}^{d}\times \varOmega))}^{q} + \Vert f \Vert _{L^{\infty }([0,t+ \delta ];L^{q}({\mathbb{R}}^{d}\times \varOmega))}^{q} \bigr]\delta ^{ \beta q} \end{aligned}$$ if \(\delta <1\). Therefore, we accomplish from (3.3) and (3.8) that $$\begin{aligned}& {\mathbb{E}} \bigl\vert u(t_{2},x_{2})-u(t_{1},x_{1}) \bigr\vert ^{q} \\& \quad \leq C(q,\alpha,d,\beta) \bigl[ \Vert h \Vert _{L^{\infty }([0,t_{2}];L^{q}( {\mathbb{R}}^{d}\times \varOmega))}^{q} + \Vert f \Vert _{L^{\infty }([0,t_{2}];L ^{q}({\mathbb{R}}^{d}\times \varOmega))}^{q} \bigr] \\& \qquad {} \times \bigl( \vert t_{2}-t_{1} \vert + \vert x_{2}-x_{1} \vert \bigr)^{\beta q} \end{aligned}$$ for every \(x_{1},x_{2}\in {\mathbb{R}}^{d}\), \(0\leq t_{1}\leq t_{2} \leq 1\). According to (3.1) and (3.9), (1.6) is true. Notice that \(q>4(d+1)/(1\wedge \alpha)\) and (3.9) holds for every \(0<\beta <(1/2-d/(q\alpha))(1\wedge \alpha)\). For a given sufficiently large natural number \(0< m\in {\mathbb{N}}\), if one obtains $$\begin{aligned} \beta =\frac{m}{1+m}\biggl(\frac{1}{2}- \frac{d}{q\alpha }\biggr) (1\wedge \alpha), \end{aligned}$$ $$\begin{aligned} q\beta =q\frac{m}{1+m}\biggl(\frac{1}{2}- \frac{d}{q\alpha }\biggr) (1\wedge \alpha)>\frac{m}{1+m}(d+2)>d+1. \end{aligned}$$ In view of (3.9) and (3.11), by using Kolmogorov's theorem, u has a continuous version. It remains to prove the Hölder estimate (1.7) on \([0,1]^{d+1}\), and for writing simplicity, we set $$\begin{aligned} A= \Vert h \Vert _{L^{\infty }([0,1];L^{q}({\mathbb{R}}^{d}\times \varOmega))}+ \Vert f \Vert _{L^{\infty }([0,1];L^{q}({\mathbb{R}}^{d}\times \varOmega))}. 
One introduces a sequence of sets: \({\mathcal{S}}_{n}=\{z\in {\mathbb{Z}}^{d+1} | z2^{-n}\in (0,1)^{d+1}\}\), \(0< n\in {\mathbb{N}}\). For an arbitrary \(e=(e^{1},\ldots ,e^{d+1})\in {\mathbb{N}}\times {\mathbb{Z}}^{d}\) such that \(\|e\|_{\infty }=\max_{1\leq i\leq d+1}|e^{i}|=1\), and every \(z,z+e\in {\mathcal{S}}_{n}\), we define \(v_{z}^{n,e}=|u((z+e)2^{-n})-u(z2^{-n})|\). Then $$\begin{aligned} {\mathbb{E}} \bigl\vert v_{z}^{n,e} \bigr\vert ^{q}\leq C(q,\alpha,d)A^{q}2^{-n\beta q}. \end{aligned}$$ For any \(\tau >0\) and \(K>0\), one defines the events \({\mathcal{A}}_{z,\tau }^{n,e}=\{\omega \in \varOmega | v_{z}^{n,e}\geq K\tau ^{n} \}\) (\(z,z+e\in {\mathcal{S}}_{n}\)); then Chebyshev's inequality yields $$\begin{aligned} {\mathbb{P}}\bigl({\mathcal{A}}_{z,\tau }^{n,e}\bigr)\leq \frac{{\mathbb{E}} \vert v_{z}^{n,e} \vert ^{q}}{K^{q}\tau ^{qn}}\leq \frac{ C(q,\alpha,d)A^{q}2^{-n\beta q}}{K^{q}\tau ^{qn}}. \end{aligned}$$ Observe that, for each n, the total number of the events \({\mathcal{A}}_{z,\tau }^{n,e}\) (\(z,z+e\in {\mathcal{S}}_{n}\)) is not greater than \(2^{(d+1)n}3^{d+1}\). Hence the probability of the union \({\mathcal{A}}_{\tau }^{n}=\bigcup_{z,z+e\in {\mathcal{S}}_{n}}(\bigcup_{\|e\|_{\infty }=1}{\mathcal{A}}_{z,\tau }^{n,e})\) satisfies the estimate $$\begin{aligned} {\mathbb{P}}\bigl({\mathcal{A}}_{\tau }^{n}\bigr)\leq C(q, \alpha,d)A^{q} \frac{2^{-n\beta q}}{K^{q}\tau ^{qn}}2^{(d+1)n}\leq C(q, \alpha,d)A^{q}K^{-q} \biggl(\frac{2^{(d+1)}}{(2^{\beta }\tau)^{q}} \biggr)^{n}. \end{aligned}$$ For \(m>0\) large enough (given in (3.10)), one takes \(\beta \) by (3.10) and \(\tau =2^{-\beta /m}\); then \((2^{\beta }\tau)^{q}=2^{q\beta (m-1)/m}>2^{d+1}\) for m sufficiently large, so the bound above is summable in n, and the probability of the event \({\mathcal{A}}=\bigcup_{n\geq 1}{\mathcal{A}}_{\tau }^{n}\) can be estimated as follows: $$\begin{aligned} {\mathbb{P}}({\mathcal{A}})\leq C(q,\alpha,d)A^{q}K^{-q}. \end{aligned}$$ For every point \(\xi =(t,x)\in (0,1)^{d+1}\), we have \(\xi =\sum_{i=0}^{\infty }e_{i}2^{-i}\) (\(\|e_{i}\|_{\infty }\leq 1\)). Denote \(\xi _{k}=\sum_{i=0}^{k}e_{i}2^{-i}\) (\(\xi _{0}=0\)). For any \(\omega \notin \mathcal{A}\), we have \(|u(\xi _{k+1})-u(\xi _{k})|< K\tau ^{k+1}\), which implies that $$\begin{aligned} \bigl\vert u(t,x) \bigr\vert \leq \sum _{k=0}^{\infty } \bigl\vert u(\xi _{k+1})-u(\xi _{k}) \bigr\vert < K\sum_{k=1}^{\infty }\tau ^{k}=K\frac{\tau }{1-\tau }\leq K\bigl(2^{\frac{\beta }{m}}-1 \bigr)^{-1}. \end{aligned}$$ Set \(v_{1}=\sup_{(t,x)\in (0,1)^{d+1}}|u(t,x)|\); then \(v_{1}= \sup_{(t,x)\in [0,1]^{d+1}}|u(t,x)|\) since u has a continuous version. For any \(0<\gamma < q\), we have $$\begin{aligned} {\mathbb{E}} v_{1}^{\gamma } =&\gamma \int _{0}^{\infty }r^{\gamma -1} {\mathbb{P}}(v_{1}\geq r)\,dr \\ =&\gamma \int _{0}^{cK}r^{\gamma -1}{\mathbb{P}}(v_{1}\geq r)\,dr+ \gamma \int _{cK}^{\infty }r^{\gamma -1}{\mathbb{P}}(v_{1} \geq r)\,dr. \end{aligned}$$ If one chooses \(c\geq (2^{\frac{\beta }{m}}-1)^{-1}\), then according to (3.12) and (3.13), from (3.14) one obtains $$\begin{aligned} {\mathbb{E}} v_{1}^{\gamma }\leq (cK)^{\gamma }+C(q, \alpha,d)A^{q} \gamma \int _{cK}^{\infty }r^{\gamma -1-q}\,dr \leq (cK)^{\gamma }+C(q, \alpha,d)A^{q}\gamma K^{\gamma -q}, \end{aligned}$$ which implies that $$\begin{aligned} {\mathbb{E}} v_{1}^{\gamma }\leq C(q,\alpha,d, \gamma)A^{\gamma } \end{aligned}$$ if one chooses \(K=A\). Let us calculate the Hölder semi-norm of u. For a solution of (1.1) and for every \(\omega \notin {\mathcal{A}}\), inequality (2.17) holds for \(\kappa _{n}=K\tau ^{n}\).
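Before invoking Lemma 2.5, it may help to see heuristically why increments of size \(K\tau ^{n}\) along the dyadic scales \(2^{-n}\) produce a Hölder-type modulus; the following is only a sketch with non-optimized constants, the rigorous statement being (2.18). For \(\Delta \) with \(2^{-(n+1)}\leq \vert \Delta \vert <2^{-n}\), join the level-n dyadic approximations of the two points by a single neighbor step (the first term below) and telescope along the two dyadic tails (the sum):
$$\begin{aligned} \bigl\vert u\bigl((t,x)+\Delta \bigr)-u(t,x) \bigr\vert \leq K\tau ^{n}+2\sum _{k\geq n}K\tau ^{k+1} =\frac{1+\tau }{1-\tau }K\tau ^{n} \leq \frac{1+\tau }{1-\tau }K \bigl(2 \vert \Delta \vert \bigr)^{\log _{2}(1/\tau)}, \end{aligned}$$
since \(\tau ^{n}=(2^{-n})^{\log _{2}(1/\tau)}\) and \(2^{-n}\leq 2\vert \Delta \vert \). The precise constant \(2K\tau ^{-1}\) below comes from Lemma 2.5.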
With the help of Lemma 2.5 (2.18), one has $$\begin{aligned} \bigl\vert u\bigl((t,x)+\Delta\bigr)-u(t,x) \bigr\vert \leq 2K\tau ^{-1} \vert \Delta \vert ^{\log _{2}(1/\tau)} \end{aligned}$$ for \((t,x),(t,x)+\Delta \in (0,1)^{d+1}\). Let \(\beta \) be as described in (3.10). For any \(0<\theta <\beta \), taking \(\tau =2^{-\theta }\), we arrive at $$\begin{aligned} \bigl\vert u\bigl((t,x)+\Delta\bigr)-u(t,x) \bigr\vert \leq 4K \vert \Delta \vert ^{\theta }, \end{aligned}$$ and hence $$\begin{aligned} {\mathbb{P}}\bigl([u]_{{\mathcal{C}}^{\theta }([0,1]^{d+1})}\geq 4K\bigr)\leq {\mathbb{P}}({\mathcal{A}})\leq C(q,\alpha,d,\theta)A^{q}K^{-q}. \end{aligned}$$ Finally, for any \(0<\gamma <q\), calculations analogous to (3.14) and (3.15) imply that $$\begin{aligned} {\mathbb{E}} \Vert u \Vert ^{\gamma }_{{\mathcal{C}}^{\theta }([0,1]^{d+1})} \leq C(q,\alpha,d,\gamma,\theta)A^{\gamma }. \end{aligned}$$ From (3.15) and (3.16), and observing that \(m\in {\mathbb{N}}\) is arbitrary, the desired conclusion holds true. □
Acknowledgements: The authors sincerely thank the referees and the editors for their helpful comments and suggestions. The first author is partially supported by the Fundamental Research Funds for the Central Universities (WUT: 193114001). The third author is partially supported by the National Science Foundation of China (11501577).
Author information: Rongrong Tian, Department of Statistics, College of Science, Wuhan University of Technology, Wuhan, China. Liang Ding, School of Data Science and Information Engineering, Guizhou Minzu University, Guizhou, China. Jinlong Wei and Suyi Zheng, School of Statistics and Mathematics, Zhongnan University of Economics and Law, Wuhan, China. All authors carried out the proofs and conceived the study. All authors read and approved the final manuscript. Correspondence to Jinlong Wei.
Citation: Tian, R., Ding, L., Wei, J. et al. Hölder estimates of mild solutions for nonlocal SPDEs. Adv Differ Equ 2019, 159 (2019). https://doi.org/10.1186/s13662-019-2097-1
Mathematics Subject Classification: 60H15; 35R60. Keywords: Stochastic partial differential equation; Hölder estimates.
Long-time behavior of a fully discrete Lagrangian scheme for a family of fourth order equations
Horst Osberger, Zentrum Mathematik, TU München, Boltzmannstr. 3, D-85748 Garching, Germany
Discrete & Continuous Dynamical Systems - A, January 2017, 37(1): 405-434. doi: 10.3934/dcds.2017017
Received January 2015 Revised August 2016 Published November 2016
Fund Project: This research was supported by the DFG Collaborative Research Center TRR 109, "Discretization in Geometry and Dynamics".
A fully discrete Lagrangian scheme for solving a family of fourth order equations numerically is presented. The discretization is based on the equations' underlying gradient flow structure with respect to the Wasserstein metric, and by construction preserves many of their most important structural properties, like conservation of mass and entropy-dissipation. In this paper, the long-time behavior of our discretization is analysed: We show that discrete solutions decay exponentially to equilibrium at the same rate as smooth solutions of the original problem. Moreover, we give a proof of convergence of discrete entropy minimizers towards Barenblatt-profiles or Gaussians, respectively, using $Γ$-convergence.
Keywords: Gradient flows, Wasserstein, numerical scheme, thin-film equation, quantum drift diffusion equation, $Γ$-convergence.
Mathematics Subject Classification: Primary: 65M12, 35G31, 35A1.
Citation: Horst Osberger. Long-time behavior of a fully discrete Lagrangian scheme for a family of fourth order equations. Discrete & Continuous Dynamical Systems - A, 2017, 37 (1) : 405-434. doi: 10.3934/dcds.2017017
Figure 1. Left: Numerically observed decay of $H_{\alpha,\lambda}(t)-{\mathbf{H}_{\alpha ,\lambda }^{\min }}$ and $F_{\alpha,\lambda}(t)-{\mathbf{F}_{\alpha ,\lambda }^{\min }}$ along a time period of $t\in[0,0.8]$, using $K=25,50,100,200$, in comparison to the upper bounds $({\mathcal{H}_{\alpha ,\lambda }}(u^0)-{\mathcal{H}_{\alpha ,\lambda }}({{\text{b}}_{\alpha ,\lambda }}))\exp(-2\lambda t)$ and $({\mathcal{F}_{\alpha ,\lambda }}(u^0)-{\mathcal{F}_{\alpha ,\lambda }}({{\text{b}}_{\alpha ,\lambda }}))\exp(-2\lambda t)$, respectively. Right: Convergence of discrete minimizers $u_{\delta }^{\min }$ with a rate of $K^{-1.5}$.
Figure 2. Evolution of a discrete solution $u_\Delta$, evaluated at different times $t = 0,0.05,0.1,0.15,0.175,0.25$ (from top left to bottom right). The red line is the corresponding Barenblatt-profile ${{\text{b}}_{\alpha ,\lambda }}$.
Figure 3. Snapshots of the densities $\text{b}_{\alpha ,0}^{*}(t,\cdot)$ (red lines) and $u_\Delta$ (black lines) for the initial condition $\text{b}_{\alpha ,0}^{*}(0,\cdot)$ at times $t=0$ and $t= 0.1\cdot 10^{i}$, $i=0,\ldots,3$, using $K=50$ grid points and the time step size $\tau=10^{-3}$.
Characterization and Mapping of the Bovine FBP1 Gene Guo, H.;Liu, W-S.;Takasuga, A.;Eyer, K.;Landrito, E.;Xu, Shang-zhong;Gao, X.;Ren, H-Y. 1319 https://doi.org/10.5713/ajas.2007.1319 Fructose-1,6-bisphosphatase (FBP1) is a key regulatory enzyme of gluconeogenesis that catalyzes the hydrolysis of fructose-1,6-bisphosphate to generate fructose-6-phosphate and inorganic phosphate. Deficiency of fructose-1,6-bisphosphatase is associated with fasting hypoglycemia and metabolic acidosis. The enzyme has been shown to occur in bacteria, fungi, plants and animals. The bovine FBP1 gene was cloned and characterized in this study. The full length (1,241 bp) FBP1 mRNA contained an open reading frame (ORF) encoding a protein of 338 amino acids, a 63 bp 5' untranslated region (UTR) and a 131 bp 3' UTR. The bovine FBP1 gene was 89%, 85%, 82%, 82% and 74% identical to the orthologs of pig, human, mouse, rat and zebrafish at the mRNA level, and 97%, 96%, 94%, 93% and 91% identical at the protein level, respectively. This gene was broadly expressed in cattle with the highest level in testis, and the lowest level in heart. An intronic single nucleotide polymorphism (SNP) (A/G) was identified in the $5^{th}$ intron of the bovine FBP1 gene. Genotyping of 133 animals from four beef breeds revealed that the average frequency of allele A (A-base) was 0.7897 (0.7069-0.9107), while that of allele B (G-base) was 0.2103 (0.0893-0.2931). Our preliminary association study indicated that this SNP is significantly associated with the traits Average Daily Feed Intake (ADFI) and Carcass Length (CL) (p<0.01). In addition, the FBP1 gene was assigned to BTA8 by a radiation hybrid (RH) mapping method. Polymorphism Identification, RH Mapping and Association of ${\alpha}$-Lactalbumin Gene with Milk Performance Traits in Chinese Holstein Zhang, Jian;Sun, Dongxiao;Womack, J.E.;Zhang, Yi;Wang, Yachun;Zhang, Yuan 1327 Lactose synthase catalyses the formation of lactose, which is the major osmole of bovine milk and regulates the milk volume. Alpha-lactalbumin (${\alpha}$-LA) is involved in the synthesis of lactose synthase in the mammary gland. Therefore ${\alpha}$-LA is regarded as a plausible candidate gene for the milk yield trait. To determine whether ${\alpha}$-LA is associated with milk performance traits, 1,028 Chinese Holstein cows were used to detect polymorphisms in ${\alpha}$-LA by means of single-strand conformation polymorphism (SSCP). Two nucleotide transitions were identified in the 5' flanking region and intron 3 of ${\alpha}$-LA. Associations of such polymorphisms with five milk performance traits were analyzed using a general linear model procedure. No significant associations were observed between these polymorphisms and the five milk performance traits (p>0.05). RH mapping placed ${\alpha}$-LA on BTA5q21, linked most closely to markers U63110, CC537786 and L10347 (LOD>8.3), which is far from the region of the quantitative trait locus (QTL) on bovine chromosome 5 associated with variation in the milk yield trait. In summary, based on our findings, we eliminated these SNPs from having an effect on milk performance traits. Identification of Functional and In silico Positional Differentially Expressed Genes in the Livers of High- and Low-marbled Hanwoo Steers Lee, Seung-Hwan;Park, Eung-Woo;Cho, Yong-Min;Yoon, Duhak;Park, Jun-Hyung;Hong, Seong-Koo;Im, Seok-Ki;Thompson, J.M.;Oh, Sung-Jong 1334 This study identified hepatic differentially expressed genes (DEGs) affecting the marbling of muscle.
Most dietary nutrients bypass the liver and produce plasma lipoproteins. These plasma lipoproteins transport free fatty acids to the target tissues, adipose tissue and muscle. We examined hepatic genes differentially expressed in a differential-display reverse transcription-polymerase chain reaction (ddRT-PCR) analysis comparing high- and low-marbled Hanwoo steers. Using 60 arbitrary primers, we found 13 candidate genes that were upregulated and five candidate genes that were downregulated in the livers of high-marbled Hanwoo steers compared to low-marbled individuals. A BLAST search for the 18 DEGs revealed that 14 were well characterized, while four were not annotated. We examined four DEGs: ATP synthase F0, complement component CD, insulin-like growth factor binding protein-3 (IGFBP3) and phosphatidylethanolamine binding protein (PEBP). Of these, only two genes (complement component CD and IGFBP3) were differentially expressed at p<0.05 between the livers of high- and low-marbled individuals. The mean mRNA levels of the PEBP and ATP synthase F0 genes did not differ significantly between the livers of high- and low-marbled individuals. Moreover, these DEGs showed very high inter-individual variation in expression. These informative DEGs were assigned to the bovine chromosomes in a BLAST search of MS marker subsets and the bovine genome sequence. Genes related to energy metabolism (ATP synthase F0, ketohexokinase, electron-transfer flavoprotein-ubiquinone oxidoreductase and NADH hydrogenase) were assigned to BTA 1, 11, 17, and 22, respectively. Syntaxin, IGFBP3, decorin, the bax inhibitor gene and the PEBP gene were assigned to BTA 3, 4, 5, 5, and 17, respectively. In this study, the in silico physical maps provided information on the specific location of candidate genes associated with economic traits in cattle. DdeI Polymorphism in Coding Region of Goat POU1F1 Gene and Its Association with Production Traits Lan, X.Y.;Pan, C.Y.;Chen, H.;Lei, C.Z.;Hua, L.S.;Yang, X.B.;Qiu, G.Y.;Zhang, R.F.;Lun, Y.Z. 1342 POU1F1 is a positive regulator of GH, PRL and TSH${\beta}$, and its mutations are associated with production traits in ruminant animals. We described a DdeI PCR-RFLP method for detecting a silent mutation in the goat POU1F1 gene: TCT (241Ser)>TCG (241Ser). Frequencies of the $D_1$ allele varied from 0.600 to 1.000 in 801 Chinese goats. Significant associations of the DdeI polymorphism with production traits were found for milk yield (p<0.05), litter size (p<0.05) and one-year-old weight (p<0.05) between different genotypes. Individuals with genotype $D_1D_1$ showed superior performance when compared to those with genotype $D_1D_2$ (p<0.05). Hence, the POU1F1 gene was suggested to be a potential candidate gene for superior milk performance, reproduction and weight traits. Genotype $D_1D_1$, characterized by DdeI PCR-RFLP detection, was recommended to geneticists and breeders as a molecular marker for better performance in the goat industry. Expression Characterization, Polymorphism and Chromosomal Location of the Porcine Calsarcin-3 Gene Wang, Heng;Yang, Shulin;Tang, Zhonglin;Mu, Yulian;Cui, Wentao;Li, Kui 1349 Calcineurin is a calmodulin-dependent protein that functions as a regulator of muscle cell growth and function. Agents capable of interacting with calcineurin could have important applications in muscle disease treatment as well as in the improvement of livestock production.
Calsarcins comprise a family of muscle-specific calcineurin binding proteins which play an important role in modulating the function of calcineurin in muscle cells. Recently, we described the first two members of the calsarcin family (calsarcin-1 and calsarcin-2) in the pig. Here, we characterized the third member of the calsarcin family, calsarcin-3, which is also expressed specifically in skeletal muscle. However, unlike calsarcin-1 and calsarcin-2, the calsarcin-3 mRNA expression in skeletal muscle kept rising throughout the prenatal and postnatal development periods. In addition, radiation hybrid mapping indicated that porcine calsarcin-3 mapped to the distal end of the q arm of pig chromosome 2 (SSC2). A C/T single nucleotide polymorphism site in exon 5 was genotyped using the denaturing high performance liquid chromatography (DHPLC) method and the allele frequencies at this locus were significantly different among breeds. Somatic Cell Nuclear Transfer of Oocytes Aspirated from Postovulatory Ovarian Follicles of Superovulated Rabbits Shang, Jiang-Hua;Xu, Ru-Xiang;Jiang, Xiao-Dan;Zou, Yu-Xi;Qin, Ling-Sha;Cai, Ying-Qian;Yang, Zhi-Jun;Zheng, Xing;Cui, Sheng 1354 The aim of this study was to evaluate whether oocytes, aspirated from postovulatory ovarian follicles of superovulated rabbits 14 h post-hCG administration, could be efficiently used as ooplasm recipients for somatic cell nuclear transfer (SCNT). Within a common SCNT protocol, a comparison between oocytes recovered by direct aspiration (aspirated) from available ovarian follicles and oocytes flushed out from oviducts (flushed) was carried out. The results showed that maturation and enucleation rates of aspirated oocytes were 70.7% and 69.2%, significantly lower than the 95.3% (p<0.01) and 83.6% (p<0.05), respectively, of flushed oocytes. However, following enucleation of matured oocytes as ooplasm recipients for SCNT, no difference was recorded in fusion and cleavage rates, blastocyst development from cleaved embryos, or hatching of blastocysts between the aspirated and flushed groups. Additionally, some matured aspirated and flushed oocytes were also used for immediate parthenogenetic activation and the resulting embryo development was not significantly different. Results from this study show the following: i) the majority of oocytes aspirated from postovulatory ovarian follicles of superovulated rabbits 14 h post-hCG administration are matured and can be used directly as ooplasm recipients for SCNT; ii) the reconstructed embryos derived from these oocytes have similar in vitro developmental ability to embryos derived from oocytes flushed from the oviducts. Effect of Season and Age on the Ovarian Size and Activity of One-Humped Camel (Camelus dromedarius) Ali, Shujait;Ahmad, Nazir;Akhtar, Nafees;Rahman, Zia-ur;Sarwar, M. 1361 In this project, ovarian size and activity during the peak (November-April) and the low (May-October) breeding seasons in young and adult camels were studied. Ovaries of 92 camels (Camelus dromedarius), with clinically normal reproductive tracts, aged 3-15 years and slaughtered at Faisalabad or Lahore abattoirs over a period of 24 months, were collected. Jugular blood was collected from each animal before slaughter; the serum was separated and analyzed for oestradiol concentration. The size (length, width and thickness) and weight of each ovary were measured. Grossly observable Graafian follicles were counted and their diameter was measured using Vernier calipers.
Camels whose ovaries presented follicles of more than 5 mm in diameter were considered to have active ovaries. The results showed that ovarian length, width and weight were significantly higher (p<0.05) during the peak than the low breeding season. The percentage of active ovaries was also significantly higher (p<0.01) during the peak than the low breeding season. However, the effect of season on ovarian thickness was non-significant. Similarly, the ovarian length, width, thickness, weight and activity did not vary significantly between young (3-7 years old) and adult (8-15 years old) animals. Serum oestradiol concentrations were significantly higher (p<0.05) during the peak ($67.70{\pm}1.36$ pg/ml) than the low breeding season ($15.25{\pm}1.54$ pg/ml). It was concluded that in Pakistani camels ovarian size and activity were higher during the peak than the low breeding season. However, age of the camel (from 3 to 15 years) had no effect on these parameters. The Effects of Feeding Acacia saligna on Feed Intake, Nitrogen Balance and Rumen Metabolism in Sheep Krebs, G.L.;Howard, D.M.;Dods, K. 1367 The aim of this study was to determine the feeding value to sheep of Acacia saligna grown under temperate conditions. Pen trials were undertaken to determine the effects of feeding A. saligna, which had been grown in a Mediterranean environment, on feed intake, nitrogen balance and rumen metabolism in sheep. Sheep were given ad libitum access to A. saligna with or without supplementation with PEG 4,000 or PEG 6,000. PEG 4000 appears to be the major detannification agent used in trials involving high tannin feed despite the fact that PEG 6000 has been shown to be more effective, in vitro. For this reason it was of interest to compare the two, in vivo. Dry matter intake was greater (p<0.05) in sheep supplemented with either PEG 4,000 or PEG 6,000 compared to the control. There was no difference, however, in intake between those supplemented with either PEG 4,000 or 6,000. Although animals were not weighed throughout the trial, a loss in body condition was obvious, in particular in the control group. Intake of N was greater (p<0.05) in sheep supplemented with either PEG 4,000 or PEG 6,000 than in the control. There was no difference in N intake between those supplemented with either PEG 4,000 or PEG 6,000. There were no significant differences in either the faecal or urinary N output between any of the treatment groups and all treatment groups were in negative N balance. Neither the average nor the maximum pH of ruminal fluid of the control group differed from that of sheep supplemented with PEG. The minimum pH for the control group, however, was significantly higher (p<0.05) than for either of the PEG treatments. The average and the maximum ammonia levels were lower (p<0.05) in the control group compared with those in either of the PEG treatment groups. For all dietary treatments ruminal ammonia levels were well below the threshold for maximal microbial growth. Feeding A. saligna, without PEG, had a definite defaunating effect on the rumen. It was concluded that A. saligna was inadequate as the sole source of nutrients for sheep, even with the addition of PEG 4,000 or PEG 6,000. The anti-nutritional effects on the animals were largely attributed to the excessive biological activity of the phenolics in the A. saligna leaves.
There is a need to determine other supplements that may be complementary to PEG in enhancing the nutritive value of A. saligna, so that at least the maintenance requirements of the animals can be met. Influence of Supplementing Dairy Cows Grazing on Pasture with Feeds Rich in Linoleic Acid on Milk Fat Conjugated Linoleic Acid (CLA) Content Khanal, R.C.;Dhiman, T.R.;Boman, R.L.;McMahon, D.J. 1374 Three experiments were conducted to investigate the hypothesis that cows grazing on pasture produce the highest proportion of c-9 t-11 CLA in milk fat and that no further increase can be achieved through supplementation of diets rich in linoleic acid, such as full-fat extruded soybeans or soybean oil. In experiment 1, 18 lactating Holstein cows were used in a randomized complete block design with measurements made from wk 4 to 6 of the experiment. In experiment 2, three cannulated lactating Holstein cows were used in a $3{\times}3$ Latin square design. Each period was 4 wk with measurements made in the final wk of each period. Cows in both experiments were assigned at random to treatments: a, conventional total mixed ration (TMR); b, pasture (PS); or c, PS supplemented with 2.5 kg/cow per day of full-fat extruded soybeans (PES). In both experiments, feed intake, milk yield, milk composition, and fatty acid profile of milk and blood serum were measured, along with fatty acid composition of bacteria harvested from rumen digesta in experiment 2. In experiment 3, 10 cows which had continuously grazed a pasture for six weeks were assigned to two groups, with one group (n = 5) on pasture diet alone (PS) and the other group (n = 5) supplemented with 452 g of soy oil/cow per day for 7 d (OIL). In experiment 1, cows in PS treatment produced 350% more c-9, t-11 CLA compared with cows in TMR treatment (1.70 vs. 0.5% of fat), with no further increase for cows in PES treatment (1.50% of fat). Serum c-9, t-11 CLA increased by 233% in PS treatment compared with TMR treatment (0.21 vs. 0.09% of fat) with no further increase for cows in PES treatment (0.18% of fat). In experiment 2, cows in PS treatment produced 300% more c-9 t-11 CLA in their milk fat compared with cows in TMR treatment (1.77 vs. 0.59% of fat), but no further increase for cows in PES treatment (1.84% of fat) was observed. Serum c-9, t-11 CLA increased by 250% for cows in PS treatment compared with cows in TMR treatment (0.27 vs. 0.11% of fat), with no further increase for cows in PES treatment (0.31% of fat). The c-9, t-11 CLA content of ruminal bacteria for cows in PS treatment was 200% or more of TMR treatment, but no further increase in bacterial c-9, t-11 CLA for cows in PES treatment was observed. Supplementation of soy oil in experiment 3 also did not increase the c-9 t-11 CLA content of milk fat compared with cows fed a full pasture diet (1.60 vs. 1.54% of fat). Based on these findings, it was concluded that supplementing cows grazing perennial ryegrass pasture with feeds rich in linoleic acid, such as full-fat extruded soybeans or an equivalent amount of soy oil, may not increase milk fat c-9 t-11 CLA contents. A Comparative Study on the Effect of Cassava Hay Supplementation in Swamp Buffaloes (Bubalus bubalis) and Cattle (Bos indicus) Granum, G.;Wanapat, Metha;Pakdee, P.;Wachirapakorn, C.;Toburan, W.
1389 Twelve swamp buffaloes and Brahman cattle heifers (6 animals each) were randomly assigned to two treatments, control (grazing only) and supplementation of cassava hay (CH) at 1-kg dry matter per head per day (DM/hd/d), in a $2{\times}2$ factorial arrangement according to a cross-over design. The cassava hay contained a high level of protein (19.5% of DM) and a strategic amount of condensed tannins (4.0% of DM). The results revealed that supplementation of CH at 1-kg DM/hd/d significantly (p<0.05) improved the nutrition of both swamp buffaloes and Brahman cattle in terms of DM, organic matter (OM), protein and energy intake and digestibility, ruminal NH3-N and rumen ecology. Supplementation significantly (p<0.05) reduced weight losses in both species and improved the health, in terms of reduced number of parasite eggs in feces (p<0.05), of both buffaloes and cattle. There tended to be a difference in terms of response to CH between the two species. The DM, OM, protein intake and digestibility and total digestible energy intake tended to be higher for buffaloes as compared to cattle. Moreover, the percentage reduction of parasite eggs tended to be higher for buffaloes as compared to cattle (57.6 and 45.0%, respectively). However, there were no significant interactions between species and treatments. Effects of Tween 80 Pretreatment on Dry Matter Disappearance of Rice Straw and Cellulolytic Bacterial Adhesion Lee, Chan Hee;Sung, Ha Guyn;Eslami, Moosa;Lee, Se Young;Song, Jae Y.;Lee, Sung Sill;Ha, Jong K. 1397 An in situ experiment was conducted to find out whether Tween 80 improves rice straw digestion through increased adhesion of major fibrolytic bacteria. Rice straw was sprayed with various levels of Tween 80 non-ionic surfactant or SDS ionic surfactant 24 h before incubation in the rumen of Holstein steers. Dry matter (DM) disappearance was measured, and adhesion of F. succinogenes, R. flavefaciens and R. albus on rice straw after in situ incubation was quantified by real-time PCR. Application of Tween 80 increased DM disappearance, which was more noticeable at an application level of 1% compared to lower application levels. Application of SDS resulted in an opposite response in DM disappearance, with the highest reduction at the 1% level. In a subsequent in situ experiment, higher levels of Tween 80 were applied to rice straw in an attempt to find the optimum application level. Tween 80 at 2.5% gave better DM disappearance than 1% with a similar result at 5%. Therefore, an adhesion study was carried out using rice straw treated with 2.5% Tween 80. Our results indicated that Tween 80 reduced adhesion of all three major rumen fibrolytic bacteria to rice straw. Present data clearly show that improved DM disappearance by Tween 80 is not due to increased bacterial adhesion onto substrates. Feeding Behavior of Pregnant Dairy Heifers during Last Trimester under Loose Housing System Das, Kalyan Sundar;Das, N. 1402 Thirty pregnant heifers (Jersey, Holstein Friesian and Hariana) were divided into three groups (10 animals/group) according to their stage of pregnancy viz. seven-month (181-210 days) pregnancy (SMP), eight-month (211-240 days) pregnancy (EMP) and nine-month (241-280 days) pregnancy (NMP) groups. Time spent in various feeding activities (eating fodder, eating concentrate, standing rumination, sitting rumination and drinking) by each animal in the three pregnant groups was recorded in four different sessions (each session of 24 h per week).
The time spent eating concentrate, eating fodder, standing rumination, sitting rumination and drinking was 61.4, 271.3, 84.6, 367.6 and 10.6 min/day, respectively, in the SMP group; 52.7, 289.5, 103.3, 345.8 and 9.2 min/day, respectively, in the EMP group; and 65.0, 277.7, 138.1, 291.0 and 9.8 min/day, respectively, in the NMP group. The animals in the EMP group spent significantly (p<0.01) more time on eating fodder and concentrate compared to the animals in SMP and NMP groups. The pregnant heifers preferred rumination in standing posture in comparison to sitting posture. The time spent on standing rumination was significantly higher in the NMP group whereas the time spent on sitting rumination was significantly lower in this group. Except for the sitting rumination activity, all the other activities were predominant in daytime compared to night time; the diurnal variation was significant (p<0.01) for all the activities. Effects of Long Term Exogenous Bovine Somatotropin on Nutrients Uptake by the Mammary Gland of Crossbred Holstein Cattle in the Tropics Chaiyabutr, Narongsak;Thammacharoen, S.;Komolvanich, S.;Chanpongsang, S. 1407 Ten first-lactation, 87.5% HF dairy cattle were used to investigate effects of long-term administration of recombinant bovine somatotropin (rbST) on nutrient uptake by the mammary gland at different stages of lactation. Measurements of arterial plasma concentrations and arterial-venous differences of metabolites across the mammary gland were performed in combination with measurement of mammary blood flow to estimate the mammary uptake. Animals in experimental groups were injected subcutaneously every 14 days from day 60 of lactation with a prolonged-release formulation of 500 mg of rbST (POSILAC, Monsanto, USA) or with sterile sesame oil without rbST in the control group. During early lactation, the milk yield of rbST-treated animals was higher than that of the control animals (p<0.05). The peak milk yield in both groups of animals declined from the early period of lactation with progression to mid- and late-lactation. No significant changes were observed in the concentration of milk lactose, while the concentrations of milk protein significantly increased as lactation advanced to mid- and late-lactation in both groups. Milk fat concentrations were significantly higher in rbST-treated animals than in control animals, particularly in early lactation (p<0.05). Mammary blood flow (MBF) markedly increased during rbST administration and was maintained at a high level throughout lactation. The mean arterial plasma concentrations for glucose and acetate of rbST-treated animals were unchanged. The net mammary glucose uptake of rbST-treated animals increased approximately 20% during early lactation, while it significantly decreased (p<0.05), including the arteriovenous differences (A-V differences) and extraction ratio across the mammary gland, as lactation advanced to mid- and late-lactation. A-V differences, mammary extraction and mammary uptake for acetate increased during rbST administration and were significantly higher (p<0.05) than in the control animals in early and mid-lactation. Mean arterial plasma concentrations for ${\beta}$-hydroxybutyrate and free glycerol were unchanged throughout the experimental periods in both groups. A-V differences and extraction ratio of ${\beta}$-hydroxybutyrate across the mammary gland did not alter during rbST administration.
Mean arterial plasma concentrations for free fatty acids ($C_{16}$ to $C_{18}$), but not for triacylglycerol, increased in rbST-treated animals and were significantly higher than in control animals during early lactation (p<0.01). These findings suggest that an increase in MBF during rbST administration would not be a major determinant in the mediation of nutrient delivery and uptake by the mammary gland for increased milk production. Local changes in biosynthetic capacity within the mammary gland would be a factor in the utilization of substrates resulting in the rate of decline in milk yield with advancing lactation. Supplementation Effects of $C_{18:2}$ or $C_{18:3}$ Rich-oils on Formations of CLA and TVA, and Lipogenesis in Adipose Tissues of Sheep Choi, S.H.;Lim, K.W.;Lee, H.G.;Kim, Y.J.;Song, Man K. 1417 The present study was conducted to investigate the supplementation effects of $C_{18:2}$-rich soybean oil or $C_{18:3}$-rich perilla oil (7% of total diet, DM basis) for 12 weeks on plasma metabolites, fatty acid profile, in vitro lipogenesis, and activities of LPL and FAS in adipose tissue of sheep. The treatments were basal diet (Control), $C_{18:2}$-rich soybean oil supplemented diet (SO-D) and $C_{18:3}$-rich perilla oil supplemented diet (PO-D). All the sheep were fed the diets consisting of roughage to concentrate in the ratio of 40:60 (DM basis). Oil supplemented diets (SO-D and PO-D) slightly increased contents of triglyceride (TG) and total cholesterol (TC), proportions of both cis-9 trans-11 and trans-10 cis-12 CLA and TVA, but lowered (p<0.01) those of $C_{18:0}$ compared to the control diet. No differences were observed in the contents of TG and TC and proportions of fatty acids in plasma between supplemented oils. Oil supplemented diets slightly increased the proportions of cis-9 trans-11 and trans-10 cis-12 types of CLA in subcutaneous adipose tissue of sheep compared to the control diet. The rate of lipogenesis with acetate was higher (p<0.01) for intermuscular- and subcutaneous adipose tissues than that for intramuscular adipose tissue, while that with glucose did not differ among fat locations in sheep fed SO-D. No differences were observed in the rate of lipogenesis between substrates in all fat locations. The rates of lipogenesis with glucose increased only in the intermuscular- (p<0.01) and subcutaneous adipose tissue (p<0.005) compared to those with acetate. The rates of lipogenesis with acetate were the highest in the intermuscular and intramuscular adipose tissue of the sheep fed PO-D. Oil supplemented diets slightly increased the rate of lipogenesis with glucose for all fat locations. Supplementation of oils to the diet numerically increased the fatty acid synthase activity but did not affect the lipoprotein lipase activity in subcutaneous adipose tissue. Influence of Sulfur on Fresh Cassava Foliage and Cassava Hay Incubated in Rumen Fluid of Beef Cattle Promkot, C.;Wanapat, M.;Wachirapakorn, C.;Navanukraw, C. 1424 Two male, rumen-fistulated crossbred Brahman-Thai native beef cattle (body weight = $400{\pm}50$ kg), fed on rice straw as a source of roughage, were used as rumen fluid sources. The treatments were arranged as a $2{\times}3$ factorial: two roughages (fresh cassava foliage and cassava hay) and three sulfur levels (elemental sulfur) at 0.2 (control), 0.5 and 1% of DM, respectively.
The experiment revealed that the rates (c) of gas production, ammonia-nitrogen concentration, true digestibility, total concentration or molar proportions of VFA and microbial biomass were not significantly different between cassava hay and fresh cassava foliage. However, all parameters for cassava hay were higher than for fresh cassava foliage. The supplementation of 0.5% sulfur to fresh cassava foliage resulted in a significant increase in the rate of gas production, true digestibility, total concentration of VFA, microbial biomass, rate of HCN disappearance, thiocyanate appearance and the percentage conversion of cyanide into thiocyanate. However, there were no effects of sulfur supplementation of cassava hay at 0.2, 0.5 and 1%. The finding suggests that the utilization of cassava foliage by rumen microorganisms, in terms of fermentation and HCN detoxification, could be improved by sulfur supplementation of 0.5% of DM. Effect of a Copper, Selenium and Cobalt Soluble Glass Bolus Given to Grazing Yaks Liu, Zongping 1433 Two field trials were carried out to evaluate the performance of a soluble glass copper, cobalt and selenium bolus for maintaining adequate levels of the three trace elements in yaks. Forty yaks were used in trial 1 and 60 yaks were used in trial 2. In each trial two commercial soluble glass boluses were administered to half of the yaks. Blood samples were taken from the jugular vein on days 0, 30, 60 and 90 in trial 1 and on days 0, 45, 75 and 105 in trial 2. The samples were analysed for copper status (serum caeruloplasmin activity and copper concentration), cobalt status (serum vitamin $B_{12}$ concentration and cobalt concentration), selenium status (erythrocyte glutathione peroxidase activity and selenium concentration) and serum zinc concentration. The erythrocyte glutathione peroxidase activities, serum caeruloplasmin activities and serum vitamin $B_{12}$ concentrations for trials 1 and 2 were all significantly increased for the bolused yaks (p<0.001 or p<0.01) on all sampling days. The bolused yaks had a significantly higher selenium and copper status in serum than the control yaks on all sampling days in trials 1 and 2 (p<0.05 or p<0.01). There were no significant differences in zinc and cobalt concentrations between the bolused yaks and the controls. Influence of Ligustrum lucidum and Schisandra chinensis Fruits on Antioxidative Metabolism and Immunological Parameters of Layer Chicks Ma, Deying;Liu, Yuqin;Liu, Shengwang;Li, Qundao;Shan, Anshan 1438 The experiment was conducted to evaluate the effects of Ligustrum lucidum (LL) and Schisandra chinensis (SC) on the growth, antioxidative metabolism and immunity of laying strain male chicks. The results showed that diets supplemented with 1% of either LL or SC had no effects on the growth performance of chicks compared with the control. Furthermore, both LL and SC significantly reduced malondialdehyde (MDA) concentration of serum and heart of chicks (p<0.05). In addition, superoxide dismutase (SOD) activity of serum of the birds was significantly elevated by supplementation with SC (p<0.05). Glutathione reductase (GR) activity of heart and serum of the birds was significantly elevated by supplementation with LL or SC (p<0.05). LL supplementation significantly elevated antibody values against Newcastle Disease virus (NDV) (p<0.05) and lymphoblastogenesis (p<0.05) of the birds. The results suggest that diets supplemented with 1% of either LL or SC may improve immune function and antioxidant status of chicks.
Differences in Microbial Activities of Faeces from Weaned and Unweaned Pigs in Relation to In vitro Fermentation of Different Sources of Inulin-type Oligofructose and Pig Feed Ingredients Shim, S.B.;Verdonk, J.M.A.J.;Pellikaan, W.F.;Verstegen, W.A. 1444
An in vitro experiment was conducted to evaluate the differences in microbial activity of five faecal inocula from weaned pigs and one faecal inoculum from unweaned pigs in combination with 6 substrates. The substrates tested were negative control diet, corn, soybean meal, oligofructose (OF), ground chicory roots and a mixture (60% chicory pulp and 40% OF). The inocula used were derived from pigs fed either a corn-soy based diet without antibiotics (NCON), the NCON diet supplemented with oligofructose (OF), a mixture of chicory pulp (40%) and OF (60%) (MIX), ground chicory roots (CHR) or the NCON diet supplemented with antibiotics (PCON). Fermentation kinetics were characterized from the cumulative gas production measurements, and end products, such as total gas production, ammonia and volatile fatty acids, were also determined. Both the substrate and the inoculum significantly affected the fermentation characteristics. The cumulative gas production curves showed that the different substrates caused greater differences in fermentation kinetics than the different inocula. Inocula of weaned pigs gave a significantly higher VFA production compared to the inoculum from unweaned animals, whilst the rate of fermentation and the total gas produced did not differ. OF showed the highest fermentation kinetics and the lowest $NH_3$, pH and OM loss compared to other substrates. It was concluded that the microbial activity was significantly affected by substrate and inoculum. Inoculum from weaned pigs had more potential for microbial fermentation of the carbohydrate ingredients and oligofructose than that of unweaned pigs. A combination of high and low polymer inulin may be more beneficial to the gut ecosystem than using high- or low-polymer inulin alone.
Dietary Supplementation with Acanthopanax senticosus Extract Modulates Cellular and Humoral Immunity in Weaned Piglets Kong, Xiangfeng;Yin, Yulong;Wu, Guoyao;Liu, Hejun;Yin, Fugui;Li, Tiejun;Huang, Ruilin;Ruan, Zheng;Xiong, Hua;Deng, Zeyuan;Xie, Mingyong;Liao, Yiping;Kim, Sungwoo 1453
This study was conducted to test the hypothesis that dietary supplementation with an herbal extract of Acanthopanax senticosus (AS) enhances the immune response in weaned piglets. Sixty piglets weaned at 21 days of age were randomly assigned to 3 treatment groups representing the addition of 0 or 1 g/kg of the AS extract or 0.2 g/kg of colistin (an antibiotic) to maize- and soybean meal-based diets (n = 20 per group). On days 7, 14 and 28 after initiation of the addition, total and differential counts of leucocytes, proliferating activity of peripheral lymphocytes, serum levels of immunoglobulins (Ig) and cytokines and the spleen index were determined. The AS extract decreased (p<0.05) the number of neutrophils on days 7 and 28 in comparison with the control group and reduced (p<0.05) serum interleukin-$1{\beta}$ level on day 28 compared with the other 2 groups. Dietary supplementation with the AS extract increased (p<0.05) the lymphocyte/leukocyte ratio on day 28 compared with the control group and increased the proliferating activity of lymphocytes on days 14 and 28 compared with the other 2 groups.
The AS extract increased (p<0.05) the serum content of IgG on day 7 and of IgG and IgM on day 28 compared with the other 2 groups, as well as increasing the serum content of tumor necrosis factor on day 7 and the spleen index on days 7 and 28 compared with the control group. Collectively, these findings suggest that the AS extract as a dietary additive enhances the cellular and humoral immune responses of weaned piglets by modulating the production of immunocytes, cytokines and antibodies.
Use of Chemical Treatments to Reduce Tannins and Trypsin Inhibitor Contents in Salseed (Shorea robusta) Meal Mahmood, S.;Khan, Ajmal M.;Sarwar, M.;Nisa, M.;Lee, W.S.;Kim, S.B.;Hur, T.Y.;Lee, H.J.;Kim, H.S. 1462
This study investigated the effect of chemical treatments on tannins (condensed and hydrolysable) and on the trypsin inhibitor (TI) activity in salseed meal. Triplicate samples of ground salseed meal (1 kg) were mixed with 820 ml of either distilled water (pH 5.3), 0.67 M acetic acid (pH 2.4), 0.67 M sodium bicarbonate (pH 8.2) or 2% polyvinyl-pyrrolidone (PVP) solution. The material was placed in airtight plastic containers and incubated at $37^{\circ}C$ for 0, 3, 6, 12, 24, 48 and 72 h. Samples of untreated salseed meal which had not been subjected to soaking or incubation were run through the analysis to serve as a control. Addition of water, acetic acid, sodium bicarbonate and PVP solutions to salseed meal and subsequent anaerobic incubation at $37^{\circ}C$ significantly reduced chemically detectable tannins. At each incubation time, the alkali solution was more effective than its counterparts. The effect of the acidic solution on hydrolysable tannins was the least among the treatments. All the treatments reduced the TI activity of salseed meal. The reduction in TI activity by these treatments was similar and ranged between 80-84%. Treatment time effected a decrease in the contents of antinutritional substances. However, the effect of the treatment with the reagents, even for zero incubation time, was quite pronounced. It may be concluded from the present results that the treatment of salseed meal with sodium bicarbonate (0.67 M) is more effective in reducing hydrolysable and condensed tannin contents than PVP, water and acid solutions. Treatment with sodium bicarbonate solution is more economical and easier to handle than acid and PVP treatments. Incubation of the treated material for 12 h is reasonably effective, economical and safe from any mould growth.
Cholesterol Removal from Lard with Crosslinked ${\beta}$-Cyclodextrin Kim, S.H.;Kim, H.Y.;Kwak, H.S. 1468
The present study was carried out to determine the optimum conditions among different factors (ratio of lard to water, ${\beta}$-CD concentration, mixing temperature, mixing time and mixing speed) for cholesterol reduction from lard using crosslinked ${\beta}$-CD. Crosslinked ${\beta}$-CD was prepared with adipic acid. When the lard was treated under different conditions, the range of cholesterol removal was 91.2 to 93.0% with 5% crosslinked ${\beta}$-CD, which was not significantly different among treatments. In a recycling study, cholesterol removal with crosslinked ${\beta}$-CD in the first trial was 92.1%, which was similar to that with new crosslinked ${\beta}$-CD. In up to eight successive trials, over 90% cholesterol removal was found.
The present study indicated that the optimum conditions for cholesterol removal using crosslinked ${\beta}$-CD were a 1:3 ratio of lard to water, 5% crosslinked ${\beta}$-CD concentration, $40^{\circ}C$ mixing temperature, 1 h mixing time and 150 rpm mixing speed. In addition, crosslinked ${\beta}$-CD made with adipic acid showed effective recycling efficiency.
The effect of Prandtl number on turbulent sheared thermal convection Alexander Blass, Pier Tabak, Roberto Verzicco, Richard J.A.M. Stevens, Detlef Lohse Journal: Journal of Fluid Mechanics / Volume 910 / 10 March 2021 Published online by Cambridge University Press: 15 January 2021, A37 Print publication: 10 March 2021
In turbulent wall sheared thermal convection, there are three different flow regimes, depending on the relative relevance of thermal forcing and wall shear. In this paper, we report the results of direct numerical simulations of such sheared Rayleigh–Bénard convection, at fixed Rayleigh number $Ra=10^{6}$, varying the wall Reynolds number in the range $0 \leqslant Re_w \leqslant 4000$ and Prandtl number $0.22 \leqslant Pr \leqslant 4.6$, extending our prior work by Blass et al. (J. Fluid Mech., vol. 897, 2020, A22), where $Pr$ was kept constant at unity and the thermal forcing ($Ra$) varied. We cover a wide span of bulk Richardson numbers $0.014 \leqslant Ri \leqslant 100$ and show that the Prandtl number strongly influences the morphology and dynamics of the flow structures. In particular, at fixed $Ra$ and $Re_w$, a high Prandtl number causes stronger momentum transport from the walls and therefore yields a greater impact of the wall shear on the flow structures, resulting in an increased effect of $Re_w$ on the Nusselt number. Furthermore, we analyse the thermal and kinetic boundary layer thicknesses and relate their behaviour to the resulting flow regimes. For the largest shear rates and $Pr$ numbers, we observe the emergence of a Prandtl–von Kármán log layer, signalling the onset of turbulent dynamics in the boundary layer.
Modelling the flow within forests: the canopy-related terms in the Reynolds-averaged formulation J. M. Viana Parente Lopes, J. M. L. M. Palma, A. Silva Lopes Published online by Cambridge University Press: 08 January 2021, A7
The canopy-related terms in the transport equations for momentum, Reynolds stresses, turbulent kinetic energy and its dissipation rate were described by a perturbative expansion around a velocity scale based on the mean total kinetic energy. The quality of the series and the relative magnitude of the first orders were analysed through comparison with the results of large-eddy simulation of three canopy flows representative of real-life applications. The flows in question were those over a horizontally homogeneous forest, a sequence of forest stands and clearings, and a forested hill. The analysis gave both the highest order required for an accurate evaluation of the canopy effects and a mathematical formulation for the canopy-related terms in a Reynolds-averaged Navier–Stokes formulation. This offers a sounder basis and assured consistency for the turbulence modelling of canopy flows between Reynolds-averaged Navier–Stokes and large-eddy simulation frameworks.
Flow organisation in laterally unconfined Rayleigh–Bénard turbulence Alexander Blass, Roberto Verzicco, Detlef Lohse, Richard J. A. M.
Stevens, Dominik Krug Journal: Journal of Fluid Mechanics / Volume 906 / 10 January 2021 Published online by Cambridge University Press: 16 November 2020, A26 Print publication: 10 January 2021 We investigate the large-scale circulation (LSC) of turbulent Rayleigh–Bénard convection in a large box of aspect ratio $\varGamma =32$ for Rayleigh numbers up to $Ra=10^9$ and at a fixed Prandtl number $Pr=1$. A conditional averaging technique allows us to extract statistics of the LSC even though the number and the orientation of the structures vary throughout the domain. We find that various properties of the LSC obtained here, such as the wall-shear stress distribution, the boundary layer thicknesses and the wind Reynolds number, do not differ significantly from results in confined domains ( $\varGamma \approx 1$). This is remarkable given that the size of the structures (as measured by the width of a single convection roll) more than doubles at the highest $Ra$ as the confinement is removed. An extrapolation towards the critical shear Reynolds number of $Re_s^{{crit}} \approx 420$, at which the boundary layer (BL) typically becomes turbulent, predicts that the transition to the ultimate regime is expected at $Ra_{{crit}} \approx {O}(10^{15})$ in unconfined geometries. This result is in line with the Göttingen experimental observations (He et al., Phys. Rev. Lett., vol. 108, 2012, 024502; New J. Phys., vol. 17, 2015, 063028). Furthermore, we confirm that the local heat transport close to the wall is highest in the plume impacting region, where the thermal BL is thinnest, and lowest in the plume emitting region, where the thermal BL is thickest. This trend, however, weakens with increasing $Ra$. Reconstruction of turbulent flow fields from lidar measurements using large-eddy simulation Pieter Bauweraerts, Johan Meyers We investigate the reconstruction of a turbulent flow field in the atmospheric boundary layer from a time series of lidar measurements, using large-eddy simulations (LES) and a four-dimensional variational data assimilation algorithm. This leads to an optimisation problem in which the error between measurements and simulations is minimised over an observation time horizon. We also consider reconstruction based on a Taylor's frozen turbulence (TFT) model as a point of comparison. To evaluate the approach, we construct a series of virtual lidar measurements from a fine-grid LES of a pressure-driven boundary layer. The reconstruction uses LES on a coarser mesh and smaller domain, and results are compared to the fine-grid reference. Two lidar scanning modes are considered: a classical plan-position-indicator mode, which swipes the lidar beam in a horizontal plane, and a three-dimensional pattern that is based on a Lissajous curve. We find that normalised errors lie between $15\,\%$ and $25\,\%$ (error variance normalised by background variance) in the scanning region, and increase to $100\,\%$ over a distance that is comparable to the correlation length scale outside this scanning region. Moreover, LES outperforms TFT by 30 %–70 % depending on scanning mode and location. Direct numerical simulation of stratified Ekman layers over a periodic rough surface Sungwon Lee, S. M. Iman Gohari, Sutanu Sarkar Journal: Journal of Fluid Mechanics / Volume 902 / 10 November 2020 Published online by Cambridge University Press: 10 September 2020, A25 Ekman layers over a rough surface are studied using direct numerical simulation. 
The roughness takes the form of periodic two-dimensional bumps whose non-dimensional amplitude is fixed at a small value ( $h^+=15$ ) and whose mean slope is gentle. The neutral Ekman layer is subjected to a stabilizing cooling flux for approximately one inertial period ( $2{\rm \pi} /f$ ) to impose the stratification. The Ekman boundary layer is in a transitionally rough regime and, without stratification, the effect of roughness is found to be mild in contrast to the stratified case. Roughness, whose effect increases with the slope of the bumps, changes the boundary layer qualitatively from the very stable (Mahrt, Theor. Comput. Fluid Dyn., vol. 11, issue 3–4, 1998, pp. 263–279) regime, which has a strong thermal inversion and a pronounced low-level jet, in the flat case to the stable regime, which has a weaker thermal inversion and stronger surface-layer turbulence, in the rough cases. The flat case exhibits initial collapse of turbulence which eventually recovers, albeit with inertial oscillations in turbulent kinetic energy. The roughness elements interrupt the initial collapse of turbulence. In the quasi-steady state, the thickness of the turbulent stress profiles and of the near-surface region with subcritical gradient Richardson number increase in the rough cases. Analysis of the turbulent kinetic energy (TKE) budget shows that, in the surface layer, roughness counteracts the stability-induced reduction of TKE production. The flow component, coherent with the surface undulations, is extracted by a triple decomposition, and leads to a dispersive component of near-surface turbulent fluxes. The significance of the dispersive component increases in the stratified cases. Two-dimensional partially ionized magnetohydrodynamic turbulence Santiago J. Benavides, Glenn R. Flierl Journal: Journal of Fluid Mechanics / Volume 900 / 10 October 2020 Published online by Cambridge University Press: 11 August 2020, A28 Print publication: 10 October 2020 Ionization occurs in the upper atmospheres of hot Jupiters and in the interiors of gas giant planets, leading to magnetohydrodynamic (MHD) effects that couple the momentum and the magnetic field, thereby significantly altering the dynamics. In regions of moderate temperatures, the gas is only partially ionized, which also leads to interactions with neutral molecules. To explore the turbulent dynamics of these regions, we utilize partially ionized magnetohydrodynamics (PIMHD), a two-fluid model – one neutral and one ionized – coupled by a collision term proportional to the difference in velocities. Motivated by planetary settings where rotation constrains the large-scale motions to be mostly two-dimensional, we perform a suite of simulations to examine the parameter space of two-dimensional PIMHD turbulence and pay particular attention to collisions and their role in the dynamics, dissipation and energy exchange between the two species. We arrive at, and numerically confirm, an expression for the energy loss due to collisions in both the weakly and strongly collisional limits, and show that, in the latter limit, the neutral fluid couples to the ions and behaves as an MHD fluid. Finally, we discuss some implications of our findings to current understanding of gas giant planet atmospheres. Plume or bubble? 
Mixed-convection flow regimes and city-scale circulations Hamidreza Omidvar, Elie Bou-Zeid, Qi Li, Juan-Pedro Mellado, Petra Klein Journal: Journal of Fluid Mechanics / Volume 897 / 25 August 2020 Published online by Cambridge University Press: 09 June 2020, A5 Large-scale circulations around a city are co-modulated by the urban heat island and by regional wind patterns. Depending on these variables, the circulations fall into different regimes ranging from advection-dominated (plume regime) to convection-driven (bubble regime). Using dimensional analysis and large-eddy simulations, this study investigates how these different circulations scale with urban and rural heat fluxes, as well as upstream wind speed. Two dimensionless parameters are shown to control the dynamics of the flow: (1) the ratio of rural to urban thermal convective velocities that contrasts their respective buoyancy fluxes and (2) the ratio of bulk inflow velocity to the convection velocity in the rural area. Finally, the vertical flow velocities transecting the rural to urban transitions are used to develop a criterion for categorizing different large-scale circulations into plume, bubble or transitional regimes. The findings have implications for city ventilation since bubble regimes are expected to trap pollutants, as well as for scaling analysis in canonical mixed-convection flows. Experimental evidence of a phase transition in the multifractal spectra of turbulent temperature fluctuations at a forest canopy top S. Dupont, F. Argoul, E. Gerasimova-Chechkina, M. R. Irvine, A. Arneodo Published online by Cambridge University Press: 01 June 2020, A15 Ramp–cliff patterns visible in scalar turbulent time series have long been suspected to enhance the fine-scale intermittency of scalar fluctuations compared to longitudinal velocity fluctuations. Here, we use the wavelet transform modulus maxima method to perform a multifractal analysis of air temperature time series collected at a pine forest canopy top for different atmospheric stability regimes. We show that the multifractal spectra exhibit a phase transition as the signature of the presence of strong singularities corresponding to sharp temperature drops (respectively jumps) bordering the so-called ramp (respectively inverted ramp) cliff patterns commonly observed in unstable (respectively stable) atmospheric conditions and previously suspected to contaminate and possibly enhance the internal intermittency of (scalar) temperature fluctuations. Under unstable (respectively stable) atmospheric conditions, these 'cliff' singularities are indeed found to be hierarchically distributed on a 'Cantor-like' set surrounded by singularities of weaker strength typical of intermittent temperature fluctuations observed in homogeneous and isotropic turbulence. Under near-neutral conditions, no such a phase transition is observed in the temperature multifractal spectra, which is a strong indication that the statistical contribution of the 'cliffs' is not important enough to account for the stronger intermittency of temperature fluctuations when compared to corresponding longitudinal velocity fluctuations. Two-scale momentum theory for time-dependent modelling of large wind farms Takafumi Nishino, Thomas D. 
Dunstan Journal: Journal of Fluid Mechanics / Volume 894 / 10 July 2020 Published online by Cambridge University Press: 28 April 2020, A2 Print publication: 10 July 2020
This paper presents a theory based on the law of momentum conservation to define and help analyse the problem of large wind farm aerodynamics. The theory splits the problem into two subproblems; namely an 'external' (or farm-scale) problem, which is a time-dependent problem considering large-scale motions of the atmospheric boundary layer (ABL) to assess the amount of momentum available to the bottom resistance of the ABL at a certain time; and an 'internal' (or turbine-scale) problem, which is a quasi-steady (in terms of large-scale motions of the ABL) problem describing the breakdown of the bottom resistance of the ABL into wind turbine drag and land/sea surface friction. The two subproblems are coupled to each other through a non-dimensional parameter called 'farm wind-speed reduction factor' or 'farm induction factor,' for which a simple analytic equation is derived that can be solved iteratively using information obtained from both subproblems. This general form of coupling allows us to use the present theory with various types of flow models for each scale, such as a numerical weather prediction model for the external problem and a computational fluid dynamics model for the internal problem. The theory is presented for a simplified wind farm situation first, followed by a discussion on how the theory can be applied (in an approximate manner) to real-world problems; for example, how to estimate the power loss due to the so-called 'wind farm blockage effect' for a given large wind farm under given environmental conditions.
Turbulent boundary-layer flow beneath a vortex. Part 2. Power-law swirl David E. Loper Journal: Journal of Fluid Mechanics / Volume 892 / 10 June 2020 Published online by Cambridge University Press: 03 April 2020, A17 Print publication: 10 June 2020
The problem formulated in Part 1 (Loper, J. Fluid Mech., vol. 892, 2020, A16) for flow in the turbulent boundary layer beneath a vortex is solved for a power-law swirl: $v_{\infty}(r)\sim r^{2\theta-1}$, where $r$ is cylindrical radius and $\theta$ is a constant parameter, with turbulent diffusivity parameterized as $\nu=v_{\infty}L$ and the diffusivity function $L$ either independent of axial distance $z$ from a stationary plane (model A) or constant within a rough layer of thickness $z_{0}$ adjoining the plane and linear in $z$ outside (model B). Model A is not a useful model of vortical flow, whereas model B produces realistic results. As found in Part 1 for $\theta=1.0$, radial flow consists of a sequence of jets having thicknesses that vary nearly linearly with $r$. A novel structural feature is the turning point $(r_{t},z_{t})$, where the primary jet has a minimum height. The radius $r_{t}$ is a proxy for the eye radius of a vortex and $z_{t}$ is a proxy for the size of the corner region. As $r$ decreases from $r_{t}$, the primary jet thickens, axial outflow from the layer increases and axial oscillations become larger, presaging a breakdown of the boundary layer. For small $\theta$, $r_{t}\sim z_{0}/\epsilon\theta$ and $z_{t}\sim z_{0}/\theta^{3/2}$.
The lack of existence of the turning point for $\theta\gtrsim 0.42$ and the acceleration of the turning point away from the origin of the meridional plane as $\theta\rightarrow 0$ provide partial explanations why weakly swirling flows do not have eyes, why strongly swirling flows have eyes and why a boundary layer cannot exist beneath a potential vortex.
Turbulent boundary-layer flow beneath a vortex. Part 1. Turbulent Bödewadt flow
The equations governing the mean fluid motions within a turbulent boundary layer adjoining a stationary plane beneath an axisymmetric circumferential flow $v_{\infty}(r)$, where $r$ is cylindrical radius, are solved by assuming the eddy diffusivity is proportional to $v_{\infty}$ times a diffusivity function $L(r,z)$, where $z$ is axial distance from the plane. The boundary-layer shape and structure depend on the dimensionless vorticity $\theta=\mathrm{d}(rv_{\infty})/2v_{\infty}\,\mathrm{d}r$, but are independent of the strength of the circumferential flow. This problem has been solved using a spectral method in the case of rigid-body motion ($\theta=1$ and $v_{\infty}\sim r$) for two models of $L$: $L$ constant (model A) and $L$ constant within a rough layer of thickness $z_{0}$ adjoining the boundary and increasing linearly with $z$ outside that layer (model B). The influence of the rough layer is quantified by the dimensionless radial coordinate $\rho=\epsilon r/z_{0}$, where $\epsilon\ll 1$. The boundary-layer thickness varies parabolically with $r$ for model A and nearly linearly with $r$ for model B. Inertial stability of the outer flow causes the velocity components to decay with axial distance as exponentially damped oscillations, with the radial flow consisting of a sequence of jets. Axial flow is positive (flowing out of the boundary layer). Outflow from the layer, velocity gradients at the bounding plane, meridional-plane circulation and oscillations all increase as radius decreases.
Untangling waves and vortices in the atmospheric kinetic energy spectra Michael L. Waite Journal: Journal of Fluid Mechanics / Volume 888 / 10 April 2020 Published online by Cambridge University Press: 06 February 2020, F1 Print publication: 10 April 2020
The kinetic energy spectrum in the atmospheric mesoscale has a $-5/3$ slope, which suggests an energy cascade. But the underlying dynamics of this cascade is still not fully understood. Is it driven by inertia–gravity waves, vortices or something else? To answer these questions, it is necessary to decompose the spectrum into contributions from waves and vortices. Linear decompositions are straightforward, but can lead to ambiguous results. A recent paper by Wang & Bühler (J. Fluid Mech., vol. 882, 2020, A16) addresses this problem by presenting a nonlinear decomposition of the energy spectrum into waves and vortices using the omega equation. They adapt this method for one-dimensional aircraft data and apply it to two datasets. In the lower stratosphere, the results show a mesoscale spectrum dominated by waves. The situation in the upper troposphere is different: here vortices are just as important as, or possibly more important than, waves, although the limitations of the one-dimensional data preclude a definitive answer.
On the mixing length eddies and logarithmic mean velocity profile in wall turbulence Michael Heisel, Charitha M.
de Silva, Nicholas Hutchins, Ivan Marusic, Michele Guala Published online by Cambridge University Press: 21 January 2020, R1 Since the introduction of the logarithmic law of the wall more than 80 years ago, the equation for the mean velocity profile in turbulent boundary layers has been widely applied to model near-surface processes and parameterize surface drag. Yet the hypothetical turbulent eddies proposed in the original logarithmic law derivation and mixing length theory of Prandtl have never been conclusively linked to physical features in the flow. Here, we present evidence that suggests these eddies correspond to regions of coherent streamwise momentum known as uniform momentum zones (UMZs). The arrangement of UMZs results in a step-like shape for the instantaneous velocity profile, and the smooth mean profile results from the average UMZ properties, which are shown to scale with the friction velocity and wall-normal distance in the logarithmic region. These findings are confirmed across a wide range of Reynolds number and surface roughness conditions from the laboratory scale to the atmospheric surface layer. Linear stability of katabatic Prandtl slope flows with ambient wind forcing Cheng-Nian Xiao, Inanc Senocak We investigate the stability of katabatic slope flows over an infinitely wide and uniformly cooled planar surface subject to a downslope uniform ambient wind aloft. We adopt an extension of Prandtl's original model for slope flows (Lykosov & Gutman, Izv. Acad. Sci. USSR Atmos. Ocean. Phys., vol. 8 (8), 1972, pp. 799–809) to derive the base flow, which constitutes an interesting basic state in stability analysis because it cannot be reduced to a single universal form independent of external parameters. We apply a linear modal analysis to this basic state to demonstrate that for a fixed Prandtl number and slope angle, two independent dimensionless parameters are sufficient to describe the flow stability. One of these parameters is the stratification perturbation number that we have introduced in Xiao & Senocak (J. Fluid Mech., vol. 865, 2019, R2). The second parameter, which we will henceforth designate the wind forcing number, is hitherto uncharted and can be interpreted as the ratio of the kinetic energy of the ambient wind aloft to the damping due to viscosity and the stabilising effect of the background stratification. For a fixed Prandtl number, stationary transverse and travelling longitudinal modes of instabilities can emerge, depending on the value of the slope angle and the aforementioned dimensionless numbers. The influence of ambient wind forcing on the base flow's stability is complicated, as the ambient wind can be both stabilising as well as destabilising for a certain range of the parameters. Our results constitute a strong counterevidence against the current practice of relying solely on the gradient Richardson number to describe the dynamic stability of stratified atmospheric slope flows. Rossby-number effects on columnar eddy formation and the energy dissipation law in homogeneous rotating turbulence T. Pestana, S. Hickel Journal: Journal of Fluid Mechanics / Volume 885 / 25 February 2020 Published online by Cambridge University Press: 18 December 2019, A7 Print publication: 25 February 2020 Two aspects of homogeneous rotating turbulence are quantified through forced direct numerical simulations in an elongated domain, which, in the direction of rotation, is approximately 340 times larger than the typical initial eddy size. 
First, by following the time evolution of the integral length scale along the axis of rotation $\ell_{\Vert}$, the growth rate of the columnar eddies and its dependence on the Rossby number $Ro_{\varepsilon}$ is determined as $\gamma=3.90\exp(-16.72\,Ro_{\varepsilon})$ for $0.06\leqslant Ro_{\varepsilon}\leqslant 0.31$, where $\gamma$ is the non-dimensional growth rate. Second, a scaling law for the energy dissipation rate $\varepsilon_{\nu}$ is sought. Comparison with current available scaling laws shows that the relation proposed by Baqui & Davidson (Phys. Fluids, vol. 27(2), 2015, 025107), i.e. $\varepsilon_{\nu}\sim {u^{\prime}}^{3}/\ell_{\Vert}$, where $u^{\prime}$ is the root-mean-square velocity, approximates well part of our data, more specifically the range $0.39\leqslant Ro_{\varepsilon}\leqslant 1.54$. However, relations proposed in the literature fail to model the data for the second and most interesting range, i.e. $0.06\leqslant Ro_{\varepsilon}\leqslant 0.31$, which is marked by the formation of columnar eddies. To find a similarity relation for the latter, we exploit the concept of a spectral transfer time introduced by Kraichnan (Phys. Fluids, vol. 8(7), 1965, p. 1385). Within this framework, the energy dissipation rate is considered to depend on both the nonlinear time scale and the relaxation time scale. Thus, by analysing our data, expressions for these different time scales are obtained that result in $\varepsilon_{\nu}\sim (u^{\prime 4}Ro_{\varepsilon}^{0.62}\tau_{nl}^{iso})/\ell_{\bot}^{2}$, where $\ell_{\bot}$ is the integral length scale in the direction normal to the axis of rotation and $\tau_{nl}^{iso}$ is the nonlinear time scale of the initial homogeneous isotropic field.
Stability of the anabatic Prandtl slope flow in a stably stratified medium
Published online by Cambridge University Press: 18 December 2019, A13
In the Prandtl model for anabatic slope flows, a uniform positive buoyancy flux at the surface drives an upslope flow against a stable background stratification. In the present study, we conduct linear stability analysis of the anabatic slope flow under this model and contrast it against the katabatic case as presented in Xiao & Senocak (J. Fluid Mech., vol. 865, 2019, R2). We show that the buoyancy component normal to the sloped surface is responsible for the emergence of stationary longitudinal rolls, whereas a generalised Kelvin–Helmholtz (KH) type of mechanism consisting of shear instability modulated by buoyancy results in a streamwise-travelling mode. In the anabatic case, for slope angles larger than $9^{\circ}$ to the horizontal, the travelling KH mode is dominant whereas, at lower inclination angles, the formation of the stationary vortex instability is favoured. The same dynamics holds qualitatively for the katabatic case, but the mode transition appears at slope angles of approximately $62^{\circ}$. For a fixed slope angle and Prandtl number, we demonstrate through asymptotic analysis of linear growth rates that it is possible to devise a classification scheme that demarcates the stability of Prandtl slope flows into distinct regimes based on the dimensionless stratification perturbation number.
We verify the existence of the instability modes with the help of direct numerical simulations, and observe close agreement between simulation results and predictions of linear analysis. For slope angle values in the vicinity of the junction point in the instability map, both longitudinal rolls and travelling waves coexist simultaneously and form complex flow structures.
Revisiting inclination of large-scale motions in unstably stratified channel flow Scott T. Salesky, W. Anderson Published online by Cambridge University Press: 17 December 2019, R5
Observational and computational studies of inertia-dominated wall turbulence with unstable thermal stratification have demonstrated that the inclination angle of large-scale motions (LSMs) increases with increasing buoyancy (as characterized by the Monin–Obukhov stability variable $\zeta_{z}$). The physical implications of this structural steepening have received relatively less attention. Some authors have proposed that LSMs thicken – yet remain attached to the wall – with increasing buoyancy (Salesky & Anderson, J. Fluid Mech., vol. 856, 2018, pp. 135–168), while others have presented evidence that the upstream edge of an LSM remains anchored to the wall while its downstream edge lifts away from the wall (Hommema & Adrian, Boundary-Layer Meteorol., vol. 106, 2003, pp. 147–170). Using a suite of large-eddy simulations (LES) of unstably stratified turbulent channel flow, we demonstrate that buoyancy acts to lift LSMs away from the wall, leaving a wedge of fluid beneath with differing momentum. We develop a prognostic model for LSM inclination angle that accounts for this observed structure, where the LSM inclination angle $\gamma$ is the sum of the inclination angle observed in a neutrally stratified wall-bounded turbulent flow, $\gamma_{0}\approx 12^{\circ}{-}15^{\circ}$, and the stability-dependent inclination angle of the wedge $\gamma_{w}(\zeta_{z})$. Reported values of $\gamma(\zeta_{z})$ from the literature, LES results and atmospheric surface layer observations are found to be in good agreement with the new model for $\gamma(\zeta_{z})$.
Velocity-defect laws, log law and logarithmic friction law in the convective atmospheric boundary layer Chenning Tong, Mengjie Ding
The mean velocity profile in the convective atmospheric boundary layer (CBL) is derived analytically. The shear-stress budget equations and the mean momentum equations are employed in the derivation. The multi-point Monin–Obukhov similarity (MMO) recently proposed and analytically derived by Tong & Nguyen (J. Atmos. Sci., vol. 72, 2015, pp. 4337–4348) and Tong & Ding (J. Fluid Mech., vol. 864, 2019, pp. 640–669) provides the scaling properties of the statistics in the shear-stress budget equations. Our previous and present studies have shown that the CBL is mathematically a singular perturbation problem. Therefore, we obtain the mean velocity profile using the method of matched asymptotic expansions. Three scaling layers are identified: the outer layer, which includes the mixed layer, the inner-outer layer and the inner-inner layer, which includes the roughness layer. There are two overlapping layers, the local-free-convection layer and the log layer, respectively. Two new velocity-defect laws are discovered: the mixed-layer velocity-defect law and the surface-layer velocity-defect law.
The local-free-convection mean profile is obtained by asymptotically matching the expansions in the first two layers. The log law is obtained by matching the expansions in the last two layers. The von Kármán constant is obtained using velocity and length scales, and therefore has a physical interpretation. A new friction law, the convective logarithmic friction law, is obtained. The present work provides an analytical derivation of the mean velocity profile hypothesized in the Monin–Obukhov similarity theory, and is part of a comprehensive derivation of the MMO scaling from first principles.
Influence of the geostrophic wind direction on the atmospheric boundary layer flow M. F. Howland, A. S. Ghate, S. K. Lele
Proper simulation and modelling of geophysical flows is crucial to the study of numerical weather prediction, wind energy and many other applications. When simulating the atmospheric boundary layer, Coriolis forces act as a result of Earth's rotation. The horizontal component of Earth's rotation, which is often neglected, influences the balance of vertical momentum. The horizontal component results in systematic differences in the structure and statistics of stratified atmospheric boundary layers as a function of the direction of the geostrophic velocity. These differences are particularly relevant to atmospheric flows which include inhomogeneous roughness elements such as drag disks or wind turbines since the presence of these drag elements alters the balance between turbulent stresses and the Coriolis contributions in Reynolds stress budgets. Even at latitudes as high as $45^{\circ}$, changing the geostrophic wind velocity vector direction alone changes the magnitude of shear stress, and therefore vertical transport of kinetic energy, in the conventionally neutral atmospheric boundary layer up to $15\,\%$. As such, the boundary layer height, shear and veer profiles, surface friction velocity and other key features are affected by the direction of the geostrophic wind. The influence of the horizontal component of Earth's rotation in stable nocturnal boundary layers depends on the strength of the stratification as there is a strong influence in the present study and a weak influence in the GEWEX Atmospheric Boundary Layer Study (GABLS) case. A model of the effect of the horizontal component on the boundary layer shear stress is also proposed and verified with the present simulations. While not studied here, the present observations are also relevant to the oceanic Ekman boundary layer.
Assessment of inner–outer interactions in the urban boundary layer using a predictive model Karin Blackman, Laurent Perret, Romain Mathis Journal: Journal of Fluid Mechanics / Volume 875 / 25 September 2019 Published online by Cambridge University Press: 18 July 2019, pp. 44-70
Urban-type rough-wall boundary layers developing over staggered cube arrays with plan area packing density, $\lambda_{p}$, of 6.25%, 25% or 44.4% have been studied at two Reynolds numbers within a wind tunnel using hot-wire anemometry (HWA). A fixed HWA probe is used to capture the outer-layer flow while a second moving probe is used to capture the inner-layer flow at 13 wall-normal positions between $1.25h$ and $4h$ where $h$ is the height of the roughness elements. The synchronized two-point HWA measurements are used to extract the near-canopy large-scale signal using spectral linear stochastic estimation and a predictive model is calibrated in each of the six measurement configurations.
Analysis of the predictive model coefficients demonstrates that the canopy geometry has a significant influence on both the superposition and amplitude modulation. The universal signal, the signal that exists in the absence of any large-scale influence, is also modified as a result of local canopy geometry, suggesting that although the nonlinear interactions within urban-type rough-wall boundary layers can be modelled using the predictive model as proposed by Mathis et al. (J. Fluid Mech., vol. 681, 2011, pp. 537–566), the model must, however, be calibrated for each type of canopy flow regime. The Reynolds number does not significantly affect any of the model coefficients, at least over the limited range of Reynolds numbers studied here. Finally, the predictive model is validated using a prediction of the near-canopy signal at a higher Reynolds number and a prediction using reference signals measured in different canopy geometries to run the model. Statistics up to the fourth order and spectra are accurately reproduced, demonstrating the capability of the predictive model in an urban-type rough-wall boundary layer.
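For orientation, the inner-outer predictive model of Mathis et al. cited above is commonly quoted in the form $u_{p}^{+} = u^{\ast}(1 + \beta u_{OL}) + \alpha u_{OL}$, where $u^{\ast}$ is the universal small-scale signal, $u_{OL}$ is the large-scale outer signal felt near the wall, and $\alpha$ and $\beta$ are the superposition and amplitude-modulation coefficients that the study calibrates for each canopy. The Python sketch below illustrates this structure with placeholder signals and coefficient values; it is not the calibrated model of the paper.

# Inner-outer predictive model structure (after Mathis et al. 2011):
# u_pred = u_star * (1 + beta * u_OL) + alpha * u_OL, where alpha models
# superposition of the outer signal and beta its amplitude modulation of
# the small scales. Signals and coefficients below are placeholders.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
u_star = rng.standard_normal(n)                  # stand-in universal small-scale signal
u_OL = np.sin(2 * np.pi * np.arange(n) / 2000)   # stand-in large-scale outer signal

alpha, beta = 0.6, 0.3                           # hypothetical model coefficients
u_pred = u_star * (1.0 + beta * u_OL) + alpha * u_OL

print(f"predicted-signal variance: {u_pred.var():.3f}")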
Representations of Lie Algebras
Session code: rla
Daniel K. Nakano (University of Georgia), Michael Lau (Laval University), Vyacheslav Futorny (University of São Paulo)
Thursday, Jul 27 [McGill U., Arts Building, Room 260]
11:45 Kaiming Zhao (Wilfrid Laurier University), Simple Witt modules that are $U(\mathfrak{h})$-free modules of finite rank
12:15 Reimundo Heluani (IMPA, Brazil), Cohomology of vertex algebras
14:15 Alistair Savage (University of Ottawa), An equivalence between truncations of categorified quantum groups and Heisenberg categories
14:45 Dimitar Grantcharov (University of Texas, Arlington), Bounded weight modules of the Lie algebra of vector fields on the affine space
15:45 Ben Cox (College of Charleston), On the universal central extension of certain Krichever-Novikov algebras.
16:15 Adriano Moura (University of Campinas), Tensor Products of Integrable Modules for Affine Algebras, Demazure Flags, and Partition Identities
17:00 Arturo Pianzola (University of Alberta), Lie algebroids arising from infinite dimensional Lie theory
Friday, Jul 28 [McGill U., Arts Building, Room 260]
11:45 Mikhail Kotchetov (Memorial University), Graded-simple modules via the loop construction
12:15 Georgia Benkart (University of Wisconsin, Madison), A Tangled Approach to Cross Product Algebras, Their Invariants and Centralizers
14:15 Andrea Solotar (Universidad de Buenos Aires), Some invariants of the super Jordan plane
14:45 Nicolas Libedinsky (Universidad de Chile), The Anti-spherical category
15:45 Vera Serganova (University of California, Berkeley), Representations of direct limits of classical Lie algebras and superalgebras
16:15 Jonathan Kujawa (University of Oklahoma), On Cyclotomic Schur Algebras
17:00 Apoorva Khare (Stanford University), The Weyl-Kac weight formula
Unscheduled [McGill U., Arts Building, Room 260]
Cornelius Pillen (University of South Alabama), Lifting modules of a finite group of Lie type to its ambient algebraic group
Kaiming Zhao
Simple Witt modules that are $U(\mathfrak{h})$-free modules of finite rank
Let $W_d$ be the Witt algebra, that is, the derivation Lie algebra of the Laurent polynomial algebra $A_d=\mathbb{C}[x_1^{\pm1},x_2^{\pm1},\ldots,x_d^{\pm1}]$. Let $\mathfrak{h}$ be the Cartan subalgebra of $W_d$. We will determine simple $W_d$-modules that are $U(\mathfrak{h})$-free modules of finite rank.
Location: McGill U., Arts Building, Room 260
Reimundo Heluani
IMPA, Brazil
Cohomology of vertex algebras
Given a vertex algebra V and its module M, I'll introduce a complex that computes the cohomology of V with coefficients in M. This cohomology theory shares some of the properties that the cohomology of Lie algebras with coefficients in modules satisfies (like computing central extensions, extensions of modules, external derivations, etc). This is joint work with B. Bakalov, A. De Sole, and V. Kac.
Alistair Savage
An equivalence between truncations of categorified quantum groups and Heisenberg categories
We will describe a simple diagrammatic 2-category $\mathcal{A}$ that yields a categorification of the principal realization of the basic representation of $\mathfrak{sl}_\infty$. The 2-category $\mathcal{A}$ is equivalent to a truncation of the Khovanov-Lauda categorified quantum group and also to a truncation of Khovanov's Heisenberg 2-category. After describing these results, we will discuss applications to actions of the 2-categories involved, the representation theory of the symmetric group, geometry, and W-algebras.
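As background to the Witt algebra $W_d$ in Zhao's abstract above (an orientation note added here, not part of the talk), the standard bracket on the monomial basis $x^{\alpha}\partial_i$, $\alpha\in\mathbb{Z}^d$, is
\[
[x^{\alpha}\partial_i,\, x^{\beta}\partial_j] \;=\; \beta_i\, x^{\alpha+\beta-e_i}\partial_j \;-\; \alpha_j\, x^{\alpha+\beta-e_j}\partial_i,
\]
where $e_i$ denotes the $i$-th standard basis vector of $\mathbb{Z}^d$; the Cartan subalgebra $\mathfrak{h}$ is the span of $x_1\partial_1,\ldots,x_d\partial_d$.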
Dimitar Grantcharov
University of Texas, Arlington
Bounded weight modules of the Lie algebra of vector fields on the affine space
In this talk we will discuss weight modules of the Lie algebra $W_n$ of vector fields on ${\mathbb C}^n$. A classification of all simple weight modules of $W_2$ with a uniformly bounded set of weight multiplicities is provided. To achieve this classification we introduce a new family of generalized tensor $W_n$-modules. This is joint work with A. Cavaness.
Ben Cox
On the universal central extension of certain Krichever-Novikov algebras.
We will describe results on the center of the universal central extension of certain Krichever-Novikov algebras. In particular we will describe how various families of classical and non-classical orthogonal polynomials appear. We will also provide certain new identities of elliptic integrals. The material we will cover was obtained in joint work with V. Futorny, J. Tirao, M. S. Im, X. Gu, R. Luo, and K. Zhao.
Adriano Moura
University of Campinas
Tensor Products of Integrable Modules for Affine Algebras, Demazure Flags, and Partition Identities
The study of characters and related structural problems of representations of an affine Kac-Moody algebra $\widehat{\mathfrak{g}}$ often leads to proofs of interesting identities of a combinatorial nature. In this talk, based on a joint work with D. Jakelic, we discuss the relation between two such structural problems: the one of computing multiplicities of irreducible modules in tensor products of two integrable irreducible modules of $\widehat{\mathfrak{g}}$ and that of computing multiplicities in Demazure flags of a given Demazure module. Our main result expresses the former in terms of the latter in the case that the underlying simple Lie algebra is simply laced. By combining our result in the case of $\widehat{\mathfrak{sl}}_2$ with the existing answers to the first problem, we obtain interesting partition identities.
Arturo Pianzola
Lie algebroids arising from infinite dimensional Lie theory
A classical construction of Atiyah assigns to any (real or complex) Lie group G and manifold M a Lie algebroid over M. The spirit behind our work is to put Atiyah's construction within an algebraic context, replacing M by a scheme X and G by a "simple" reductive group scheme G over X in the sense of Demazure-Grothendieck (such groups arise naturally in infinite dimensional Lie theory). Lie algebroids in an algebraic sense were also considered by Beilinson and Bernstein. We will explain how the present work relates to theirs. This is joint work with J. Kuttler and F. Quallbrunn.
Mikhail Kotchetov
Graded-simple modules via the loop construction
Twisted loop and multiloop algebras play an important role in the theory of infinite-dimensional Lie algebras. Given a grading by the cyclic group $\mathbb{Z}/m\mathbb{Z}$ on a semisimple Lie algebra, the loop construction produces a $\mathbb{Z}$-graded infinite-dimensional Lie algebra. This construction was generalized by Allison, Berman, Faulkner and Pianzola to arbitrary nonassociative algebras and arbitrary quotients of abelian groups. In particular, their results, together with the recent classification of gradings by abelian groups on finite-dimensional simple Lie algebras over an algebraically closed field of characteristic zero, yield a classification of finite-dimensional graded-simple Lie algebras. Mazorchuk and Zhao have recently applied an analogue of the loop construction to modules.
In this talk, we will show how this leads to a classification of finite-dimensional graded-simple modules over semisimple Lie algebras with a grading. This is joint work with Alberto Elduque.
Georgia Benkart
A Tangled Approach to Cross Product Algebras, Their Invariants and Centralizers
An algebra $\mathsf{V}$ with a cross product has dimension 3 or 7. In this talk, we describe how 3-tangles can provide a basis for the space of homomorphisms from $\mathsf{V}^{\otimes n}$ to $\mathsf{V}^{\otimes m}$ which are invariant under the action of the automorphism group $\mathsf{G}$ of $\mathsf{V}$. The group $\mathsf{G}$ is a special orthogonal group when $\mathsf{dimV} = 3$ and a simple algebraic group of type $\mathsf{G}_2$ when $\mathsf{dimV} = 7$. When $m = n$, this gives a graphical description of the centralizer algebra $\mathsf{End_G}(\mathsf{V}^{\otimes n})$, and hence also a graphical realization of the $\mathsf{G}$-invariants in $\mathsf{V}^{\otimes 2n}$ equivalent to the First Fundamental Theorem of Invariant Theory. Our approach using certain properties of the cross product differs from that of Kuperberg, which derives quantum $\mathsf{G}_2$-link invariants from the Jones polynomial starting from its simplest formulation in terms of the Kauffman bracket. The 3-dimensional simple Kaplansky Jordan superalgebra can be interpreted as a cross product (super)algebra, and 3-tangles can be used to obtain a graphical description of its invariants and centralizer algebras relative to the action of the special orthosymplectic group. This is joint work with A. Elduque.
Andrea Solotar
Some invariants of the super Jordan plane
Hochschild cohomology and its Gerstenhaber algebra structure are relevant Morita, tilting and derived invariants. Their computation requires a resolution of the algebra as a bimodule over itself. There is always a canonical resolution available, the bar resolution, useful from a theoretical point of view, but not in practice: its complexity rarely allows explicit calculations. Nichols algebras are generalizations of symmetric algebras in the context of braided tensor categories. They are fundamental objects for the classification of pointed Hopf algebras. Heckenberger classified finite-dimensional Nichols algebras of diagonal type up to isomorphism. The classification separates the Nichols algebras into three different classes. Later, Angiono described the defining relations of the Nichols algebras of Heckenberger's list. In joint work with Sebastián Reca, we computed the Hochschild (co)homology of $A = k\langle x, y\rangle/(x^2, y^2x - xy^2 - xyx)$, the super Jordan plane, when $\mathrm{char}(k) = 0$. This algebra is the Nichols algebra $B(V(-1, 2))$, of Gelfand-Kirillov dimension 2. Our main results are the following:
1) We give explicit bases for the Hochschild (co)homology spaces.
2) We describe the cup product, and we thus see that the isomorphism between $H^{2p}(A,A)$ and $H^{2p+2}(A,A)$, where $p > 0$, is given by multiplication by an element of $H^2(A,A)$, and similarly for the odd degrees.
3) We describe the Lie algebra structure of $H^1(A,A)$, which is isomorphic to a Lie subalgebra of the Virasoro algebra.
Nicolas Libedinsky
The Anti-spherical category
I will introduce the anti-spherical category and explain how a "light leaves" theorem serves to prove that parabolic Kazhdan-Lusztig polynomials have non-negative coefficients. This is joint work with G. Williamson.
Vera Serganova
Representations of direct limits of classical Lie algebras and superalgebras
We define and study symmetric monoidal categories of representations of direct limits of classical Lie algebras and superalgebras, prove that these categories have enough injective objects and compute extension groups between simple objects. Then we discuss applications to the construction of universal rigid tensor categories and the categorification of translation and parabolic induction functors in classical representation theory.
Jonathan Kujawa
On Cyclotomic Schur Algebras
The original Schur-Weyl duality is between the general linear group and the symmetric group with the Schur algebra acting as mediator. There is an analogous story when the symmetric group is replaced with the wreath product of a cyclic group and the symmetric group and the Schur algebra is replaced with the cyclotomic Schur algebra. We will discuss a presentation of these algebras in the spirit of Doty-Giaquinto and a categorification in the spirit of Khovanov-Lauda-Rouquier. This is joint work with Jieru Zhu.
Apoorva Khare
The Weyl-Kac weight formula
We provide the first positive formulas (without cancellations) for the weights of non-integrable simple highest weight modules over Kac-Moody algebras. For generic highest weights, we also present a formula for the weights that is similar to the Weyl-Kac character formula. For the remaining highest weights, the formula fails in a striking way, suggesting the existence of "multiplicity-free" Macdonald identities for affine root systems.
Cornelius Pillen
University of South Alabama
Lifting modules of a finite group of Lie type to its ambient algebraic group
Let $G$ be a simple simply connected algebraic group over an algebraically closed field $k$ of positive characteristic $p$. Inside $G$, the set of fixed points of the $r$th iterate of the Frobenius map forms a subgroup, a finite group of Lie type, denoted by $G(p^r)$. We are interested in the following question: Given a $kG(p^r)$-module $M$, does there always exist a $G$-module that is isomorphic to $M$ as a $kG(p^r)$-module? A well-known result due to Robert Steinberg says that all the simple modules are obtained via restriction from $G$ to $G(p^r)$. But in general the question has a negative answer. This talk is a survey of known results together with several explicit $\text{SL}_2$ examples.
Photonic micro-structures produced by selective etching of laser-crystallized amorphous silicon
G. Martinez-Jimenez,1 Y. Franz,1 A. F. J. Runge,1,2 M. Ceschia,1 N. Healy,3 S. Z. Oo,1,4 A. Tarazona,1 H. M. H. Chong,1,4 A. C. Peacock,1 and S. Mailis1,5,*
1Optoelectronics Research Centre, University of Southampton, Highfield, Southampton, SO17 1BJ, UK
2Current address: Institute of Photonics and Optical Science (IPOS), School of Physics, University of Sydney, NSW, Australia
3Emerging Technology and Materials Group, Newcastle University, Merz Court, Newcastle, NE1 7RU, UK
4School of Electronics and Computer Science, University of Southampton, Highfield, Southampton, SO17 1BJ, UK
5Current address: Skolkovo Institute of Science and Technology, Novaya St. 100, Skolkovo 143025, Russia
*Corresponding author: [email protected]
G. Martinez-Jimenez, Y. Franz, A. F. J. Runge, M. Ceschia, N. Healy, S. Z. Oo, A. Tarazona, H. M. H. Chong, A. C. Peacock, and S. Mailis, "Photonic micro-structures produced by selective etching of laser-crystallized amorphous silicon," Opt. Mater. Express 9, 2573-2581 (2019), https://doi.org/10.1364/OME.9.002573
Original Manuscript: April 1, 2019; Manuscript Accepted: April 17, 2019
We present a method for the production of polycrystalline Si (poly-Si) photonic micro-structures based on laser writing. The method consists of local laser-induced crystallization of amorphous silicon (a-Si) followed by selective etching in chemical agents that act preferentially on the a-Si material, consequently revealing the poly-Si content of the film. We have studied the characteristics of these structures as a function of the laser processing parameters and we demonstrate their potential photonic functionality by fabricating polycrystalline silicon ridge optical waveguides. Preliminary waveguide transmission performance results indicated an optical transmission loss of 9 dB/cm in these unrefined devices. Published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.
1. Introduction
Silicon (Si) is a material that is synonymous with microelectronics, yet for several years it has been penetrating the area of photonics and today it forms an important sector of integrated optics.
The mature processing technology of Si, which is available in foundries around the world, is also applicable to the manufacturing of photonic devices, and many important innovations have thus been achieved in this rapidly evolving area of technology [1,2]. The bulk of silicon photonics research involves single crystal material; however, polycrystalline silicon (poly-Si) has attracted significant interest because of its potential to combine good optical transmission (in the telecom and near IR wavelength range), electronic functionality and low fabrication cost, which makes it an attractive material for commercial applications [3,4]. Poly-Si films can be grown using various techniques. However, the utility of the resulting material is a function of the crystalline grain size, and therefore high temperature growth is required [4–6]. Another possibility is the low temperature deposition of a-Si followed by post-crystallization using methods such as laser irradiation. It has been shown recently that laser processing of amorphous silicon (a-Si) produces poly-Si with very large domain sizes, resulting in a material with superior optical and electronic performance [7]. C.W. laser-induced crystallization of a-Si on SiO2 has been demonstrated in the literature [8,9] and has been shown to produce crystal domains with very high aspect ratio. Interestingly, laser-induced crystallization of silicon in a confined volume results in significant strain, due to the difference in volume between the amorphous and crystalline states of the material, which in turn results in large shifts of the bandgap energy of the crystallized material as compared to regular silicon [9]. In a recent report, a laser writing method was proposed for the production of low loss poly-Si waveguides by laser-induced crystallization of photolithographically pre-defined a-Si ridge structures [7]. The localised nature of this method, as well as the low temperature hot-wire chemical vapour deposition (HWCVD) [10,11] that was used to produce the initial a-Si film, makes it suitable for additive manufacturing, in contrast to previous reports that required high temperature grown poly-Si [4–6].

Here we demonstrate a method where spatially selective laser-induced crystallization of a-Si produces poly-Si superstructures with photonic functionality, without requiring any pre-structuring step. Selective laser irradiation of planar a-Si films produces areas of poly-Si within the original a-Si film matrix. These areas transform into superstructures after subjecting the film to etching agents, which preferentially remove the a-Si content of the film, thus revealing the poly-Si structure. Such poly-Si ridges can be used as optical waveguides to transmit light. A schematic representation of the processing steps is shown in Fig. 1.

Fig. 1. Schematic of the process showing (a) a-Si film deposition by HWCVD on a planar substrate, (b) spatially selective laser exposure by scanning of a focussed beam on the surface of the film to produce poly-Si tracks and (c) poly-Si tracks transforming into ridge structures after selective etching. Different coloured tracks represent different irradiation conditions.

2. Experiments and discussion

2.1. Laser irradiation

Laser irradiation experiments were performed on a-Si films that were prepared by low temperature (320 °C) HWCVD.
The deposition conditions were adjusted to minimise the hydrogen content of the deposited material, to avoid violent hydrogen out-gassing during the irradiation step. Films of a-Si with two different thicknesses (210 nm and 425 nm) were deposited onto fused silica (SiO2) and silica-on-silicon substrates. The latter consists of a $\sim$4.8 $\mu$m thick thermally oxidised top layer on bulk single crystal Si. The films were subsequently exposed to a focussed laser beam using a set of high precision translation stages (Aerotech, ABL1500), which offered excellent positioning control with constant, adjustable speed, thus maintaining identical exposure conditions along each irradiated track. The laser radiation source was a C.W. Ar+ laser operating in multi-wavelength mode (predominantly $\lambda$=488/514.5 nm). Linear tracks were irradiated using a range of beam intensities, scanning speeds, and beam spot sizes, which are the available processing parameters. The laser intensity, which can be adjusted by changing the laser power and/or the laser spot size, defines the peak temperature of the irradiated volume. The scanning speed controls the dwell time of the laser beam at a particular area on the sample, which defines the duration of the heating and influences the cooling rate of the laser-heated volume. The laser intensities required for laser-induced crystallization can only be accessed by focussing the beam of the laser that was used. In these experiments we used three different microscope objectives, $\times 10$ (NA=0.25), $\times 20$ (NA=0.40), and $\times 40$ (NA=0.65), producing laser spot sizes (beam radius at $1/e^{2}$ intensity level) of 3 $\mu$m, 1.2 $\mu$m and 0.9 $\mu$m, respectively, as measured using a knife-edge method.

It has been shown in [9] that temperatures as high as the melting point of a-Si can be achieved as a result of laser irradiation. This is due to the very efficient absorption of visible radiation by a-Si (absorption length $\sim$30 nm). Localised melting and re-solidification produce changes to the texture of the surface as well as a volume change in the transition from a-Si to poly-Si [9]. Both effects contribute to making the tracks visible under optical microscopy investigation. Figure 2 shows optical microscopy images of laser-irradiated tracks on a 400 nm thick a-Si film deposited on SiO2. This particular set consists of 7 track pairs, with the tracks in each pair irradiated with the same laser intensity using a laser spot size of 3 $\mu$m. The irradiating laser intensity increased from right to left within the range of 5$\times$10$^{5}$ W/cm$^{2}$ to 9$\times$10$^{5}$ W/cm$^{2}$. The difference in colour of the material on the track is indicative of changes in the film thickness and/or refractive index, with the tracks becoming wider with increasing laser intensity. For scans at the high end of the laser intensity range, structural damage can be observed along the centreline of the tracks.

Fig. 2. Optical microscopy image of laser irradiated tracks on a planar a-Si film. The tracks are arranged in pairs of identical laser irradiation conditions. The laser intensity increases from right to left.
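As a minimal illustration of how the quoted intensity range relates to the adjustable laser power and spot size (this sketch is ours, not from the paper), the peak intensity of a Gaussian beam of total power P focused to a $1/e^{2}$ radius w is I0 = 2P/(πw²). The power value below is an assumed, purely illustrative figure chosen to land inside the quoted 5–9$\times$10$^{5}$ W/cm$^{2}$ window:

```python
import math

def peak_intensity_w_cm2(power_w: float, w_um: float) -> float:
    """Peak intensity (W/cm^2) of a Gaussian beam of total power `power_w` (W)
    focused to a 1/e^2 radius `w_um` (micrometres): I0 = 2P / (pi * w^2)."""
    w_cm = w_um * 1e-4  # convert micrometres to centimetres
    return 2.0 * power_w / (math.pi * w_cm ** 2)

# e.g. an assumed ~100 mW delivered into the 3 um spot of the x10 objective:
print(f"{peak_intensity_w_cm2(0.10, 3.0):.2e} W/cm^2")  # ~7.1e5 W/cm^2
```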
2.2. Raman spectroscopy

Figure 3 shows Raman spectra acquired from a laser-irradiated track and from the non-irradiated surrounding film area, illustrating the transformation of the original amorphous material, which produces a broad Raman response (blue line), into crystalline material, which produces a sharp, narrow Raman resonance close to 520 cm−1 (orange line), where the single crystal Si Raman peak appears. This indicates that crystalline Si is produced as a result of laser irradiation.

Fig. 3. Raman spectra obtained from the a-Si film (blue) and on a laser irradiated track (orange).

The width and position of the Raman peaks provide information about the a-Si/c-Si ratio and any stress that is locked into the crystallized material [7,12]. The Raman peaks obtained from the laser-irradiated tracks were fitted with a Voigt function [13]. Performing a Voigt curve fit on the Raman peak obtained from a single crystal reference sample, with the Lorentzian component assigned its known FWHM value of 2.7 cm−1, allowed us to determine the Gaussian component of the Voigt function, which is associated with the response of the instrument. Applying a Voigt fit to the Raman peaks obtained from the laser irradiated tracks, with the Gaussian instrument response fixed at the value determined from the reference spectrum, we obtained the FWHM values of the Lorentzian component that corresponds to the laser crystallized material. The smallest FWHM value obtained in this manner was $\sim$3 cm−1 (corresponding to a laser intensity of 6$\times$10$^{5}$ W/cm$^{2}$), which is decidedly wider than the single crystal reference. This result indicates that the quality of laser crystallized tracks in a planar film is not expected to be as good as that achieved by laser-induced crystallization of pre-structured a-Si ridges and capillaries [7,9].
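A minimal sketch of this kind of Voigt deconvolution is shown below (our illustration, not the paper's analysis code): the Gaussian width representing the instrument response is held fixed while only the Lorentzian half-width is fitted. The instrument sigma, peak position and noise level are assumed illustrative values, and the data are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import voigt_profile

SIGMA_INSTR = 1.0  # cm^-1; assumed Gaussian instrument width, fixed during the fit

def voigt_peak(x, amp, center, gamma):
    # Voigt profile with fixed Gaussian sigma; gamma is the Lorentzian HWHM.
    return amp * voigt_profile(x - center, SIGMA_INSTR, gamma)

# Synthetic Raman peak near 520 cm^-1 for demonstration only.
x = np.linspace(505.0, 535.0, 300)
rng = np.random.default_rng(0)
y = voigt_peak(x, 100.0, 519.8, 1.5) + rng.normal(0.0, 0.3, x.size)

(amp, center, gamma), _ = curve_fit(voigt_peak, x, y, p0=[80.0, 520.0, 1.0])
# The material's Lorentzian FWHM is twice the fitted HWHM.
print(f"Lorentzian FWHM = {2.0 * gamma:.2f} cm^-1")  # ~3 cm^-1 here
```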
2.3. X-Ray diffraction

Micro-focus X-Ray diffraction (XRD) was employed for the analysis of the pre-etched laser tracks. A focussed X-Ray beam with a size comparable to the laser-crystallized tracks was used to interrogate the crystal content across and along the laser irradiated tracks. This capability was made available to us at the I18 beam line of the Diamond (synchrotron) light source at the Harwell campus, Oxfordshire. We used the X-Ray beam in a grazing incidence configuration, so that the beam's footprint forms an oval pattern on the sample with a long axis of $\sim$30 $\mu$m and a short axis of $\sim$2 $\mu$m. The long axis was aligned along the length of the laser tracks, allowing a quick comparison of the crystal content of the laser processed area while maximizing the overlap of the beam with the laser crystallized track, thus making full use of the beam. The spatial resolution of the measurement is compromised in the longitudinal direction but is maintained in the transverse direction. This configuration allows for a qualitative investigation based on the number of diffraction events that appear in a sector containing a single quadrant of the Debye cones. A more detailed description of the micro-focus XRD arrangement and mode of investigation is given in [7].

The focussed X-Ray beam was scanned in the lateral direction to reveal changes of the crystalline content across the laser tracks, where the effect of the intensity variation of the laser beam is expected to be most pronounced. A qualitative investigation based on the number of observed scattering events in the lateral direction showed that the number of scattering events near the edge of the tracks is significantly larger (in some cases producing continuous Debye cone arcs) than in the central region, indicating that the peripheral region consists of smaller crystallites. This observation reflects the variation of the temperature profile produced by the absorption of a laser beam with a Gaussian intensity profile. Multiple scattering events were recorded in the central sector of all tracks investigated, confirming the results obtained with Raman spectroscopy.

2.4. Selective etching

The selective etching step that reveals the poly-Si superstructures involves the use of Secco-etch, a combination of chemical agents used as a tool for the visualisation of crystalline domains in poly-Si [14]. It consists of potassium dichromate (K2Cr2O7) and hydrofluoric acid (HF) diluted in DI water to control the etch rate according to the application specifics. The potassium dichromate component of the mixture oxidises weak Si-Si bonds, found in a-Si and at the Si crystal domain boundaries, producing SiO2, which is subsequently etched by the HF. When applied to the laser-crystallized tracks, which are formed within a-Si films, Secco-etching removes the a-Si part of the film, in this way revealing only the poly-Si sections, which form ridge structures.

The width of the resulting ridges was compared to the width of the laser irradiated tracks as a function of laser spot size, laser intensity and thickness of the initial a-Si film. The scanning speed was kept constant at 170 $\mu$m/s. The etching duration was set as the time required to completely remove the a-Si planar film. The etch rate for a-Si was estimated to be $\sim$100 nm/s for the dilution of the Secco mixture that was used here. The width of the laser tracks was measured before etching using optical microscopy and was compared to that of the resulting ridges. The lateral limits of the tracks before etching are well identifiable and therefore reliably measured, as shown in Fig. 4(a). Figure 4(b) shows the same track after Secco-etching. Initial observation of Fig. 4 indicates that (i) the width of the revealed ridge appears to be very similar to the pre-etched track width and (ii) the optical microscopy image of the track after etching (b) shows a qualitative difference between the central section and the periphery of the track.

Fig. 4. Optical microscopy images of a laser irradiated track (a) before and (b) after Secco-etching. Such images were used to measure the widths of the tracks. The substrate is silica-on-silicon.

Figure 5 shows plots of the laser irradiated track widths and the resulting ridge widths as a function of laser intensity for two different film thicknesses (210 nm and 425 nm). The main observation is that the width of the resulting ridge is very similar to the width of the visible laser track, with the exception of tracks that correspond to the lower end of intensities. The ridge width appears to scale with the laser intensity (for each spot size). At low intensity levels it is only in the central part of the focussed laser beam that the temperature is high enough to melt the a-Si film; therefore the visible laser tracks and the resulting ridges are narrower than the measured $1/e^{2}$ beam size.
As the laser intensity increases, the temperature in the peripheral sectors of the tracks increases, causing the central melt zone to expand towards the periphery and thus resulting in widths larger than the $1/e^{2}$ beam size.

Fig. 5. Plots of the measured laser track widths before etching and ridge widths after etching as a function of laser intensity for different laser spot sizes (measured at $1/e^{2}$): 3 $\mu$m, 1.2 $\mu$m and 0.8 $\mu$m (using $\times$10, $\times$20 and $\times$40 microscope objectives, respectively). Plot (a) corresponds to an initial a-Si film thickness of 210 nm and plot (b) to 425 nm.

These results suggest that, within the range of experimental conditions used here, the most significant parameter influencing the width of the resulting ridges is the laser intensity. The thickness of the film does not appear to play an important role in the width of the laser-crystallized ridge. This indicates that the temperature distribution formed due to the absorption of the laser radiation is similar for these films as well. If there were any differences in the crystal domain size for the two different thicknesses, they were too subtle to register in the Raman spectroscopy and X-Ray diffraction data.

Figure 6 shows SEM images of etched poly-Si ridges produced on (a) silica-on-silicon and (b) SiO2 using the same irradiation conditions. In both specimens the a-Si film is grown on SiO2. However, in the case of silica-on-silicon the SiO2 layer is just a $\sim$5 $\mu$m thick layer on top of crystalline Si, which in principle would have some implications for the heat transport properties of the substrate and consequently affect the laser-induced temperature distribution. Both SEM images show ridges of the same width that consist of a solid central section ($\sim$3 $\mu$m in width) and two peripheral sections on either side that appear to be porous. The peripheral porosity is more pronounced in the ridge of Fig. 6(b), which corresponds to the pure SiO2 substrate. The difference in appearance, however, is attributed to the fact that that particular sample was slightly over-etched compared to the sample grown on silica-on-silicon, rather than to any substantial differences in crystallization quality. As suggested by the qualitative micro-focus XRD investigation, the central sector consists of fewer, larger crystals while the periphery consists of Si crystal nano-domains. It is anticipated that in the central sector the temperature is high enough to induce complete melting of the a-Si, which is a requirement for the production of large crystal domains [8,15], while the periphery remains solid throughout the process and therefore contains crystal nano-domains only. The sharp border between the two sectors that can be observed in the SEM images of Fig. 6(a,b) marks the extent of the melt zone. Figure 6(c) shows a cross section of a ridge structure. The cross section was prepared by polishing the edge of a fully processed sample (on a SiO2 substrate) that contained poly-Si ridges to be used for waveguide transmission experiments. The cross section shows that the peripheral sections are partially removed by the etchant, although at a lower rate compared to a-Si, forming a trapezoid cross section. For clarity, the profile of the cross section is marked with a dash-line. The debris observed in the SEM image corresponds to wax residue from the polishing process.
The shape of the poly-Si superstructures corresponds to the laser-irradiated pattern, which suggests that more complex structures can be produced by laser writing. Here we present examples of curved and joined-up structures, such as a 20 $\mu$m radius ring placed next to a straight ridge in Fig. 6(d) and a Y-junction in Fig. 6(e).

Fig. 6. SEM images of laser written poly-Si superstructures; top view SEM images of etched ridges on (a) silica-on-silicon and (b) a SiO2 substrate. The two ridges were prepared using the same laser irradiation conditions. (c) Polished end face of a single ridge fabricated on SiO2. The dash-line outlines the trapezoid shape of the ridge cross section. More complex poly-Si ridge structures: (d) a ring (20 $\mu$m radius) next to a straight ridge, and (e) a Y-junction.

2.5. Ridge waveguides

A set of linear poly-Si ridge structures was produced on a SiO2 substrate to investigate waveguide transmission. The height of the ridges was $\sim$400 nm and the width $\sim$6 $\mu$m. The sample was edge-polished on both ends, using a standard mechanical polishing protocol, along a plane perpendicular to the direction of the ridges, to allow for "end-fire coupling" of light and for the detection of light emerging from the ridge structures. The sample was prepared using a laser spot size of 3 $\mu$m ($\times$10 objective) and a narrow range of laser intensities, restricted to those that produced the best crystallization results.

A free space optical arrangement was employed for the characterization of the waveguide transmission. This arrangement utilised a linearly polarised HeNe laser source emitting at $\lambda$=1523 nm, which was focused using a microscope objective onto one of the polished edges of the sample. A second microscope objective was used to image the output face of the sample, collecting the light that emerged from the ridge waveguide and forming an image of the near field mode shape on an IR camera. Only transmission of TM modes was observed in these ridge structures. The inset of Fig. 7 shows a near field intensity profile of the propagating mode captured using the IR camera, suggesting that propagation occurs in the fundamental mode, which is unexpected given the lateral dimension of the structure. The shape of the waveguide ridge could provide some explanation for this. Figure 6(c) indicates that the cross section of the ridge is trapezoidal, with the side walls sloping towards the substrate at an angle of $\sim$75° with respect to the normal, while the etch-resistive core of the ridge waveguide has a width of only $\sim$3 $\mu$m. This sloped sidewall suggests that light propagating in the waveguide experiences a graded index in the lateral direction. Additionally, higher order modes "see" more of the sidewall roughness and therefore should experience higher propagation loss. However, a full analysis of the waveguide transmission is beyond the scope of this report.

Fig. 7. Optical loss (in dB) as a function of the waveguide length. These measurements were obtained by successive polishing of the output face of the sample. The inset shows a near field image of the mode profile, captured on an IR camera.

The power transmission through the waveguides was measured by replacing the camera with a power meter head, and the transmission loss of these ridge waveguide structures was estimated using the "cut-back" method: the output facet of the sample was polished back at specific intervals and transmission measurements were performed for each length. A plot of the raw power loss (in dB) for four lengths of one of the waveguides is shown in Fig. 7. Fitting a straight line, as sketched below, indicates a transmission loss of 9.34 dB/cm for this particular waveguide, which was produced using a laser intensity of 6.3$\times$10$^{5}$ W/cm$^{2}$ and has an overall width of $\sim$6 $\mu$m.
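For readers unfamiliar with the cut-back analysis, the following minimal sketch shows the linear fit involved. The length/loss numbers are hypothetical, chosen only to be consistent with the $\sim$9.3 dB/cm slope reported in the text; they are not the measured data:

```python
import numpy as np

# Hypothetical cut-back data: waveguide length (cm) vs total insertion loss (dB).
# The intercept of the fit lumps together coupling and facet losses, which is
# why the cut-back method isolates the per-length propagation loss (the slope).
length_cm = np.array([1.6, 1.2, 0.8, 0.5])
loss_db = np.array([25.0, 21.2, 17.5, 14.7])

slope, intercept = np.polyfit(length_cm, loss_db, 1)
print(f"propagation loss = {slope:.2f} dB/cm, fixed loss = {intercept:.1f} dB")
# -> propagation loss ~ 9.35 dB/cm for these illustrative points
```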
Waveguides fabricated with intensities within a 10$\%$ variation of that laser intensity exhibited similar transmission loss figures. Significant deviation of the irradiating laser intensity from that optimum value resulted in much higher transmission loss.

3. Conclusions

We have presented a method for the production of poly-Si photonic microstructures that is based on localised laser-induced crystallization of low temperature deposited a-Si planar films combined with selective etching. This method is suitable for additive manufacturing and is compatible with CMOS processing. The crystalline contents of these superstructures have been discussed and their size has been investigated as a function of the laser irradiation conditions. This method, which does not require any preparatory steps such as pre-structuring, has been utilised for the fabrication of ridge waveguides presenting a transmission loss of $\sim$9 dB/cm, demonstrating the potential of this cost effective method for the production of structures with photonic functionality.

Funding

Engineering and Physical Sciences Research Council (EPSRC) (EP/M022757/1).

Acknowledgments

The authors thank Dr Ozan Aktas, Stuart MacFarquhar and Dr K. Ignatyev, along with Diamond Light Source staff, for their assistance on beamline I18 (SP17304) that contributed to the results presented here.

References and links

1. G. T. Reed, "The optical age of silicon," Nature 427(6975), 595–596 (2004).
2. B. Jalali and S. Fathpour, "Silicon photonics," J. Lightwave Technol. 24(12), 4600–4615 (2006).
3. Y. H. D. Lee, M. O. Thompson, and M. Lipson, "Deposited low temperature silicon GHz modulator," Opt. Express 21(22), 26688–26692 (2013).
4. Q. Fang, J. F. Song, S. H. Tao, M. B. Yu, G. Q. Lo, and D. L. Kwong, "Low loss (~6.45 dB/cm) sub-micron polycrystalline silicon waveguide integrated with efficient SiON waveguide coupler," Opt. Express 16(9), 6425–6432 (2008).
5. J. S. Orcutt, S. D. Tang, S. Kramer, K. Mehta, H. Li, V. Stojanović, and R. J. Ram, "Low-loss polysilicon waveguides fabricated in an emulated high-volume electronics process," Opt. Express 20(7), 7243–7254 (2012).
6. M. Douix, C. Baudot, D. Marris-Morini, A. Valéry, D. Fowler, P. Acosta-Alba, S. Kerdilès, C. Euvrard, R. Blanc, R. Beneyton, A. Souhaité, S. Crémer, N. Vulliet, L. Vivien, and F. Boeuf, "Low loss poly-silicon for high performance capacitive silicon modulators," Opt. Express 26(5), 5983–5990 (2018).
7. Y. Franz, A. F. J. Runge, S. Z. Oo, G. Jimenez-Martinez, N. Healy, A. Khokhar, A. Tarazona, H. M. H. Chong, S. Mailis, and A. C. Peacock, "Laser crystallized low-loss polycrystalline silicon waveguides," Opt. Express 27(4), 4462–4470 (2019).
8. J. Michaud, R. Rogel, T. Mohammed-Brahim, and M. Sarret, "CW argon laser crystallization of silicon films: Structural properties," J. Non-Cryst. Solids 352(9-20), 998–1002 (2006).
9. N. Healy, S. Mailis, N. M. Bulgakova, P. J. A. Sazio, T. D. Day, J. R. Sparks, H. Y. Cheng, J. V. Badding, and A. C.
Peacock, "Extreme electronic bandgap modification in laser-crystallized silicon optical fibres," Nat. Mater. 13(12), 1122–1127 (2014). [CrossRef] 10. R. E. Schropp, "Present status of micro- and polycrystalline silicon solar cells made by hot-wire chemical vapor deposition," Thin Solid Films 451-452, 455–465 (2004). [CrossRef] 11. P. Brogueira, J. Conde, S. Arekat, and V. Chu, "Low filament temperature deposition of a-Si:H by hot-wire chemical vapor deposition," J. Appl. Phys. 78(6), 3776–3783 (1995). [CrossRef] 12. K.-Y. Chan, D. Knipp, A. Gordijn, and H. Stiebig, "Influence of crystalline volume fraction on the performance of high mobility microcrystalline silicon thin-film transistors," J. Non-Cryst. Solids 354(19-25), 2505–2508 (2008). [CrossRef] 13. J. Filik, P. May, S. Pearce, R. Wild, and K. Hallam, "XPS and laser Raman analysis of hydrogenated amorphous carbon films," Diamond Relat. Mater. 12(3-7), 974–978 (2003). [CrossRef] 14. A. Abbadie, S. W. Bedell, J. M. Hartmann, D. K. Sadana, F. Brunier, C. Figuet, and I. Cayrefourcq, "Study of HCl and secco defect etching for characterization of thick sSOI," J. Electrochem. Soc. 154(8), H713–H719 (2007). [CrossRef] 15. N. Healy, M. Fokine, Y. Franz, T. Hawkins, M. Jones, J. Ballato, A. C. Peacock, and U. J. Gibson, "Co2 laser-induced directional recrystallization to produce single crystal silicon-core optical fibers with low loss," Adv. Opt. Mater. 4(7), 1004–1008 (2016). [CrossRef] Article Order G. T. Reed, "The optical age of silicon," Nature 427(6975), 595–596 (2004). [Crossref] B. Jalali and S. Fathpour, "Silicon photonics," J. Lightwave Technol. 24(12), 4600–4615 (2006). Y. H. D. Lee, M. O. Thompson, and M. Lipson, "Deposited low temperature silicon ghz modulator," Opt. Express 21(22), 26688–26692 (2013). Q. Fang, J. F. Song, S. H. Tao, M. B. Yu, G. Q. Lo, and D. L. Kwong, "Low loss ( 6.45db/cm) sub-micron polycrystalline silicon waveguide integrated with efficient sion waveguide coupler," Opt. Express 16(9), 6425–6432 (2008). J. S. Orcutt, S. D. Tang, S. Kramer, K. Mehta, H. Li, V. Stojanović, and R. J. Ram, "Low-loss polysilicon waveguides fabricated in an emulated high-volume electronics process," Opt. Express 20(7), 7243–7254 (2012). M. Douix, C. Baudot, D. Marris-Morini, A. Valéry, D. Fowler, P. Acosta-Alba, S. Kerdilès, C. Euvrard, R. Blanc, R. Beneyton, A. Souhaité, S. Crémer, N. Vulliet, L. Vivien, and F. Boeuf, "Low loss poly-silicon for high performance capacitive silicon modulators," Opt. Express 26(5), 5983–5990 (2018). Y. Franz, A. F. J. Runge, S. Z. Oo, G. Jimenez-Martinez, N. Healy, A. Khokhar, A. Tarazona, H. M. H. Chong, S. Mailis, and A. C. Peacock, "Laser crystallized low-loss polycrystalline silicon waveguides," Opt. Express 27(4), 4462–4470 (2019). J. Michaud, R. Rogel, T. Mohammed-Brahim, and M. Sarret, "Cw argon laser crystallization of silicon films: Structural properties," J. Non-Cryst. Solids 352(9-20), 998–1002 (2006). N. Healy, S. Mailis, N. M. Bulgakova, P. J. A. Sazio, T. D. Day, J. R. Sparks, H. Y. Cheng, J. V. Badding, and A. C. Peacock, "Extreme electronic bandgap modification in laser-crystallized silicon optical fibres," Nat. Mater. 13(12), 1122–1127 (2014). R. E. Schropp, "Present status of micro- and polycrystalline silicon solar cells made by hot-wire chemical vapor deposition," Thin Solid Films 451-452, 455–465 (2004). P. Brogueira, J. Conde, S. Arekat, and V. Chu, "Low filament temperature deposition of a-Si:H by hot-wire chemical vapor deposition," J. Appl. Phys. 78(6), 3776–3783 (1995). K.-Y. 
Exploring the effects of increasing underutilized crops on consumers' diets: the case of millet in Uganda

Cesar Revoredo-Giha,1 Luiza Toma,1 Faical Akaichi1 & Ian Dawson2

Agricultural and Food Economics, volume 10, Article number: 1 (2022)

Abstract

Known in the literature as underutilized, neglected or orphan crops, these crops have been cited as having the potential to improve food and nutritional security. The literature also highlights, however, that consumers in developing countries are increasingly abandoning the traditional diets that these crops are part of, replacing them with western diets. In this context, the purpose of this paper is to investigate the consumption and nutritional implications of expanding the participation of underutilized crops in current diets. This was done using a modified version of the microeconomic consumer problem, augmented with a linear constraint following the generalized rationing theory found in the economics literature. The method was applied to the case study of the consumption of millet (finger millet, botanical name: Eleusine coracana) by rural, urban-poor and urban-affluent Ugandan socioeconomic groups. The results indicated that millet could contribute to improving the intake of macronutrients and of some micronutrients, though the overall picture is complex. However, under current preferences, and given its demand inelasticity, achieving a substantial increase in the quantity of millet in the diet will require a significant reduction in its price. Otherwise, the net impact on nutrition as measured by the mean adequacy ratio will be only slightly positive for rural and urban-poor households. Our findings indicate that supply-side initiatives aimed at increasing the productivity of underutilized crops (reducing crop price) are likely to produce disappointing results in restoring their importance unless accompanied by specific interventions to expand demand.

Green Revolution research focused on optimising the yields of a few staple crops to support the production of sufficient affordable calories for humans (McMullin et al. 2021). This, however, occurred at the expense of research into the yield, quality improvement and resilience of so-called underutilized, neglected or orphan crops, which provide a better supply of particular nutrients such as essential amino acids, minerals and fibre (FAO 2021). Some of these underutilized crops have the capacity to be used in the management/feeding of farmed animals, in food processing and in the wider food system (e.g., Qaim 1999; Dawson et al. 2009; ATDF 2009). Additionally, they can be produced in more sustainable ways than major staple crops, especially when considering the external costs of production to the environment (Dawson et al. 2019; AOCC 2021).

The current strategy on underutilized crops, as implemented for instance by the African Orphan Crops Consortium (AOCC 2021), focuses on their genetic improvement to increase their productivity and quality and their resilience to climate change. This is based on the assumption that these efforts will translate into higher crop production, and then consumption, diversity. However, Sibhatu et al. (2015) found that in developing countries greater production diversity does not necessarily correspond with greater dietary diversity.
The connection between production and consumption may be most tenuous in urban areas, due to the adoption of western diets (e.g., Hawkes 2006; Moodie et al. 2013) that displace traditional ones (e.g., Worku et al. 2017). The increased power of multinational food companies, government subsidy patterns that support major staple crop production, farm mechanisation, the consolidation of plant breeding companies, and limited investments in breeding programmes for underutilized crops have all had a role to play in changing dietary trends toward more staple crop consumption (Khoury et al. 2014).

Action on the consumption of underutilized crops or of products derived from them (the demand side) is as important as production interventions. It helps ensure producers a fair and sustainable return for their products by connecting them with markets, which has been shown to be an effective tool against poverty (African Development Bank Group 2016). In addition, responding to consumer preferences can be seen as a useful approach to support healthy diets in situations where consumers face complex choices. This is needed in Africa, for example, as consumer markets expand with ultra-processed products (Moodie et al. 2013). These products are displacing more traditional dietary elements consisting of a range of fresh and perishable whole or minimally processed foods, a fact that could be associated with increasing levels of non-communicable diseases (Global Nutrition Report 2018).

The purpose of this paper is to study two aspects of the consumption of underutilized crops: first, the implications of expanding their presence in the diet, taking consumers' current preferences into account through demand elasticities; and second, the resulting changes in consumption and nutrition for the entire diet (i.e., not just the impact coming solely from the additional crop). This was done using a version of the microeconomic consumer problem (Jackson 1991) modified by augmentation with a linear constraint (Irz et al. 2015) that enforces a minimum requirement of the underutilized crop in the diet.

An interesting methodological output of the work is the computation of the shadow price associated with the underutilized crop constraint. This allows us to understand how much the price of the crop needs to fall in order to increase its presence substantially in the diet. If the crop price needs to decrease significantly (comparing the original price with the shadow price), this indicates that major supply-side measures would be needed to significantly increase crop productivity. In this context, investing in ways to expand demand may achieve greater impact than supply-side interventions.

Another contribution of the paper is to assess the share of the underutilized crop in the diet. Most support to try and increase underutilized crop consumption has come from "supply side" researchers (e.g., Dawson et al. 2019; Mayes et al. 2011; Cheng et al. 2017) focusing on improving the characteristics and consumption of individual crops in isolation. But increasing the consumption of any one underutilized crop will have broader implications for the composition of diets and for nutrient intake more generally. This could happen, for example, through the relationship with other food products and the need to meet a budget constraint. Therefore, an evaluation of the nutritional advantages of orphan crops must be done in the context of the diet as a whole. Here, the above methods are applied to the case study of the consumption of millet in Uganda.
Most of the millet planted in Uganda is finger millet, Eleusine coracana, but for simplicity we use the shorter term 'millet' in this paper. We focus on the crop because it was identified as a priority for research by the African Orphan Crops Consortium (AOCC 2021). In addition, cereals including millet contribute over 40% of total direct human dietary calorie intake in Eastern Africa (Gierend and Orr 2015). Millet is also one of the mandate crops of ICRISAT, which promotes its production and consumption in this region (Gierend and Orr 2015; Orr et al. 2020). The selection of Uganda was due to the contraction in the apparent consumption of millet over the last decades, which has averaged 4.7% per year since 1968 according to FAOSTAT figures.Footnote 1 The Government of Uganda is nevertheless interested in expanding the crop's production and consumption. This is illustrated by work at the Mukono Zonal Agricultural Research Institute (Muzardi), which researches millet varieties originating from China as part of a Uganda–China partnership (GlobalFoodMate 2013). In addition, millet is one of the cereals considered in Uganda's National Grain Policy (Uganda Ministry of Trade, Industry and Cooperatives 2015).

Our paper starts with a review of the consumption of millet in the Sub-Saharan Africa region, and then briefly describes broader food consumption patterns in Uganda. This is followed by a presentation of the methods and data used to evaluate the implications of increasing the consumption of millet in Uganda, a discussion of our findings, and our conclusions.

Consumption of millet in Sub-Saharan Africa

Figure 1 shows the apparent per capita consumption of millet, maize and wheat for the time series 1961–2013 in the six Sub-Saharan countries of Ethiopia, Kenya, Malawi, Nigeria, Tanzania and Uganda (a recent review on why to invest in research and development for sorghum and millets in East and Southern Africa can be found in Orr et al. 2020).

Fig. 1 Apparent per capita per day consumption of millet, maize and wheat and their products in selected Sub-Saharan African countries (Source: Based on FAOSTAT data)

Despite the crop's role in African diets, all of these countries except Ethiopia and Malawi show decreasing trends in consumption. Moreover, with the exception of Nigeria, much less millet is now (to 2013) consumed than maize and wheat.

The number of studies analysing the actual consumption of millet in Sub-Saharan Africa is limited. Gierend and Orr (2015) focussed on the demand for millet and sorghum in Ethiopia, Kenya, Tanzania and Uganda. They found that while per capita consumption overall was static, there were differences between countries. In Kenya and Tanzania, consumption per capita did not change between 2000 and 2013; in Ethiopia, annual consumption rose from 4.5 to 8.0 kg/capita; and in Uganda consumption fell from 29 to 5 kg/capita. Gierend and Orr (2015) also found that, across countries, consumption averaged 7.2 kg/capita in rural areas and 3.7 kg/capita in urban ones. The rural bias was strongest in Ethiopia, at 10.6 and 3.3 kg/capita for rural and urban areas, respectively. But urban demand remained considerable in each country, in absolute terms amounting to 46,000 tonnes in Tanzania, 43,000 tonnes in Uganda, 42,000 tonnes in Ethiopia, and 21,000 tonnes in Kenya, on an annual basis. Gierend and Orr (2015) found that the consumer demand for millet rose with income group in all four countries.
They attributed this to consumers' broad appreciation of millet's taste and nutritional value, with millet not considered an inferior good. The evidence was strongest in Tanzania, where millet consumption averaged 10 kg/capita in a high-income group compared to 3.2 kg/capita in a low-income group. In Kenya, the difference in consumption between high- and low-income groups was smallest, at 1.7 and 1.6 kg/capita, respectively.

Consumption of millet in Uganda's diet

This section starts with a brief description of Uganda's broader food consumption patterns. It then presents the methods and data used to analyse an increase of millet consumption in the nation's diet.

Uganda food consumption patterns

National Panel Household Surveys (five waves conducted between 2009/10 and 2015/16), Demographic and Health Surveys, and data from the Agricultural Technology and Agribusiness Advisory Services provide information on Ugandan food consumption. From these sources, the Ugandan National Planning Authority (2017) calculated that four out of ten Ugandans do not meet the minimum required energy intake of 2200 kcal per person per day, consuming 1860 kcal per day on average instead. Despite an overall reduction in child and adult undernutrition in recent years (World Food Programme 2013), 16% of Ugandan households as a whole remained food insecure in 2015/16 (compared to 20% in 2009/2010; Ugandan National Planning Authority 2017). Eastern Uganda, however, showed an increase in the prevalence of food insecurity between the earlier and later dates, rising from 33 to 46%. This was likely triggered by expansion in the production of crops such as sugar cane and rice as cash crops, along with decreasing farm sizes (Ugandan National Planning Authority 2017). Northern Uganda presented lower caloric intakes at the later of the two dates, due primarily to seasonal food deficits aggravated by drought (Ugandan National Planning Authority 2017; Uganda Bureau of Statistics and World Food Programme 2013).

Despite worsening food security in the Eastern and Northern Uganda regions, they have higher dietary diversity than other parts of the country in terms of household consumption of food groups. At a national level, dietary diversity for most Ugandans is below that recommended, though it did increase between 2009/10 and 2015/16 (Ugandan National Planning Authority 2017).

The most important staples in terms of caloric intake in Uganda are matooke, cassava and maize, followed by sweet potatoes and beans, with rice and wheat gaining prominence for urban high-income households. These staples are complemented by groundnut, sorghum, millet, Irish potatoes, peas, simsim (sesame) and green leafy vegetables; fruits, meat and milk are consumed on average twice a week (Ugandan National Planning Authority 2017). Overall, 69% of Ugandans' food energy is derived from staples (Uganda Bureau of Statistics and World Food Programme 2013). The proportion varies between rural and urban areas, being 71% and 59%, respectively. In Uganda, food insecurity and undernutrition are strongly correlated with monetary poverty; rural low-income households are therefore more likely to be food energy deficient, experience low dietary diversity, and depend on staples for energy (Uganda Bureau of Statistics and World Food Programme 2013). This synopsis of the main facets of the Ugandan diet highlights the differentiation by rural/urban location and income as key factors, and this is reflected in our empirical analysis.
Methods

To evaluate the effects of increasing the consumption of millet in the Ugandan diet we applied the model of Irz et al. (2015, 2016). They adapted the work by Jackson (1991) on generalized rationing theory, applying it to the case of linear constraints and extending it by deriving the comparative statics necessary to empirically estimate healthy diets compatible with consumer preferences. For interested readers, Irz et al. (2015) compared the approach with other methods (e.g., nutrition linear programming models, demand systems) and explored its limitations (pp. 189–90).

The starting point of the model is the neoclassical consumer theory, which assumes that an individual chooses the consumption of \(H\) goods in quantities \(x=\left(x_1,\ldots,x_H\right)\) to maximize a strictly increasing, strictly quasi-concave, twice differentiable utility function \(U=U\left(x_1,\ldots,x_H\right)\), subject to a linear budget constraint \(p\cdot x\le M\), where \(p\) is a price vector and \(M\) denotes income. In addition, departing from the standard model, the consumer faces \(N\) additional linear constraints. These \(N\) constraints could, for instance, be maximum dietary intakes of nutrients such as salt, total fat, saturated fat or free sugars. Their linearity implies an assumption of constant nutritional coefficients for any food \(i\) and nutrient \(n\), the values of which are known from food composition tables. The constraints could also be food-based, such as recommendations on the consumption of fruit and vegetables. In this paper, an additional constraint related to the quantity of millet in the diet is considered. The additional \(N\) constraints are expressed as in Eq. (1):

$$\sum_{i=1}^{H} a_i^n x_i \le r_n,\quad n=1,\ldots,N$$

The method to solve the modified utility maximisation problem relies on the notion of shadow prices, i.e., prices that would have to prevail for the unconstrained individual to choose the same bundle of goods as chosen when adding the constraints of Eq. (1). Duality theory is used to relate the constrained and unconstrained problems in order to identify the properties of demand functions under additional constraints. Let the compensated (Hicksian) demand functions of the standard problem be \(h_i\left(p,U\right)\), and those of the constrained model \(\tilde{h}_i\left(p,U,A,r\right)\), where \(A\) is the \(\left(N\times H\right)\) matrix of coefficients in (1), and \(r\) the N-vector of maximum amounts. By definition of the vector of shadow prices \(\tilde{p}\), the equality holds as in Eq. (2):

$$\tilde{h}_i\left(p,U,A,r\right)=h_i\left(\tilde{p},U\right)$$

The minimum expenditure function of the constrained problem \(\tilde{C}\left(p,U,A,r\right)\) can be related to the ordinary expenditure function \(C\left(p,U\right)\) using Eq.
(3):

$$\tilde{C}\left(p,U,A,r\right)=\sum_{j=1}^{H} p_j\,\tilde{h}_j\left(p,U,A,r\right)=C\left(\tilde{p},U\right)+\sum_{j=1}^{H}\left(p_j-\tilde{p}_j\right)h_j\left(\tilde{p},U\right)$$

The constrained regime is fully characterised by the combination of the unconstrained demand functions, the unconstrained expenditure function and the shadow prices. The shadow prices can be calculated based on the principle that they minimise expenditure subject to the additional constraints. Thus, using Eq. (3), the Lagrange function of the constrained problem is given by Eq. (4):

$$L=C\left(\tilde{p},U\right)+\sum_{j=1}^{H}\left(p_j-\tilde{p}_j\right)h_j\left(\tilde{p},U\right)+\sum_{n=1}^{N}\mu_n\left(r_n-\sum_{j=1}^{H}a_j^n h_j\left(\tilde{p},U\right)\right)$$

Assuming non-satiation, so that all the shadow prices are positive, the Kuhn-Tucker conditions derived from Eq. (4) are given in Eqs. (5) and (6):

$$\frac{\partial L}{\partial \tilde{p}_i}=\frac{\partial C}{\partial \tilde{p}_i}-h_i+\sum_{j=1}^{H}\left(p_j-\tilde{p}_j\right)\frac{\partial h_j}{\partial \tilde{p}_i}-\sum_{n=1}^{N}\mu_n\left(\sum_{j=1}^{H}a_j^n\frac{\partial h_j}{\partial \tilde{p}_i}\right)=0,\quad i=1,\ldots,H$$

$$\frac{\partial L}{\partial \mu_n}=\mu_n\left(r_n-\sum_{j=1}^{H}a_j^n h_j\right)=0,\qquad \mu_n\ge 0,\quad n=1,\ldots,N$$

Using Shephard's lemma and denoting \(\partial h_i/\partial p_j\) (i.e., the Slutsky term) by \(s_{ij}\), Eq. (5) becomes Eq. (7):

$$\sum_{j=1}^{H}\left[\left(p_j-\tilde{p}_j\right)-\sum_{n=1}^{N}\mu_n a_j^n\right]s_{ji}=0,\quad i=1,\ldots,H$$

For Eq. (7) to hold, it is necessary for the term in brackets to equal zero. Assuming all the \(N\) constraints are binding, the shadow price problem reduces to Eqs. (8):

$$\tilde{p}_i=p_i-\sum_{n=1}^{N}\mu_n a_i^n,\quad i=1,\ldots,H;\qquad \sum_{i=1}^{H}a_i^n h_i\left(\tilde{p},U\right)=r_n,\quad n=1,\ldots,N$$

Due to its nonlinear nature, Eq. (8) cannot be solved analytically; however, Irz et al. (2015) provide a method by which the solution can be computed iteratively. In effect, they simulate the impact of adopting recommendations in a Marshallian context, i.e., holding income (or total expenditure) and prices constant. The structure of the solution procedure is as follows:

(a) Given a percentage change in the level of the additional constraints (i.e., \(r_n\)), the changes in Hicksian demands are calculated (\(\partial h/\partial r\)).

(b) The Hicksian quantities thus obtained and the original prices are then combined to calculate the compensating variation (CV) associated with the imposition of the additional constraints.
The CV is given by \(C\left(p,U\right)-\tilde{C}\left(p,U,A,r\right)=-\sum_{n=1}^{N}\sum_{i=1}^{H}p_i\left(\partial h_i/\partial r_n\right)\).

(c) The CV, which hypothetically would allow the consumer to maintain their utility level, is then removed to calculate the corresponding changes in the Marshallian demand (i.e., \(\Delta x\)), as in the equation \(\Delta x=\Delta h=\tilde{h}\cdot \eta \cdot \frac{CV}{p\cdot \tilde{h}}\).

(d) Because the additional constraint is directly imposed on the Hicksian demands rather than the Marshallian ones, there is no guarantee that the diets calculated in step (c) will satisfy the constraints. Therefore, the constraints need to be evaluated using the resulting Marshallian demands. If this Marshallian solution satisfies the constraints, the procedure is completed. If the solution does not satisfy the recommendation, the changes in the constraints need to be adjusted in the first step and the procedure run again. A numerical sketch of this loop is given at the end of this section.

In addition to solving the consumer problem of including higher quantities of millet in the diet (through increasing the recommended quantity of millet), this study also estimated the change in the nutritional value of the diet (in contrast to the nutritional value of millet alone). This was done in two steps: first, once the new consumption was computed, it was transformed into its nutritional components using nutritional coefficients. Second, in order to summarise the nutritional results, the Mean Adequacy Ratio (MAR) was computed. This estimates the percentage of mean daily intake of beneficial nutrients, with 100% representing a diet that conforms to all nutritional requirements (Vieux et al. 2013). The nutrients used here were chosen based on data availability; they included calcium, iron, zinc, vitamin C, thiamine (vitamin B1), riboflavin (vitamin B2), niacin (vitamin B3), vitamin B6, folate and vitamin A. Note that the components of the MAR are truncated at 100, so surpluses of any nutrient cannot compensate for the lack of another. The formula of the MAR is given by Eq. (9), where \(c_i\) is the intake of nutrient \(i\) and \(R_i\) its recommended intake.Footnote 2

$$\mathrm{MAR}=\frac{1}{10}\times \sum_{i=1}^{10}\frac{c_i}{R_i}\times 100$$
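The MAR of Eq. (9), with the truncation at 100 described above, can be computed directly. In the following minimal sketch, the intakes and recommendations are hypothetical values for the ten listed nutrients, not the study's data:

```python
import numpy as np

def mean_adequacy_ratio(intakes, recommendations):
    """MAR of Eq. (9): each ratio c_i / R_i is truncated at 1, so a surplus
    of one nutrient cannot compensate for the shortfall of another."""
    ratios = np.minimum(np.asarray(intakes, float) / np.asarray(recommendations, float), 1.0)
    return 100.0 * ratios.mean()

# Hypothetical daily intakes and recommended intakes for the ten nutrients
# in the order: Ca, Fe, Zn, vitamin C, B1, B2, B3, B6, folate, vitamin A.
c = [800, 10, 9, 40, 1.0, 1.1, 13, 1.2, 350, 550]
R = [1000, 14, 11, 45, 1.2, 1.3, 16, 1.3, 400, 600]
print(f"MAR = {mean_adequacy_ratio(c, R):.1f}%")
```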
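To make steps (a)–(d) concrete, the following is a minimal numerical sketch of the iterative loop, written by us for illustration rather than taken from Irz et al. (2015): it linearises the Hicksian response through a Slutsky matrix and approximates the CV to second order, both of which are simplifications of the exact expenditure-function calculation used in the paper. All prices, quantities and elasticities are invented placeholders, not the Ugandan estimates; the printed shadow price illustrates how far the millet price would need to fall for the constraint to hold under unconstrained behaviour:

```python
import numpy as np

# Illustrative three-good economy (millet, other food, non-food).
p = np.array([1.0, 1.2, 1.0])            # market prices
x = np.array([0.05, 0.30, 0.65])         # baseline (Marshallian) quantities
eta = np.array([0.6, 0.8, 1.1])          # expenditure elasticities
# Symmetric, negative semi-definite Slutsky matrix with S @ p = 0 (homogeneity).
S = np.array([[-0.040, 0.010, 0.028],
              [ 0.010, -0.100, 0.110],
              [ 0.028, 0.110, -0.160]])

target = 1.20 * x[0]                     # constraint: +20% millet in the diet
dp = (target - x[0]) / S[0, 0]           # initial shadow-price change (negative)
for _ in range(100):
    dp_vec = np.array([dp, 0.0, 0.0])    # only millet's shadow price moves
    dh = S @ dp_vec                      # step (a): Hicksian responses
    h_tilde = x + dh
    # step (b): CV of the constraint, approximated to second order around the
    # baseline; non-positive because S is negative semi-definite.
    CV = 0.5 * dp_vec @ S @ dp_vec
    # step (c): remove the CV to recover the Marshallian demands.
    x_new = h_tilde + h_tilde * eta * (CV / (p @ h_tilde))
    # step (d): check the millet constraint on the Marshallian solution.
    if x_new[0] >= target - 1e-12:
        break
    dp += (target - x_new[0]) / S[0, 0]  # tighten the constraint and iterate

print(f"shadow price of millet: {p[0] + dp:.4f} (market price {p[0]:.2f})")
print("new Marshallian diet:", np.round(x_new, 4))
```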
To summarise, the method extends the theory of the consumer under rationing and shows that adjustments in consumption can be estimated by combining data on food consumption, price (Hicksian and Marshallian) and expenditure elasticities, and food composition. The next section presents the application of this method to the case of increasing the consumption of millet in the Ugandan diet.

Data and implementation

Given the varying consumption characteristics of Uganda's population, the use of average elasticities, quantities, prices and food composition information is not the best approach; we therefore considered three differentiated cases: rural consumers, poor urban consumers (urban-poor) and affluent urban consumers (urban-affluent).

Most of the required information (demand elasticities and expenditure) for the three aforementioned groups was obtained from Boysen (2016), who estimated unconditional (Marshallian and Hicksian) own-price elasticities and income (expenditure) elasticities for Uganda by expenditure quintile, using a two-stage budgeting demand system with one non-food and 14 food items based on the 2012/2013 Ugandan National Household Survey (UNHS), a nationally representative survey of 6887 households.Footnote 3 In the first stage, households allocate their consumption budget to food and non-food items; in the second stage, they allocate the food budget to 14 different item groups. The three consumer groups were then established using the average across rural households for the rural group, the average of the lower three urban quintiles for the poor urban group, and the average of the top two urban quintiles for the affluent urban group. The classification used in Boysen (2016) does not fully fit the purpose of this study because millet is aggregated with other cereals in the group "other cereals". To address this, an additional budgeting stage was added by estimating several conditional demand systems and using the method in Carpentier and Guyomard (2001) to compute the unconditional third budgeting stage.Footnote 4 The structure of the final demand system, which comprises a total of 28 categories, is presented in Fig. 2.

Uganda augmented demand system. Source: Based on Boysen (2016)

The nutritional analysis (e.g., for the computation of the MAR) requires actual quantities and prices. A limitation of the Living Standard Measurement Surveys for Uganda and many other countries is that the recorded quantities are not uniform (e.g., quantities are recorded in the measurement scale provided by the interviewee, which can be small, medium or large buckets, heaps or clusters, among others), whilst the nutritional information is provided for specific weights (e.g., per 100 g).Footnote 5 The approach adopted was to use the retail prices by product provided by the Uganda Bureau of Statistics (UBOS) (2013), which cover the period analysed by Boysen (2016) and six price collection points in the country. These prices, expressed in Uganda Shillings (UGX) per metric unit, were used to obtain the quantities consumed within each of the three groups analysed (Table 1).

Table 1 Uganda—consumption by socioeconomic group. Sources: Boysen (2016), UBOS (2013)

The method of Irz et al. (2015) requires the full matrices of price elasticities, while Boysen (2016) provides only own-price and expenditure elasticities. Cross-price elasticities were therefore calibrated using the approach of Beghin et al. (2004), which allows their computation within a demand system that is theoretically consistent with consumer theory. The Beghin et al. (2004) method is a flexible calibration technique for partial demand systems, combining the developments in incomplete demand systems (LaFrance and Hanemann 1989; LaFrance 1998) with a set of restrictions conditioned on the available elasticity estimates. The technique accommodates various degrees of knowledge of cross-price elasticities, satisfies curvature restrictions, and allows the recovery of an exact welfare measure for policy analysis (i.e., the equivalent variation). An overview of the calibration procedure is provided below; more detail is available in Beghin et al. (2004).
The calibration approach builds on the Linquad structure (LaFrance et al. 2002) as the foundation for the partial demand system. The Linquad demand system is generated from the following expenditure function, \(\mathrm{C}\left(\mathrm{p},{\mathrm{p}}_{\mathrm{z}},\uptheta \right)\)Footnote 6 (10):

$$\mathrm{C}\left(\mathrm{p},{\mathrm{p}}_{\mathrm{z}},\uptheta \right)=\mathrm{p}^{\prime}\upvarepsilon+\frac{1}{2}\mathrm{p}^{\prime}\mathrm{Vp}+\updelta\left({\mathrm{p}}_{\mathrm{z}}\right)+\uptheta\left({\mathrm{p}}_{\mathrm{z}},\mathrm{U}\right){\mathrm{e}}^{{\upchi}^{\prime}\mathrm{p}}$$

where \(\updelta\left({\mathrm{p}}_{\mathrm{z}}\right)\) is an arbitrary real-valued function of \({\mathrm{p}}_{\mathrm{z}}\), i.e., the prices of all the other goods not included in the incomplete demand system; \(\uptheta\left({\mathrm{p}}_{\mathrm{z}},\mathrm{U}\right)\) is the constant of integration, which is increasing in U; and \(\upchi\), \(\upvarepsilon\) and \(\mathrm{V}\) are the vectors and, respectively, the matrix of parameters to be recovered in the calibration. Applying Shephard's lemma to Eq. (10), the Hicksian demand function is obtained (11):

$$\mathrm{h}=\upvarepsilon+\mathrm{Vp}+\upchi\left[\uptheta\left({\mathrm{p}}_{\mathrm{z}},\mathrm{U}\right){\mathrm{e}}^{{\upchi}^{\prime}\mathrm{p}}\right]$$

The integrating factor, \(\uptheta\left({\mathrm{p}}_{\mathrm{z}},\mathrm{U}\right){\mathrm{e}}^{{\upchi}^{\prime}\mathrm{p}}\), makes the demand system an exact system of partial differential equations. The Linquad expenditure function (10) provides a complete solution to this system and represents the exhaustive class of expenditure functions generating demands for quantities x that are linear in total income (M) and linear and quadratic in the prices of x. Solving the expenditure function (10) for \(\uptheta\left({\mathrm{p}}_{\mathrm{z}},\mathrm{U}\right){\mathrm{e}}^{{\upchi}^{\prime}\mathrm{p}}\) and replacing expenditure with income M yields the Linquad Marshallian demand functions (12):

$$\mathrm{x}=\upvarepsilon+\mathrm{Vp}+\upchi\left(\mathrm{M}-\upvarepsilon^{\prime}\mathrm{p}-\frac{1}{2}\mathrm{p}^{\prime}\mathrm{Vp}-\updelta\left({\mathrm{p}}_{\mathrm{z}}\right)\right)$$

The Marshallian price elasticities (\(\upvarepsilon_{\mathrm{ij}}\)) are then given by (13), where \(\upnu_{\mathrm{ij}}\) denotes the elements of \(\mathrm{V}\):

$$\upvarepsilon_{\mathrm{ij}}=\left[\upnu_{\mathrm{ij}}-\upchi_{\mathrm{i}}\left(\upvarepsilon_{\mathrm{j}}+\sum_{\mathrm{k}=1}^{\mathrm{H}}\upnu_{\mathrm{jk}}\mathrm{p}_{\mathrm{k}}\right)\right]\mathrm{p}_{\mathrm{j}}/\mathrm{x}_{\mathrm{i}},\quad \mathrm{i}=1,\ldots,\mathrm{H};\ \mathrm{j}=1,\ldots,\mathrm{H}$$

The Slutsky matrix (S) is given by (14):

$$\mathrm{S}=\mathrm{V}+\left(\mathrm{M}-\upvarepsilon^{\prime}\mathrm{p}-\frac{1}{2}\mathrm{p}^{\prime}\mathrm{Vp}-\updelta\left({\mathrm{p}}_{\mathrm{z}}\right)\right)\upchi\upchi^{\prime}$$

The Hicksian elasticities (\(\upvarepsilon_{\mathrm{ij}}^{\mathrm{h}}\)) then follow from the Slutsky matrix as (15):

$$\upvarepsilon_{\mathrm{ij}}^{\mathrm{h}}=\left[\upnu_{\mathrm{ij}}+\upchi_{\mathrm{i}}\upchi_{\mathrm{j}}\left(\mathrm{M}-\upvarepsilon^{\prime}\mathrm{p}-\frac{1}{2}\mathrm{p}^{\prime}\mathrm{Vp}-\updelta\left({\mathrm{p}}_{\mathrm{z}}\right)\right)\right]\mathrm{p}_{\mathrm{j}}/\mathrm{x}_{\mathrm{i}},\quad \mathrm{i}=1,\ldots,\mathrm{H};\ \mathrm{j}=1,\ldots,\mathrm{H}$$

The necessary information set for the calibration is as follows: income and own-price elasticity estimates, levels of the Marshallian demands x, total income (or expenditure M), and prices.
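The elasticity formulas can be checked numerically. The sketch below evaluates Eqs. (12) to (15) for a hypothetical two-good Linquad system; all parameter values (\(\upvarepsilon\), \(\mathrm{V}\), \(\upchi\), \(\updelta\)) are placeholders chosen only so that the Slutsky matrix is negative definite, not the calibrated Ugandan values.

```python
import numpy as np

# Minimal sketch of Eqs. (12)-(15) for a hypothetical two-good system; all
# parameter values are placeholders, not the calibrated Ugandan estimates.
eps     = np.array([1.0, 2.0])            # intercept vector (epsilon)
V       = np.array([[-0.8,  0.3],
                    [ 0.3, -0.9]])        # symmetric price-response matrix
chi     = np.array([0.03, 0.06])          # income responses; the calibration
                                          # recovers them as chi_i = x_i*eta_i/M
p       = np.array([2.0, 1.5])
M       = 100.0                           # income (total expenditure)
delta_z = 40.0                            # delta(p_z): spending outside the system

resid = M - eps @ p - 0.5 * p @ V @ p - delta_z   # recurring Linquad term
x = eps + V @ p + chi * resid                     # Marshallian demands, Eq. (12)

E_m = (V - np.outer(chi, eps + V @ p)) * p / x[:, None]   # Eq. (13)
S   = V + resid * np.outer(chi, chi)                      # Eq. (14)
E_h = S * p / x[:, None]                                  # Eq. (15)
eta = chi * M / x                                         # implied income elasticities

print(np.round(E_m, 3), np.round(E_h, 3), np.round(eta, 3), sep="\n")
```

In an actual calibration the roles are reversed: the elasticities and quantities are the data, and \(\upvarepsilon\), \(\mathrm{V}\) and \(\upchi\) are solved for, as the sequential procedure below describes.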
The information set assumes that no cross-price elasticities are available and that they therefore need to be computed. The calibration involves recovering the elements of the H-vectors \(\upchi\) and \(\upvarepsilon\), together with the elements of the \(\mathrm{H}\times \mathrm{H}\) matrix \(\mathrm{V}\) (i.e., the \(\upnu_{\mathrm{ij}}\)). The calibration imposes symmetry and negative semi-definiteness on the Hessian of the expenditure function. Homogeneity of degree 1 in prices of the expenditure function is imposed by deflating prices by a consumer price index, although homogeneity plays no role in the recovery of parameters in the calibration procedure. The calibration is done sequentially. First, point estimates of the derivatives of demand with respect to income, \(\upchi\), are obtained from the known income elasticity estimates as \(\upchi_{\mathrm{i}}=\mathrm{x}_{\mathrm{i}}\upeta_{\mathrm{i}}/\mathrm{M}\). The income response parameters are then substituted into (12) and (13). Next, price responses are recovered from the point estimates corresponding to the available price elasticities, evaluated at the reference level of the data. Finally, all price responses, together with the integrability restrictions on S and the observed demanded quantities, are used to estimate the remaining parameters of the demand system. Both procedures, the augmented microeconomic consumer problem and the calibration of the elasticities, were implemented in MS Excel by means of routines written in Visual Basic for Applications (VBA). The calibrated Marshallian and Hicksian price elasticity matrices by consumer group (Additional file 1: Tables A1 to A3) and the food composition information (Additional file 1: Table A4) are presented in the Annex.

The implemented simulation model was used for two simulations that aimed to increase the amount of millet in the diets of the three consumer groups by 50% and by 100% (i.e., to double it), respectively. Although these percentages might appear large, the quantities of millet in the diet are small, so the actual increases are realistic; at the same time, large increases in the demand for millet are needed to motivate an expansion of millet supply. Figures 3a and 3b present the simulation results for the rural group, 4a and 4b for the poor urban group, and 5a and 5b for the affluent urban group. They present the changes in the quantities in the diets and the changes in nutrients.

a Rural group—simulation of annual consumption (in 100 g). b Rural group—change in daily nutrients by scenario

The changes in quantities (Figs. 3a, 4a, 5a) show the rise of millet, which, despite the large simulated increases, still represents a small percentage of the diet in comparison with other staples such as matooke or maize. Note that the higher quantity of millet in the diet brings dietary changes for two reasons, namely preferences for different foods and the fact that foods compete within the consumer budget.

a Urban lower quintiles—simulation of annual consumption (in 100 g). b Urban lower quintiles—change in daily nutrients by scenario

a Urban upper quintiles—simulation of annual consumption (100 g). b Urban upper quintiles—change in daily nutrients by scenario

Figures 3a, 4a and 5a show that most of the changes occur in the quantities of staples (maize, other cereals, potatoes and matooke), although the other foods are also slightly affected. This is common for all the groups, although, as shown in Fig. 5a, the changes are less significant.
Figures 3b, 4b and 5b show the changes in nutrients as indices (baseline = 100). All figures show that the introduction of millet increases the amount of calories in the diet with respect to the baseline. In terms of macronutrients (i.e., proteins, lipids and carbohydrates), the results indicate that, with the exception of lipids, millet contributes positively to the diet of all population groups. These are deficiencies that have been noted in the literature, particularly regarding women (e.g., Bachou and Labadarios 2002). With respect to micronutrients, the results indicate that the expansion of millet in the diet improves the quantities of iron, zinc, riboflavin and niacin in all groups. It has, however, negative effects on calcium, vitamin C and vitamin A in all groups. The remaining micronutrients (i.e., thiamine, vitamin B6 and folate) showed differences by group: thiamine increased in the rural and affluent urban groups whilst decreasing in the poor urban group; vitamin B6 decreased in the rural and poor urban groups and increased in the affluent urban group; and folate decreased in the urban groups and increased in the rural group. USAID (2014) points out micronutrient deficiencies in Uganda, particularly of vitamin A and iron, which are highly prevalent in women and children. The results indicate that, whilst iron may increase with more millet, vitamin A decreases for all groups. The latter finding might be explained by the reduction of matooke in the diet, which, as shown in the food composition table (Additional file 1: Table A4 in the Annex), brings vitamin A to the diet, whilst millet does not.

Figure 6 summarises the simulations for the three consumer groups by presenting the relationship between the change in the price of millet required to increase the quantity demanded, as simulated, and the MAR as a measure of nutritional quality. As the increase in the quantity of millet corresponds to a decrease in its price, the baseline is located at the right-hand side of the figure, the results for the 50% increase in the quantity of millet are in the middle, and the results for the 100% increase are at the left-hand side. As millet consumption expands (moving from right to left in the figure), the rural poor benefit more than the other consumers according to the MAR indicator, though for all consumer groups the changes are only marginal; for the urban-affluent group, the MAR slightly decreases with the increases in millet consumption.

Decrease in price and MAR due to increase of millet in the diet

As regards the changes in millet prices required by the simulations, the rural group needed the smallest reduction to increase its consumption. Nevertheless, the simulations indicated that increasing the quantity of millet in the diet under current consumer preferences would require substantive reductions in price. Below, we discuss the implications of these findings further. The first point is that our simulations are a reminder that the evaluation of the nutritional benefits of expanding the consumption of any food product needs to be placed within the context of the broader diet because of displacement effects. In our study this effect explained the low MAR indicators and some of the conflicting nutritional results presented in Figs. 3, 4 and 5. This indicates that blanket recommendations to increase millet in the diet, based only on its nutritional characteristics and on comparisons with the major staples alone, are inappropriate. The second point relates to the significant price reductions (comparing the shadow price with the original price) that would be needed to markedly increase millet consumption (a back-of-envelope calculation follows).
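As a rough indication of why the required reductions are substantive, under a constant-elasticity demand curve \(\mathrm{q} \propto \mathrm{p}^{\mathrm{e}}\) the price ratio needed for a given quantity increase is \((1+\Delta \mathrm{q}/\mathrm{q})^{1/\mathrm{e}}\). The elasticity below is a hypothetical illustration, not the estimated value for any of the consumer groups.

```python
# Back-of-envelope only: constant-elasticity demand q = k * p**e, with a
# hypothetical own-price elasticity; not the study's estimated values.
e = -0.5
for increase in (0.5, 1.0):                     # +50 % and +100 % in quantity
    price_ratio = (1 + increase) ** (1 / e)
    print(f"+{increase:.0%} millet -> price falls by {1 - price_ratio:.0%}")
```

With an own-price elasticity of -0.5, a 50% increase in quantity would require a price fall of roughly 56%, and a doubling would require roughly 75%.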
Price reductions of this magnitude would require expanding the productivity of supply considerably, which is unrealistic. Likely more important, therefore, is the need to expand actual demand through measures that improve consumers' appreciation for millet (i.e., their willingness to pay). This could involve enhancing the quality of millet products and introducing well-targeted new products incorporating millet that reflect consumers' tastes and are made accessible to them. One way could be to promote the crop as an ingredient replacing or complementing other cereals in the preparation of foods such as breakfast cereals or biscuits.Footnote 7 It is important to stress, however, that only after considering consumers' views of millet will it be possible to envisage a strategy to restore it to its previous importance in the Ugandan diet. This is akin to collecting ethnographic evidence related to millet use in India (Chera 2017). It is already known that subnational cultural and social factors are important determinants of millet consumption in Uganda (USAID 2014), so different approaches to stimulating consumption may be required in different parts of the country.

In Fig. 7 we further illustrate the relationship between price and the quantity consumed (a stylised numerical version is sketched below). In the diagram, the original equilibrium point is given by P0, Q0. Given current preferences (i.e., the demand schedule measuring the total demand for the crop), a supply-side policy that increases the productivity of the crop and reduces the marginal costs of production would move supply from S0 to S1. Obtaining an increase in the equilibrium quantity from Q0 to Q'0 (almost twice the demanded quantity) would require a substantive reduction in marginal costs and a substantive fall in the market price from P0 to P'0. A more effective way to achieve the target of expanding the consumption of the crop would instead be to expand demand: increasing consumers' appreciation for the crop, as measured by their willingness to pay (an outward rotation of the demand curve), could keep the original price while expanding demand to Q1.
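A stylised linear market makes the Fig. 7 comparison concrete. All coefficients below are invented for illustration only; the point is that the supply-side route forces the price down, while the demand-side route sustains the original price. For simplicity the demand expansion is modelled as a parallel outward shift rather than the rotation described above.

```python
# Stylised linear version of the Fig. 7 argument; all coefficients are
# invented for illustration. Demand: Qd = a - b*P; supply: Qs = c + d*P.
a, b = 100.0, 20.0          # demand intercept and slope
c, d = 0.0, 5.0             # supply intercept and slope

p0 = (a - c) / (b + d)      # baseline equilibrium (P0, Q0)
q0 = a - b * p0

# Supply-side route: shift supply out until the quantity doubles; the
# market price has to fall along the unchanged demand curve.
p_s = (a - 2 * q0) / b
c_new = 2 * q0 - d * p_s
# Demand-side route: expand demand (parallel shift standing in for the
# outward rotation) so the doubled quantity clears at the original price.
a_new = 2 * q0 + b * p0

print(f"baseline: P0={p0:.2f}, Q0={q0:.1f}")
print(f"supply shift:  P falls to {p_s:.2f} (supply intercept {c:.0f} -> {c_new:.0f})")
print(f"demand expand: P stays {p0:.2f} (demand intercept {a:.0f} -> {a_new:.0f})")
```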
Need of a multidisciplinary approach to expand the consumption of millet

Underutilized, neglected or orphan crops have been cited as having the potential to play a number of roles in the improvement of food and nutritional security; however, consumers in developing countries are increasingly abandoning their traditional diets and replacing them with 'western' ones based on a small set of major staples. In this paper, we have investigated the implications of expanding the consumption of underutilized crops for current diets. We did so by considering consumers' preferences in the form of price and income elasticities, using a modified version of the microeconomic consumer problem, augmented with linear constraints based on generalized rationing theory. The method was applied to the case study of the consumption of millet in Uganda by three socioeconomic groups. Our results show complex impacts on dietary quality. They show that the introduction of millet increases the calories and macronutrients in the diet for all socioeconomic groups. With respect to micronutrients, the expansion of millet in the diet improves the quantities of iron, zinc, riboflavin and niacin in all groups. It has, however, negative effects on calcium, vitamin C and vitamin A in all groups. The remaining micronutrients (i.e., thiamine, vitamin B6 and folate) showed differences by group, and the reduction in vitamin A might be explained by the partial substitution of matooke in the diet. Moreover, the results indicate that, under current preferences, substantially increasing the quantity of millet in consumers' diets will require well-targeted incentives to strengthen consumers' appreciation for millet. These findings remind us, as Chera (2017) remarks, that appeals to consumers cannot be a mere afterthought, nor can they simply be framed in terms of development policy or agricultural advantages. The potential nutritional, environmental and economic benefits of embracing agricultural biodiversity are not likely to be enough to change consumers' preferences for crops such as millets. There is the need, rather, to bring underutilized crops closer to consumers' tastes and preferences. This will be an interdisciplinary process, with roles for both natural and social scientists. Only in this way will the millet value chain in Uganda (or, similarly, those for other orphan crops here and elsewhere) bring sustainable economic, environmental, food security and nutritional benefits.

The data used in the paper are available from the corresponding author upon request.

1. This significant decrease happened despite millet being described as a staple food for many communities in different parts of Uganda; besides being grown for food and income, it is considered central to many cultural practices, such as child naming, traditional marriage ceremonies, and welcoming special guests (GlobalFoodMate 2013). Gierend and Orr (2015) indicate that millets in Uganda have experienced a decline in availability and consumption, with the latter apparently falling from 6 kg/year in 2000 to 5.7 kg/year in 2013. They suggest this was due to a decline in millet cultivation after civil unrest in 2007. Gierend et al. (2014) also mention declines in the area cultivated, total production and sector value between 1992 and 2012, with a sharp drop in 2008 due to political unrest.

2. As the data are in per capita terms (not gender differentiated), we estimated the maximum nutritional requirement across the male and female populations (see Omiat and Shively 2017). It should also be noted that, due to the lack of nutritional information, it was not possible to compute the Mean Excess Ratio (MER). This is an indicator of low nutritional quality that is likely important for the more affluent population groups; it is calculated as the mean daily percentage of the maximum recommended values for three harmful nutrients, namely saturated fats, sodium and free sugars (Vieux et al. 2013).

3. As noted by one of the reviewers, because the estimation of the demand model was done using a cross-sectional dataset, the results (i.e., the elasticities) could be biased by endogeneity arising from measurement error in prices and from household heterogeneity. However, the estimates produced by Boysen (2016) took both issues into consideration: the former was controlled for by applying the Deaton (1997) procedure and the latter by including detailed socioeconomic characteristics in the demand estimation step.
4. Conditional demand figures are not presented here as they are intermediate results required to compute the unconditional elasticities. They are, however, available from the authors on request.

5. Note that Boysen (2016) only provided relative prices and did not report quantities.

6. \(\mathrm{C}\left(\mathrm{p},{\mathrm{p}}_{\mathrm{z}},\uptheta \right)\) is a quasi-expenditure function since it relates to an incomplete demand system (i.e., it represents only a part of total expenditure, such as food expenditure). From here onwards, however, it is referred to simply as the 'expenditure function' for simplicity.

7. According to Mintel's Global New Product Development (GNPD) database, over 10,000 products sold in South Africa and Nigeria are made from wheat and potato. In contrast, only a few products incorporate underutilized crops. Yet academic research has explored the behaviour of doughs formed from underutilized crop flours (e.g., Angilioni and Collar 2013), revealing there is potential for their substitution of staple crop flours.

Abbreviations

ADBG: African Development Bank Group
AOCC: African Orphan Crops Consortium
ATDF: African Technology Development Forum
FAO: United Nations Food and Agriculture Organization
FAOSTAT: FAO Statistical Office
GNPD: Mintel's Global New Product Development
ICRISAT: International Crops Research Institute for the Semi-Arid Tropics
LSMS: Living Standard Measurement Survey
MAR: Mean Adequacy Ratio
MER: Mean Excess Ratio
UBOS: Uganda Bureau of Statistics
UNHS: Ugandan National Household Survey
UNPA: Ugandan National Planning Authority
USAID: United States Agency for International Development
VBA: Visual Basic for Applications
WFP: World Food Programme

References

African Development Bank Group (2016) Feed Africa: strategy for agricultural transformation in Africa 2016–2025. Cote d'Ivoire
African Orphan Crop Consortium (2021) The African orphan crop consortium. http://africanorphancrops.org/
African Technology Development Forum (ATDF) (2009) African orphan crops: their significance and prospects for improvement, 6(3). http://www.atdforum.org/journal/html/2009-34/0/
Angilioni A, Collar C (2013) Suitability of oat, millet and sorghum in breadmaking. Food Bioprocess Technol 6(6):1486–1493
Bachou H, Labadarios D (2002) The nutrition situation in Uganda. Nutrition 18(4):356–358
Banks J, Blundell R, Lewbel A (1997) Quadratic Engel curves and consumer demand. Rev Econ Stat 79(4):527–539
Beghin JC, Bureau JC, Drogué S (2004) Calibration of incomplete demand systems in quantitative analysis. Appl Econ 36(8):839–847
Boysen O (2012) A food demand system estimation for Uganda. IIIS Discussion Paper No. 396, March 2012. Trinity College
Boysen O (2016) Food demand characteristics in Uganda: estimation and policy relevance. S Afr J Econ 84(2):260–293
Carletto G, Ruel M, Winters P, Zezza A (2015) Farm-level pathways to improved nutritional status: introduction to the special issue. J Dev Stud 51(8):945–957
Carpentier A, Guyomard H (2001) Unconditional elasticities in two-stage demand systems: an approximate solution. Am J Agric Econ 83(1):222–229
Cheng A, Mayes S, Dalle G, Demissew S, Massawe F (2017) Diversifying crops for food and nutrition security—a case of teff. Biol Rev 92(1):188–198
Chera M (2017) Transforming millets: strategies and struggles in changing taste in Madurai. Food Cult Soc 20(2):303–324
Dawson IK, Hedley P, Guarino L, Jaenicke H (2009) Does biotechnology have a role in the promotion of underutilised crops? Food Policy 34:319–328
Dawson IK, Powell W, Hendre P, Bančič J, Hickey JM, Kindt R, Hoad S, Hale I, Jamnadass R (2019) The role of genetics in mainstreaming the production of new and orphan crops to diversify food systems and support human nutrition. New Phytol 224:37–54
Deaton A (1997) The analysis of household surveys: a microeconometric approach to development policy. World Bank Publications
FANTA-2 (2010) The analysis of the nutrition situation in Uganda. Food and Nutrition Technical Assistance II Project (FANTA-2), FHI 360, Washington, DC
Gierend A, Ojulong HF, Wanyera N (2014) A combined ex-post/ex-ante impact analysis for improved sorghum and finger millet varieties in Uganda. Socioeconomics Discussion Paper Series Number 19
Gierend A, Orr A (2015) Consumer demand for sorghum and millets in eastern and southern Africa: priorities for the CGIAR Research Programme for Dryland Cereals. Socioeconomics Discussion Paper Series 35
GlobalFoodMate (2013) Introducing foxtail millet in Uganda. http://news.foodmate.com/201307/news_21387.html
Global Nutrition Report (2018) Global Nutrition Report. https://globalnutritionreport.org/reports/global-nutrition-report-2018/
Haggblade S, Dewina R (2010) Staple food prices in Uganda. Prepared for the Comesa policy seminar on "Variation in staple food prices: causes, consequence, and policy options", Maputo, Mozambique, pp 25–26
Hawkes C (2006) Uneven dietary development: linking the policies and processes of globalization with the nutrition transition, obesity and diet-related chronic diseases. Glob Health 2(1):4
Hotz C, Abdelrahman L, Sison C, Moursi M, Loechl C (2012) A food composition table for Central and Eastern Uganda. International Food Policy Research Institute and International Center for Tropical Agriculture, Washington, DC
Irz X, Leroy P, Réquillart V, Soler LG (2015) Economic assessment of nutritional recommendations. J Health Econ 39:188–210
Irz X, Leroy P, Réquillart V, Soler LG (2016) Welfare and sustainability effects of dietary recommendations. Ecol Econ 130:139–155
Jackson WA (1991) Generalized rationing theory. Scot J Polit Econ 38(4):335–342
Kennedy E, Reardon T (1994) Shift to non-traditional grains in the diets of East and West Africa: role of women's opportunity cost of time. Food Policy 19(1):45–56
Khoury CK, Bjorkman AD, Dempewolf H, Ramirez-Villegas J, Guarino L, Jarvis A, Rieseburg LH, Struik PC (2014) Increasing homogeneity in global food supplies and the implications for food security. Proc Natl Acad Sci USA 111:4001–4006
LaFrance JT, Hanemann WM (1989) The dual structure of incomplete demand systems. Am J Agric Econ 71:262–274
LaFrance JT (1998) The LINQUAD incomplete demand model. Working Paper, Department of Agricultural and Resource Economics, University of California, Berkeley
LaFrance JT, Beatty TKM, Pope RD, Agnew GK (2002) The U.S. distribution of income and Gorman Engel curves for food. J Econom 107:235–257
Larochelle C, Katungi E, Cheng Z (2017) Pulse consumption and demand by different population subgroups in Uganda and Tanzania. CGIAR Working paper. Available online at: Pulse_Demand_Uganda_Tanzania_Final.pdf (cgiar.org)
McMullin S, Stadlmayr B, Mausch K, Revoredo-Giha C, Burnett F, Guarino L, Brouwer ID, Jamnadass R, Graudal L, Powell W, Dawson IK (2021) Determining appropriate interventions to mainstream nutritious orphan crops into African food systems. Global Food Secur. https://doi.org/10.1016/j.gfs.2020.100465
Mayes S, Massawe FJ, Alderson PG, Roberts JA, Azam-Ali SN, Hermann M (2011) The potential for underutilized crops to improve security of food production. J Exp Bot 63(3):1075–1079
Monteiro CA, Moubarac JC, Cannon G, Ng SW, Popkin B (2013) Ultra-processed products are becoming dominant in the global food system. Obes Rev 14(S2):21–28
Moodie R, Stuckler D, Monteiro C, Sheron N, Neal B, Thamarangsi T, Lincoln P, Casswell S, Lancet NCD Action Group (2013) Profits and pandemics: prevention of harmful effects of tobacco, alcohol, and ultra-processed food and drink industries. Lancet 381(9867):670–679
Omiat G, Shively G (2017) Charting the cost of nutritionally-adequate diets in Uganda, 2000–2011. Afr J Food Agric Nutr Dev 17(1):11571–11591
Orr A, Schipmann-Schwarze C, Gierend A, Nedumaran S, Mwema C, Muange E, Manyasa E, Ojulong H (2020) Why invest in Research & Development for sorghum and millets? The business case for East and Southern Africa. Global Food Secur. https://doi.org/10.1016/j.gfs.2020.100458
Qaim M (1999) The economic effects of genetically modified orphan commodities: projections for sweet potato in Kenya. ISAAA
Sibhatu KT, Krishna VV, Qaim M (2015) Production diversity and dietary diversity in smallholder farm households. Proc Natl Acad Sci 112(34):10657–10662
Uganda Bureau of Statistics (UBOS) (2013) Consumer price index (several issues). http://www.ubos.org/onlinefiles/uploads/ubos/cpi/cpijan2013/FINALCPIreleaseDEC.pdf
Uganda Bureau of Statistics (UBOS) and World Food Programme (2013) Comprehensive Food Security & Vulnerability Analysis (CFSVA). VAM Food Security Analysis. United Nations World Food Programme, Rome
Uganda Ministry of Trade, Industry and Cooperatives (2015) National Grain Trade Policy. http://mtic.go.ug/wp-content/uploads/2019/08/National-Grain-Trade-Policy.pdf
Ugandan National Planning Authority (2017) Towards zero hunger. A strategic review of sustainable development goal 2 in Uganda. http://npa.go.ug/wp-content/uploads/2017/08/SDG2-Strategic-Review-Report-Final.pdf
United Nations – Food and Agriculture Organisation (FAO) (2021) Promoting neglected and underutilized crop species. http://www.fao.org/news/story/en/item/1032516/icode/
USAID (2014) Uganda: Nutrition Profile. https://www.Usaid.gov/sites/default/files/documents/1864/USAID-Uganda-Profile.pdf
Varshney RK, Ribaut JM, Buckler ES, Tuberosa R, Rafalski JA, Langridge P (2012) Can genomics boost productivity of orphan crops? Nat Biotechnol 30(12):1172–1176
Vieux F, Soler LG, Touazi D, Darmon N (2013) High nutritional quality is not associated with low greenhouse gas emissions in self-selected diets of French adults. Am J Clin Nutr 97(3):569–583
Walker T. Implications from Phase I for a Consolidated Grain Legumes and Dryland Cereals CRP for Phase II. http://crp-gldc.icrisat.org/PrioritySettingProductLinesand2520ProspectiveTechnologies.docx
Worku IH, Dereje M, Minten B, Hirvonen K (2017) Diet transformation in Africa: the case of Ethiopia. Agric Econ 48(S1):73–86
World Food Programme (2013) The cost of HUNGER in Uganda: implications on national development and prosperity. Social and economic impacts of child undernutrition in Uganda. United Nations Economic Commission for Africa (UNECA), Addis Ababa.
http://home.wfp.org/libcat/docs//ENGLISH/WFPel%20014.pdf

Part of the methods used derive from work carried out as part of the 2016–2020 Scottish Government Strategic Research Programme, Theme 3: Food and Health, and the ERANET project "Implementing sustainable diets in Europe" (SUSDIET). This paper derives from the 2017–2019 project "Formulating Value Chains for Orphan Crops in Africa", funded by the UK Biotechnology and Biological Sciences Research Council—Global Challenges Research Fund (BBSRC-GCRF), Foundation Awards for Global Agriculture and Food Systems.

Rural Economy, Environment and Society Department, Scotland's Rural College (SRUC), King's Buildings, West Mains Road, Edinburgh, EH9 3JG, UK: Cesar Revoredo-Giha, Luiza Toma & Faical Akaichi

Knowledge and Innovation Hub, Scotland's Rural College (SRUC), Edinburgh, UK: Ian Dawson

CRG designed the study, co-wrote the draft and wrote the computer routines; LT co-wrote the draft and collaborated on the data curation; FA co-estimated the elasticities and commented on the draft; ID collaborated on the data curation and commented on the draft. Correspondence to Cesar Revoredo-Giha.

Additional file 1. Table A1. Calibrated elasticities - Group rural. Table A2. Calibrated elasticities - Group urban-lower quintiles. Table A3. Calibrated elasticities - Group urban-upper quintiles. Table A4. Food composition (based on 100 grams).

Revoredo-Giha, C., Toma, L., Akaichi, F. et al. Exploring the effects of increasing underutilized crops on consumers' diets: the case of millet in Uganda. Agric Econ 10, 1 (2022). https://doi.org/10.1186/s40100-021-00206-3

Revised: 15 September 2021

Keywords: Underutilized crops; Generalized rationing theory