On the quasimonotonicity of a square linear operator with respect to a nonnegative cone, by Philip Beaver. Published 1998 by the Naval Postgraduate School, Monterey, Calif.; available from the National Technical Information Service, Springfield, Va.

The question of when a square, linear operator is quasimonotone nondecreasing with respect to a nonnegative cone was posed in 1974 for the application of vector Lyapunov functions. Necessary conditions, based on the spectrum and the first eigenvector, were given in 1980. This dissertation gives necessary and sufficient conditions for the case of the real spectrum when the first eigenvector is in the nonnegative orthant; when the first eigenvector is in the boundary of the nonnegative orthant, it gives conditions based on the reducibility of the matrix. For the complex spectrum, in the presence of a positive first eigenvector the problem is shown to be equivalent to the irreducible nonnegative inverse eigenvalue problem.

Statement: Philip Beaver. Pagination: x, 96 p.
Calhoun: The NPS Institutional Archive, Theses and Dissertations, Thesis Collection. Approved for public release; distribution is unlimited.

Contents (excerpt):
C. The quasimonotonicity of linear differential systems, 36
D. Stability through linear comparison systems, 40
IV. The quasimonotonicity of a square, linear operator with respect to a nonnegative cone: the real spectrum, 53
A. Matrices with a positive first eigenvector, 53
B. Reducible matrices with a nonnegative first eigenvector

The method of vector Lyapunov functions to determine stability in dynamical systems requires that the comparison system be quasimonotone nondecreasing with respect to a cone contained in the nonnegative orthant.
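For the special case where the cone is the nonnegative orthant, quasimonotone nondecreasing reduces to the classical Kamke condition: every off-diagonal entry of the matrix must be nonnegative (a Metzler, or essentially nonnegative, matrix). A minimal sketch of this check (the function name is my own):

```python
def is_quasimonotone_orthant(A):
    """Check whether x -> Ax is quasimonotone nondecreasing with
    respect to the nonnegative orthant, i.e. whether A is a Metzler
    matrix: every off-diagonal entry is nonnegative."""
    n = len(A)
    return all(A[i][j] >= 0 for i in range(n) for j in range(n) if i != j)

A = [[-2.0, 1.0], [0.5, -3.0]]   # Metzler: quasimonotone on the orthant
B = [[-2.0, -1.0], [0.5, -3.0]]  # not Metzler: negative off-diagonal entry
print(is_quasimonotone_orthant(A))  # True
print(is_quasimonotone_orthant(B))  # False
```

Diagonal entries are unrestricted, which is why stable (negative-diagonal) comparison systems can still be quasimonotone.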
Definition. A subset C of a vector space V is a cone (sometimes called a linear cone) if for each x in C and each positive scalar α, the product αx is in C. Note that some authors define a cone with the scalar α ranging over all nonnegative reals (rather than over all positive reals, which do not include 0). A cone C is a convex cone if αx + βy belongs to C for all x, y in C and all positive scalars α, β.
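As a quick numerical illustration of the definition above (a sketch with example sets of my own choosing), the nonnegative orthant in R^2 is closed under positive scaling and under conic combinations, while a translated half-plane such as {x : x_1 >= 1} is not a cone:

```python
def in_orthant(x):
    """Membership test for the nonnegative orthant in R^2 (a convex cone)."""
    return all(xi >= 0 for xi in x)

def in_shifted_halfplane(x):
    """Membership test for {x : x_1 >= 1}, which is convex but not a cone."""
    return x[0] >= 1

def scale(a, x):
    return [a * xi for xi in x]

def add(x, y):
    return [xi + yi for xi, yi in zip(x, y)]

x, y = [1.0, 2.0], [3.0, 0.0]
# Cone property: positive scalings stay in the set.
print(in_orthant(scale(10.0, x)))                     # True
# Convex cone property: conic combinations stay in the set.
print(in_orthant(add(scale(2.0, x), scale(0.5, y))))  # True
# The shifted half-plane fails the cone property: shrink x and leave the set.
print(in_shifted_halfplane(scale(0.5, [1.0, 0.0])))   # False
```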
Numerical solution of a spatio-temporal gender-structured model for hantavirus infection in rodents

Raimund Bürger (1), Gerardo Chowell (2,3,4), Elvis Gavilán (1), Pep Mulet (5) and Luis M. Villada (6,7)

(1) CI2MA and Departamento de Ingeniería Matemática, Universidad de Concepción, Casilla 160-C, Concepción, Chile
(2) School of Public Health, Georgia State University, Atlanta, Georgia, USA
(3) Simon A. Levin Mathematical and Computational Modeling Sciences Center, Arizona State University, Tempe, AZ 85287, USA
(4) Division of International Epidemiology and Population Studies, Fogarty International Center, National Institutes of Health, Bethesda, MD 20892, USA
(5) Departament de Matemàtiques, Universitat de València, Av. Dr. Moliner 50, E-46100 Burjassot, Spain
(6) GIMNAP-Departamento de Matemáticas, Universidad del Bío-Bío, Casilla 5-C, Concepción, Chile
(7) CI2MA, Universidad de Concepción, Casilla 160-C, Concepción, Chile

Mathematical Biosciences & Engineering, February 2018, 15(1): 95-123. doi: 10.3934/mbe.2018004
Received: September 29, 2016. Accepted: January 14, 2017. Published: May 2017.

Abstract. In this article we describe the transmission dynamics of hantavirus in rodents using a spatio-temporal susceptible-exposed-infective-recovered (SEIR) compartmental model that distinguishes between male and female subpopulations [L. J. S. Allen, R. K. McCormack and C. B. Jonsson, Bull. Math. Biol., 68 (2006), 511-524]. Both subpopulations are assumed to differ in their movement with respect to local variations in the densities of their own and the opposite gender group. Three alternative models for the movement of the male individuals are examined.
In some cases the movement is not only directed by the gradient of a density (as in the standard diffusive case), but also by a non-local convolution of density values, as proposed, in another context, in [R. M. Colombo and E. Rossi, Commun. Math. Sci., 13 (2015), 369-400]. An efficient numerical method for the resulting convection-diffusion-reaction system of partial differential equations is proposed. This method involves techniques of weighted essentially non-oscillatory (WENO) reconstructions in combination with implicit-explicit Runge-Kutta (IMEX-RK) methods for time stepping. The numerical results demonstrate significant differences in the spatio-temporal behavior predicted by the different models, which suggest future research directions.

Keywords: Spatial-temporal SEIR model, hantavirus infection, gender-structured model, convection-diffusion-reaction system, implicit-explicit Runge-Kutta scheme, weighted essentially non-oscillatory reconstruction.

Mathematics Subject Classification: Primary: 92D30; Secondary: 97M60, 65L06, 65M20.

Citation: Raimund Bürger, Gerardo Chowell, Elvis Gavilán, Pep Mulet, Luis M. Villada. Numerical solution of a spatio-temporal gender-structured model for hantavirus infection in rodents. Mathematical Biosciences & Engineering, 2018, 15(1): 95-123. doi: 10.3934/mbe.2018004

References:
[1] G. Abramson and V. M. Kenkre, Spatiotemporal patterns in the Hantavirus infection, Phys. Rev. E, 66 (2002), 011912 (5pp). doi: 10.1103/PhysRevE.66.011912.
[2] M. A. Aguirre, G. Abramson, A. R. Bishop and V. M. Kenkre, Simulations in the mathematical modeling of the spread of the Hantavirus, Phys. Rev. E, 66 (2002), 041908 (5pp). doi: 10.1103/PhysRevE.66.041908.
[3] L. J. S. Allen, B. M. Bolker, Y. Lou and A. L. Nevai, Asymptotic profiles of the steady states for an SIS epidemic patch model, SIAM J. Appl. Math., 67 (2007), 1283-1309. doi: 10.1137/060672522.
[4] L. J. S. Allen, R. K. McCormack and C. B.
Jonsson, Mathematical models for hantavirus infection in rodents, Bull. Math. Biol., 68 (2006), 511-524. doi: 10.1007/s11538-005-9034-4.
[5] R. M. Anderson and R. M. May, Infectious Diseases of Humans: Dynamics and Control, Oxford Science Publications, 1991.
[6] J. Arino, Diseases in metapopulations, in Z. Ma, Y. Zhou and J. Wu (Eds.), Modeling and Dynamics of Infectious Diseases, Higher Education Press, Beijing, 11 (2009), 64-122.
[7] J. Arino, J. R. Davis, D. Hartley, R. Jordan, J. M. Miller and P. van den Driessche, A multi-species epidemic model with spatial dynamics, Mathematical Medicine and Biology, 22 (2005), 129-142.
[8] U. Ascher, S. Ruuth and J. Spiteri, Implicit-explicit Runge-Kutta methods for time dependent partial differential equations, Appl. Numer. Math., 25 (1997), 151-167. doi: 10.1016/S0168-9274(97)00056-1.
[9] P. Bi, X. Wu, F. Zhang, K. A. Parton and S. Tong, Seasonal rainfall variability, the incidence of hemorrhagic fever with renal syndrome, and prediction of the disease in low-lying areas of China, Amer. J. Epidemiol., 148 (1998), 276-281. doi: 10.1093/oxfordjournals.aje.a009636.
[10] S. Boscarino, R. Bürger, P. Mulet, G. Russo and L. M. Villada, Linearly implicit IMEX Runge-Kutta methods for a class of degenerate convection-diffusion problems, SIAM J. Sci. Comput., 37 (2015), B305-B331. doi: 10.1137/140967544.
[11] S. Boscarino, F. Filbet and G. Russo, High order semi-implicit schemes for time dependent partial differential equations, J. Sci. Comput., 68 (2016), 975-1001. doi: 10.1007/s10915-016-0168-y.
[12] S. Boscarino, P. G. LeFloch and G. Russo, High-order asymptotic-preserving methods for fully nonlinear relaxation problems, SIAM J. Sci. Comput., 36 (2014), A377-A395. doi: 10.1137/120893136.
[13] S. Boscarino and G.
Russo, On a class of uniformly accurate IMEX Runge-Kutta schemes and applications to hyperbolic systems with relaxation, SIAM J. Sci. Comput., 31 (2009), 1926-1945. doi: 10.1137/080713562.
[14] S. Boscarino and G. Russo, Flux-explicit IMEX Runge-Kutta schemes for hyperbolic to parabolic relaxation problems, SIAM J. Numer. Anal., 51 (2013), 163-190. doi: 10.1137/110850803.
[15] F. Brauer and C. Castillo-Chavez, Mathematical Models in Population Biology and Epidemiology, Second Ed., Springer, New York, 2012. doi: 10.1007/978-1-4614-1686-9.
[16] M. Brummer-Korvenkontio, A. Vaheri, T. Hovi, C. H. von Bonsdorff, J. Vuorimies, T. Manni, K. Penttinen, N. Oker-Blom and J. Lähdevirta, Nephropathia epidemica: Detection of antigen in bank voles and serologic diagnosis of human infection, J. Infect. Dis., 141 (1980), 131-134. doi: 10.1093/infdis/141.2.131.
[17] J. Buceta, C. Escudero, F. J. de la Rubia and K. Lindenberg, Outbreaks of Hantavirus induced by seasonality, Phys. Rev. E, 69 (2004), 021908 (9pp). doi: 10.1103/PhysRevE.69.021908.
[18] R. Bürger, G. Chowell, P. Mulet and L. M. Villada, Modelling the spatial-temporal progression of the 2009 A/H1N1 influenza pandemic in Chile, Math. Biosci. Eng., 13 (2016), 43-65. doi: 10.3934/mbe.2016.13.43.
[19] R. Bürger, R. Ruiz-Baier and C. Tian, Stability analysis and finite volume element discretization for delay-driven spatio-temporal patterns in a predator-prey model, Math. Comput. Simulation, 132 (2017), 28-52. doi: 10.1016/j.matcom.2016.06.002.
[20] R. M. Colombo and E. Rossi, Hyperbolic predators versus parabolic preys, Commun. Math. Sci., 13 (2015), 369-400. doi: 10.4310/CMS.2015.v13.n2.a6.
[21] M. Crouzeix, Une méthode multipas implicite-explicite pour l'approximation des équations d'évolution paraboliques, Numer. Math., 35 (1980), 257-276. doi: 10.1007/BF01396412.
[22] O. Diekmann, H. Heesterbeek and T.
Britton, Mathematical Tools for Understanding Infectious Disease Dynamics, Princeton Series in Theoretical and Computational Biology, Princeton University Press, Princeton, NJ, 2013.
[23] R. Donat and I. Higueras, On stability issues for IMEX schemes applied to 1D scalar hyperbolic equations with stiff reaction terms, Math. Comp., 80 (2011), 2097-2126. doi: 10.1090/S0025-5718-2011-02463-4.
[24] C. Escudero, J. Buceta, F. J. de la Rubia and K. Lindenberg, Effects of internal fluctuations on the spreading of Hantavirus, Phys. Rev. E, 70 (2004), 061907 (7pp). doi: 10.1103/PhysRevE.70.061907.
[25] S. de Franciscis and A. d'Onofrio, Spatiotemporal bounded noises and transitions induced by them in solutions of the real Ginzburg-Landau model, Phys. Rev. E, 86 (2012), 021118 (9pp); Erratum, Phys. Rev. E, 94 (2016), 059905(E) (1p). doi: 10.1103/PhysRevE.86.021118.
[26] M. Garavello and B. Piccoli, Traffic Flow on Networks: Conservation Laws Models, Amer. Inst. Math. Sci., Springfield, MO, USA, 2006.
[27] G. S. Jiang and C.-W. Shu, Efficient implementation of weighted ENO schemes, J. Comput. Phys., 126 (1996), 202-228. doi: 10.1006/jcph.1996.0130.
[28] P. Kachroo, S. J. Al-Nasur, S. A. Wadoo and A. Shende, Pedestrian Dynamics, Springer-Verlag, Berlin, 2008.
[29] A. Källén, Thresholds and travelling waves in an epidemic model for rabies, Nonlin. Anal. Theor. Meth. Appl., 8 (1984), 851-856. doi: 10.1016/0362-546X(84)90107-X.
[30] A. Källén, P. Arcuri and J. D. Murray, A simple model for the spatial spread and control of rabies, J. Theor. Biol., 116 (1985), 377-393. doi: 10.1016/S0022-5193(85)80276-9.
[31] Y. Katznelson, An Introduction to Harmonic Analysis, Third Ed., Cambridge University Press, Cambridge, UK, 2004. doi: 10.1017/CBO9781139165372.
[32] C. A. Kennedy and M. H. Carpenter, Additive Runge-Kutta schemes for convection-diffusion-reaction equations, Appl. Numer.
Math., 44 (2003), 139-181. doi: 10.1016/S0168-9274(02)00138-1.
[33] W. O. Kermack and A. G. McKendrick, A contribution to the mathematical theory of epidemics, Proc. Roy. Soc. A, 115 (1927), 700-721.
[34] N. Kumar, R. R. Parmenter and V. M. Kenkre, Extinction of refugia of hantavirus infection in a spatially heterogeneous environment, Phys. Rev. E, 82 (2010), 011920 (8pp). doi: 10.1103/PhysRevE.82.011920.
[35] T. Kuniya, Y. Muroya and Y. Enatsu, Threshold dynamics of an SIR epidemic model with hybrid of multigroup and patch structures, Math. Biosci. Eng., 11 (2014), 1375-1393. doi: 10.3934/mbe.2014.11.1375.
[36] H. N. Liu, L. D. Gao, G. Chowell, S. X. Hu, X. L. Lin, X. J. Li, G. H. Ma, R. Huang, H. S. Yang, H. Tian and H. Xiao, Time-specific ecologic niche models forecast the risk of hemorrhagic fever with renal syndrome in Dongting Lake district, China, 2005-2010, PLoS One, 9 (2014), e106839 (8pp). doi: 10.1371/journal.pone.0106839.
[37] X.-D. Liu, S. Osher and T. Chan, Weighted essentially non-oscillatory schemes, J. Comput. Phys., 115 (1994), 200-212. doi: 10.1006/jcph.1994.1187.
[38] H. Malchow, S. V. Petrovskii and E. Venturino, Spatial Patterns in Ecology and Epidemiology: Theory, Models, and Simulation, Chapman & Hall/CRC, Boca Raton, FL, USA, 2008.
[39] J. N. Mills, B. A. Ellis, K. T. McKee, J. I. Maiztegui and J. E. Childs, Habitat associations and relative densities of rodent populations in cultivated areas of central Argentina, J. Mammal., 72 (1991), 470-479. doi: 10.2307/1382129.
[40] P. A. P. Moran, Notes on continuous stochastic phenomena, Biometrika, 37 (1950), 17-23. doi: 10.1093/biomet/37.1-2.17.
[41] J. D. Murray, Mathematical Biology II: Spatial Models and Biomedical Applications, Third Edition, Springer, New York, 2003.
[42] J. D. Murray, E. A. Stanley and D. L. Brown, On the spatial spread of rabies among foxes, Proc. Roy. Soc.
London B, 229 (1986), 111-150. doi: 10.1098/rspb.1986.0078.
[43] A. Okubo and S. A. Levin, Diffusion and Ecological Problems: Modern Perspectives, Second Edition, Springer-Verlag, New York, 2001. doi: 10.1007/978-1-4757-4978-6.
[44] O. Ovaskainen and E. E. Crone, Modeling animal movement with diffusion, in S. Cantrell, C. Cosner and S. Ruan (Eds.), Spatial Ecology, Chapman & Hall/CRC, Boca Raton, FL, USA, 2009, 63-83. doi: 10.1201/9781420059861.ch4.
[45] L. Pareschi and G. Russo, Implicit-Explicit Runge-Kutta schemes and applications to hyperbolic systems with relaxation, J. Sci. Comput., 25 (2005), 129-155. doi: 10.1007/s10915-004-4636-4.
[46] J. A. Reinoso and F. J. de la Rubia, Stage-dependent model for the Hantavirus infection: The effect of the initial infection-free period, Phys. Rev. E, 87 (2013), 042706 (6pp). doi: 10.1103/PhysRevE.87.042706.
[47] J. A. Reinoso and F. J. de la Rubia, Spatial spread of the Hantavirus infection, Phys. Rev. E, 91 (2015), 032703 (5pp). doi: 10.1103/PhysRevE.91.032703.
[48] R. Riquelme, M. L. Rioseco, L. Bastidas, D. Trincado, M. Riquelme, H. Loyola and F. Valdivieso, Hantavirus pulmonary syndrome, southern Chile, 1995-2012, Emerg. Infect. Dis., 21 (2015), 562-568. doi: 10.3201/eid2104.141437.
[49] C. Robertson, C. Mazzetta and A. d'Onofrio, Regional variation and spatial correlation, Chapter 5 in P. Boyle and M. Smans (Eds.), Atlas of Cancer Mortality in the European Union and the European Economic Area 1993-1997, IARC Scientific Publication, WHO Press, Geneva, Switzerland, 159 (2008), 91-113.
[50] E. Rossi and V. Schleper, Convergence of a numerical scheme for a mixed hyperbolic-parabolic system in two space dimensions, ESAIM Math. Modelling Numer. Anal., 50 (2016), 475-497. doi: 10.1051/m2an/2015050.
[51] S. Ruan and J. Wu, Modeling spatial spread of communicable diseases involving animal hosts, in S. Cantrell, C. Cosner and S.
Ruan (Eds.), Spatial Ecology, Chapman & Hall/CRC, Boca Raton, FL, USA, 2010, 293-316.
[52] L. Sattenspiel, The Geographic Spread of Infectious Diseases: Models and Applications, Princeton Series in Theoretical and Computational Biology, Princeton University Press, 2009. doi: 10.1515/9781400831708.
[53] C.-W. Shu and S. Osher, Efficient implementation of essentially non-oscillatory shock-capturing schemes, II, J. Comput. Phys., 83 (1989), 32-78. doi: 10.1016/0021-9991(89)90222-2.
[54] S. W. Smith, Digital Signal Processing: A Practical Guide for Engineers and Scientists, Demystifying Technology Series, Newnes, 2003.
[55] H. Y. Tian, P. B. Yu, A. D. Luis, P. Bi, B. Cazelles, M. Laine, S. Q. Huang, C. F. Ma, S. Zhou, J. Wei, S. Li, X. L. Lu, J. H. Qu, J. H. Dong, S. L. Tong, J. J. Wang, B. Grenfell and B. Xu, Changes in rodent abundance and weather conditions potentially drive hemorrhagic fever with renal syndrome outbreaks in Xi'an, China, 2005-2012, PLoS Negl. Trop. Dis., 9 (2015), e0003530 (13pp).
[56] M. Treiber and A. Kesting, Traffic Flow Dynamics, Springer-Verlag, Berlin, 2013. doi: 10.1007/978-3-642-32460-4.
[57] P. van den Driessche, Deterministic compartmental models: Extensions of basic models, in F. Brauer, P. van den Driessche and J. Wu (Eds.), Mathematical Epidemiology, Springer-Verlag, Berlin, 1945 (2008), 147-157. doi: 10.1007/978-3-540-78911-6_5.
[58] P. van den Driessche, Spatial structure: Patch models, in F. Brauer, P. van den Driessche and J. Wu (Eds.), Mathematical Epidemiology, Springer-Verlag, Berlin, 1945 (2008), 179-189. doi: 10.1007/978-3-540-78911-6_7.
[59] P. van den Driessche and J. Watmough, Reproduction numbers and sub-threshold endemic equilibria for compartmental models of disease transmission, Math. Biosci., 180 (2002), 29-48. doi: 10.1016/S0025-5564(02)00108-6.
[60] E. Vynnycky and R. E.
White, An Introduction to Infectious Disease Modelling, Oxford University Press, 2010.
[61] J. Wu, Spatial structure: Partial differential equations models, in F. Brauer, P. van den Driessche and J. Wu (Eds.), Mathematical Epidemiology, Springer-Verlag, Berlin, 2008, 191-203.
[62] H. Xiao, X. L. Lin, L. D. Gao, X. Y. Dai, X. G. He and B. Y. Chen, Environmental factors contributing to the spread of hemorrhagic fever with renal syndrome and potential risk areas prediction in midstream and downstream of the Xiangjiang River [in Chinese], Scientia Geographica Sinica, 33 (2013), 123-128.
[63] C. J. Yahnke, P. L. Meserve, T. G. Ksiazek and J. N. Mills, Patterns of infection with Laguna Negra virus in wild populations of Calomys laucha in the central Paraguayan chaco, Am. J. Trop. Med. Hyg., 65 (2001), 768-776.
[64] W. Y. Zhang, L. Q. Fang, J. F. Jiang, F. M. Hui, G. E. Glass, L. Yan, Y. F. Xu, W. J. Zhao, H. Yang and W. Liu, Predicting the risk of hantavirus infection in Beijing, People's Republic of China, Am. J. Trop. Med. Hyg., 80 (2010), 678-683.
[65] W. Y. Zhang, W. D. Guo, L. Q. Fang, C. P. Li, P. Bi, G. E. Glass, J. F. Jiang, S. H. Sun, Q. Qian, W. Liu, L. Yan, H. Yang, S. L. Tong and W. C. Cao, Climate variability and hemorrhagic fever with renal syndrome transmission in Northeastern China, Environ. Health Perspect., 118 (2010), 915-920. doi: 10.1289/ehp.0901504.
[66] X. Zhong, Additive semi-implicit Runge-Kutta methods for computing high-speed nonequilibrium reactive flows, J. Comput. Phys., 128 (1996), 19-31. doi: 10.1006/jcph.1996.0193.

Figure 1. Numerical solution of the ODE version of (2.1), Model 0, for the initial data (4.1).
Figure 2. Case 1 (Model 1, Scenario 1): numerical solution for $N_{\rm{m}}$, $N_{\rm{f}}$, $I_{\rm{m}}$ and $I_{\rm{f}}$ at the indicated times.
Figure 3.
Case 2 (Model 2 with $K=1000$, Scenario 1): numerical solution for $N_{\rm{m}}$, $N_{\rm{f}}$, $I_{\rm{m}}$ and $I_{\rm{f}}$ at the indicated times Figure 5. Cases 1-3: integral quantities $\mathcal{I} (X)$ defined by (4.3) for each compartment obtained by evaluating numerical solutions Figure 10. Cases 4-6: Moran's index $\mathsf{I}$ defined by (4.4), (4.5) for each compartment obtained by evaluating numerical solutions Figure 11. Case 8 (Model 3, Scenario 2, periodic variation of parameters): numerical solution for $N_{\rm{m}}$, $N_{\rm{f}}$, $I_{\rm{m}}$ and $I_{\rm{f}}$ at the indicated times Figure 12. Case 8: (Model 3, Scenario 2, periodic variation of parameters): solutions of Model 0 (left column) and integral quantities $\mathcal{I} (X)$ of Model 3 (right column) defined by (4.3) for each compartment obtained by evaluating numerical solutions Table 1. Case 7 (Model 1, order test with smooth solution): approximate total $L^1$ errors $\smash{e_M^{\smash{tot}}}$ and observed convergence rates $\theta_M$ $M$ 8 16 32 64 128 256 $e_M^{\smash{tot}}\times 10^{3}$ 368.90 383.19 379.01 153.73 34.70 9.14 $\theta_M$ -0.05 0.02 1.30 2.15 1.92 - Sihong Shao, Huazhong Tang. Higher-order accurate Runge-Kutta discontinuous Galerkin methods for a nonlinear Dirac model. Discrete & Continuous Dynamical Systems - B, 2006, 6 (3) : 623-640. doi: 10.3934/dcdsb.2006.6.623 Jan Haskovec, Ioannis Markou. Asymptotic flocking in the Cucker-Smale model with reaction-type delays in the non-oscillatory regime. Kinetic & Related Models, 2020, 13 (4) : 795-813. doi: 10.3934/krm.2020027 Xueying Wang, Drew Posny, Jin Wang. A reaction-convection-diffusion model for cholera spatial dynamics. Discrete & Continuous Dynamical Systems - B, 2016, 21 (8) : 2785-2809. doi: 10.3934/dcdsb.2016073 Wenjuan Zhai, Bingzhen Chen. A fourth order implicit symmetric and symplectic exponentially fitted Runge-Kutta-Nyström method for solving oscillatory problems. 
Numerical Algebra, Control & Optimization, 2019, 9 (1) : 71-84. doi: 10.3934/naco.2019006 Yoon-Sik Cho, Aram Galstyan, P. Jeffrey Brantingham, George Tita. Latent self-exciting point process model for spatial-temporal networks. Discrete & Continuous Dynamical Systems - B, 2014, 19 (5) : 1335-1354. doi: 10.3934/dcdsb.2014.19.1335 Elisa Giesecke, Axel Kröner. Classification with Runge-Kutta networks and feature space augmentation. Journal of Computational Dynamics, 2021, 8 (4) : 495-520. doi: 10.3934/jcd.2021018 Cédric Wolf. A mathematical model for the propagation of a hantavirus in structured populations. Discrete & Continuous Dynamical Systems - B, 2004, 4 (4) : 1065-1089. doi: 10.3934/dcdsb.2004.4.1065 Daniil Kazantsev, William M. Thompson, William R. B. Lionheart, Geert Van Eyndhoven, Anders P. Kaestner, Katherine J. Dobson, Philip J. Withers, Peter D. Lee. 4D-CT reconstruction with unified spatial-temporal patch-based regularization. Inverse Problems & Imaging, 2015, 9 (2) : 447-467. doi: 10.3934/ipi.2015.9.447 Aniello Raffaele Patrone, Otmar Scherzer. On a spatial-temporal decomposition of optical flow. Inverse Problems & Imaging, 2017, 11 (4) : 761-781. doi: 10.3934/ipi.2017036 Igor Pažanin, Marcone C. Pereira. On the nonlinear convection-diffusion-reaction problem in a thin domain with a weak boundary absorption. Communications on Pure & Applied Analysis, 2018, 17 (2) : 579-592. doi: 10.3934/cpaa.2018031 ShinJa Jeong, Mi-Young Kim. Computational aspects of the multiscale discontinuous Galerkin method for convection-diffusion-reaction problems. Electronic Research Archive, 2021, 29 (2) : 1991-2006. doi: 10.3934/era.2020101 Da Xu. Numerical solutions of viscoelastic bending wave equations with two term time kernels by Runge-Kutta convolution quadrature. Discrete & Continuous Dynamical Systems - B, 2017, 22 (6) : 2389-2416. doi: 10.3934/dcdsb.2017122 Xinlong Feng, Huailing Song, Tao Tang, Jiang Yang. 
Nonlinear stability of the implicit-explicit methods for the Allen-Cahn equation. Inverse Problems & Imaging, 2013, 7 (3) : 679-695. doi: 10.3934/ipi.2013.7.679 Angelamaria Cardone, Zdzisław Jackiewicz, Adrian Sandu, Hong Zhang. Construction of highly stable implicit-explicit general linear methods. Conference Publications, 2015, 2015 (special) : 185-194. doi: 10.3934/proc.2015.0185 Xiaoyan Zhang, Yuxiang Zhang. Spatial dynamics of a reaction-diffusion cholera model with spatial heterogeneity. Discrete & Continuous Dynamical Systems - B, 2018, 23 (6) : 2625-2640. doi: 10.3934/dcdsb.2018124 Yachun Tong, Inkyung Ahn, Zhigui Lin. Effect of diffusion in a spatial SIS epidemic model with spontaneous infection. Discrete & Continuous Dynamical Systems - B, 2021, 26 (8) : 4045-4057. doi: 10.3934/dcdsb.2020273 Eleonora Messina, Mario Pezzella, Antonia Vecchio. A non-standard numerical scheme for an age-of-infection epidemic model. Journal of Computational Dynamics, 2021 doi: 10.3934/jcd.2021029 Benedetto Bozzini, Deborah Lacitignola, Ivonne Sgura. Morphological spatial patterns in a reaction diffusion model for metal growth. Mathematical Biosciences & Engineering, 2010, 7 (2) : 237-258. doi: 10.3934/mbe.2010.7.237 Chengxia Lei, Jie Xiong, Xinhui Zhou. Qualitative analysis on an SIS epidemic reaction-diffusion model with mass action infection mechanism and spontaneous infection in a heterogeneous environment. Discrete & Continuous Dynamical Systems - B, 2020, 25 (1) : 81-98. doi: 10.3934/dcdsb.2019173 H. M. Srivastava, H. I. Abdel-Gawad, Khaled Mohammed Saad. Oscillatory states and patterns formation in a two-cell cubic autocatalytic reaction-diffusion model subjected to the Dirichlet conditions. Discrete & Continuous Dynamical Systems - S, 2021, 14 (10) : 3785-3801. doi: 10.3934/dcdss.2020433 HTML views (233) Raimund BÜrger Gerardo Chowell Elvis GavilÁn Pep Mulet Luis M. Villada
CommonCrawl
Raltegravir Is a Substrate for SLC22A6: a Putative Mechanism for the Interaction between Raltegravir and Tenofovir

Darren M. Moss, Wai San Kwan, Neill J. Liptrott, Darren L. Smith, Marco Siccardi, Saye H. Khoo, David J. Back, Andrew Owen

Department of Pharmacology and Therapeutics, University of Liverpool, Liverpool, United Kingdom; NIHR Biomedical Research Centre, Royal Liverpool & Broadgreen University Hospitals Trust, Liverpool, United Kingdom

For correspondence: [email protected]

DOI: 10.1128/AAC.00623-10

Identifying the transporters of the HIV integrase inhibitor raltegravir could help explain its pharmacokinetic-pharmacodynamic relationship and reported drug interactions. Here we determined whether raltegravir was a substrate for ABCB1 or the influx transporters SLCO1A2, SLCO1B1, SLCO1B3, SLC22A1, SLC22A6, SLC10A1, SLC15A1, and SLC15A2. Raltegravir transport by ABCB1 was studied with CEM, CEMVBL100, and Caco-2 cells. Transport by uptake transporters was assessed by using a Xenopus laevis oocyte expression system, peripheral blood mononuclear cells, and primary renal cells. The kinetics of raltegravir transport and competition between raltegravir and tenofovir were also investigated using SLC22A6-expressing oocytes. Raltegravir was confirmed to be an ABCB1 substrate in CEM, CEMVBL100, and Caco-2 cells. Raltegravir was also transported by SLC22A6 and SLC15A1 in oocyte expression systems but not by other transporters studied. The Km and Vmax for SLC22A6 transport were 150 μM and 36 pmol/oocyte/h, respectively. Tenofovir and raltegravir competed for SLC22A6 transport in a concentration-dependent manner. Raltegravir inhibited 1 μM tenofovir with a 50% inhibitory concentration (IC50) of 14.0 μM, and tenofovir inhibited 1 μM raltegravir with an IC50 of 27.3 μM.
Raltegravir concentrations were not altered by transporter inhibitors in peripheral blood mononuclear cells or primary renal cells. Raltegravir is a substrate for SLC22A6 and SLC15A1 in the oocyte expression system. However, transport was limited compared to endogenous controls, and these transporters are unlikely to have a great impact on raltegravir pharmacokinetics.

HIV infection and AIDS continue to be a major cause of worldwide mortality in the 21st century. A UNAIDS/WHO report in 2009 estimated that 33.4 million people worldwide were infected with HIV in 2008, with AIDS-related deaths numbering 2 million. Recent attempts to develop a vaccine for HIV have been largely unsuccessful (18). This, combined with increasing drug resistance, has emphasized the need to develop new drugs with unique mechanisms of action.

Raltegravir represents a new class of antiretroviral treatment (8), targeting the HIV-1 integrase enzyme by binding to the active site and preventing viral DNA insertion into the host genome (11). Recent trials have shown raltegravir to have a sustained antiretroviral effect and good tolerability in treatment-experienced HIV-1 patients (33).

The primary route of raltegravir metabolism is glucuronidation via UGT1A1, and raltegravir is not a substrate or an inhibitor of the major cytochrome P450 enzymes (19, 24). However, the involvement of human drug transporters in raltegravir absorption, disposition, metabolism, and excretion (ADME) has not been fully investigated. Raltegravir has been described as being an ABCB1 substrate (25), but there are no data yet in the public domain. Raltegravir has shown higher concentrations (1.7-fold) in semen (4) and lower concentrations (0.04- to 0.39-fold) in cerebrospinal fluid (7, 41) than in plasma, which may be facilitated by drug transporters present at membrane barriers.

There are important reasons why raltegravir should be screened for potential transport by known drug transporters.
First, by regulating intracellular permeation, drug transporters could be an important factor in an understanding of the lack of a clear pharmacokinetic-pharmacodynamic (PK-PD) relationship of raltegravir, i.e., the similar virological response observed for patients given a wide range of raltegravir doses (29). Second, a knowledge of the mechanisms that control raltegravir disposition may help rationalize or even anticipate drug-drug interactions in the clinic. Since raltegravir represents the first member of a new drug class, possessing a unique chemical structure containing a diketo acid derivative (34), class-specific trends in drug transport may be evident, such as those reported previously for protease inhibitors with ABCB1 (26, 37) or nucleoside reverse transcriptase inhibitors with organic anion and cation transporters (35, 36). Finally, knowledge of which transporters are involved in raltegravir ADME will identify candidate genes for future pharmacogenetic studies. There have been a number of studies undertaken to evaluate the pharmacokinetic interactions between raltegravir and coadministered drugs. Most studies have shown raltegravir metabolism and disposition to be largely unaffected. For example, etravirine (1), maraviroc (3), darunavir (2), and rifabutin (6) had no or a relatively modest effect on raltegravir plasma concentrations. In addition, despite ritonavir being an inducer of both ABCB1 (9) and UGT1A1 (12), the drug caused only a minimal reduction in the raltegravir plasma concentration (22). Similarly, tipranavir combined with ritonavir had little impact on raltegravir pharmacokinetics (13). However, there are interactions between raltegravir and coadministered drugs that have a more marked effect on raltegravir disposition. 
Atazanavir is an inhibitor of UGT1A1 (42), and the coadministration of atazanavir (400 mg once a day [QD]) with raltegravir (100 mg single dose) resulted in increased raltegravir plasma area under the concentration-time curve (AUC), Cmax, and Cmin values of 72%, 53%, and 95%, respectively (20). This interaction was also confirmed in a separate study (43). Efavirenz and rifampin are inducers of UGT1A1 expression (14, 40) and have been shown to decrease raltegravir plasma exposure (22, 39). Other interactions were reported previously for omeprazole (21) and fosamprenavir (28). One intriguing interaction is that of tenofovir, causing a moderate increase in the raltegravir AUC and Cmax by 49% and 64%, respectively (38). Although the increase in the raltegravir plasma concentration is unlikely to have any important clinical significance (i.e., no increase in toxicity), the mechanism of the interaction is currently unexplained. Interestingly, tenofovir is an anionic compound when charged and is therefore a substrate of the organic anion uptake transporters SLC22A6 and SLC22A8 (36).

The aim of this study was to confirm the transport of raltegravir by ABCB1 (P-glycoprotein) and to investigate potential transport by major human drug influx transporters (25), with the exception of CNTs (concentrative nucleoside transporters) and ENTs (equilibrative nucleoside transporters), since these are specific to nucleosides. The influx transporters characterized were SLCO1A2 (OATPA), SLCO1B1 (OATPC), SLCO1B3 (OATP8), SLC22A1 (OCT1), SLC22A6 (OAT1), SLC10A1 (NTCP), SLC15A1 (PEPT1), and SLC15A2 (PEPT2). Since tenofovir is a substrate for SLC22A6, the putative role for this transporter in the interaction between raltegravir and tenofovir was also investigated.

Chemical reagents used. CellFix was obtained from Becton Dickinson (Oxford, United Kingdom). UIC2 (anti-ABCB1) antibody was obtained from Immunotech (Marseilles, France).
The IgG2a negative control and goat anti-mouse IgG2a-RPE (R-phycoerythrin) were obtained from Serotech Ltd. (Oxford, United Kingdom). CEM and CEMVBL100 cells were donated by Ross Davey, Bill Walsh Cancer Research Laboratories (St. Leonards, Australia). Caco-2 cells were purchased from the European Collection of Cell Cultures (Salisbury, United Kingdom). Primary renal proximal tubule epithelial cells, renal cell basal medium, and renal cell growth kit components were purchased from the American Type Culture Collection. [3H]raltegravir (specific activity = 32.85 Ci/mmol) and nonradiolabeled raltegravir sodium salt were gifts from Merck. [3H]digoxin (specific activity = 40 Ci/mmol) was purchased from Perkin-Elmer. [3H]saquinavir (specific activity = 0.2 Ci/mmol) and [3H]tenofovir (specific activity = 3.4 Ci/mmol) were purchased from Moravek. [3H]estrone-3-sulfate (specific activity = 50 Ci/mmol), [14C]tetraethyl ammonium (specific activity = 55 mCi/mmol), [3H]taurocholic acid (specific activity = 10 Ci/mmol), [3H]glycyl sarcosine (specific activity = 60 Ci/mmol), and [3H]aminohippuric acid (specific activity = 5 Ci/mmol) were purchased from American Radiolabeled Chemicals. Tariquidar was purchased from Xenova (Slough, United Kingdom). Nonradiolabeled tenofovir was a gift from Gilead. IMAGE clones of SLC22A6 and SLC22A8 were purchased from Geneservice. Ficoll-Paque Plus was purchased from GE Healthcare (Buckinghamshire, United Kingdom). All other reagents were obtained from Sigma (Poole, United Kingdom).

Cell culture. CEM and CEMVBL100 cells were maintained in cell culture medium (RPMI 1640 medium, 10% [wt/vol] fetal calf serum [FCS]) prior to the experiment (37°C in 5% CO2). CEM cells are a wild-type T-lymphoblastoid cell line. CEMVBL100 cells are CEM cells that have greatly increased ABCB1 expression (selected using vinblastine up to a concentration of 100 ng/ml).
Caco-2 cells were maintained in cell culture by passaging at 70% confluence using cell culture medium (Dulbecco's modified Eagle's medium [DMEM], 15% [wt/vol] FCS) prior to the experiment (37°C in 5% CO2). Primary renal proximal tubule epithelial cells were maintained in cell culture by passage at 95% confluence using renal epithelial cell basal medium with essential components (0.5% [wt/vol] FCS, 10 nM triiodothyronine, 10 ng/ml recombinant human epidermal growth factor [rhEGF], 100 ng/ml hydrocortisone hemisuccinate, 5 μg/ml rh insulin, 1 μM epinephrine, 5 μg/ml transferrin, 2.4 mM l-alanyl-l-glutamine) prior to the experiment (37°C in 5% CO2).

Cytotoxicity testing. CEM and Caco-2 cell lines (100 μl; 2 × 10^5 cells/ml) were incubated in a 96-well Nunc flat-bottom plate (37°C in 5% CO2 for 120 h) with 0, 0.001, 0.01, 0.1, 1, 10, and 100 μM raltegravir. To validate the study, the cytotoxic control compounds epirubicin and vinblastine were incubated (0.1, 1, 10, and 100 μM) with CEM and Caco-2 cells, respectively. Following incubation, the plates were centrifuged (2,000 rpm for 5 min), and the supernatant was discarded and replaced with a 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) solution (1 mg/ml MTT, 100 μl Hanks balanced salt solution [HBSS]). A standard MTT assay was performed, the mean absorption was calculated for each raltegravir concentration (n = 6), and results were expressed as the percent absorbance compared to the vehicle control (30).

Confirmation of ABCB1 expression in CEM and CEMVBL100 cells by flow cytometry. The expression of ABCB1 in CEM and CEMVBL100 cells was determined by using a Coulter Epics XL-MCL flow cytometer as previously described (27). Results are given as relative fluorescence units (mean fluorescence minus that of the isotype control; n = 3) ± standard deviations (SD).
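The MTT viability readout described above is a straightforward normalization to the vehicle control; a minimal Python sketch (the function name and absorbance values are mine, purely hypothetical, not data from the study):

```python
def percent_viability(absorbances, vehicle_control_mean):
    """Mean absorbance of replicate wells expressed as a percentage of
    the drug-free (vehicle) control mean, as in a standard MTT assay."""
    mean_abs = sum(absorbances) / len(absorbances)
    return 100.0 * mean_abs / vehicle_control_mean

# Hypothetical absorbances for six replicate wells at one raltegravir
# concentration, normalized to a hypothetical vehicle-control mean:
wells = [0.52, 0.55, 0.50, 0.54, 0.53, 0.51]
viability = percent_viability(wells, vehicle_control_mean=0.50)  # ~105%
```

Values above 100%, as seen for raltegravir in both cell lines, simply mean the treated wells absorbed slightly more than the control and indicate no cytotoxicity.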
Cellular accumulation of raltegravir in CEM and CEMVBL100 cells. CEM and CEMVBL100 cells of a constant cell density (1 ml; 2.5 × 10^6 cells/ml) were incubated (37°C in 5% CO2) for 30 min in RPMI 1640 medium (10% [wt/vol] FCS) containing [3H]raltegravir (1 μM; 0.2 μCi/ml) or the control ABCB1 substrate [3H]saquinavir (1 μM; 0.2 μCi/ml). A separate incubation was undertaken where the cells were preincubated prior to the substrate addition with RPMI 1640 medium (10% [wt/vol] FCS) containing the potent noncompetitive ABCB1 inhibitor tariquidar (300 nM for 30 min), which was also included during the 30 min of substrate incubation. Following incubation, the cells were centrifuged (2,000 rpm at 1°C for 1 min), and 100 μl supernatant aliquots were taken and added to scintillation vials in order to calculate extracellular drug concentrations. The remaining supernatant was discarded, and the cells were washed with ice-cold HBSS and centrifuged (2,000 rpm at 1°C for 1 min). This HBSS wash was repeated a total of 3 times, after which the HBSS was discarded and 100 μl tap water was added to lyse the cells. The incubations were vortexed for 5 min, and 100 μl samples were added to scintillation vials. Four milliliters of scintillation fluid was added to all the scintillation vials, which were then loaded into a liquid scintillation analyzer (Tri-Carb). Using supernatant and intracellular radioactivity readings, cellular accumulation ratios (CARs) (ratio of drug in the cell pellet compared with drug in the supernatant, assuming a 1 pl volume per cell) were calculated for raltegravir and saquinavir in each cell line as described previously (23).

Bidirectional transport of raltegravir using Caco-2 cell monolayers. Caco-2 monolayer experiments were performed as previously described (17), with slight modifications. When confluent, Caco-2 cells (passage 30) were seeded onto polycarbonate membrane transwells at a density of 5 × 10^5 cells/cm2.
The medium was replaced every 2 days, and plates were used in the experiments after 21 days. Monolayer integrity was checked by using a Millicell-ERS instrument (Millipore) to determine the transepithelial electrical resistance (TEER) across the monolayer. A TEER of >600 Ω·cm2 was deemed acceptable. On the day of the experiment, the TEER was assessed, and the medium in each plate was replaced with warm transport buffer (HBSS containing 25 mM HEPES and 0.1% [wt/vol] bovine serum albumin [pH 7.4]) and allowed to equilibrate (37°C for 30 min). For inhibition studies this transport buffer contained tariquidar (300 nM). The transport buffer in the apical (for apical-to-basolateral transport) and basolateral (for basolateral-to-apical transport) sides was replaced with transport buffer containing either the test substrate [3H]raltegravir or the control ABCB1 substrate [3H]digoxin (1 μM; 0.33 μCi/ml) with or without 300 nM tariquidar. Samples were taken from the receiver compartment at 30, 60, 90, 120, and 180 min and analyzed by using a liquid scintillation counter (Tri-Carb). Results were used to determine the apparent permeability (Papp) (cm/s) for each direction and the efflux ratio (ratio of basolateral-to-apical Papp to the apical-to-basolateral Papp). The Papp was calculated by using the following equation, as described previously (10):

\[ P_{\mathrm{app}} = \frac{(dQ/dt) \times v}{A \times C_{0}} \]

where dQ/dt is the change in the drug concentration in the receiver chamber over time (nM/s), v is the volume in the receiver compartment (ml), A is the total surface area of the transwell membrane (cm2), C0 is the initial drug concentration in the donor compartment (nM), and Papp is the apparent permeability (cm/s).

Production of uptake transporter cRNA for Xenopus laevis oocyte injection. SLCO1A2, SLCO1B1, and SLCO1B3 were cloned from cDNA extracted from Huh-7D12 and A549 cells as described previously (15).
IMAGE clones were linearized and used as a template in cRNA production by using an mMESSAGE mMACHINE RNA transcription kit (Ambion) according to the manufacturer's protocol. SLC10A1, SLC15A1, and SLC15A2 cRNAs were provided by Becton Dickinson (Oxford, United Kingdom).

X. laevis oocyte isolation, collagenase treatment, and microinjection. Oocytes were harvested from sacrificed adult female X. laevis frogs and treated with modified Barth's solution minus calcium (88 mM NaCl, 1 mM KCl, 15 mM HEPES, 100 U penicillin, 100 μg streptomycin [pH 7.4]) containing collagenase (1 mg/ml at 22°C in a 60-rpm shaker for 1 h). Cells were washed and transferred into Barth's solution containing calcium (88 mM NaCl, 1 mM KCl, 15 mM HEPES, 0.3 mM Ca(NO3)2·4H2O, 41 μM CaCl2·6H2O, 0.82 mM MgSO4·7H2O, 100 U penicillin, 100 μg streptomycin [pH 7.4]) and stored in a cold room at 8°C. Healthy cells were selected and injected with transporter cRNA (50 ng per oocyte; 1 ng/nl) or sterile water (50 nl) and maintained in Barth's solution containing calcium to allow transporter expression (5 days for SLCO1B3-injected oocytes and 3 days for all other conditions; 18°C). Barth's solution was replaced daily, and damaged oocytes were removed.

Drug accumulation in transporter RNA-injected X. laevis oocytes. Drug accumulation studies using X. laevis oocytes were performed as described previously, with slight modifications (15). Unless otherwise stated, radiolabeled drug was incubated in Hanks balanced salt solution (pH 7.4) with ≥4 oocytes per condition in a 48-well Nunc flat-bottom plate (500 μl; 0.33 μCi/ml; 1 μM; room temperature in a 60-rpm shaker for 1 h). Radiolabeled positive-control drugs were tested alongside [3H]raltegravir to ensure successful transporter expression.
Positive-control drugs used were [3H]estrone-3-sulfate for SLCO1A2, SLCO1B1, and SLCO1B3; [3H]aminohippuric acid for SLC22A6; [3H]taurocholic acid for SLC10A1; [3H]glycyl sarcosine for SLC15A1 and SLC15A2; and [14C]tetraethyl ammonium for SLC22A1. All incubations were terminated by transferring the oocytes into cell strainers and washing them in ice-cold HBSS to remove extracellular drug. Each oocyte was placed into a separate scintillation vial followed by treatment with 100 μl 10% SDS. After the disintegration of the oocytes by the SDS treatment, 4 ml scintillation fluid was added to all vials, which were then loaded into a liquid scintillation analyzer (Tri-Carb). Results are expressed as the amount of drug per oocyte (pmol/oocyte), assuming that each oocyte had a volume of 1 μl (15). Results were obtained using oocytes from ≥2 X. laevis frogs.

Determination of raltegravir and tenofovir Km and Vmax using SLC22A6-injected oocytes. The time-dependent SLC22A6-mediated transport of [3H]raltegravir, [3H]tenofovir, and the control compound [3H]aminohippuric acid was investigated by incubating each drug with SLC22A6- and H2O-injected oocytes at a standard concentration of 1 μM for various lengths of time (1, 2, 5, 10, 15, 30, 60, 120, 180, or 240 min). From these results, an incubation time was chosen that gave a high accumulation rate in SLC22A6-injected oocytes compared to the H2O-injected controls and also allowed enough radiolabeled drug to enter the oocytes for it to be detectable. This time point was then used for kinetic experiments in which [3H]raltegravir and [3H]tenofovir were incubated with SLC22A6- and H2O-injected oocytes (1 h for both drugs) using a range of concentrations (0.1, 0.3, 1, 3, 10, 30, 100, 300, and 1,000 μM for raltegravir and 0.3, 1, 3, 10, 30, 100, 200, and 300 μM for tenofovir). Incubations were terminated as described above.
The transport of drug by SLC22A6 was determined by subtracting the drug accumulation in H2O-injected oocytes from the drug accumulation in SLC22A6-injected oocytes. The Km and Vmax were calculated by plotting the initial rate of drug transport by SLC22A6 (pmol/oocyte/h) against the drug incubation concentration (μM). The intrinsic clearance (CLint) was calculated by dividing the Vmax by the Km.

Determination of competition between raltegravir and tenofovir for SLC22A6 transport. [3H]raltegravir (0.33 μCi/ml; 1 μM) was incubated for 1 h with SLC22A6-injected oocytes with various concentrations of nonradiolabeled tenofovir (1, 3, 10, 30, 100, and 300 μM), and the effect on [3H]raltegravir uptake was determined. [3H]raltegravir (0.33 μCi/ml; 1 μM) was also incubated with H2O-injected cells with and without 300 μM tenofovir in order to confirm the role of SLC22A6. This experiment was repeated using [3H]tenofovir as the substrate and nonradiolabeled raltegravir as the competitor drug using the same concentration range and was terminated as described above. Fifty percent inhibitory concentrations (IC50s) were generated for both drugs by using Prism 3.0.

Isolation of peripheral blood mononuclear cells from human blood. Venous blood samples (60 ml) were obtained from healthy volunteers via venipuncture. Blood was layered onto Ficoll-Paque Plus, and peripheral blood mononuclear cells (PBMCs) were isolated by density gradient centrifugation according to the manufacturer's protocol. Cells were washed with ice-cold HBSS and centrifuged (800 × g at 1°C for 5 min). This HBSS wash was repeated a total of 3 times, after which the HBSS was discarded and cells were resuspended in RPMI 1640 medium (10% [wt/vol] FCS) for use in experiments.

Cellular accumulation of raltegravir in peripheral blood mononuclear cells. The accumulation of raltegravir in peripheral blood mononuclear cells was determined by using the same method as that used with CEM and CEMVBL cells, with slight modifications.
Briefly, cells of a constant cell density (1 ml; 5 × 10^6 cells/ml) were incubated (37°C in 5% CO2) for 30 min in RPMI 1640 medium (10% [wt/vol] FCS) containing [3H]raltegravir (1 μM; 0.2 μCi/ml). Separate incubations were undertaken where cells were preincubated prior to substrate addition with RPMI 1640 medium (10% [wt/vol] FCS) containing either the noncompetitive ABCB1 inhibitor tariquidar (300 nM for 30 min), the competitive SLC22A6 inhibitor probenecid (1 mM for 30 min), or the competitive SLC15A1 inhibitor glycyl sarcosine (1 mM for 30 min), which were also included during the 30 min of substrate incubation. Cells were washed and treated for analysis as described above for CEM and CEMVBL cells.

Cellular accumulation of raltegravir in primary renal proximal tubule epithelial cells. When confluent, renal cells (passage 4) were seeded onto polyester membrane transwells at a density of 5 × 10^5 cells/cm2. The medium was replaced daily, and plates were used for experiments after 5 days. On the day of the experiment, medium was replaced with buffered HBSS (25 mM HEPES [pH 7.4]) containing either [3H]raltegravir (1 μM; 0.6 μCi/ml), [3H]tenofovir (1 μM; 0.6 μCi/ml), or the control SLC22A6 substrate [3H]aminohippuric acid (1 μM; 0.6 μCi/ml) and incubated (3 h at 37°C in 5% CO2). A separate incubation was undertaken where the cells were preincubated prior to substrate addition with buffered HBSS (25 mM HEPES [pH 7.4]) containing the SLC22A6 inhibitor probenecid (1 mM for 30 min), which was also included during the 3 h of substrate incubation. We also investigated the competition between raltegravir and tenofovir by incubating [3H]raltegravir (1 μM; 0.6 μCi/ml) with 100 μM tenofovir and [3H]tenofovir (1 μM; 0.6 μCi/ml) with 100 μM raltegravir (3 h at 37°C in 5% CO2). Once incubation was complete, an extracellular sample was taken, and incubations were terminated by washing each well with cold HBSS (4°C; 3 ml) three times to remove excess drug.
Cells were lysed with 0.5 ml tap water, and contents were analyzed by liquid scintillation as described above for the CEM experiments.

Determination of transporter mRNA expression in primary renal proximal tubule cells and whole kidney. mRNA from primary renal proximal tubule cells was isolated by using Tri reagent according to the manufacturer's protocol. mRNA was then reverse transcribed by using the TaqMan reverse transcription kit according to the manufacturer's protocol. Real-time PCR was performed with TaqMan array plates loaded with cDNA (40 ng) to quantify mRNA expression by standard methodologies. This process was repeated to obtain mRNA quantification for whole-kidney mRNA purchased from Ambion (United Kingdom), which was pooled from three individuals. Transporters were quantified by using the ΔΔCT method and included SLC22A6, SLC22A1, SLC22A2, SLC22A3, ABCC1, ABCC2, ABCC3, ABCC4, and ABCC10. Glyceraldehyde-3-phosphate dehydrogenase (GAPDH) was used as the housekeeping gene.

Statistical analysis. Data were analyzed by using SPSS 13.0 for Windows. All data were tested for normality by using the Shapiro-Wilk test. An independent t test was used to determine the significance of the ABCB1 flow cytometry data, Caco-2 transwell data, PBMC accumulation data, and renal cell accumulation data. The Mann-Whitney U test was used for all other data. A two-tailed P value of <0.05 was accepted as being statistically significant.

Toxicity of raltegravir in CEM and Caco-2 cell lines. Raltegravir was not cytotoxic in CEM or Caco-2 cell lines at the tested concentrations. Cell viability (percent mean viability compared to the drug-free control ± SD; n = 6) was unaffected by concentrations of up to 100 μM in the Caco-2 (104.9% ± 12.7%) and CEM (113.9% ± 13.0%) cell lines. A concentration of 1 μM raltegravir was used in subsequent experiments with these cells to minimize the risk of transporter saturation, as previously recommended (17).
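The ΔΔCT relative quantification used for the transporter mRNA panel reduces to a short calculation: the target CT is normalized to GAPDH in both the sample and a calibrator, and the difference sets the fold change. A minimal Python sketch (the function name and CT values are mine, purely hypothetical):

```python
def fold_change_ddct(ct_target, ct_gapdh, ct_target_ref, ct_gapdh_ref):
    """Relative expression by the comparative CT (2^-ddCT) method,
    normalized to GAPDH and expressed relative to a calibrator sample."""
    delta_ct_sample = ct_target - ct_gapdh          # sample normalized to GAPDH
    delta_ct_reference = ct_target_ref - ct_gapdh_ref  # calibrator normalized to GAPDH
    delta_delta_ct = delta_ct_sample - delta_ct_reference
    return 2.0 ** (-delta_delta_ct)

# Hypothetical CT values: a transporter in primary renal cells (sample)
# against pooled whole kidney (calibrator), with GAPDH as housekeeper.
fold = fold_change_ddct(ct_target=28.0, ct_gapdh=18.0,
                        ct_target_ref=25.0, ct_gapdh_ref=18.0)  # 0.125
```

A fold change below 1 (as in the hypothetical example) would indicate lower transporter expression in the cultured cells than in whole kidney, which is relevant when interpreting the renal cell accumulation results.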
ABCB1 expression levels in CEM and CEMVBL100 cell lines. The expression of ABCB1 in CEM and CEMVBL100 cell lines was determined by using flow cytometry. CEMVBL100 cells had a significantly higher level of ABCB1 cell surface expression (relative fluorescence = 246.7 ± 9.3; P < 0.001) than CEM cells (relative fluorescence = 1.5 ± 0.5).

Effect of the ABCB1 inhibitor tariquidar on the accumulation of raltegravir and saquinavir using CEM and CEMVBL100 cell lines. The cellular accumulation of [3H]raltegravir was determined with CEM and ABCB1-overexpressing CEMVBL100 cells, and the effect of the ABCB1 inhibitor tariquidar on this accumulation was investigated (Fig. 1A). The level of [3H]raltegravir accumulation was lower in CEMVBL100 cells (CAR = 1.4 ± 0.2) than in CEM cells (CAR = 2.1 ± 0.2; P = 0.02). This difference was reversed when CEMVBL100 cells were treated with tariquidar (CAR = 2.0 ± 0.4). The control ABCB1 substrate [3H]saquinavir had a lower level of accumulation in CEMVBL100 cells (CAR = 19.0 ± 5.6) than in CEM cells (CAR = 37.5 ± 2.1; P = 0.02) (Fig. 1B). This difference was also reversed when CEMVBL100 cells were treated with tariquidar (CAR = 37.8 ± 8.5).

Fig. 1. (A) [3H]raltegravir (1 μM) accumulation in CEM, CEMVBL100, and CEMVBL100 cells treated with 300 nM tariquidar. (B) [3H]saquinavir (1 μM) accumulation in CEM, CEMVBL100, and CEMVBL100 cells treated with 300 nM tariquidar. Data in A and B are expressed as mean CARs (n = 4 biological replicates; n = 4 experimental replicates per biological replicate) ± SD. *, P < 0.05; **, P < 0.01; ***, P < 0.001. (C) Apparent permeability of [3H]raltegravir and [3H]digoxin in the A-to-B (apical-to-basolateral) and B-to-A (basolateral-to-apical) directions across the Caco-2 transwell membrane, with and without the presence of 300 nM tariquidar. Data are expressed as mean apparent permeabilities (×10^−6 cm/s; n = 3 experimental replicates) ± SD. *, P < 0.05; **, P < 0.01; ***, P < 0.001.
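The apparent permeabilities plotted in the figure are computed with the Papp equation from Methods, and the efflux ratio compares the two directions. A minimal Python sketch (function names are mine and the input numbers are illustrative only, not the study's raw data):

```python
def apparent_permeability(dq_dt, receiver_volume, membrane_area, c0):
    """Papp (cm/s) = (dQ/dt * v) / (A * C0), with dQ/dt in nM/s,
    v in ml (= cm^3), A in cm^2, and C0 in nM; the nM units cancel."""
    return (dq_dt * receiver_volume) / (membrane_area * c0)

def efflux_ratio(papp_b_to_a, papp_a_to_b):
    """Basolateral-to-apical over apical-to-basolateral Papp; a ratio
    well above 1 that collapses under an ABCB1 inhibitor points to
    efflux-transporter involvement."""
    return papp_b_to_a / papp_a_to_b

# Illustrative calculation with made-up inputs:
papp_ab = apparent_permeability(dq_dt=2.0, receiver_volume=1.0,
                                membrane_area=1.0, c0=1000.0)  # 2.0e-3 cm/s
```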
Effect of the ABCB1 inhibitor tariquidar on the bidirectional transport of raltegravir and digoxin using a Caco-2 monolayer. The Papp values obtained for [3H]raltegravir and [3H]digoxin with and without tariquidar are given in Fig. 1C. All Papp and efflux ratio calculations were made by using the samples taken after 120 min of incubation, as sink conditions were maintained. [3H]raltegravir showed significantly higher transport in the B→A direction (Papp = 13.4 × 10−6 ± 2.1 × 10−6) compared to the A→B direction (Papp = 7.3 × 10−6 ± 2.2 × 10−6; P = 0.02). The efflux ratio (B→A/A→B) of [3H]raltegravir at 120 min was 1.9. The presence of tariquidar reduced the efflux ratio of [3H]raltegravir to 1.3 (P = 0.30). The control compound [3H]digoxin showed significantly higher levels of transport in the B→A direction (Papp = 12.9 × 10−6 ± 0.6 × 10−6) compared to the A→B direction (Papp = 2.1 × 10−6 ± 0.3 × 10−6; P < 0.001). The efflux ratio of [3H]digoxin at 120 min was 6.3. The presence of tariquidar reduced the efflux ratio of [3H]digoxin to 0.9 (P = 0.58). Accumulation of raltegravir in oocytes by uptake transporters. [3H]raltegravir transport by the investigated uptake transporters is given in Table 1. The level of [3H]raltegravir accumulation was significantly higher in SLC22A6-injected oocytes (0.44 ± 0.12 pmol/oocyte; n = 18) than in H2O-injected oocytes (0.20 ± 0.05 pmol/oocyte; n = 19; P < 0.001). [3H]raltegravir accumulation was also significantly higher in SLC15A1-injected oocytes (0.26 ± 0.12 pmol/oocyte; n = 19) than in H2O-injected oocytes (0.17 ± 0.03 pmol/oocyte; n = 19; P = 0.003). The positive-control compounds [3H]estrone-3-sulfate, [3H]aminohippuric acid, [3H]taurocholic acid, [3H]glycyl sarcosine, and [14C]tetraethyl ammonium all showed significant increases in levels of accumulation in transporter RNA-injected oocytes compared to H2O-injected oocytes.
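The apparent-permeability and efflux-ratio arithmetic behind the Caco-2 results above can be sketched as follows. The `apparent_permeability` helper and its example units are assumptions for illustration; the two Papp values are the reported means for [3H]raltegravir.

```python
# Sketch of the standard Caco-2 transwell arithmetic:
#   Papp (cm/s) = (dQ/dt) / (A * C0)
# where dQ/dt is the rate of appearance in the receiver compartment,
# A is the membrane area, and C0 the initial donor concentration.

def apparent_permeability(dq_dt, area_cm2, c0):
    """Papp in cm/s, given dq_dt in pmol/s, area in cm^2, c0 in pmol/cm^3."""
    return dq_dt / (area_cm2 * c0)

# Efflux ratio from the reported mean Papp values of [3H]raltegravir.
papp_ba = 13.4e-6  # B->A, cm/s
papp_ab = 7.3e-6   # A->B, cm/s
efflux_ratio = papp_ba / papp_ab
# ~1.8 from these rounded means; the text reports 1.9, presumably computed
# from the unrounded per-replicate data.
```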
Accumulation of 1 μM raltegravir and various positive-control compounds in oocytes. Determination of the time-dependent accumulation of raltegravir, tenofovir, and aminohippuric acid in SLC22A6-injected oocytes. All drugs tested had a greater accumulation rate in SLC22A6-injected oocytes than in H2O-injected oocytes (Fig. 2A, B, and C). [3H]raltegravir concentrations continued to increase in SLC22A6-injected oocytes throughout the 4-h incubation period, whereas saturation was reached in H2O-injected oocytes after 2 h. Both [3H]tenofovir and [3H]aminohippuric acid showed virtually no accumulation in H2O-injected oocytes. (A) SLC22A6- and H2O-injected oocyte uptake of [3H]raltegravir over a 4-h incubation period. (B) SLC22A6- and H2O-injected oocyte uptake of [3H]tenofovir over a 4-h incubation period. (C) SLC22A6- and H2O-injected oocyte uptake of [3H]aminohippuric acid over a 4-h incubation period. Data in A, B, and C are expressed as mean drug accumulations (pmol/oocyte) (n = 5 experimental replicates from one biological replicate) ± standard errors (SE). (D) Concentration dependency of the uptake of [3H]raltegravir by SLC22A6. (E) Concentration dependency of the uptake of [3H]tenofovir by SLC22A6. Data in D and E are expressed as mean rates of transport (pmol/oocyte/h) (n = 4 experimental replicates from one biological replicate) ± SE. Determination of raltegravir and tenofovir Km and Vmax values using SLC22A6-injected oocytes. An incubation time of 1 h was chosen for subsequent kinetic studies. [3H]raltegravir and [3H]tenofovir kinetics were determined for SLC22A6 in the oocyte expression system (Fig. 2D and E). The raltegravir Km and Vmax were calculated to be 150 μM and 36 pmol/oocyte/h, respectively. The raltegravir CLint (Vmax/Km) was calculated to be 0.2 μl/oocyte/h. The tenofovir Km and Vmax were calculated to be 25 μM and 129 pmol/oocyte/h, respectively. The tenofovir CLint was calculated to be 5.2 μl/oocyte/h.
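The Km, Vmax, and CLint values above come from fitting rate-versus-concentration data to Michaelis-Menten kinetics. Below is a minimal sketch, assuming a simple double-reciprocal (Lineweaver-Burk) linear fit on noiseless synthetic data; the authors' actual fitting procedure is not specified here, and the helper names are invented.

```python
# Sketch: recover Km and Vmax from rate-vs-concentration data via the
# linearized form 1/v = (Km/Vmax)*(1/S) + 1/Vmax, then CLint = Vmax/Km.

def lineweaver_burk_fit(conc, rate):
    """Least-squares fit of the double-reciprocal plot; returns (Km, Vmax)."""
    xs = [1.0 / s for s in conc]
    ys = [1.0 / v for v in rate]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    vmax = 1.0 / intercept
    km = slope * vmax
    return km, vmax

# Synthetic, noise-free data generated from the reported raltegravir values
# (Km = 150 uM, Vmax = 36 pmol/oocyte/h); concentrations are invented.
km_true, vmax_true = 150.0, 36.0
conc = [10.0, 30.0, 100.0, 300.0, 1000.0]                 # uM
rate = [vmax_true * s / (km_true + s) for s in conc]      # pmol/oocyte/h
km, vmax = lineweaver_burk_fit(conc, rate)
clint = vmax / km  # 36/150 = 0.24 ul/oocyte/h, ~0.2 as reported after rounding
```

With real, noisy data a direct nonlinear fit of the Michaelis-Menten equation is preferable to the double-reciprocal transform, which amplifies error at low concentrations.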
Competition between raltegravir and tenofovir for SLC22A6 transport. When incubated at 1 μM for 1 h, [3H]raltegravir showed a significantly higher level of accumulation in SLC22A6-injected oocytes than in H2O-injected oocytes (1.73 ± 0.46 pmol/oocyte versus 0.54 ± 0.06 pmol/oocyte; n = 5; P = 0.014) (Fig. 3A). The coincubation of [3H]raltegravir with 1, 3, 10, 30, 100, and 300 μM tenofovir resulted in levels of [3H]raltegravir accumulation in SLC22A6-injected oocytes of 1.78 ± 0.41 pmol/oocyte, 1.63 ± 0.62 pmol/oocyte, 1.48 ± 0.58 pmol/oocyte, 1.28 ± 0.27 pmol/oocyte, 0.96 ± 0.28 pmol/oocyte, and 0.83 ± 0.31 pmol/oocyte, respectively (Fig. 3C). There was a statistically significant decrease (P < 0.05) in the level of [3H]raltegravir accumulation when concentrations of 100 μM tenofovir and higher were added to the incubation mixture. (A) Accumulation of 1 μM [3H]raltegravir in SLC22A6- and H2O-injected oocytes with and without the addition of 300 μM tenofovir. (B) Accumulation of 1 μM [3H]tenofovir in SLC22A6- and H2O-injected oocytes with and without the addition of 300 μM raltegravir. Data in A and B are expressed as mean drug concentrations per oocyte (pmol/oocyte) (n = 5 experimental replicates from one biological replicate) ± SE. *, P < 0.05; **, P < 0.01; ***, P < 0.001. (C) Determination of IC50 for inhibition of 1 μM [3H]raltegravir SLC22A6 transport by tenofovir. Data are expressed as mean [3H]raltegravir oocyte concentrations (pmol/oocyte) (n = 5 experimental replicates from one biological replicate) ± SE. (D) Determination of IC50 for inhibition of 1 μM [3H]tenofovir SLC22A6 transport by raltegravir. Data are expressed as mean [3H]tenofovir oocyte concentrations (pmol/oocyte) (n = 5 experimental replicates from one biological replicate) ± SE. Similar results were seen when [3H]tenofovir accumulation was investigated in the presence of various concentrations of raltegravir.
When incubated at 1 μM for 1 h, [3H]tenofovir showed a significantly higher level of accumulation in SLC22A6-injected oocytes than in H2O-injected oocytes (2.11 ± 0.37 pmol/oocyte versus 0.03 ± 0.01 pmol/oocyte; n = 5; P < 0.009) (Fig. 3B). The coincubation of [3H]tenofovir with 1, 3, 10, 30, 100, and 300 μM raltegravir resulted in levels of [3H]tenofovir accumulation in SLC22A6-injected oocytes of 2.11 ± 0.14 pmol/oocyte, 2.07 ± 0.38 pmol/oocyte, 1.47 ± 0.43 pmol/oocyte, 0.78 ± 0.50 pmol/oocyte, 0.49 ± 0.06 pmol/oocyte, and 0.28 ± 0.08 pmol/oocyte, respectively (Fig. 3D). There was a statistically significant decrease (P < 0.05) in levels of [3H]tenofovir accumulation when concentrations of 10 μM raltegravir and higher were added to the incubation mixtures. Effect of transporter inhibitors on the accumulation of raltegravir in peripheral blood mononuclear cells. The cellular accumulation of [3H]raltegravir in peripheral blood mononuclear cells was determined (CAR = 3.02 ± 0.67) (Fig. 4A). Cellular accumulation was not significantly altered by coincubation with tariquidar (CAR = 3.78 ± 1.56; P = 0.77), probenecid (CAR = 3.26 ± 0.98; P = 0.77), or glycyl sarcosine (CAR = 3.23 ± 0.83; P = 0.56). (A) [3H]raltegravir (1 μM) accumulation in peripheral blood mononuclear cells alone or treated with 300 nM tariquidar, 1 mM probenecid, or 1 mM glycyl sarcosine. (B) [3H]raltegravir (1 μM) accumulation in primary renal proximal tubule cells alone or treated with 1 mM probenecid or 100 μM tenofovir. (C) [3H]tenofovir (1 μM) accumulation in primary renal proximal tubule cells alone or treated with 1 mM probenecid or 100 μM raltegravir. (D) [3H]aminohippuric acid (1 μM) accumulation in primary renal proximal tubule cells alone or treated with 1 mM probenecid. Data in A, B, C, and D are expressed as mean CARs (n = 3 experimental replicates) ± SD. *, P < 0.05; **, P < 0.01; ***, P < 0.001.
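The concentration-inhibition series above are the raw material for the IC50 estimates discussed later. The sketch below uses invented data and a simple log-linear interpolation between the two concentrations that bracket the 50% level, not the authors' curve-fitting method (a sigmoidal fit would normally be used); all names and values are illustrative.

```python
import math

# Sketch: read an IC50 off a concentration-inhibition series by log-linear
# interpolation between the two bracketing inhibitor concentrations.

def ic50_interpolate(concs, signals, control):
    """Return the inhibitor concentration at which signal = 50% of control."""
    half = 0.5 * control
    for (c1, s1), (c2, s2) in zip(zip(concs, signals),
                                  zip(concs[1:], signals[1:])):
        if s1 >= half >= s2:  # bracketing interval, signal decreasing
            t = (s1 - half) / (s1 - s2)
            return 10 ** (math.log10(c1)
                          + t * (math.log10(c2) - math.log10(c1)))
    raise ValueError("50% level not bracketed by the tested concentrations")

# Hypothetical substrate accumulation (pmol/oocyte) at rising inhibitor
# concentrations (uM); the uninhibited control is 2.0 pmol/oocyte.
concs = [1, 3, 10, 30, 100]
signals = [1.9, 1.7, 1.3, 0.7, 0.3]
ic50 = ic50_interpolate(concs, signals, control=2.0)
```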
(E) Relative abundance of transporter RNA in cultured renal proximal tubule cells compared to transporter RNA in whole kidney (log percent) (n = 1 experimental replicate). Interactions in primary renal proximal tubule epithelial cells. The levels of cellular accumulation of [3H]raltegravir (CAR = 2.01 ± 0.20) (Fig. 4B), [3H]tenofovir (CAR = 3.45 ± 0.76) (Fig. 4C), and [3H]aminohippuric acid (CAR = 0.50 ± 0.02) (Fig. 4D) in renal proximal tubule epithelial cells were determined. [3H]raltegravir cellular accumulation was not significantly altered by treatment with 1 mM probenecid (CAR = 2.19 ± 0.45; P = 0.56) or 100 μM tenofovir (CAR = 2.07 ± 0.08; P = 0.66). [3H]tenofovir showed a high level of cellular accumulation, which was not significantly altered by treatment with 1 mM probenecid (CAR = 2.59 ± 0.56; P = 0.19) or 100 μM raltegravir (CAR = 3.15 ± 0.74; P = 0.65). For [3H]aminohippuric acid there was a trend toward a lower level of cellular accumulation when the cells were incubated with 1 mM probenecid (CAR = 0.44 ± 0.03; P = 0.08). Transporter expression in primary renal proximal tubule cells versus whole kidneys. All transporters tested showed lower or undetectable levels of expression in primary renal proximal tubule cells compared to whole kidney (Fig. 4E). Importantly, SLC22A6 was undetectable in primary renal cells. The results from CEM/CEMVBL accumulation and Caco-2 bidirectional transport experiments confirm that raltegravir is transported by ABCB1. The extent of raltegravir transport by ABCB1 was small compared to the transport of the positive controls saquinavir and digoxin. Indeed, FDA guidelines recommend that a drug should achieve an efflux ratio of at least 2 in Caco-2 cell monolayers and show greater than a 50% reduction in the efflux ratio when an ABCB1 inhibitor is used in order for ABCB1 transport to be considered relevant in vivo (16).
In our Caco-2 experiments, raltegravir achieved an efflux ratio of only 1.9, which was reduced by 32% when tariquidar was used to inhibit ABCB1. The low rate of raltegravir transport by ABCB1 may explain the absence of major drug interactions with known potent ABCB1 inhibitors. This is consistent with a previous report that the coadministration of low-dose ritonavir had no major effect on raltegravir pharmacokinetics, and no dose adjustment is required for patients (22). Raltegravir showed significantly increased levels of accumulation in SLC15A1- and SLC22A6-injected oocytes compared to H2O-injected controls, but accumulation was not higher in oocytes expressing SLCO1A2, SLCO1B1, SLCO1B3, SLC15A2, or SLC10A1. In SLC22A6-injected oocytes, both raltegravir and tenofovir inhibited the accumulation of the other in a concentration-dependent manner (Fig. 3C and D). No competition was observed for H2O-injected oocytes, which supports the hypothesis that raltegravir and tenofovir are competing for SLC22A6 transport and are not having nonspecific effects on oocyte membrane permeability. IC50 values of 27.3 μM and 14.0 μM were determined for tenofovir and raltegravir, respectively. The IC50 obtained for raltegravir was much lower than the observed Km for SLC22A6 (IC50 of 14.0 μM versus a Km of 147 μM). These results suggest that raltegravir is a more efficient SLC22A6 inhibitor than it is a substrate. Previous studies indicated that SLC22A6 and SLC15A1 are absent from PBMCs (5), and so these transporters are unlikely to explain the unusual PK-PD relationship for raltegravir. Indeed, our studies of PBMCs with known inhibitors of ABCB1, SLC22A6, and SLC15A1 revealed no significant interaction with raltegravir. Wenning et al. (38) previously studied the interaction of raltegravir (400 mg twice daily) and tenofovir (300 mg once daily).
The study showed increased raltegravir AUC (49%) and Cmax (64%) values but no effect on the raltegravir Cmin, together with a decrease in the tenofovir AUC (10%), Cmax (23%), and Cmin (13%) (38). SLC22A6 is expressed predominantly in the proximal tubule of the kidney on the basolateral (blood-facing) surface, thereby facilitating the removal of drugs from the blood into the urine (31). Therefore, a possible mechanism of interaction is the inhibition of SLC22A6-mediated raltegravir transport at the kidney proximal tubule by tenofovir, resulting in increased raltegravir plasma concentrations. In order to investigate interactions at the level of the kidney, we conducted a number of experiments with primary renal proximal cells. No interaction between tenofovir and raltegravir was observed for these cells, but neither was an interaction with the positive-control substrate and inhibitor. Subsequent analyses revealed the expression of SLC22A6 to be undetectable, unlike in whole kidney tissue. Furthermore, all transporters assessed were expressed at lower levels than in kidney tissue, and the absence of a robust primary cell model for these studies is a limitation. Since only a small percentage (around 30%) of raltegravir excreted via the kidney is in the parent form, with the remaining 70% being the glucuronide metabolite (24), it is important to determine whether the raltegravir glucuronide is also transported by SLC22A6 and inhibited by tenofovir. It would also be interesting to investigate the transport and inhibitory potential of tenofovir diphosphate for SLC22A6 and whether this affects raltegravir transport to the same degree. The Xenopus laevis oocyte expression system has several advantages when investigating drug transport. The large size and high level of protein production of oocytes provide robust and reliable data. Also, the level of expression of endogenous primary and secondary active xenobiotic transporters in oocytes is low (32).
However, there are also disadvantages. The temperature must be maintained at 18°C during protein expression and at around room temperature during any accumulation experiments to avoid degradation, and this may impact transporter kinetics. Also, as in other models, the expression of the investigated transporters is superphysiological. Therefore, although such systems allow investigation of low-affinity or high-permeability substrates, caution should be taken when extrapolating the data to in vivo observations. In summary, our studies have shown raltegravir to have minimal interactions with known drug transporters. Raltegravir is transported by ABCB1 in vitro, although the rate of transport is low and the potential for interactions is expected to be small. Raltegravir is a substrate for SLC22A6 and SLC15A1 in X. laevis oocyte expression systems and competes with tenofovir for SLC22A6 transport. Polymorphisms in SLC22A6 have previously been described and now warrant study in the context of raltegravir. This work was supported in part by a research grant from the Investigator-Initiated Studies Program of Merck Sharp & Dohme Corp. This work was funded by Merck & Co., Inc. (Whitehouse Station, NJ). The opinions expressed in this paper are those of the authors and do not necessarily represent those of Merck Sharp & Dohme Corp. Received 5 May 2010. Returned for modification 7 August 2010. Accepted 2 November 2010. Accepted manuscript posted online 15 November 2010. Copyright © 2011, American Society for Microbiology Anderson, M. S., T. N. Kakuda, W. Hanley, J. Miller, J. T. Kost, R. Stoltz, L. A. Wenning, J. A. Stone, R. M. Hoetelmans, J. A. Wagner, and M. Iwamoto. 2008. Minimal pharmacokinetic interaction between the human immunodeficiency virus nonnucleoside reverse transcriptase inhibitor etravirine and the integrase inhibitor raltegravir in healthy subjects. Antimicrob. Agents Chemother. 52:4228-4232. Anderson, M. S., V. Sekar, F. Tomaka, J. Mabalot, R.
Mack, L. Lionti, S. Zajic, L. Wenning, C. Vanden Abeele, M. Zinny, N. M. Lunde, B. Jin, J. A. Wagner, and M. Iwamoto. 2008. Pharmacokinetic (PK) evaluation of darunavir/ritonavir (DRV/r) and raltegravir (RAL) in healthy subjects, abstr. A-962. Abstr. 48th Annu. Intersci. Conf. Antimicrob. Agents Chemother. (ICAAC)-Infect. Dis. Soc. Am. (IDSA) 46th Annu. Meet. American Society for Microbiology and Infectious Diseases Society of America, Washington, DC. Andrews, E., P. Glue, J. Fang, P. Crownover, R. Tressler, and B. Damle. 2008. A pharmacokinetic study to evaluate an interaction between maraviroc and raltegravir in healthy adults, abstr. H-4055. Abstr. 48th Annu. Intersci. Conf. Antimicrob. Agents Chemother. (ICAAC)-Infect. Dis. Soc. Am. (IDSA) 46th Annu. Meet. American Society for Microbiology and Infectious Diseases Society of America, Washington, DC. Barau, C., C. Delaugerre, J. Braun, N. de Castro, V. Furlan, I. Charreau, L. Gerard, C. Lascoux-Combe, J. M. Molina, and A. M. Taburet. 2010. High concentration of raltegravir in semen of HIV-infected men: results from a substudy of the EASIER-ANRS 138 trial. Antimicrob. Agents Chemother. 54:937-939. Bleasby, K., J. C. Castle, C. J. Roberts, C. Cheng, W. J. Bailey, J. F. Sina, A. V. Kulkarni, M. J. Hafey, R. Evers, J. M. Johnson, R. G. Ulrich, and J. G. Slatter. 2006. Expression profiles of 50 xenobiotic transporter genes in humans and pre-clinical species: a resource for investigations into drug disposition. Xenobiotica 36:963-988. Brainard, L. M., A. S. Petry, L. Fang, C. Liu, S. A. Breidinger, E. P. DeNoia, J. A. Stone, J. A. Chodakewitz, L. A. Wenning, J. A. Wagner, and M. Iwamoto. 2009. Lack of a clinically important effect of rifabutin (RFB) on raltegravir (RAL) pharmacokinetics, abstr. A1-1296. Abstr. 49th Annu. Intersci. Conf. Antimicrob. Agents Chemother. American Society for Microbiology, Washington, DC. Calcagno, A., S. Bonora, R. Bertucci, A. Lucchini, A. D'Avolio, and G. Di Perri. 2010. 
Raltegravir penetration in the cerebrospinal fluid of HIV-positive patients. AIDS (London) 24:931-932. Chirch, L. M., S. Morrison, and R. T. Steigbigel. 2009. Treatment of HIV infection with raltegravir. Expert Opin. Pharmacother. 10:1203-1211. Dixit, V., N. Hariparsad, F. Li, P. Desai, K. E. Thummel, and J. D. Unadkat. 2007. Cytochrome P450 enzymes and transporters induced by anti-human immunodeficiency virus protease inhibitors in human hepatocytes: implications for predicting clinical drug interactions. Drug Metab. Dispos. 35:1853-1859. Elsby, R., D. D. Surry, V. N. Smith, and A. J. Gray. 2008. Validation and application of Caco-2 assays for the in vitro evaluation of development candidate drugs as substrates or inhibitors of P-glycoprotein to support regulatory submissions. Xenobiotica 38:1140-1164. Espeseth, A. S., P. Felock, A. Wolfe, M. Witmer, J. Grobler, N. Anthony, M. Egbertson, J. Y. Melamed, S. Young, T. Hamill, J. L. Cole, and D. J. Hazuda. 2000. HIV-1 integrase inhibitors that compete with the target DNA substrate define a unique strand transfer conformation for integrase. Proc. Natl. Acad. Sci. U. S. A. 97:11244-11249. Foisy, M. M., E. M. Yakiwchuk, and C. A. Hughes. 2008. Induction effects of ritonavir: implications for drug interactions. Ann. Pharmacother. 42:1048-1059. Hanley, W. D., L. A. Wenning, A. Moreau, J. T. Kost, E. Mangin, T. Shamp, J. A. Stone, K. M. Gottesdiener, J. A. Wagner, and M. Iwamoto. 2009. Effect of tipranavir-ritonavir on pharmacokinetics of raltegravir. Antimicrob. Agents Chemother. 53:2752-2755. Hariparsad, N., S. C. Nallani, R. S. Sane, D. J. Buckley, A. R. Buckley, and P. B. Desai. 2004. Induction of CYP3A4 by efavirenz in primary human hepatocytes: comparison with rifampin and phenobarbital. J. Clin. Pharmacol. 44:1273-1281. Hartkoorn, R. C., W. S. Kwan, V. Shallcross, A. Chaikan, N. Liptrott, D. Egan, E. S. Sora, C. E. James, S. Gibbons, P. G. Bray, D. J. Back, S. H. Khoo, and A. Owen. 2010. 
HIV protease inhibitors are substrates for OATP1A2, OATP1B1 and OATP1B3 and lopinavir plasma concentrations are influenced by SLCO1B1 polymorphisms. Pharmacogenet. Genomics 20:112-120. Huang, S. M., J. M. Strong, L. Zhang, K. S. Reynolds, S. Nallani, R. Temple, S. Abraham, S. A. Habet, R. K. Baweja, G. J. Burckart, S. Chung, P. Colangelo, D. Frucht, M. D. Green, P. Hepp, E. Karnaukhova, H. S. Ko, J. I. Lee, P. J. Marroum, J. M. Norden, W. Qiu, A. Rahman, S. Sobel, T. Stifano, K. Thummel, X. X. Wei, S. Yasuda, J. H. Zheng, H. Zhao, and L. J. Lesko. 2008. New era in drug interaction evaluation: US Food and Drug Administration update on CYP enzymes, transporters, and the guidance process. J. Clin. Pharmacol. 48:662-670. Hubatsch, I., E. G. Ragnarsson, and P. Artursson. 2007. Determination of drug permeability and prediction of drug absorption in Caco-2 monolayers. Nat. Protoc. 2:2111-2119. Iaccino, E., M. Schiavone, G. Fiume, I. Quinto, and G. Scala. 2008. The aftermath of the Merck's HIV vaccine trial. Retrovirology 5:56. Iwamoto, M., K. Kassahun, M. D. Troyer, W. D. Hanley, P. Lu, A. Rhoton, A. S. Petry, K. Ghosh, E. Mangin, E. P. DeNoia, L. A. Wenning, J. A. Stone, K. M. Gottesdiener, and J. A. Wagner. 2008. Lack of a pharmacokinetic effect of raltegravir on midazolam: in vitro/in vivo correlation. J. Clin. Pharmacol. 48:209-214. Iwamoto, M., L. A. Wenning, G. C. Mistry, A. S. Petry, S. Y. Liou, K. Ghosh, S. Breidinger, N. Azrolan, M. J. Gutierrez, W. E. Bridson, J. A. Stone, K. M. Gottesdiener, and J. A. Wagner. 2008. Atazanavir modestly increases plasma levels of raltegravir in healthy subjects. Clin. Infect. Dis. 47:137-140. Iwamoto, M., L. A. Wenning, B. Y. Nguyen, H. Teppler, A. R. Moreau, R. R. Rhodes, W. D. Hanley, B. Jin, C. M. Harvey, S. A. Breidinger, N. Azrolan, H. F. Farmer, Jr., R. D. Isaacs, J. A. Chodakewitz, J. A. Stone, and J. A. Wagner. 2009. Effects of omeprazole on plasma levels of raltegravir. Clin. Infect. Dis. 48:489-492. Iwamoto, M., L. A. 
Wenning, A. S. Petry, M. Laethem, M. De Smet, J. T. Kost, S. A. Breidinger, E. C. Mangin, N. Azrolan, H. E. Greenberg, W. Haazen, J. A. Stone, K. M. Gottesdiener, and J. A. Wagner. 2008. Minimal effects of ritonavir and efavirenz on the pharmacokinetics of raltegravir. Antimicrob. Agents Chemother. 52:4338-4343. Janneh, O., B. Chandler, R. Hartkoorn, W. S. Kwan, C. Jenkinson, S. Evans, D. J. Back, A. Owen, and S. H. Khoo. 2009. Intracellular accumulation of efavirenz and nevirapine is independent of P-glycoprotein activity in cultured CD4 T cells and primary human lymphocytes. J. Antimicrob. Chemother. 64:1002-1007. Kassahun, K., I. McIntosh, D. Cui, D. Hreniuk, S. Merschman, K. Lasseter, N. Azrolan, M. Iwamoto, J. A. Wagner, and L. A. Wenning. 2007. Metabolism and disposition in humans of raltegravir (MK-0518), an anti-AIDS drug targeting the human immunodeficiency virus 1 integrase enzyme. Drug Metab. Dispos. 35:1657-1663. Kis, O., K. Robillard, G. N. Chan, and R. Bendayan. 2010. The complexities of antiretroviral drug-drug interactions: role of ABC and SLC transporters. Trends Pharmacol. Sci. 31:22-35. Lee, C. G., M. M. Gottesman, C. O. Cardarelli, M. Ramachandra, K. T. Jeang, S. V. Ambudkar, I. Pastan, and S. Dey. 1998. HIV-1 protease inhibitors are substrates for the MDR1 multidrug transporter. Biochemistry 37:3594-3601. Liptrott, N. J., M. Penny, P. G. Bray, J. Sathish, S. H. Khoo, D. J. Back, and A. Owen. 2009. The impact of cytokines on the expression of drug transporters, cytochrome P450 enzymes and chemokine receptors in human PBMC. Br. J. Pharmacol. 156:497-508. Luber, A., P. D. Slowinski, and E. Acosta. 2009. Steady-state pharmacokinetics of fosamprenavir and raltegravir alone and combined with unboosted and ritonavir-boosted fosamprenavir, abstr. A1-1297. Abstr. 49th Annu. Intersci. Conf. Antimicrob. Agents Chemother. American Society for Microbiology, Washington, DC. Markowitz, M., J. O. Morales-Ramirez, B. Y. Nguyen, C. M. Kovacs, R. T. 
Steigbigel, D. A. Cooper, R. Liporace, R. Schwartz, R. Isaacs, L. R. Gilde, L. Wenning, J. Zhao, and H. Teppler. 2006. Antiretroviral activity, pharmacokinetics, and tolerability of MK-0518, a novel inhibitor of HIV-1 integrase, dosed as monotherapy for 10 days in treatment-naive HIV-1-infected individuals. J. Acquir. Immune Defic. Syndr. 43:509-515. Mosmann, T. 1983. Rapid colorimetric assay for cellular growth and survival: application to proliferation and cytotoxicity assays. J. Immunol. Methods 65:55-63. Rizwan, A. N., and G. Burckhardt. 2007. Organic anion transporters of the SLC22 family: biopharmaceutical, physiological, and pathological roles. Pharm. Res. 24:450-470. Sobczak, K., N. Bangel-Ruland, G. Leier, and W. M. Weber. 2010. Endogenous transport systems in the Xenopus laevis oocyte plasma membrane. Methods 51:183-189. Steigbigel, R. T., D. A. Cooper, H. Teppler, J. J. Eron, J. M. Gatell, P. N. Kumar, J. K. Rockstroh, M. Schechter, C. Katlama, M. Markowitz, P. Yeni, M. R. Loutfy, A. Lazzarin, J. L. Lennox, B. Clotet, J. Zhao, H. Wan, R. R. Rhodes, K. M. Strohmaier, R. J. Barnard, R. D. Isaacs, and B. Y. Nguyen. 2010. Long-term efficacy and safety of raltegravir combined with optimized background therapy in treatment-experienced patients with drug-resistant HIV infection: week 96 results of the BENCHMRK 1 and 2 phase III trials. Clin. Infect. Dis. 50:605-612. Summa, V., A. Petrocchi, F. Bonelli, B. Crescenzi, M. Donghi, M. Ferrara, F. Fiore, C. Gardelli, O. Gonzalez Paz, D. J. Hazuda, P. Jones, O. Kinzel, R. Laufer, E. Monteagudo, E. Muraglia, E. Nizi, F. Orvieto, P. Pace, G. Pescatore, R. Scarpelli, K. Stillmock, M. V. Witmer, and M. Rowley. 2008. Discovery of raltegravir, a potent, selective orally bioavailable HIV-integrase inhibitor for the treatment of HIV-AIDS infection. J. Med. Chem. 51:5843-5855. Takeda, M., S. Khamdang, S. Narikawa, H. Kimura, Y. Kobayashi, T. Yamamoto, S. H. Cha, T. Sekine, and H. Endou. 2002. 
Human organic anion transporters and human organic cation transporters mediate renal antiviral transport. J. Pharmacol. Exp. Ther. 300:918-924. Uwai, Y., H. Ida, Y. Tsuji, T. Katsura, and K. Inui. 2007. Renal transport of adefovir, cidofovir, and tenofovir by SLC22A family members (hOAT1, hOAT3, and hOCT2). Pharm. Res. 24:811-815. van der Sandt, I. C., C. M. Vos, L. Nabulsi, M. C. Blom-Roosemalen, H. H. Voorwinden, A. G. de Boer, and D. D. Breimer. 2001. Assessment of active transport of HIV protease inhibitors in various cell lines and the in vitro blood-brain barrier. AIDS (London) 15:483-491. Wenning, L. A., E. J. Friedman, J. T. Kost, S. A. Breidinger, J. E. Stek, K. C. Lasseter, K. M. Gottesdiener, J. Chen, H. Teppler, J. A. Wagner, J. A. Stone, and M. Iwamoto. 2008. Lack of a significant drug interaction between raltegravir and tenofovir. Antimicrob. Agents Chemother. 52:3253-3258. Wenning, L. A., W. D. Hanley, D. M. Brainard, A. S. Petry, K. Ghosh, B. Jin, E. Mangin, T. C. Marbury, J. K. Berg, J. A. Chodakewitz, J. A. Stone, K. M. Gottesdiener, J. A. Wagner, and M. Iwamoto. 2009. Effect of rifampin, a potent inducer of drug-metabolizing enzymes, on the pharmacokinetics of raltegravir. Antimicrob. Agents Chemother. 53:2852-2856. Xie, W., M. F. Yeuh, A. Radominska-Pandya, S. P. Saini, Y. Negishi, B. S. Bottroff, G. Y. Cabrera, R. H. Tukey, and R. M. Evans. 2003. Control of steroid, heme, and carcinogen metabolism by nuclear pregnane X receptor and constitutive androstane receptor. Proc. Natl. Acad. Sci. U. S. A. 100:4150-4155. Yilmaz, A., M. Gisslen, S. Spudich, E. Lee, A. Jayewardene, F. Aweeka, and R. W. Price. 2009. Raltegravir cerebrospinal fluid concentrations in HIV-1 infection. PLoS One 4:e6877. Zhang, D., T. J. Chando, D. W. Everett, C. J. Patten, S. S. Dehal, and W. G. Humphreys. 2005. 
In vitro inhibition of UDP glucuronosyltransferases by atazanavir and other HIV protease inhibitors and the relationship of this property to in vivo bilirubin glucuronidation. Drug Metab. Dispos. 33:1729-1739. Zhu, L., L. Mahnke, J. Butterton, A. Persson, M. Stonier, W. Comisar, D. Panebianco, S. Breidinger, J. Zhang, and R. Bertz. 2009. Pharmacokinetics and safety of twice daily atazanavir 300 mg and raltegravir 400 mg in healthy subjects, abstr. 693. Abstr. 16th Conf. Retroviruses Opportun. Infect., Montreal, Canada. Raltegravir Is a Substrate for SLC22A6: a Putative Mechanism for the Interaction between Raltegravir and Tenofovir. Antimicrobial Agents and Chemotherapy Jan 2011, 55 (2) 879-887; DOI: 10.1128/AAC.00623-10
PhD Student Seminar of the WIAS Research Group Numerical Mathematics and Scientific Computing — Numerical Mathematics Seminars. Thursday, 03.12.2015, 14:00 (WIAS-406) Dr. A. Linke (WIAS Berlin) Towards pressure-robust mixed methods for the incompressible Navier-Stokes equations Mixed methods for the incompressible Navier-Stokes equations are reviewed with respect to the discretization of the divergence constraint. Though the establishment of inf-sup stable mixed methods represents a milestone in the development of discretization theory for flow problems, many important questions remain open, and classical textbooks usually convey a wrong impression of the qualitatively best results achievable in the field. In particular, it will be shown that the construction of pressure-robust mixed methods, whose velocity error is pressure-independent, is rather easy, though this was thought to be nearly impossible for many years. Numerical examples will show that classical mixed methods deliver poor results whenever large irrotational forces appear in the Navier-Stokes momentum balance, while pressure-robust mixed methods perform well. Thursday, 26.11.2015, 14:00 (WIAS-ESH) Dr. F. Dassi (WIAS Berlin) Achievements and challenges in anisotropic mesh generation Thursday, 19.11.2015, 10:00 (ESH) Prof. Ch. Pflaum (Friedrich-Alexander Universität Erlangen-Nürnberg) Discretization of elliptic differential equations with variable coefficients on sparse grids Sparse grids are discretization grids which can be used to reduce the computational effort for solving partial differential equations. Using the Galerkin discretization, one obtains a linear equation system with O(N (log N)^(d-1)) unknowns. The corresponding discretization error is O(N^(-1) (log N)^(d-1)) in the H^1-norm.
A major difficulty in using this sparse grid discretization is the complexity of the related stiffness matrix. As a consequence, only PDEs with constant coefficients can be efficiently discretized using the standard sparse grid discretization for d > 2. To reduce the complexity of the sparse grid discretization matrix, we apply prewavelets. This simplifies the implementation of the corresponding algorithms. Furthermore, we present a new sparse grid discretization for elliptic differential equations with variable coefficients. This discretization utilizes a semi-orthogonality property. The convergence rate and stability of the discretization are proven for arbitrary dimensions d. J. Pellerin (WIAS Berlin) Simultaneous meshing and simplification of complex 3D geometrical models using Voronoi diagrams All meshing methods aim at dividing a physical model that has a potentially very complex geometry into simple elements. Requirements on the resulting mesh might be contradictory, as the elements shall conform to the shape of the physical domain and shall satisfy constraints on their shapes, sizes, number, etc. In geological modeling, to generate volumetric meshes robustly, it is often necessary to adapt the level of detail of the geological domain when meshing its boundary surfaces. In the first part of the talk, I will focus on geological structural models, their geometrical and geological specificities, and the induced challenges for meshing methods. Then, I will detail the key ideas of a meshing method that tackles these challenges: (1) the use of a well-shaped Voronoi diagram to subdivide the model and (2) the combinatorial considerations to build well-shaped mesh elements from the connected components of the intersections of the Voronoi cells/facets/edges/vertices with the model entities. This approach allows modifications of the input model, a crucial point for geomodeling applications.
Finally, I will briefly discuss the implementation of the method. Yueyuan Gao (Université Paris-Sud, France) Finite volume methods for first order stochastic conservation laws We perform Monte-Carlo simulations in the one-dimensional torus for the first order Burgers equation forced by a stochastic source term with zero spatial integral. We suppose that this source term is a white noise in time, and consider various regularities in space. We apply a finite volume scheme combining the Godunov numerical flux with the Euler-Maruyama integrator in time. It turns out that the empirical mean converges to the space-average of the deterministic initial condition as $t\rightarrow\infty$. The empirical variance also stabilizes for large time, towards a limit which depends on the space regularity and on the intensity of the noise. We then study a time-explicit finite volume method with an upwind scheme for a conservation law driven by a multiplicative source term involving a $Q$-Wiener process. We present some a priori estimates including a weak BV estimate. After performing a time interpolation, we prove two entropy inequalities for the discrete solution and show that it converges up to a subsequence to a stochastic measure-valued entropy solution of the conservation law in the sense of Young measures. The numerical part is joint work with E. Audusse and S. Boyaval, while the convergence proof is joint work with T. Funaki and H. Weber. Friday, 11.09.2015, 14:00 (ESH) Dr. T. Benacchio (Met Office, UK) Towards scalable numerical weather and climate prediction with mixed finite element discretizations The nature of atmospheric processes presents a unique set of challenges to numerical modelling, as coupled nonlinear phenomena evolve on a wide range of spatial and temporal scales extending across several orders of magnitude.
Timely and reliable forecasts require accurate simulation of transport processes together with robust handling of waves and balanced regimes, with competitive and scalable performance a key requisite on increasingly parallel architectures. Features of the nonhydrostatic compressible dynamical core in operation at the Met Office will be outlined, with particular reference to the scalability bottleneck given by the currently employed latitude-longitude grid structure. The talk will then focus on theoretical and computational aspects and open challenges of the mixed finite element numerical scheme on quasi-uniform spherical grids underpinning the next generation dynamical core. Dr. G. R. Barrenechea (University of Strathclyde, UK) Stabilising some inf-sup stable pairs on anisotropic quadrilateral meshes The finite element solution of the Stokes problem is subject to the well-known inf-sup condition. This condition usually depends on the domain and the degree of the polynomial spaces used in the discretisation. Now, when anisotropic finite elements are used, this inf-sup condition usually degenerates with the aspect ratio. In this talk I will present results identifying the part of the pressure space that is responsible for this degeneration, and present a way to remedy it. In the second part of the talk, this technique will be applied to some pairs of spaces, this time non inf-sup stable ones, for the Stokes and Oseen equations. The work presented in this talk is in collaboration with Mark Ainsworth (Brown) and Andreas Wachtel (Strathclyde). Dienstag, 01. 09. 2015, 13:30 Uhr (ESH) Dr. P. Knobloch (Charles University, Czech Republic) On linearity preservation and discrete maximum principle for algebraic flux correction schemes We consider a general algebraic flux correction scheme for linear boundary value problems in any space dimension and prove its solvability.
The properties of the scheme depend on the definition of limiters used to limit fluxes that would otherwise cause spurious oscillations. We give an example of limiters that are often used in computations. For these limiters we prove the discrete maximum principle, but we also show that they generally do not lead to a linearity preserving method. Therefore, we propose alternative limiters for which the linearity preservation is satisfied and the discrete maximum principle holds for arbitrary meshes. Donnerstag, 25. 6. 2015, 14:00 Uhr (ESH) S. Hirsch (Charite - Universitätsmedizin Berlin) Compression-sensitive Magnetic Resonance Elastography and poroelasticity Recent research suggests that the compression modulus of organic tissue is sensitive to changes of tissue pressure. Compression-sensitive Magnetic Resonance Elastography (MRE) provides a means to map the amplitudes of externally stimulated mechanical pressure waves and has proven its potential to differentiate between physiological states of different tissue pressure. The interpretation of the results thus far has relied on an effective medium model. Biot's poroelastic theory can provide an explanation for the interaction between tissue pressure and the mechanical properties of tissue. However, quantification of the poroelastic parameters from MRE data still poses a challenge. This talk will present the status quo of the technique and explore the potential for future improvements towards the ultimate goal of non-invasive pressure measurement. Dienstag, 16. 6. 2015, 13:30 Uhr (ESH) Prof. S. Ganesan (Indian Institute of Science) Finite element algorithms for massively parallel architectures Parallel algorithms with hybrid MPI and OpenMP implementations for a geometric multigrid solver in a finite element scheme will be presented in this talk. In particular, the mesh partitioning, the finite element mappers and communicators on a hierarchy of multigrid meshes will be discussed.
Further, a multicolor strategy in the smoothing steps of the multigrid solver, and parallel restrictions and prolongations using halo cells, will be discussed. Prof. J. Novo (Universidad Autonoma de Madrid, Spain) Local error estimates for the SUPG method applied to evolutionary convection-reaction-diffusion equations Local error estimates for the SUPG method applied to evolutionary convection-reaction-diffusion equations are considered. The steady case is reviewed and local error bounds are obtained for general order finite element methods. For the evolutionary problem, local bounds are obtained when the SUPG method is combined with the backward Euler scheme. The arguments used in the proof lead to estimates for the stabilization parameter that depend on the length of the time step. The numerical experiments show that local bounds seem to hold true both with a stabilization parameter depending only on the spatial mesh and with other time integrators. Donnerstag, 4. 6. 2015, 14:00 Uhr (ESH) Dr. K. Schmidt (TU Berlin) On optimal basis functions for thin conducting sheets in electromagnetics and on efficient calculation of the photonic crystal bandstructure The talk consists of two parts. The first part of the talk is dedicated to thin conducting sheets, which are commonly used in the protection of electronic devices. The domain of computation is the thin metallic sheet, the surrounding air and other metallic or insulating parts. With their large aspect ratios, the shielding sheets become a serious issue for the direct application of the finite element method (FEM) on triangular or tetrahedral cells. In addition, the sheets exhibit boundary layers (the so-called skin effect) whose size depends on the frequency and may become smaller than the sheet thickness. We propose a semi-discretisation inside the sheet as a tensor product of $H^1(\Gamma)$-bounded functions on the midline $\Gamma$ and a family of $N$ basis functions in the thickness direction.
The basis functions are defined hierarchically in the spirit of Vogelius and Babuska by a hierarchy of ordinary differential equations in the thickness direction. For each basis function a new term on $\Gamma$ enters the variational formulation. We give explicit formulas for the basis functions for straight and curved sheets, prove well-posedness and, for straight sheets, estimate the modelling error. If the sheet surfaces, and so the trace of the exact solution, are analytic, then this semi-discretisation leads to exponential convergence in $N$, whose rate is independent of the frequency. This will be illustrated by numerical experiments for straight and circular sheets. In the second part the calculation of the photonic crystal bandstructure is studied. Photonic crystals are periodic dielectric materials, structures which strongly influence the propagation of light. We are interested in the related parametric eigenvalue problem, in which the dispersion curves are the eigenvalues as functions of the parameter. Based on Fredholm theory, we derive explicit formulas for the first and higher derivatives of the dispersion curves in terms of the eigenvalue and eigenfunction. The derivatives will be used for an adaptive parameter sampling and a high-order description of the band structure based on Taylor expansions. The algorithm is able to decide whether two bands cross or not. Dr. P. A. Zegeling (Utrecht University, The Netherlands) Adaptive grids for detecting non-monotone waves and instabilities in a non-equilibrium PDE model from porous media Space-time evolution described by nonlinear PDE models involves patterns and qualitative changes induced by parameters. In this talk I will emphasize the importance of both analysis and computation in relation to a bifurcation problem in a non-equilibrium Richards' equation from hydrology.
The extension of this PDE model for the water saturation $S$ to take into account additional dynamic memory effects was suggested by Hassanizadeh and Gray in the 90's. This gives rise to an extra \emph{third-order mixed} space-time derivative term in the PDE of the form $\tau ~ \nabla \cdot [T(S) \nabla (S_t)]$. In one space dimension, traveling wave analysis is able to predict the formation of steep non-monotone waves depending on $\tau$. In 2D, the parameters $\tau$ and the frequency $\omega$, included in a small perturbation term, predict that the waves may become \emph{unstable}, thereby initiating so-called gravity-driven fingering structures. This phenomenon can be analysed with a linear stability analysis, and its effects are supported by the numerical experiments of the 2D time-dependent PDE model. For this purpose, we have used a sophisticated adaptive grid r-refinement technique based on a recently developed monitor function. The numerical experiments in one and two space dimensions show the effectiveness of the adaptive grid solver. Dr. A. Caiazzo (WIAS Berlin) Stabilization at backflow In computational fluid dynamics, the presence of incoming flow at open boundaries (backflow) often leads to unphysical instabilities. This issue arises due to the incoming convective energy at the open boundary, which - in general cases - cannot be controlled a priori. In this talk, the state-of-the-art of backflow stabilizations will be overviewed, followed by a discussion of two recently proposed approaches. The first is based on adding artificial viscosity on the open boundary along the tangential direction, while the second consists in penalizing the weak residual of a Stokes problem on the boundary. The performance of the methods is assessed through several numerical tests, considering analytic solutions as well as blood and air flows in complex geometries coming from medical images. L.
Heltai (SISSA, Italy) Coupling isogeometric analysis and Reduced Basis methods for complex geometrical parametrizations Isogeometric analysis (IGA) emerged as a technology bridging Computer Aided Geometric Design (CAGD), most commonly based on Non-Uniform Rational B-Splines (NURBS) surfaces, and engineering analysis. In finite element and boundary element isogeometric methods (FE-IGA and IGA-BEM), the NURBS basis functions that describe the geometry also define the approximation spaces. The resulting approximation schemes can be used very effectively as a high-fidelity approximation in Reduced Basis (RB) methods, providing a tool for the rapid and reliable evaluation of PDE systems characterized by complex geometrical features. After a brief overview of the various techniques, I will present an application of this technology, where we address the simulation of potential flows past airfoils, parametrized with respect to the angle of attack and the NACA number identifying their shape, to optimize in real time the trimming of the rigid sail of a catamaran, using RB IGA-BEM. Dienstag, 3. 3. 2015, 13:30 Uhr (ESH) Prof. S. Perotto (Politecnico di Milano, Italy) Adaptive Hierarchical Model (HiMod) reduction for initial boundary value problems The construction of surrogate models is a crucial step for bringing computational tools to practical applications within an appropriate timeline. This can be accomplished by taking advantage of specific features of the problem at hand. For instance, when solving flow problems in networks (in the modeling of blood, oil, water, or air dynamics), the local dynamics are expected to develop mainly along a dominant direction. The interaction between the local and network dynamics often calls for appropriate model reduction techniques.
This is the case, for instance, of the so-called Hierarchical Model (HiMod) reduction, devised to deal with problems characterized by a prevalent dynamics, even though the presence of transverse dynamics may be locally significant. The main idea of HiMod consists of introducing a modal discretization for the transverse dynamics coupled with a finite element approximation of the mainstream one [1]. Moving from the original approach, where we employ the same number of modes on the whole domain, we have subsequently introduced a more sophisticated approach, automatically adapting the number of modal functions according to the local level of complexity of the problem. This goal is pursued by deriving an a posteriori modelling error analysis, first set in a steady framework [2] and more recently extended to an unsteady setting [3]. An adaptive selection of the space and of the space-time mesh is also accomplished. [1] S. Perotto, A. Ern and A. Veneziani, ''Hierarchical local model reduction for elliptic problems: a domain decomposition approach''. Multiscale Model. Simul., 8 (2010), no. 4, 1102-1127. [2] S. Perotto and A. Veneziani, ''Coupled model and grid adaptivity in hierarchical reduction of elliptic problems''. J. Sci. Comput., 60 (2014), no. 3, 505-536. [3] S. Perotto and A. Zilio, ''Space-time adaptive hierarchical model reduction for parabolic equations''. MOX Report no. 06/2015. Prof. R. Masson (Universite de Nice Sophia Antipolis, France) Gradient scheme discretizations of two phase porous media flows in fractured porous media This talk presents the gradient scheme framework for the discretization of two-phase Darcy flows in discrete fracture networks, taking into account the mass exchange between the matrix and the fracture. We consider the asymptotic model for which the fractures are represented as interfaces of codimension one immersed in the matrix domain, leading to the so-called hybrid dimensional Darcy flow model.
The pressures at the interfaces between the matrix and the fracture network are continuous, corresponding to a ratio between the normal permeability of the fracture and the width of the fracture that is assumed to be large compared with the ratio between the permeability of the matrix and the size of the domain. Two types of discretizations matching the gradient scheme framework are discussed, including the extensions of the Hybrid Finite Volume (HFV) and of the Vertex Approximate Gradient (VAG) schemes to the case of hybrid dimensional Darcy flow models. Compared with Control Volume Finite Element (CVFE) approaches, the VAG scheme has the advantage of avoiding the mixing of the fracture and matrix rock types at the interfaces between the matrix and the fractures, while keeping the low cost of a nodal discretization on unstructured meshes. The convergence of the gradient schemes is obtained for two phase flow models under the assumption that the relative permeabilities are bounded from below by a strictly positive constant. This assumption is needed in the convergence proof only in order to take into account discontinuous capillary pressures, in particular at the matrix-fracture interfaces. The efficiency of our approach is shown on numerical examples of fracture networks in 2D and 3D. S. Rubino (Universidad de Sevilla, Spain) Finite element approximation of an unsteady projection-based VMS turbulence model with wall laws In this work we present the numerical analysis and study the performance of a finite element projection-based Variational MultiScale (VMS) turbulence model that includes general non-linear wall laws. We introduce Lagrange finite element spaces adapted to approximate the slip condition. The sub-grid effects are modeled by an eddy diffusion term that acts only on a range of small resolved scales.
Moreover, high-order stabilization terms are considered, with the double aim of guaranteeing stability for coarse meshes and helping to counter-balance the accumulation of sub-grid energy together with the sub-grid eddy viscosity term. We prove stability and convergence for solutions that only need to bear the natural minimal regularity, in the unsteady regime. We also study the asymptotic energy balance of the system. We finally include some numerical tests to assess the performance of the model described in this work.
Why does a steel rod keep increasing its temperature after removing the source of heat? I watched an experiment and I'm not sure why this happened. A metal rod was heated by a flame like this: ======== -> metal bar The metal sensor was on top touching the steel bar and a flame was heating the bar on the bottom side. After heating up to more or less 140º Celsius, when the dilatation (metal expansion?) rate had slowed, the flame was removed. The sensor that was reading the temperature by touching the metal bar (top) kept reading a temperature increase for 15+ sec. Why did this happen? Was the heat being transferred to the cold contact region, or something else? Does inertia have anything to do with this? thermodynamics experimental-physics thermal-conductivity WhiteHole $\begingroup$ Could it be time for heat to be transferred, or time for the sensor to come to equilibrium? $\endgroup$ – BioPhysicist Oct 2 '18 at 20:21 $\begingroup$ What was the distance approximately between the heat source and the temperature sensor, and approximately how long did they hold the flame up to the metal? $\endgroup$ – JMac Oct 2 '18 at 20:23 $\begingroup$ The sensor outputs the temperature of the sensor, not the temperature of the metal. If you have poor heat conduction between the sensor and the metal, the changes in the sensor temperature will lag behind the metal temperature. $\endgroup$ – alephzero Oct 2 '18 at 20:32 $\begingroup$ I think it is just the heat from the hot area of the bar reaching the (cold) area where the sensor is. In this case knowing the dimensions of the bar would be helpful to reach a conclusion. $\endgroup$ – user190081 Oct 2 '18 at 20:52 Temperature doesn't have a mechanical-like inertia (in the sense of following Newton's laws) or momentum; barring a chemical reaction such as combustion, you can't use a flame to heat up any material to a temperature greater than the adiabatic flame temperature, regardless of the heating rate.
In other words, temperature never overshoots the temperature of the heating source. One possibility, as you've noted, is that a cooler area of the bar/sensor continued to be heated by a hotter area of the bar even after the heat source was removed. In this case, the temperature would continue to increase—but it wouldn't exceed the flame temperature. Chemomechanics $\begingroup$ See my comment on the original post. The OP is not recording the temperature of the metal, but the temperature of the sensor. $\endgroup$ – alephzero Oct 2 '18 at 20:34 $\begingroup$ Good point; edited to add. $\endgroup$ – Chemomechanics Oct 2 '18 at 20:39 It probably has to do with the thermal diffusivity $\alpha = \kappa/(\rho c_p)$. It is equal to $\frac{\frac{\partial T}{\partial t}}{\nabla ^2 T}$ when convection and radiation effects are neglected. This latter formula gives some insight: the faster the temperature changes compared to its curvature (the divergence of the gradient of $T$), the greater the thermal diffusivity. So, from your description, everything points to the thermal diffusivity of the metal (steel?) being low enough that the cold side still shows an increase in temperature even about $15\ \mathrm{s}$ after the hot source was removed. Also, I do not agree with the claim that temperature does not have inertia. For example, the heat from the Sun diffuses through the Earth's ground in such a way that it is possible to dig a few meters below the ground surface and still spot summers and winters. But the amplitude of the temperature variations decays exponentially with depth. At large depths (something like 20 m or more) only the average surface temperature is still distinguishable. By digging deep enough, it is possible to recover the average surface temperature from hundreds of years ago or even more (I remember several articles about that; I might try to provide references later).
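The same lag can be reproduced numerically with a small 1D heat-equation toy (explicit finite differences; all numbers are illustrative and not taken from the actual experiment): hold one end of a rod at the flame temperature for a while, remove the source, and watch the far end keep warming.

```python
import numpy as np

# 1D heat equation dT/dt = alpha * d2T/dx2 on a rod, explicit finite
# differences. Hold the left end at the flame temperature for a while,
# then insulate both ends and watch the right end keep warming.
# All values are illustrative.

alpha = 4e-6               # thermal diffusivity of steel, roughly, in m^2/s
L, n = 0.1, 101            # rod length [m], number of grid points
dx = L / (n - 1)
dt = 0.2 * dx**2 / alpha   # explicit-scheme stability: dt <= dx^2 / (2*alpha)

T = np.full(n, 20.0)       # rod initially at room temperature [deg C]
heating_steps = 2000       # flame applied for the first 100 s
total_steps = 6000         # 300 s simulated in total
T_right = []               # temperature history at the cold (sensor) end

for step in range(total_steps):
    if step < heating_steps:
        T[0] = 140.0       # flame held at the left end
    else:
        T[0] = T[1]        # flame removed: left end now insulated
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    T[-1] = T[-2]          # insulated right end
    T_right.append(T[-1])

# The right end is still warming long after the flame is gone:
rise_after_removal = T_right[-1] - T_right[heating_steps - 1]
```

With these toy numbers the far end keeps warming long after the flame is removed, and at no point does any part of the rod exceed the 140 °C source temperature, consistent with the answers above.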
This ground-temperature example is relevant because it shows that even if the Sun were suddenly removed, the heat from past summers would still diffuse deeper and deeper into the ground, heating colder parts well after the heat source (the Sun) is removed. The same thing is probably happening in what you describe with the heated bar under a suddenly removed flame. To summarize, you've noticed a peculiarity of conductive heat transfer. This is normal and would happen with any solid. The fact that the temperature increase lasted for about 15 s after the heat source was removed is due both to the geometry of the material and to its thermal properties, in particular its thermal diffusivity. A material with the same geometry as the steel rod but with a higher diffusivity, such as copper, would display a shorter such time; conversely, a material with a lower diffusivity, such as glass, would display this temperature rise at its colder side for longer. AccidentalBismuthTransform $\begingroup$ I can see how one can say that the temperature response at one point in a material due to heating in another part can behave in a manner reminiscent of "inertia". On the other hand, the temperature response at a point due to heating at the exact same point will generally not behave in a manner reminiscent of "inertia" (i.e., heating power turned off means the temperature starts dropping immediately). May be a difficult problem to say to what extent heat diffusion resembles mechanical "inertia" in general. Certainly the diffusion equation is much different from Newton's 2nd law of motion. $\endgroup$ – user93237 Oct 2 '18 at 22:48 $\begingroup$ Indeed the 2 equations are very much different! But thermal inertia is a well defined term. For a point on the surface of the material that was under heat, it quantifies the "resistance" to the temperature drop following the removal of the heat source.
So it is unfair to claim thermal inertia doesn't exist even though it does not resemble mechanical inertia. $\endgroup$ – AccidentalBismuthTransform Oct 3 '18 at 5:28 $\begingroup$ Good points; edited to clarify. I compare thermal and mechanical inertia in the context of constitutive equations at the bottom of this note. $\endgroup$ – Chemomechanics Oct 6 '18 at 16:21 $\begingroup$ The first definition of inertia I found is "a tendency to do nothing or to remain unchanged". I'm not so sure the term thermal inertia is incorrect at all. We know inertia mostly when it is related to motion, but it could mean anything. $\endgroup$ – Orbit Oct 6 '18 at 20:39 $\begingroup$ @coniferous_smellerULPBG-W8ZgjR Yes, it was a general remark. I suppose that was not the right place, sorry. $\endgroup$ – Orbit Oct 6 '18 at 22:48 After the heat source is removed, the bar starts to cool down immediately. However, this only means the average temperature of the bar is decreasing. As long as there is a temperature difference, heat will continue to flow from the hotter to the colder regions. At some point the indicated temperature starts to decrease; this is the point where the material at the sensor loses more heat to the surrounding cooler material than it receives from the hotter areas. Orbit $\begingroup$ Other way around: from hotter to colder regions. Because of diffusion, random walks, increasing entropy. $\endgroup$ – Pieter Oct 6 '18 at 16:49
A flexible framework for sequential estimation of model parameters in computational hemodynamics Christopher J. Arthurs ORCID: orcid.org/0000-0002-0448-61461, Nan Xiao, Philippe Moireau, Tobias Schaeffter & C. Alberto Figueroa A major challenge in constructing three dimensional patient specific hemodynamic models is the calibration of model parameters to match patient data on flow, pressure, wall motion, etc. acquired in the clinic. Current workflows are manual and time-consuming. This work presents a flexible computational framework for model parameter estimation in cardiovascular flows that relies on the following fundamental contributions. (i) A Reduced-Order Unscented Kalman Filter (ROUKF) model for data assimilation of wall material and simple lumped parameter network (LPN) boundary condition model parameters. (ii) A constrained least squares augmentation (ROUKF-CLS) for more complex LPNs. (iii) A "Netlist" implementation, supporting easy filtering of parameters in such complex LPNs. The ROUKF algorithm is demonstrated using non-invasive patient-specific data on anatomy, flow and pressure from a healthy volunteer. The ROUKF-CLS algorithm is demonstrated using synthetic data on a coronary LPN. The methods described in this paper have been implemented as part of the CRIMSON hemodynamics software package. Computational models of hemodynamics are powerful tools for studying the cardiovascular system in health and disease. In particular, three-dimensional models of blood flow in the vasculature—with or without fluid-structure interaction (FSI)—have applications in non-invasive diagnostics, medical device design, surgical planning, and disease research. Due to the scarcity of direct data on flow and pressure, achieving pathophysiologically accurate results often requires specification of boundary conditions via reduced order models such as lumped parameter networks (LPN).
Furthermore, in the case of FSI models, the parameters defining the structural stiffness also affect the solution greatly. It follows then that a primary challenge in constructing patient-specific models is the determination of parameters (LPN or structural stiffness) which make the simulation results agree with clinical data. Due to the high computational demand of such models, an efficient and automatic parameter estimation strategy is highly desirable. Such parameter estimation, when based on observations of a system, is called data assimilation. Broadly, this involves combining different sources of data with a mathematical model of a physical system, in order to estimate that system's true state. This permits the discovery of "hidden" quantities (i.e. model parameters) that may not be directly observable. In our case, the underlying mathematical model is a 3D FSI formulation of the Navier–Stokes equations, describing hemodynamics and vessel wall mechanics in a vascular geometry, and including coupled LPN boundary condition models of the downstream vasculature. Previous studies have described algorithms for automatically identifying outflow boundary conditions and arterial wall material properties. Estimation approaches not involving the Kalman filter include the following. Grinberg and Karniadakis [1] considered a time-dependent RC-type boundary condition for imposing measured flow rates in 3D arterial tree models. Troianowski et al. [2] described an iterative fixed-point procedure for finding total resistances combined with a morphometric approach for tuning three-element Windkessel boundary conditions in 3D pulmonary arterial trees. Spilker and Taylor [3] proposed a procedure based on a quasi-Newton method to determine boundary condition parameters in a 3D model of abdominal aortic hemodynamics to match patient recordings of systolic/diastolic pressures and flow or pressure waveforms. Blanco et al.
[4] solved for resistance parameters in a detailed 1D network using Broyden's method to match regional flow information. Using the adjoint method, Ismail et al. [5] proposed an optimization approach for estimating parameters in 3D FSI simulations. Alimohammadi et al. [6] presented an iterative minimization approach for parameter estimation in aortic dissection models. Xiao et al. [7] described a method for iteratively calibrating three-element Windkessel boundary conditions. Perdikaris and Karniadakis [8] demonstrated a Bayesian approach for discovering Windkessel parameters for 3D models, using an efficient global optimization (EGO) approach and two 1D surrogate models. Previous approaches utilizing the Kalman filter include the work of DeVault et al. [9], who calibrated boundary condition parameters in a 1D model of the Circle of Willis using an Ensemble Kalman Filter (EnKF), incorporating velocity and pressure data. Similarly, Lal et al. [10] used the EnKF to determine Young's modulus, together with boundary reflection and viscoelastic coefficients in 1D vessel networks. Pant et al. [11, 12] proposed a method to determine boundary condition parameters for a 3D model by applying the Unscented Kalman filter (UKF) to a surrogate 0D model. Lombardi [13] applied the UKF to determine arterial stiffness and outflow boundary condition parameters in 1D vascular networks. Müller et al. [14] used the reduced order unscented Kalman filter (ROUKF) [15] to estimate total resistance, total compliance, and vessel wall properties in a 1D arterial network model, performing estimation steps in the frequency domain, and thus updating parameter estimates after each complete cardiac cycle. This approach requires minimal data, but does not independently adjust all model parameters. Similarly, Caiazzo et al. [16] used ROUKF with a 1D model to estimate arterial wall properties and terminal resistances.
It is striking that none of the above utilized Kalman filtering techniques in 3D; they instead work in 1D, or utilize a surrogate 0D or 1D model to generate parameter estimates for an associated 3D model. Broadly, all these methods are limited by the computational demand of their application to full-fidelity 3D models, inapplicability to arbitrary boundary condition designs, or both. In the present work, we demonstrate the use of the ROUKF for the estimation of three-element Windkessel model parameters, and of arterial wall stiffness, in subject-specific, 3D Navier–Stokes models of the aorta, and applied to the assimilation of data consisting of pressure waveforms, flow waveforms, and wall motion data. The key feature of ROUKF is that the uncertainty is confined to the parameters we wish to estimate, as opposed to the entire state space, making filtering of 3D PDE problems computationally tractable. A core advantage is that—unlike in many previous approaches—the high-fidelity forward model is used, without reliance on simplified surrogates. This ensures that the effect of complex, pathophysiological geometries can be retained, and is critical when spatial localisation of the observation data is important. We also introduce a ROUKF method augmented by constrained least squares (ROUKF-CLS), which enables filtering when the LPNs are more complex than the basic three-element Windkessel. All the methods described in this paper are implemented as part of the CRIMSON (Cardiovascular Integrated Modelling and Simulation) software package [17], which comprises both a GUI for vessel segmentation from medical images, finite element mesh generation and boundary condition design and specification; and a stabilized, massively-parallel incompressible Navier–Stokes flowsolver. 
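As a reference for what is being estimated, the three-element Windkessel reduces to a single ODE per outlet relating flow and pressure. A minimal sketch with hypothetical parameter values follows (an illustration only, not the CRIMSON implementation):

```python
import numpy as np

# Three-element (RCR) Windkessel: proximal resistance Rp, compliance C,
# distal resistance Rd. Given an outlet flow waveform Q(t), the outlet
# pressure is P = Rp*Q + Pc, where C*dPc/dt = Q - Pc/Rd.
# Parameter values are hypothetical, chosen for illustration only.

Rp, C, Rd = 100.0, 1.0e-3, 1000.0   # resistances and compliance (toy units)
dt, T_end = 1.0e-3, 5.0             # time step and duration [s]
Pc = 0.0                            # pressure stored on the compliance

t = np.arange(0.0, T_end, dt)
Q = 10.0 * np.clip(np.sin(2.0 * np.pi * t), 0.0, None)  # pulsatile inflow
P = np.empty_like(t)

for k, Qk in enumerate(Q):
    Pc += dt * (Qk - Pc / Rd) / C   # forward Euler for the compliance ODE
    P[k] = Rp * Qk + Pc             # total outlet pressure

# Once the start-up transient decays, the cycle-averaged pressure
# approaches (Rp + Rd) * mean(Q): total resistance sets the mean pressure.
mean_P_last_cycle = P[-1000:].mean()
```

During estimation, the Rp, C and Rd values for each outlet are entries of the filtered parameter vector; the filter adjusts them until simulated waveforms like P match the measured pressures and flows.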
Of particular relevance is CRIMSON's unique system for designing arbitrary LPN boundary condition circuits—the so-called Netlist boundary condition system—which integrates the ROUKF-CLS method introduced herein. The core ROUKF functionality is provided via integration with the Verdandi data assimilation library [18]. The outline of this paper is as follows: we first describe the methodology for data acquisition, followed by the formulation for the 3D FSI computational model and the ROUKF estimation algorithm. We then describe the augmented ROUKF-CLS method for filtering LPNs more complex than the three-element Windkessel. Finally, the functionality of these methods is demonstrated in a number of cases. The key results are a determination of Windkessel parameters in a subject-specific model of the whole aorta and main branches, and a demonstration of parameter recovery in a synthetic coronary LPN model using ROUKF-CLS. In this section, we present the novel algorithms and the data acquisition strategies for both synthetic and subject-specific data. In "Results" section, these data will be used to evaluate the efficacy of the algorithms for estimating LPN and wall material property parameters in a number of cases, including simplified tubes with either a Windkessel or a coronary-specific LPN structure, and multi-outlet Windkessel cases in a subject-specific aortic geometry. Synthetically-generated pressure, velocity and wall-motion data are used to validate the methods when we are in possession of the ground-truth, in a number of models. In some cases, our synthetic data generation uses idealised vascular geometries (tubes), and others use subject-specific arterial models. Each involves choosing specific values for the LPN or wall properties which we wish to ultimately estimate, imposing an inflow rate, and running a forward simulation, whilst acquiring the synthetic observation data on pressures, velocities or wall displacement, each at specific spatial locations. 
The precise workflow for each will be described with the results themselves, as they differ case-to-case. An example setup for one of these cases is shown in Fig. 2. Subject-specific data 3D anatomical MRI was obtained from a 28-year-old male volunteer on a 1.5 T MR-system (Achieva, Philips Healthcare, the Netherlands) using cardiac triggering and respiratory navigation. From this, the aorta and surrounding major vessels were segmented and meshed, in preparation for simulation. Through-plane velocity data was acquired with high temporal and spatial resolution using 2D phase-contrast MRI (PC-MRI) at three levels of the aorta (ascending, distal descending, and infra-renal), as well as in the supra-aortic vessels, the left renal artery, and the left iliac artery. Figure 1 shows the segmented geometry, the PC-MRI acquisition locations and the corresponding flow waveforms. Time-resolved pressure waveforms were acquired using applanation tonometry (AtCor Medical SphygmoCor) in the left and right carotid artery while the subject was in a supine position. Ethical approval was obtained from St. Thomas' Hospital Research Ethics Committee/South East London Research Ethics Committee (10/H0802/65). Details on the acquisition are provided in Appendix: MRI data acquisition and geometric segmentation and Pressure data acquisition. Vascular geometry, reconstructed from MRI data, showing the PC-MRI flow acquisition planes, and the corresponding flow waveforms 3D blood flow formulation We wish to use the ROUKF to determine parameters for an incompressible Navier–Stokes model of flow in a space-time vascular domain, \((\vec {x},t)\in \Omega ^{\mathrm{f}}\times (0,T)\), where \(\Omega ^{\mathrm{f}}\subset {\mathbb {R}}^3\). The weak form of the FSI problem is as follows.
Find the velocity \(\vec {v}\in {\mathcal {S}}\) and pressure \(p\in {\mathcal {P}}\) such that for all test functions \(\vec {w}\in {\mathcal {W}}\) and \(q\in {\mathcal {P}}\), $$\begin{aligned}&\int _{\Omega ^{\mathrm{f}}} \{ \vec {w}\cdot (\rho _{\mathrm{f}}\vec {\dot{v}}+ \rho _{\mathrm{f}}\vec {v}\cdot \vec {\nabla }\, \vec {v}) + \vec {\nabla }\, \vec {w}: (-p\varvec{I}+{\varvec{\tau }}_{\mathrm{f}}) - \vec {\nabla }\, q\cdot \vec {v}\} d\vec {x} \nonumber \\&\quad + \int _{\Gamma _{\mathrm{in}}} q\vec {v}\cdot \vec {n}_{\mathrm{f}}da +\int _{\Gamma _{\mathrm{out}}} \{q\vec {v}\cdot \vec {n}_{\mathrm{f}}- \vec {w}\cdot \vec {h} \} da \nonumber \\&\quad +\int _{\Gamma _{\mathrm{w}}} \{ q\vec {v}\cdot \vec {n}_{\mathrm{f}}+ h \rho _{\mathrm{s}}\vec {w}\cdot \vec {\dot{v}}+ h \vec {\nabla }\, \vec {w}: {\varvec{\sigma }}_\mathrm{s}(\vec {u}) \}da = 0, \end{aligned}$$ where the solution and test function spaces are \({\mathcal {S}}= \{\vec {v}| \vec {v}(\cdot ,t)\,\in H^1(\Omega ^{\mathrm{f}}), t \in [0,T], \vec {v}(\cdot ,t) = \vec {g} \text { on } \Gamma _{\mathrm{in}}\}\), \({\mathcal {P}}= \{p| p(\cdot ,t)\,\in H^1(\Omega ^{\mathrm{f}}), t \in [0,T] \}\), and \({\mathcal {W}}= \{\vec {w}| \vec {w}(\cdot ,t)\,\in H^1(\Omega ^{\mathrm{f}}), t \in [0,T], \vec {w}(\cdot ,t) = 0 \text { on } \Gamma _{\mathrm{in}}\}\). Here, \(H^1(\Omega ^{\mathrm{f}})\) is the space of real-valued functions on \(\Omega ^{\mathrm{f}}\) whose values and first derivatives are square-integrable. In the above, \(\vec {v}(\vec {x},t)\) denotes the velocity field, \(p(\vec {x},t)\) the pressure field, \({\varvec{\tau }}_{\mathrm{f}}= \mu \{ \vec {\nabla }\, \vec {v}+ ( \vec {\nabla }\, \vec {v})^\text {T}\}\) is the viscous stress tensor, \(\mu \) is the dynamic viscosity of the blood, \(\rho _{\mathrm{f}}\) is its density, and \(\vec {u}\) is the vessel wall displacement.
The boundary conditions for velocity and traction are given by: $$\begin{aligned} \begin{array}{rcllcl} \vec {v}(\vec {x},t) &{} = &{} \vec {g}(\vec {x},t) &{}(\vec {x},t) &{} \in &{} \Gamma _{\mathrm{in}}\times (0,T), \\ (-p\varvec{I}+{\varvec{\tau }}_{\mathrm{f}})\vec {n}_{\mathrm{f}}&{} = &{} \vec {h}(\vec {v},p,\vec {x},t) &{}(\vec {x},t) &{} \in &{} \Gamma _{\mathrm{out}}\times (0,T), \end{array} \end{aligned}$$ where \(\vec {g}(\vec {x},t)\) is a prescribed velocity on the inflow boundary \(\Gamma _{\mathrm{in}}\), and \(\vec {h}\) is the traction on the boundary \(\Gamma _{\mathrm{out}}\). \(\vec {n}_{\mathrm{f}}\) denotes the outward-pointing boundary unit normal vector. On each connected component of the boundary \(\Gamma _{\mathrm{out}}\), \(\vec {h}\) is determined by an LPN model of the downstream vasculature. This is implicitly coupled to the 3D domain via the Coupled Multidomain method of Vignon-Clementel et al. [19, 20]. \(\Gamma _{\mathrm{w}}\) denotes the interface between the blood and the vessel wall, which is represented as a thin linear membrane with thickness h and Cauchy stress \({\varvec{\sigma }}_\mathrm{s}(\vec {u})\) using the Coupled Momentum Method of Figueroa et al. [21]. The displacement \(\vec {u}\) is calculated from the nodal velocities and accelerations for each time step using a Newmark integration scheme. The enhanced membrane Cauchy stress tensor \({\varvec{\sigma }}_\mathrm{s}\) is given as a function of a tensor \({\varvec{\tilde{K}}}\) of material parameters (stiffness \(E\), Poisson's ratio \(\nu \), and transverse shear factor \(k\)); a tensor \({\varvec{P}}\) describing the pre-stress of the wall; and the infinitesimal strain tensor \({\varvec{\epsilon }}(\vec {u})\): \({\varvec{\sigma }}_\mathrm{s}= \tilde{\varvec{K}}(E,\nu ,k) : {\varvec{\epsilon }}(\vec {u}) + {\varvec{P}}\). The pre-stress tensor can be specified using a variety of methods [22,23,24].
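As a concrete illustration of the constitutive law above, the sketch below evaluates the in-plane part of \({\varvec{\sigma }}_\mathrm{s}\) in Voigt notation, assuming a standard plane-stress stiffness and omitting the transverse shear terms governed by \(k\). This is a simplification for exposition only; the function name and interface are illustrative and do not reflect the CRIMSON implementation.

```python
import numpy as np

def membrane_stress(E, nu, strain_voigt, prestress_voigt):
    """In-plane part of the linear membrane law sigma_s = K(E, nu) : eps + P,
    in Voigt notation [eps_xx, eps_yy, 2*eps_xy]. Transverse shear terms
    (factor k) are omitted in this simplified sketch."""
    K = E / (1.0 - nu ** 2) * np.array([[1.0, nu, 0.0],
                                        [nu, 1.0, 0.0],
                                        [0.0, 0.0, (1.0 - nu) / 2.0]])
    return K @ np.asarray(strain_voigt, dtype=float) + np.asarray(prestress_voigt, dtype=float)
```

For example, a purely axial strain produces a transverse stress proportional to \(\nu\), reflecting the plane-stress coupling; the pre-stress term simply shifts the result.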
Boundary conditions: three-element Windkessel LPN Top: problem setup for an idealized carotid artery. The plane where the volumetric flow and cross section-averaged pressure are observed is denoted in blue. The inflow velocities are prescribed using a typical carotid flow waveform and a parabolic velocity profile. The outflow face is coupled to a three-element Windkessel. Bottom: flow and pressure synthetic data (with added Gaussian white noise) created from the mid-vessel observation site during the forward simulation The most commonly used LPN boundary condition is the three-element Windkessel model. Its parameters are a proximal resistance \(R_{1}\), a distal resistance \(R_{2}\), and a compliance C. Its structure can be seen in Fig. 2. At the interface \(\Gamma _{\mathrm{out}}\) with the 3D domain, pressure, P(t) and flow, Q(t), in the LPN are related by the ordinary differential equation: $$\begin{aligned} Q \left( 1 + \frac{R_1}{R_2} \right) + C R_1 \frac{d Q}{d t} = \frac{P}{R_2} + C \frac{d P}{d t}. \end{aligned}$$ From this, together with a choice of implicit temporal discretization scheme for Eq. 3, and using a boundary condition time-step \(\Delta t_{BC}\), the expression: $$\begin{aligned} -\int _{\Gamma _{out}^{i}} \vec {w}\cdot \vec {h} da =&\int _{\Gamma _{out}^{i}} \vec {w}\cdot \vec {n}_{\mathrm{f}}\bigg \{ \left( R_{1}+\frac{R_{2}}{1+C R_{2}/\Delta t_{BC}}\right) \nonumber \\&\cdot \left( \int _{\Gamma _{out}^{i}} \vec {v}\cdot \vec {n}_{\mathrm{f}}da\right) \nonumber \\&+ \frac{(P-R_{1}Q)C R_{2}}{\Delta t_{BC} + C R_{2}} \bigg \} da \end{aligned}$$ can be derived. Here, \(\Gamma _{out}^{i}\subset \Gamma _{\mathrm{out}}\) is the i-th outlet, to which this LPN is coupled, and \(\Delta t_{BC}\) is dependent upon the time-stepping scheme chosen for the PDE; see "Space–time discretization" section. This can be inserted into Eq. 1 to couple the LPN to the 3D domain. 
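For illustration, the effective-resistance and pressure-shift coefficients appearing in Eq. 4 can be sketched in code, assuming a backward-Euler discretization of Eq. 3; the function names are hypothetical, not part of the CRIMSON API.

```python
import numpy as np

def windkessel_traction_coefficients(R1, R2, C, P_prev, Q_prev, dt):
    """Effective resistance and pressure shift for a three-element Windkessel
    under an implicit (backward-Euler) discretization, matching the two terms
    in Eq. 4. P_prev, Q_prev are the interface pressure and flow from the
    previous boundary-condition time step of size dt."""
    R_eff = R1 + R2 / (1.0 + C * R2 / dt)
    S_eff = (P_prev - R1 * Q_prev) * C * R2 / (dt + C * R2)
    return R_eff, S_eff

def interface_pressure(R1, R2, C, P_prev, Q_prev, Q_new, dt):
    """Implied interface pressure P = R_eff * Q + S_eff at the new step."""
    R_eff, S_eff = windkessel_traction_coefficients(R1, R2, C, P_prev, Q_prev, dt)
    return R_eff * Q_new + S_eff
```

Two sanity checks follow directly: with \(C = 0\) the coefficients reduce to a pure resistance \(R_1 + R_2\) with zero shift, and a constant flow with \(P = (R_1+R_2)Q\) is a fixed point of the update.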
Boundary conditions: arbitrary Netlist LPNs To couple arbitrary LPNs to the 3D domain without manually deriving an ordinary differential equation on pressure P(t) and flow Q(t) equivalent to Eq. 3 for each, we make use of our Netlist boundary condition system [17]. This permits the design and coupling of LPNs representing more complex vascular beds; for example, the unique features of the coronary circulation [25], or of the heart [26], or even of a closed-loop circulatory system of patients with single-ventricle physiology [27]. Briefly described, the Netlist system takes a user-defined arbitrary LPN structure, designed using the CRIMSON GUI [17], and converts it into a matrix system of equations for the LPN state, from which the time-dependent values of the effective resistance \({\tilde{R}}(t)\) and pressure shift \({\tilde{S}}(t)\) can be deduced. The term in \(\vec {h}\) in Eq. 1 can then be expressed via $$\begin{aligned} -\int _{\Gamma _{out}^{i}} \vec {w}\cdot \vec {h} da = \int _{\Gamma _{out}^{i}} \vec {w}\cdot \vec {n}_{\mathrm{f}}\bigg \{ {\tilde{R}} \left( \int _{\Gamma _{out}^{i}} \vec {v}\cdot \vec {n}_{\mathrm{f}}da\right) + {\tilde{S}} \bigg \} da. \end{aligned}$$ The details of the determination of the coefficients \({\tilde{R}}(t)\) and \({\tilde{S}}(t)\) are beyond the scope of the present work, and will be presented in a separate publication. Space–time discretization Equation 1 is spatially discretized using a stabilized finite element formulation with equal-order interpolation spaces for velocity and pressure on a tetrahedral volumetric mesh [28,29,30,31]. Time discretization is achieved using the generalized \(\alpha \) method [32, 33].
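Although the general derivation of \({\tilde{R}}(t)\) and \({\tilde{S}}(t)\) is deferred to a separate publication, the idea can be illustrated for the familiar three-element Windkessel: assemble a Netlist-style matrix system for the LPN under a backward-Euler discretization, then, because the interface relation \(P = {\tilde{R}}Q + {\tilde{S}}\) is affine, probe it numerically by solving for two flow values. All names below are hypothetical; this is a sketch of the concept, not the CRIMSON procedure.

```python
import numpy as np

def assemble_rcr(R1, R2, C, dt, Pc_prev, Q):
    """Backward-Euler discretized three-element Windkessel written as a
    Netlist-style linear system L x = b, with x = [P, P_c, Q]:
      P - P_c = R1 * Q                              (proximal resistor)
      (C/dt + 1/R2) * P_c - Q = (C/dt) * P_c_prev   (capacitor + distal resistor)
      Q = Q_imposed                                 (flow from the 3D domain)"""
    L = np.array([[1.0, -1.0, -R1],
                  [0.0, C / dt + 1.0 / R2, -1.0],
                  [0.0, 0.0, 1.0]])
    b = np.array([0.0, (C / dt) * Pc_prev, Q])
    return L, b

def effective_coefficients(R1, R2, C, dt, Pc_prev):
    """Probe the affine interface relation P = R_tilde * Q + S_tilde by
    solving the LPN system for Q = 0 and Q = 1."""
    def pressure(Q):
        L, b = assemble_rcr(R1, R2, C, dt, Pc_prev, Q)
        return np.linalg.solve(L, b)[0]
    S_tilde = pressure(0.0)
    R_tilde = pressure(1.0) - S_tilde
    return R_tilde, S_tilde
```

Reassuringly, the probed \({\tilde{R}}\) coincides with the effective resistance derived analytically in Eq. 4, and \({\tilde{S}}\) with its pressure-shift term.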
Parameter estimation method Definition of the forward model Consider the spatially and temporally discretized nonlinear system of equations arising from the finite element formulation of the blood flow problem at time step k, $$\begin{aligned} X_k&= [v_k,p_k,\dot{v}_k,u_k, l_{k}]^\text {T}, \nonumber \\ X_0&= X^0, \nonumber \\ \theta _0&= \theta ^0 + \xi _\theta , \nonumber \\ X_k&= A(X_{k-1},\theta _k). \end{aligned}$$ Here, \(v_k,\dot{v}_k,p_k\), and \(u_k\) are vectors containing the finite element basis function solution weights for the velocity, acceleration, pressure, and wall displacement fields, respectively; and \(l_{k}\) is a vector of boundary condition internal state variables. Together they form the model state, \(X_k\), at time step k. Application of the operator A updates the FSI model state by one time step, from \(X_{k-1}\) to \(X_{k}\), given the current parameter set \(\theta _{k}\), which contains the model parameters to be estimated. \(X^0\) and \(\theta ^0\) are a priori initial values, and \(\xi _\theta \) represents parameter uncertainty. The combined vector \(\{X_k,\theta _k\}\) is called the augmented state. In the case where only Windkessel parameters are to be estimated, the components of \(\theta _k\) can be written: $$\begin{aligned} \theta _k= & {} [ {\tilde{r}}_1^{(1)},{\tilde{r}}_1^{(2)},\ldots ,{\tilde{r}}_1^{(n_\text {out})},\nonumber \\&{\tilde{r}}_2^{(1)},{\tilde{r}}_2^{(2)},\ldots ,{\tilde{r}}_2^{(n_\text {out})}, \nonumber \\&{\tilde{c}}^{(1)},{\tilde{c}}^{(2)},\ldots ,{\tilde{c}}^{(n_\text {out})}]_k^\text {T}, \nonumber \\ R_1^{(j)}= & {} 2^{{\tilde{r}}_1^{(j)}}, \ \ R_2^{(j)} = 2^{{\tilde{r}}_2^{(j)}}, \ \ C^{(j)} = 2^{{\tilde{c}}^{(j)}} \end{aligned}$$ where \(R_1^{(j)}\), \(R_2^{(j)}\), and \(C^{(j)}\) are the proximal resistance, distal resistance, and compliance at the j-th outlet, respectively. 
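The power-of-two re-parameterization of Eq. 6 can be sketched as a small helper (the function name is illustrative):

```python
import numpy as np

def theta_to_windkessel(theta, n_out):
    """Map the estimated parameter vector of Eq. 6 to physical Windkessel
    values via powers of two. Layout: [r1 tildes | r2 tildes | c tildes],
    each of length n_out; exponentiation guarantees positive R1, R2, C."""
    theta = np.asarray(theta, dtype=float)
    r1, r2, c = theta[:n_out], theta[n_out:2 * n_out], theta[2 * n_out:]
    return 2.0 ** r1, 2.0 ** r2, 2.0 ** c
```

The filter is thus free to move the tilde-parameters anywhere on the real line while the physical resistances and compliances remain strictly positive.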
When there are \(n_\text {out}\) Windkessel models for which we wish to estimate parameters, the total number of parameters is \(N= 3n_{\text {out}}.\) Re-parameterization in terms of powers of two in Eq. 6 prevents estimation of negative values of resistance and compliance. Estimation algorithm—ROUKF for three-element Windkessel LPNs and vessel stiffness The ROUKF is based on the Unscented Kalman Filter (UKF) formulation of Julier et al. [34, 35]. This extension of the Kalman filter to nonlinear models relies on a deterministic, discrete sampling of the estimation error probability distribution with a set of particles or sigma points; these are perturbations of the model parameters around the current best estimate of their values. The ROUKF is designed for parameter estimation in large dynamical systems where the number of parameters is small compared with the number of state variables, and where it is assumed that the uncertainty in the system can be solely attributed to the parameters. Whereas the classical Kalman filter and its variants call for computations involving a full estimation error covariance matrix, \(P\), with the same dimension as the augmented state-space (state variables plus the uncertain parameters) at every time step, the key idea behind the ROUKF is restricting the uncertainty to a small part of the augmented state space (i.e. the parameters). In this case, \(P\) is factorized as \(P= L{U}^{-1} {L^{\text {T}}}\), where \(U\) is an \(N\)-by-\(N\) square matrix, \(L\) is an M-by-\(N\) matrix, \(N\) is the number of uncertain parameters, and M is the total number of state variables plus uncertain parameters. This allows the uncertain parameter covariances to be tracked in the small matrix \(U\), and makes computations involving the estimation error covariance matrix, \(P\), computationally tractable for large problems.
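The computational benefit of the factorization \(P = LU^{-1}L^{\text{T}}\) is that the M-by-M matrix \(P\) never needs to be formed; any product with \(P\) can be evaluated through the thin factors at \(O(MN)\) cost. A minimal sketch:

```python
import numpy as np

def apply_covariance(L, U, v):
    """Apply the factorized estimation-error covariance P = L U^{-1} L^T
    to a vector v without forming the M-by-M matrix P explicitly.
    L is M-by-N, U is N-by-N, with N (parameters) << M (state size)."""
    return L @ np.linalg.solve(U, L.T @ v)
```

For 3D FSI problems, M is on the order of the number of finite element degrees of freedom, while N is a few tens at most, so this factorized form is what makes the filter affordable.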
Diagram of the estimation procedure during a single time step, showing the interaction between the FSI model and the filtering algorithm. The green box identifies the constrained least squares step, described in "Estimation algorithm—ROUKF-CLS for arbitrary LPNs" section, which is only part of the ROUKF-CLS algorithm (not of the basic ROUKF procedure) Let \(\hat{X}_k, \hat{\theta }_k\) be the state and parameter estimates at time step k, respectively, and let \(L_{k}\) be \(L\) at time step k. For convenience, we further define \(L^{X}_k\) and \(L^\theta _k\), which are matrices formed from the rows of \(L_k\) corresponding to the state variables and model parameters, respectively. The estimation procedure is summarized next and depicted graphically in Fig. 3. Note that the step in the green box in Fig. 3 is not part of this basic ROUKF algorithm (for basic ROUKF, it is simply omitted, passing directly to the next step), and will be formally introduced in "Estimation algorithm—ROUKF-CLS for arbitrary LPNs" section. Precompute the \(N+1\) simplex sigma point direction vectors, \(\sigma ^{N}_{(i)}\) [36]. The details of the procedure are described in Appendix: Sigma point generation. Initialize the error covariance factors, $$\begin{aligned} L^{X}_0&= \left[ \begin{array}{l} 0_{(M-N)\times N} \end{array} \right] \nonumber \\ L^\theta _0&= \left[ \begin{array}{l} I_{N} \end{array} \right] \nonumber \\ U_0&= \left[ \begin{array}{llll} 1/s_1 &{} &{} \\ &{} 1/s_2 &{} \\ &{} &{} \ddots \\ &{} &{} &{} 1/s_{N} \end{array} \right] . \end{aligned}$$ Here, \(U_0\) is assumed to be diagonal, \(0_{(M-N)\times N}\) is a \((M-N) \times N\) matrix of zeros, and \(I_{N}\) is the \(N\times N\) identity matrix. The values \(s_1, s_2, \ldots , s_{N}\) represent the confidence in the initial guess for the parameters.
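A sketch of this initialization (Eq. 7), together with the sigma-point generation it feeds in the subsequent step, is given below, assuming the simplex direction vectors have already been computed (function names are illustrative, not Verdandi or CRIMSON API):

```python
import numpy as np

def initialize_factors(M, N, s):
    """Covariance factors of Eq. 7: L^X_0 is zero, L^theta_0 is the
    identity, and U_0 = diag(1/s_1, ..., 1/s_N)."""
    LX0 = np.zeros((M - N, N))
    Ltheta0 = np.eye(N)
    U0 = np.diag(1.0 / np.asarray(s, dtype=float))
    return LX0, Ltheta0, U0

def generate_sigma_points(x_hat, theta_hat, LX, Ltheta, U, sigma_dirs):
    """Sigma points around the current estimate; sigma_dirs is an
    N-by-(N+1) matrix whose columns are the simplex direction vectors,
    and the square root of U^{-1} is taken by Cholesky factorization."""
    sqrt_U_inv = np.linalg.cholesky(np.linalg.inv(U))
    X_pts = x_hat[:, None] + LX @ sqrt_U_inv @ sigma_dirs
    th_pts = theta_hat[:, None] + Ltheta @ sqrt_U_inv @ sigma_dirs
    return X_pts, th_pts
```

With the initialization of Eq. 7, only the parameter components are perturbed at the first step: the state rows of \(L\) are zero, so every sigma point shares the same initial state.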
Using the sigma point direction vectors, create the set of \(N+1\) sigma points around the current estimate at time step k: $$\begin{aligned} \hat{X}_{k}^{(i)+}&= \hat{X}_{k}^{+} + L^{X}_{k}\sqrt{(U_{k})^{-1}}\sigma ^N_{(i)} \nonumber \\ \hat{\theta }_{k}^{(i)+}&= \hat{\theta }_{k}^{+} + L^\theta _{k}\sqrt{(U_{k})^{-1}}\sigma ^N_{(i)} \nonumber \\ 1&\le i \le N+ 1, \end{aligned}$$ where the square root may be computed using Cholesky factorization. Each sigma point consists of state and parameter variables of the model, $$\begin{aligned} \hat{X}_{k}^{(i)+} = [v_{k_{(i)}},p_{k_{(i)}},\dot{v}_{k_{(i)}}, u_{k_{(i)}},l_{k_{(i)}}]^\text {T}, \ \ \hat{\theta }_{k}^{(i)+}. \end{aligned}$$ Here, \(l_{k}\) represents the internal state (zero-dimensional pressures and flows) in the boundary condition LPNs. Propagate each sigma point forward by one time step, using the finite element formulation described in "3D blood flow formulation" section. Each one of these independently-propagating states is called a simulation particle. Upon completion of the \(N+1\) forward solves, the updated sigma points, indexed by i, are given by: $$\begin{aligned} \hat{X}_{k+1}^{(i)-}= & {} [v_{k+1_{(i)}},p_{k+1_{(i)}},\dot{v}_{k+1_{(i)}}, u_{k+1_{(i)}},l_{k+1_{(i)}}]^\text {T}, \nonumber \\&\hat{\theta }_{k+1}^{(i)-}, \ \ 1 \le i \le N+1. \end{aligned}$$ Compute the ensemble mean, to obtain the a priori estimate: $$\begin{aligned} \hat{X}_{k+1}^{-}&= \sum _{i=1}^{N+1} {\alpha _{\mathrm{s}}^{(i)}} \hat{X}_{k+1}^{(i)-} \nonumber \\ \hat{\theta }_{k+1}^{-}&= \sum _{i=1}^{N+1} {\alpha _{\mathrm{s}}^{(i)}} \hat{\theta }_{k+1}^{(i)-}, \end{aligned}$$ where the \(\alpha _{\mathrm{s}}^{(i)}\) are the sigma point weights (see Appendix: Sigma point generation). Compute the updated innovation, \(\Gamma _{k+1}\), using the measurements \(Z_{k+1}\) and the state component of the sigma points, \(\hat{X}_{k+1}^{(i)-}\): $$\begin{aligned} \Gamma _{k+1}^{(i)}&= Z_{k+1}-H(\hat{X}_{k+1}^{(i)-}), \quad 1 \le i \le N+1.
\end{aligned}$$ Here, \(Z_{k+1}\) is the recorded patient data—for example, the instantaneous volumetric blood flow through a vessel of interest at the current time step—and \(H\) is the operator which mimics this measurement in the model—such as the instantaneous flow through a cross-sectional plane. Compute the updated estimation error covariance factors: $$\begin{aligned} L^{X}_{k+1}&= [\hat{X}_{k+1}^{(*)-}] D_{\alpha }[\sigma _{(*)}]^\text {T}\nonumber \\ L^\theta _{k+1}&= [\hat{\theta }_{k+1}^{(*)-}] D_{\alpha }[\sigma _{(*)}]^\text {T}\nonumber \\ \{HL\}_{k+1}&= [\Gamma _{k+1}^{(*)}] D_{\alpha }[\sigma _{(*)}]^\text {T}\nonumber \\ U_{k+1}&= P_\alpha + \{HL\}^\text {T}_{k+1} W^{-1}_{k+1} \{HL\}_{k+1} \end{aligned}$$ where \([\hat{X}_{k+1}^{(*)-}]\), \([\hat{\theta }_{k+1}^{(*)-}]\), \([\Gamma _{k+1}^{(*)}]\), and \([\sigma _{(*)}]\) are matrices whose i-th columns are \(\hat{X}_{k+1}^{(i)-}\), \(\hat{\theta }_{k+1}^{(i)-}\), \(\Gamma _{k+1}^{(i)}\), and \(\sigma ^N_{(i)}\), respectively. The diagonal matrix \(D_{\alpha }\) stores the weights \(\alpha _{\mathrm{s}}^{(i)}\) associated with each sigma point (see Appendix: Sigma point generation) and \(W_{k+1}\) is the measurement error covariance matrix described in Appendix: Measurement error covariance matrix. Compute the updated a posteriori estimate: $$\begin{aligned} \hat{X}_{k+1}^{+} =&\hat{X}_{k+1}^{-} \nonumber \\&- L^{X}_{k+1}U_{k+1}^{-1}\{HL\}_{k+1}^\text {T}W^{-1}_{k+1} \sum _{i=1}^{N+1}\alpha _{\mathrm{s}}^{(i)} \Gamma _{k+1}^{(i)} \nonumber \\ \hat{\theta }_{k+1}^{+}&= \hat{\theta }_{k+1}^{-} \nonumber \\&- L^\theta _{k+1}U_{k+1}^{-1}\{HL\}_{k+1}^\text {T}W^{-1}_{k+1} \sum _{i=1}^{N+1}\alpha _{\mathrm{s}}^{(i)} \Gamma _{k+1}^{(i)}. \end{aligned}$$ This provides the updated estimate of the state of System 5 given the data \(Z\); in particular, this includes updated best-estimates of the parameters \(\theta \) that we are trying to determine.
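The analysis step of Eqs. 12 and 13 can be sketched compactly in matrix form, assuming the propagated sigma points, innovations, weights, and the matrix \(P_\alpha\) from the appendix are given (all names are illustrative):

```python
import numpy as np

def roukf_update(X_pts, th_pts, innovations, sigma_dirs, alpha, W, P_alpha):
    """One ROUKF analysis step. X_pts is Mx-by-(N+1), th_pts is N-by-(N+1),
    innovations is m-by-(N+1), sigma_dirs is N-by-(N+1), alpha holds the
    N+1 sigma-point weights. Recomputes the covariance factors (Eq. 12)
    and corrects the a priori mean with the weighted innovation (Eq. 13)."""
    D = np.diag(alpha)
    LX = X_pts @ D @ sigma_dirs.T          # L^X_{k+1}
    Lth = th_pts @ D @ sigma_dirs.T        # L^theta_{k+1}
    HL = innovations @ D @ sigma_dirs.T    # {HL}_{k+1}
    W_inv = np.linalg.inv(W)
    U = P_alpha + HL.T @ W_inv @ HL        # U_{k+1}
    mean_innov = innovations @ alpha       # weighted innovation sum
    gain = np.linalg.solve(U, HL.T @ W_inv @ mean_innov)
    X_post = X_pts @ alpha - LX @ gain     # a posteriori state
    th_post = th_pts @ alpha - Lth @ gain  # a posteriori parameters
    return X_post, th_post, LX, Lth, U
```

A useful consistency property is visible directly in the algebra: when the innovations vanish, the gain term is zero and the a posteriori estimate reduces to the weighted ensemble mean.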
Repeat from step 3 after incrementing the time index: \(k \leftarrow k+1\). In this work, this ROUKF method is used to estimate vessel stiffness and three-element Windkessel model parameters. Estimation algorithm—ROUKF-CLS for arbitrary LPNs We now motivate and describe the modification to the above ROUKF method required to obtain the ROUKF-CLS algorithm. ROUKF-CLS is required for parameter estimation in LPNs more complex than the three-element Windkessel model. The reason is that general, time-discretized LPNs carry additional time-dependent pressure and flow "history" states, arising from the appearance of time derivatives in the LPN equations. For any time step index k, the value of these states is a function of the historical behavior of the model, and of the value of the LPN parameters. When a particle is generated, its state at step k is inconsistent with the newly-perturbed parameters, so it must be updated before advancing to \(k+1\). To achieve this, an additional step is taken after particle generation, between Steps 3 and 4 in "Estimation algorithm—ROUKF for three-element Windkessel LPNs and vessel stiffness" section; we refer to this as the consistency step. Specifically, the 3D-to-LPN interface flow rate and interface pressure at time-step k are imposed upon the LPN, and consistent internal state variables are computed. This approach is similar in spirit to those utilizing projection-based constraint of the state onto some subspace [37, 38]. This additional step is shown in the green box in Fig. 3. We now describe the consistency step of ROUKF-CLS.
Let the time-discretized linear system for the LPN be given by: $$\begin{aligned} L_{LPN} {\mathbf {x}} = {\mathbf {b}}, \end{aligned}$$ where \(L_{LPN}\) is a matrix containing the standard equations for resistors, capacitors, etc., representing the LPN; \({\mathbf {x}}\) is the vector of unknown pressure and flow variables for time-step k inside the LPN; and \({\mathbf {b}}\) contains any necessary state from time-step \(k-1\) (i.e. from \(l_{k-1}\); see Eq. 9), together with the 3D domain flow rate at step k. This system is square.Footnote 1 To enforce consistency with the 3D domain, we add a row to this matrix equation, imposing the a posteriori 3D interface pressure from time step k upon the LPN, obtaining the overdetermined system, $$\begin{aligned} L_{LPN}^{over} {\mathbf {x}} = {\mathbf {b}}^{over}. \end{aligned}$$ Next, we create a second linear system as follows. We remove all rows enforcing the state \(l_{k-1}\) at time-step \(k-1\) from Eq. 16, obtaining $$\begin{aligned} L_{LPN}^{under} {\mathbf {x}} = {\mathbf {d}}, \end{aligned}$$ which amounts to deconstraining the problem from a historical state which was inconsistent with the new parameters. If more than one such row was removed, this system is underdetermined.Footnote 2 Since the filter-induced changes in the LPN parameters for each time-step are small, we assert that while the state at step \(k-1\) has been deconstrained, we do not want to deviate far from it. We also know that the solution must exactly satisfy the pressure-flow relationships across each LPN component at time k.
Taking these concepts together, we obtain a constrained least squares problem: $$\begin{aligned}&\mathrm {minimize}~\left\| L_{LPN}^{over} {\mathbf {x}} - {\mathbf {b}}^{over}\right\| , \end{aligned}$$ $$\begin{aligned}&\mathrm {~subject~to~the~constraint~} L_{LPN}^{under} {\mathbf {x}} = {\mathbf {d}}, \end{aligned}$$ which can be formulated as: find \({\mathbf {x}}\) such that, $$\begin{aligned} \left( \begin{array}{cc} {L_{LPN}^{over}}^{T}L_{LPN}^{over} &{} {L_{LPN}^{under}}^{T} \\ {L_{LPN}^{under}} &{} {\mathbf {0}} \\ \end{array} \right) \left( \begin{array}{c} {\mathbf {x}} \\ {\mathbf {z}} \\ \end{array} \right) = \left( \begin{array}{c} {L_{LPN}^{over}}^{T}{\mathbf {b}}^{over} \\ {\mathbf {d}} \\ \end{array} \right) . \end{aligned}$$ Here, \({\mathbf {z}}\) is the vector of Lagrange multipliers enforcing the constraint. We solve this system each time a particle is generated, obtaining a particle-specific consistent internal state \(l_{k}\) for step k, ready for propagation to the \(\textit{a priori}\) state at \(k+1\), in Step 4 of "Estimation algorithm—ROUKF for three-element Windkessel LPNs and vessel stiffness" section. We refer to the ROUKF algorithm, augmented by the consistency step involving Eq. 20, as ROUKF-CLS. Model observations and real-world data The measurement term \(Z_k\) at time step k is a vector of clinically-recorded flow and pressure waveforms. The innovation term \(Z_k-H(X_k)\) in Eq. 12 is thus the discrepancy between the model-predicted and real-world values of blood pressure and flow. The observation operator H extracts data from the model.
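The KKT system of Eq. 20 can be solved directly; a minimal sketch for a generic equality-constrained least squares problem (the function name and interface are illustrative):

```python
import numpy as np

def constrained_least_squares(A, b, Ceq, d):
    """Solve min ||A x - b|| subject to Ceq x = d via the KKT system
    [[A^T A, Ceq^T], [Ceq, 0]] [x; z] = [A^T b; d],
    where z is the vector of Lagrange multipliers (cf. Eq. 20)."""
    n = A.shape[1]
    m = Ceq.shape[0]
    KKT = np.block([[A.T @ A, Ceq.T],
                    [Ceq, np.zeros((m, m))]])
    rhs = np.concatenate([A.T @ b, d])
    sol = np.linalg.solve(KKT, rhs)
    return sol[:n], sol[n:]
```

In the ROUKF-CLS consistency step, \(A\) plays the role of \(L_{LPN}^{over}\) and \(C_{eq}\) that of \(L_{LPN}^{under}\); the system is small (one block per LPN), so a dense direct solve per particle is inexpensive.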
To mimic the patient data we wish to assimilate, we define it as: $$\begin{aligned} H(X_k)&= [Q_1,Q_2,\ldots ,Q_{n_\text {obs-Q}},P_1,P_2,\ldots ,P_{n_\text {obs-P}}]^\text {T}_k, \end{aligned}$$ where volumetric flow rate and spatially-averaged pressure are respectively defined by: $$\begin{aligned} Q_i&= \int _{\Pi _i} \vec {v}^h(\vec {x},t)\cdot \vec {n}_{i} da, \nonumber \\ P_i&= \frac{1}{\text {area}(\Pi _{i})}\int _{\Pi _i} p^h(\vec {x},t) da, \end{aligned}$$ where \(\Pi _{i}\) is the intersection of a plane with the 3D domain. Here, \(\vec {n}_{i}\) is a choice of unit normal to \(\Pi _{i}\). Typically, these planes mimic the locations of the PC-MRI measurement planes used to record the patient data (cf. the PC-MRI planes shown in Fig. 1). When the available data includes wall motion, the innovation term in Eq. 12 can be defined by means of a 'distance' operator between the wall position in the simulation and in the data. Suppose the data consists of a time-dependent series of M surfaces \(S_{j}\), defining the vessel wall at times \(t_{j}\), \(j\in \left\{ 1,2,\dots , M\right\} \). Following [39], we define \(dist(\vec {x}, S_{j})\) to be the signed Euclidean distance between \(\vec {x}\) and the closest point to \(\vec {x}\) on \(S_{j}\). The sign is negative if \(\vec {x}\) lies inside the volume delimited by \(S_{j}\), and positive otherwise; see Panel A of Fig. 4. When surfaces \(S_{j}\) are not available for every time-step of the simulation, we interpolate between them. We define \(a_{j}(t)\) to be a linear function of t such that \(a_{j}(t_{j})=1\) and \(a_{j}(t_{j+1})=0\), and define the distance to the interpolated surface \(S_{t}\) at time \(t\in [t_{j}, t_{j+1}]\) by: $$\begin{aligned} D(\vec {x}, S_{t})=a_{j}(t)dist(\vec {x}, S_{j}) + (1-a_{j}(t))dist(\vec {x}, S_{j+1}).
\end{aligned}$$ For a distance observation at the i-th wall node \(\vec {x}_{i}\), and its displacement \(\vec {u}_{i,k}\) at time \(t_{k}\in [t_{j}, t_{j+1}]\) for some j (Fig. 4b), we directly define the innovation as: $$\begin{aligned} Z_{k}-H(X_k)=&a_{j}(t_{k})dist(\vec {x}_{i}+\vec {u}_{i,k}, S_{j}) \nonumber \\&+(1-a_{j}(t_{k}))dist(\vec {x}_{i} + \vec {u}_{i,k}, S_{j+1}). \end{aligned}$$ Signed wall distance metric for wall motion observations. a The distance metric is the Euclidean distance between a point and the surface, and is positive in the outward-pointing unit normal direction, and negative otherwise. A distance map between points on the deformed wall boundary and the j-th data surface \(S_{j}\) is shown. b The signed metric is used to compute an interpolated distance between consecutive times where wall position data \(S_{j}\) are available. Note that the blue curve represents the current wall boundary; the "interpolated" surface \(S_{t}\) between \(S_{j}\) and \(S_{j+1}\) is never constructed, and is not shown We present numerical results in the following cases: in "Idealized carotid artery with synthetic data" section, ROUKF Windkessel parameter estimation in an idealized carotid artery with synthetic data; in "Subject-specific aorta with synthetic data" and "Subject-specific aorta with PC-MRI and applanation tonometry data" sections, ROUKF Windkessel parameter estimation in a subject-specific aorta, with cases involving both synthetic and PC-MRI and applanation tonometry data; in "Reduced order Unscented Kalman Filter with constrained least squares (ROUKF-CLS)" section, the performance of the ROUKF-CLS is explored, first through a consistency test between ROUKF and ROUKF-CLS Windkessel parameter estimation ("ROUKF-CLS verification against ROUKF using a Windkessel model" section); and then in a case of coronary LPN parameter estimation, for which ROUKF is unsuitable ("ROUKF-CLS applied to a coronary LPN model" section). 
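Before turning to the results, the time-interpolated signed-distance observation described above can be sketched in code. Spherical surfaces are used here only because their signed distance is analytic; the blending weight is chosen so that \(S_j\) is recovered exactly at \(t_j\) and \(S_{j+1}\) at \(t_{j+1}\) (all names are illustrative):

```python
import numpy as np

def sphere_signed_distance(center, radius):
    """Signed distance to a spherical surface: negative inside the
    enclosed volume, positive outside (matching the sign convention
    of the wall-distance metric)."""
    center = np.asarray(center, dtype=float)
    return lambda x: float(np.linalg.norm(np.asarray(x, dtype=float) - center) - radius)

def interpolated_distance(dist_j, dist_j1, t, t_j, t_j1):
    """Linearly blended signed distance D(x, S_t) between two consecutive
    data surfaces; dist_j and dist_j1 are signed-distance functions for
    S_j and S_{j+1}. Note the interpolated surface itself is never built."""
    a = (t_j1 - t) / (t_j1 - t_j)  # weight on S_j: 1 at t_j, 0 at t_{j+1}
    return lambda x: a * dist_j(x) + (1.0 - a) * dist_j1(x)
```

In the filter, the resulting scalar per wall node is used directly as the innovation for the wall-motion observations.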
Idealized carotid artery with synthetic data Estimation of Windkessel parameters from pressure and flow A straight, deformable vessel with a prescribed inflow at one end, and a Windkessel model at the other was studied, as shown in Fig. 2. The inflow velocity was chosen to represent a typical carotid flow waveform, mapped to a parabolic velocity profile. The stiffness E of the vessel wall was 0.7 MPa and the thickness h was chosen to be 0.3 mm. Forward problem Synthetic flow and pressure waveforms were generated using a simulation with known Windkessel parameters, run with a time-step of 1 ms, until cycle-to-cycle periodicity was achieved. During the final cardiac cycle, volumetric flow and cross-sectional averaged pressure waveforms were obtained at the vessel midplane (see Fig. 2) with a time-spacing of 20 ms (50 Hz sampling rate). Gaussian white noise was added to the waveforms with a signal-to-noise ratio (SNR) of 40 dB. Estimation problem Six different estimation problems, A–F, were studied. In cases A, B and C, a single parameter of the Windkessel model shown in Fig. 2 was estimated: C, \(R_2\), and \(R_1\), respectively. In case D, C and \(R_{2}\) were estimated; in case E, \(R_{1}\) and \(R_{2}\) were estimated; and in case F, all three parameters were estimated. In all cases, the initial estimation covariances \(s_{i}\) were set to 0.2 (Eq. 7), and the measurement error covariances for the pressure and the flow data were set to \(w_P^{(1)}=12.3^2~(\mathrm {mmHg})^{2}\) and \(w_Q^{(1)}=1.21^{2}~(\mathrm {cc}/\mathrm {s})^{2}\) (Appendix: Measurement error covariance matrix). Figure 5 shows the evolution of the estimated parameters from their initial guesses in each of the six cases. We observed that in cases A–D, convergence to the true value was achieved after one cardiac cycle (1.1 s). In cases E and F, the estimation run does not fully recover the true parameters. For case F (the two lower-most panels of Fig.
5), the right plot shows that restarting the estimation using the parameter values reached by the end of the initial (left plot) run results in better convergence, highlighting the importance of good initial guesses, as well as indicating a strategy for recovering when those guesses are poor. Note that in all cases shown, \(R_{1}\) predominantly changes during the early part of the cycle, which corresponds to systole. This is because \(R_{1}\) has the greatest impact on the results during this period. During diastole, \(R_{2}\) and C are more important. In both cases E and F, \(R_{1}\) does not change sufficiently during the first systole, so a second cycle is required (shown by the two panels for Case F). Evolution of the parameters in the idealized carotid example with synthetic data. In cases A–C (top) only a single parameter was estimated. In cases D, E (middle), two parameters were simultaneously estimated. In case F (bottom) all three parameters were simultaneously estimated. The true values, initial guesses, and final estimated values for the parameters are shown in the table. The shaded regions depict plus/minus the estimation error standard deviation Simultaneous estimation of wall stiffness and Windkessel compliance We demonstrate simultaneous estimation of both the Windkessel compliance C, and wall stiffness, E. The total compliance of the vessel is determined by the combined effect of E and C, and it affects the pulse pressure amplitude. The wall stiffness, by contrast, directly governs the wall deformation. To capture vessel stiffness and Windkessel compliance, we employ two observations (see Fig. 6): a cross-sectional averaged pressure; and a distance observation using wall motion data from the forward FSI simulation, implicitly defined by Eq. 23 [39,40,41]. Simultaneous estimation of wall stiffnesses and Windkessel compliance. Top: the simulation set-up. The vessel surface is divided into three regions, each with a different value of stiffness, E.
The inflow rate is given by a prescribed waveform, and a three-element Windkessel model is coupled at the outlet. The three values of E and the Windkessel compliance, C, will be estimated. Middle: evolution of the parameters during estimation. Solid lines denote the estimates; dashed lines indicate their true values. Shaded regions depict the estimation standard deviation. Bottom: the results are summarized Forward problem The cylindrical vessel shown in Fig. 6 was divided into three regions, with differing values of stiffness E in each, and a simulation was run until periodicity was achieved, using \(C = 1.06\times 10^{-5}\,\hbox {cm}^5/\hbox {dyne} = 0.106\,\hbox {mm}^3/\hbox {Pa}\). From the final cardiac cycle, a set of wall deformation observations, composed of a series of twenty-four wall surfaces, was extracted. Pressure data was then obtained at the observation plane at 100 Hz, with Gaussian white noise subsequently added at 30 dB SNR. Estimation problem We employed \(n_{w}\) nodal distance observations for each of the twenty-four wall surfaces, and one pressure observation. Initial guesses for the parameters were set to \(E=0.65\) MPa (uniformly down the length of the vessel) and \(C = 0.175\,\hbox {mm}^3\)/Pa. Initial estimation error covariance values were \(s_i = 0.5\). The observation covariance matrix \(W_k\) was block-diagonal, with a diagonal entry \(w_P=(12.3~\mathrm {mmHg})^2\) for the pressure variance, and a \(n_{w}\times n_{w}\) block \((w^{-1}M_{\Gamma _{\mathrm{w}}})^{-1}\) for the wall node covariances, where \(w = (0.1\hbox { mm})^2\) and \(M_{\Gamma _{\mathrm{w}}}\) is a normalized mass matrix associated with the wall boundary [41]. The results of the sequential parameter estimation simulation are summarized in Fig. 6, demonstrating accurate recovery of both regional stiffness E and Windkessel C after two cardiac cycles.
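The synthetic-data noise model used throughout these cases (additive Gaussian white noise at a prescribed SNR, here 30 dB and 40 dB earlier) can be sketched as follows; the function name is illustrative.

```python
import numpy as np

def add_noise_at_snr(signal, snr_db, rng=None):
    """Add white Gaussian noise to a waveform at a prescribed
    signal-to-noise ratio in decibels: the noise variance is the mean
    signal power divided by 10^(SNR/10)."""
    rng = np.random.default_rng() if rng is None else rng
    signal = np.asarray(signal, dtype=float)
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / 10.0 ** (snr_db / 10.0)
    return signal + rng.normal(0.0, np.sqrt(p_noise), signal.shape)
```

At 30 dB the noise power is one thousandth of the signal power, so the corrupted waveforms remain visually close to the clean ones while still exercising the filter's measurement-error model.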
Of note, the region with the lowest stiffness \(E = 0.45\) in the center of the vessel showed the smallest standard deviation in the estimation error (green shaded region). This is likely due to this region having the largest wall displacement (and thus the largest signal). Subject-specific aorta with synthetic data We now consider a subject-specific aorta and its main branches, obtained as described in "Data acquisition" section. The geometric model has one inlet and nine outlets; we aim to recover \(R_1\), \(R_2\) and C at all nine outlets. Inlet velocities were prescribed using the ascending aortic PC-MRI data. Wall thickness h was set to 10% of the local vessel radius. Vessel stiffness E was specified by an empirical formula relating pulse wave velocity to local vessel radius [42], scaled uniformly to match the subject-specific wave speed measured from the four flow waveforms in the aorta and iliac artery (see Fig. 1) [43]. The resulting maps of vessel wall properties are shown in Fig. 7. Simulations were performed using a finite element mesh comprising \(\sim \) 220k linear tetrahedral elements, and a time-step of 0.25 ms. Spatial distribution of aortic wall properties. Left: stiffness, E. Right: wall thickness, h Forward problem Synthetic data was generated using known parameter values for the Windkessel LPN at all nine outlets. The simulation was run to full periodicity, then cross-sectional averaged pressure and volumetric flow waveforms in the aortic branches were recorded during the final cardiac cycle, at 160 Hz; see Fig. 8 for the locations at which these observations were made. Estimation problem Motivated by the difficulty of obtaining pressure data in the clinic, we examined three different estimation cases, each with differing availability of pressure data. Case A considered no pressure data, Case B used a single pressure observation at the left carotid artery, and Case C assumed pressure observations in every outlet vessel.
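As an aside on the stiffness assignment described above: the paper's empirical PWV–radius formula [42] is not reproduced here, so the sketch below substitutes the classical Moens–Korteweg relation as a stand-in, and the radii and wave-speed values are purely illustrative. The uniform scaling step mirrors the text: since wave speed scales with \(\sqrt{E}\), multiplying E by \((c_{\mathrm{meas}}/c_{\mathrm{model}})^2\) matches the model wave speed to the measured one.

```python
import numpy as np

RHO = 1060.0  # blood density, kg/m^3 (typical value)

def stiffness_from_pwv(c, r, h):
    # Moens-Korteweg stand-in for the empirical formula of [42]:
    # c = sqrt(E h / (2 rho r))  =>  E = 2 rho c^2 r / h
    return 2.0 * RHO * c ** 2 * r / h

radii = np.array([0.012, 0.008, 0.004])    # local radii, m (illustrative)
h = 0.1 * radii                             # thickness = 10% of radius, as in the text
c_local = 5.0 + 100.0 * (0.01 - radii)      # hypothetical PWV-radius relation, m/s
E = stiffness_from_pwv(c_local, radii, h)

# Uniform scaling to match a measured wave speed c_meas: E ~ c^2.
c_meas, c_model = 6.0, 5.0
E_scaled = E * (c_meas / c_model) ** 2
```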
In all three cases, flow data in every outlet vessel were available; this choice is justified by the much greater availability of flow data in the clinic. A key assumption in all three cases was that the initial state variables, including pressure, were error-free; the impact of this assumption will be addressed later in this section. Sequential estimations were then run for each case. Initial estimation error covariances were all set to \(s_i=0.2\), and measurement error covariances for pressure and flow to \(w_P^{(i)}=(29.3\hbox { mmHg})^2\) and \(w_Q^{(i)}=(8.3\hbox { cc/s})^2\), respectively (see Appendix: Measurement error covariance matrix). The synthetic data were interpolated linearly to provide values for all time-steps of the simulation. Boundary conditions, outlet numbering, and observation planes (depicted in blue, roman numerals) in the subject-specific aorta with synthetic data. The table shows the different sets of observations considered. In all cases, flow was observed at every branch. In Case A, no pressure observations were available. Case B used a single pressure observation in the left carotid (outlet III). In Case C, pressure was observed in every branch. Results for Cases A, B and C are presented in Fig. 9, demonstrating the evolution of the twenty-seven Windkessel parameters over 2.0 s (one cardiac cycle \(\sim 0.8\) s). Stable parameter estimates were achieved in all cases. However, while \(R_1\) converged rapidly (third row plots), C and \(R_{2}\) took longer to converge (first and second row plots, respectively). The final estimated values are summarized in Table 1, where the errors are stratified into three categories: those with errors in excess of 5% (colorless cells), those with less than 5% error (blue cells), and those with less than 2.5% error (green cells). For the resistance estimates \(R_1\) and \(R_2\), there is little difference between Cases B and C, and both are superior to Case A.
This indicates that having at least one pressure observation is beneficial. Conversely, for the compliance estimates C, there is little difference between Cases A and B, and both are inferior to Case C. This suggests that reconstruction of Windkessel compliances in multi-branched models benefits from having pressure measurements in more than one location, something that is not always available in the clinic. Regardless, in this example we note that the relative errors are generally low in all cases, suggesting that, given reasonable initial guesses for the parameters and the model state, parameter identifiability does not strongly depend on pressure data. Subject-specific aorta with synthetic data: evolution of Windkessel parameters during estimation, assuming error-free initial state variables. (Left) Case A: estimation with no pressure observations. (Middle) Case B: estimation run with a single pressure observation in the left carotid artery (cross-section III in Fig. 8). (Right) Case C: estimation run with a pressure observation in each of the branches (see Fig. 8). From top to bottom: Windkessel compliance, C; proximal resistance, \(R_1\); and distal resistance, \(R_2\). Dashed lines denote the true parameter value used in the forward simulation and the solid lines the estimate. The shaded regions cover the standard deviation of the parameter estimates. Table 1 Final estimated Windkessel parameters Estimation with initial errors in the pressure field To examine the importance of accurate initial states, Cases A and B were performed a second time, now with an initial uniform error of 20 mmHg in the pressure field. We refer to these as the "pressure error" (PE) Cases, A-PE and B-PE. From top to bottom, Fig. 10 shows estimated and true pressure data at outlets 1 (innominate artery, blue lines) and 9 (left common iliac artery, black lines), and estimates of compliance C, distal resistance \(R_{2}\) and proximal resistance \(R_{1}\).
In both cases, estimations were run for two cardiac cycles, stopped, and then started again using as initial estimates the results of the first run. We observed that in Case A-PE the parameter values cannot be recovered, in particular the distal resistance values, \(R_2\). However, the results for Case B-PE show that a single pressure observation is sufficient to enable parameter recovery, even with errors in the initial pressure field. Indeed, Fig. 10 shows that the pressure waveforms in Case B-PE agree closely with the data and that the parameters are recovered. The percentage parameter errors in each case are given in Appendix: Parameter errors with and without pressure data. Subject-specific aorta with synthetic data: evolution of Windkessel parameters during estimation with initial errors introduced into the pressure field. (Left) Case A-PE: estimation run with no pressure observations. (Right) Case B-PE: estimation run with a single pressure observation in the left carotid artery (cross-section III in Fig. 8). From top to bottom: pressure waveforms at outlets 1 (innominate artery, blue lines) and 9 (left common iliac, black lines); Windkessel compliance, C; proximal resistance, \(R_1\); and distal resistance, \(R_2\). The dashed lines denote the truth and the solid lines the estimate. The shaded regions cover the standard deviation of the parameter estimates. In both cases, estimations were run for two cardiac cycles, stopped, and then started again using as initial estimates the results of the first run. Left: the flow observation sites (roman numerals) in the simulated aorta, which coincide with the flow acquisition planes in the PC-MRI scans. Waveforms at outlets IV and VI were not measured directly with MRI, but were assumed to be identical to those measured at sites V and VII, respectively. Parameters at outlets 4 (celiac trunk) and 5 (superior mesenteric artery) were not estimated due to the lack of flow data.
For simulation, three-element Windkessel models are attached at each outlet, indicated by the asterisks. Right: flow waveforms derived from PC-MRI data were used in the estimation problem. Subject-specific aorta with PC-MRI and applanation tonometry data: evolution of Windkessel parameters during estimation for each outlet. Top: compliance C; middle: distal resistance \(R_2\); bottom: proximal resistance \(R_1\). The solid lines denote the estimated values for each parameter; the shaded regions show the range of estimated values plus/minus the estimation error standard deviation, providing a visual representation of the estimation error. Subject-specific aorta with PC-MRI and applanation tonometry data: observations from the model (red lines), computed using the a posteriori estimate during the estimation procedure, compared with the measured data (black lines) at each of the seven observation locations for flow, and the location for pressure (outlet III). Volumetric flow in the descending aorta (left), and the infrarenal abdominal aorta (right) during a forward simulation using the final estimated parameters (see Fig. 12). The waveforms are compared to the PC-MRI data. Subject-specific aorta with PC-MRI and applanation tonometry data We studied the same aortic geometry, now using subject-specific PC-MRI flow and applanation tonometry pressure data. The flow data were recorded in the ascending aorta, the three branches of the aortic arch, the left renal and the left iliac arteries, and two other locations within the aorta. Right renal and right iliac flow data were synthesized as copies of those for the left renal and left iliac. Observation locations in the model were chosen to agree with the PC-MRI acquisition planes, as shown in Fig. 11. Pressure data consisted of the cycle-averaged pressure waveform acquired in the left carotid artery. Details on the data acquisition are given in "Data acquisition" section.
Estimation problem Windkessel parameters were estimated at all but two outlets; the celiac trunk (outlet 6) and the superior mesenteric artery (outlet 7) were omitted due to the absence of local flow data. Thus, estimates of \(R_1^{(i)}\), \(R_2^{(i)}\), and \(C^{(i)}\), \(i\in \{1, \dots , 7\}\), were made; a total of twenty-one parameters. The initial guess for \(R_1^{(i)}\), for each i, was chosen to match the characteristic impedance of the associated 3D outlet [44]. Initial values for \(R_{2}^{(i)}\) were chosen such that \(R_1^{(i)}+R_2^{(i)}\) was the same at all estimated outlets, and such that the mean pressure at the known inflow rate was physiological. The initial guesses for \(C^{(i)}\) were based on an estimate of the total peripheral compliance [45, 46], apportioned to each outlet following previous work [7]. These initial parameter values are given in Table 4. The Windkessel parameters at the celiac trunk and mesenteric artery, which were not subject to estimation, are given in Table 5. The measurement error covariances were chosen to be \(w_P=(75\hbox { mmHg})^2\) for the pressure observations, and \(w_Q^{(i)}=(20\hbox { cc/s})^2\) for the flow observations. The estimation problem was then run. Figure 12 shows the evolution and convergence of the Windkessel parameters over eight cardiac cycles. The parameters are generally stable after the first cycle is completed, with minimal subsequent adjustments. The true parameters in this subject-specific case are unknown, so evaluation of the results must be made in terms of the agreement with the flow and pressure data. Such an evaluation is made in Fig. 13, where we observe that the simulated flow rates initially differ significantly from the data, before rapidly converging, so that the flow waveforms match the overall shape of the data at the various flow observation sites.
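The initialization strategy above can be illustrated with a short sketch. The characteristic-impedance formula \(Z_c = \rho c / A\) is a standard choice consistent with the strategy of [44], but the helper name and all numerical values below are hypothetical, not the paper's actual initial guesses.

```python
import numpy as np

RHO = 1060.0  # blood density, kg/m^3 (typical value)

def characteristic_impedance(c, r):
    # Hypothetical helper: R1 initial guess as the outlet's characteristic
    # impedance, Z_c = rho * c / A, with c the local wave speed, r the radius.
    A = np.pi * r ** 2
    return RHO * c / A

# Illustrative outlet: radius 5 mm, local wave speed 5 m/s.
R1_init = characteristic_impedance(5.0, 0.005)

# R2 chosen so that R1 + R2 reproduces a physiological mean pressure at the
# known mean flow (R1 + R2 = P_mean / Q_mean); values illustrative.
P_mean = 100 * 133.32        # 100 mmHg in Pa
Q_mean = 8.3e-5              # ~5 L/min in m^3/s
R2_init = P_mean / Q_mean - R1_init
```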
The best agreement is observed in the aortic arch branches (outlets I, II, and III) and iliac branches (outlets VI and VII), whereas the biggest discrepancies in the flow waveforms are observed in the renal arteries (outlets IV and V). Figure 13 also compares the predicted left carotid pressure waveform (outlet III) with the subject data. Results revealed that the simulation matches the tonometry data well during the early part of systole, but shows some discrepancy during the diastolic phase. Figure 14 compares predicted and subject-recorded flow waveforms at two locations: one in the descending thoracic aorta, and the other in the infrarenal abdominal aorta. These data were not provided to the filter during parameter estimation, so the observed agreement provides stronger validation of the method. Each predicted waveform was generated using the final estimated parameters, as shown in Table 2. Table 2 Final estimates after eight cardiac cycles for the Windkessel parameters for the subject-specific aortic estimation problem with PC-MRI and applanation tonometry data Reduced order Unscented Kalman Filter with constrained least squares (ROUKF-CLS) In this section, we consider two estimation problems designed to test the numerical performance of the ROUKF-CLS algorithm. These problems utilize simple geometries and flow and pressure assumptions which are not always of high physiological relevance, but which nevertheless provide a solid testbed for our purposes. ROUKF-CLS verification against ROUKF using a Windkessel model We compared the ROUKF and ROUKF-CLS methods by estimating the parameters of a three-element Windkessel. We applied the ROUKF-CLS using a Netlist-implemented Windkessel (i.e. specified using the arbitrary boundary condition framework of Equation ). Conversely, we used the ROUKF to estimate the parameters in an equivalent "hard-coded" Windkessel, implemented in terms of Eq. 3.
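For readers unfamiliar with the model being estimated, a minimal explicit-Euler sketch of a generic three-element (RCR) Windkessel follows. This illustrates the model structure only; it is not the paper's Eq. 3 implementation, and the parameter values are arbitrary.

```python
def windkessel_pressure(Q, dt, R1, R2, C, P_c0=0.0):
    """Generic three-element (RCR) Windkessel, explicit Euler:
    P = R1*Q + P_c,   C * dP_c/dt = Q - P_c / R2.
    Maps an inflow history Q (list of flow values) to outlet pressures."""
    P_c = P_c0
    P = []
    for q in Q:
        P.append(R1 * q + P_c)
        P_c += dt * (q - P_c / R2) / C
    return P

# With constant inflow, the pressure settles at (R1 + R2) * Q.
Q = [10.0] * 20000
P = windkessel_pressure(Q, dt=1e-3, R1=0.1, R2=1.0, C=0.5)
```

With `R1=0.1`, `R2=1.0` and constant `Q=10`, the pressure relaxes toward `(R1 + R2) * Q = 11` with time constant `R2 * C = 0.5` s.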
This test enabled us to determine the equivalence of the estimation algorithms in cases where either method is applicable, and also to verify the correctness of the Netlist filtering implementation. Forward problems In both cases, we considered a deformable vessel, 40 mm in length and 5.9 mm in diameter, with a pulsatile velocity boundary condition of period \(\hbox {T} = 1.1\) s at the inlet. The vessel stiffness and thickness were \(E = 0.7\) MPa and \(h = 0.3\) mm, respectively. In the ROUKF method, a hard-coded Windkessel model was coupled at the outlet, whereas in the ROUKF-CLS method, a Netlist Windkessel model was used. The vessel was discretized into 4157 elements, and the time-step was 1 ms. The simulation was run until full periodicity was achieved, and spatially-averaged pressure and volumetric flow waveforms were recorded on an observation plane at the center of the vessel. Estimation problems In each method, we set the initial value of all parameters to be estimated to 1.0, and attempted to recover the observed pressure and flow data. The measurement error covariances were chosen to be \(w_P=(10\hbox { mmHg})^{2}\) for the pressure observations, and \(w_Q^{(i)}=(1\hbox { cc/s})^{2}\) for the flow observations. Initial estimation error covariances were all set to \(s_i=0.6\). In both cases, the parameter evolution followed similar trajectories, reaching good estimates within one second of simulation time, as shown in Fig. 15. This result strongly supports the equivalence of the two algorithms for estimating Windkessel LPN parameters. Comparison between Windkessel parameter estimation using different filtering methods. a Case A, with ROUKF, using the non-augmented filtering algorithm and the "hard-coded" Windkessel model. b Case B, with ROUKF-CLS, using the constrained least squares augmentation of ROUKF, and the Netlist Windkessel model. Convergence of the parameter values for the three LPN parameters is shown.
Given uniform initialization of all parameters to 1, the true values (dashed lines) are recovered. Shaded regions indicate the standard deviations for the parameter estimates, demonstrating the evolving confidence of the filter in the estimate. ROUKF-CLS applied to a coronary LPN model We next studied the estimation of all five parameters in a single coronary LPN model, whose structure is shown in Fig. 16 [25]. As described in "Estimation algorithm—ROUKF-CLS for arbitrary LPNs" section, the ROUKF-CLS method is required due to the multiple internal states of this LPN. Here, we considered a rigid vessel, 68.9 mm in length and 9.6 mm in diameter. At the inflow, a generic pulsatile velocity condition of period \(\hbox {T}=1.1\) s was imposed, and the coronary LPN was attached at the outflow. A sinusoidal time-varying pressure of period \(\hbox {T}=2\pi \) s was applied to the base of the coronary capacitor \(C_{im}\). This represents a synthetic extravascular compression, and is imposed to demonstrate the numerical efficacy of ROUKF-CLS in the presence of such a load on the LPN. Illustration of the problem setup for filtering the five-element coronary LPN using ROUKF-CLS. The inflow velocities are prescribed, and the outflow face is coupled to a Netlist coronary model. The artificial sinusoidal extravascular compression waveform is shown, applied to the base of capacitor \(C_{im}\) Forward problem The simulation was run using known parameter values and a time-step of 10 ms for a total of 34 cardiac cycles, capturing sufficient variation in the solution due to the different periods of the inflow and the intra-myocardial pressure waveforms. Synthetic data, consisting of spatially-averaged pressure and volumetric flow, was collected at a cross-sectional plane located in the center of the vessel.
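A quick check of why the two differing periods enrich the synthetic data: the inflow period (1.1 s) and the intramyocardial compression period (\(2\pi\) s) are incommensurate, so each of the 34 cardiac cycles begins at a distinct phase of the compression waveform. A small illustrative sketch:

```python
import math

T_inflow = 1.1          # inflow period, s (from the text)
T_im = 2.0 * math.pi    # intramyocardial pressure period, s (from the text)

# Phase of the compression sinusoid at the start of each cardiac cycle.
# Since T_inflow / T_im is irrational, all 34 start phases are distinct,
# so the data samples many different inflow/compression alignments.
phases = [(k * T_inflow) % T_im for k in range(34)]
distinct = len({round(p, 6) for p in phases})
```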
Estimation problem All five coronary LPN parameters were initialized to 1. The measurement error covariances were chosen to be \(w_P=(10\hbox { mmHg})^{2}\) for the pressure observations, and \(w_Q^{(i)}=(1\hbox { cc/s})^{2}\) for the flow observations. Initial estimation error covariances were all set to \(s_i=0.3\). The sequential estimation simulation was run for 34 cardiac cycles. As in Case F of "Estimation of Windkessel parameters from pressure and flow" section, where all three Windkessel parameters were estimated simultaneously, the parameter estimates for the coronary LPN did not converge fully after the first estimation run (results not shown here). Therefore, the filtering procedure was restarted from the final parameter states of the first estimation run. Figure 17 presents the results of this second sequential estimation run; Fig. 17a shows the convergence of the coronary LPN parameters to steady values, and Fig. 17b demonstrates that the observed pressure waveform converges to that of the synthetic target data, as desired. Note the expected sinusoidal variation in the peak and minimum pressures, due to the externally-imposed waveform. Fig. 17a shows that some of the original parameter values are not recovered. In particular, \(R_p\) and \(R_d\) respectively overestimated and underestimated their true values (denoted by the superimposed dashed magenta and red lines in the graph at \(y=0.86\)) by an equal amount. Note that this implies that the recovered value of \(R_p + R_d\) is correct, but that the individual parameters were not identifiable. This will be discussed further in "ROUKF-CLS filtering of a LPN coronary Netlist model" section. a Filtering of the five coronary LPN parameters. The true value of each parameter during synthetic data generation is shown by dashed lines; colors indicate which component each belongs to.
The units of resistances are \(\mathrm {Pa}~\mathrm {s}~\mathrm {mm}^{-3}\), and the units of compliance are \(\mathrm {mm}^{3}~\mathrm {Pa}^{-1}\). Shaded regions show the standard deviations of the filtered estimates. b Target pressure observations at the cross-sectional plane and the sequentially achieved pressure observations during the filtering process. We discuss the most relevant findings of the different results presented earlier, in particular for the subject-specific aortic case (with both synthetic data and also PC-MRI and applanation tonometry data), and also for the tests examining the numerical performance of the ROUKF-CLS algorithm. Subject-specific aorta cases This application example included two main estimation scenarios: (1) parameter estimation assuming error-free initial states (flow and pressure), and (2) parameter estimation assuming errors for the initial state variables. The first scenario is unrealistic for any clinical application. Nevertheless, under the major assumption of error-free initial states, we saw that reasonable results were obtained for the estimated parameters even when no pressure observation was available (Case A in Table 1). Of course, perfect knowledge of the initial pressure state (which was assumed error-free) is itself an important piece of information. Interestingly, Case B (a single pressure observation available, a rather realistic clinical scenario) revealed that the largest error in the estimated compliance corresponded to the vessel with the pressure observation. Compliance estimate errors consistently below 5% were achieved only in Case C, which assumed pressure observations in all branches, a highly unusual clinical scenario. As expected, when errors in the initial state were introduced (see Fig. 10), we observed that the Windkessel parameters could not be recovered if no pressure observation was available (Case A-PE).
A key result of this paper was demonstrating that Windkessel parameters can be recovered for a subject-specific model of the human aorta and its main branches, using PC-MRI flow data for each outlet, together with a single applanation tonometry pressure recording ("Subject-specific aorta with PC-MRI and applanation tonometry data" section). This is of particular interest for clinical applications, as non-invasive flow measurements can be readily acquired, whereas the options for non-invasive pressure measurement are limited and of lower fidelity. The model observation flow waveforms shown in Fig. 13 displayed good agreement with the measured data once the Windkessel parameters had stabilized, demonstrating that the parameter estimates are good. However, for the pressure data, a clear discrepancy in the diastolic decay phase is noticeable. This may be due to the fact that flow and pressure were not acquired simultaneously in the subject. Furthermore, the vessel of interest (left common carotid in this case) is compressed slightly during the applanation tonometry procedure, introducing a wave reflection site that is simply not present in the PC-MRI flow waveforms. One could therefore argue that not only are the flow and pressure waveforms not acquired simultaneously, but that they correspond to different hemodynamic conditions altogether. Therefore, the estimator will give more weight to the flow or the pressure data, depending on the choices made for the measurement error covariances. However, it is clear from Case B-PE in Fig. 10 that when flow and pressure data are acquired simultaneously—at least in a synthetic case—the pressure waveforms can also be closely recovered. For additional validation, we compared the simulation results with subject-specific flow data which was not used during the estimation procedure. In Fig. 14, we observed that, using the final estimated parameters, the simulated flow and the unseen data agreed well.
Note the good match of the more pronounced diastolic backflow in the infrarenal abdominal aorta, compared to the milder diastolic backflow in the descending thoracic aorta. These results provide additional confidence in the value and efficacy of the methods. An interesting observation is that the estimated \(R_1\) parameter in the upper branches of the aorta differed significantly from the initial guesses, which were determined by theoretically matching the characteristic impedances at the outlets. This implies that this common method for assigning \(R_1\), which fundamentally minimizes wave reflections at the outlet, may not always be appropriate. ROUKF-CLS algorithm ROUKF-CLS validation against ROUKF by application to a three-element Windkessel model In order to test the ROUKF-CLS algorithm, we evaluated it on a problem to which both ROUKF and ROUKF-CLS can be applied: that of estimating three-element Windkessel parameters. The two cases, one with ROUKF (Case A) and one with ROUKF-CLS (Case B), differed in two aspects. The first is whether the Windkessel LPN was hard-coded (Case A) or implemented via Netlist (Case B); this aspect is unrelated to filtering. The second aspect is which of ROUKF or ROUKF-CLS was used as the filtering algorithm. The reason for this dual difference is that ROUKF-CLS is designed for filtering arbitrary LPNs; creation of these requires Netlist functionality. The results shown in Fig. 15 thus provide validation of both the Netlist implementation, and the ROUKF-CLS algorithm. In this particular case, it is arguable that the ROUKF-CLS (Fig. 15b) produces better results. This gives us confidence in the efficacy of the formulation. The ROUKF algorithm is applicable to Windkessel models because they are sufficiently simple; ROUKF-CLS is a generalization to a wider class of LPNs. ROUKF-CLS filtering of an LPN coronary Netlist model This case allowed us to validate ROUKF-CLS in a more complex setting.
The parameters of the five-element LPN coronary model were shown to converge to stable values in Fig. 17a. Strikingly, despite the differences from the ground-truth parameter values, the stable values give a good reproduction of the data (Fig. 17b). This perhaps indicates that, with only one pressure and one volumetric flow recording, the LPN parameters are not uniquely identifiable. This identifiability issue may also be reflected by the fact that the standard deviations—the shaded regions which indicate the algorithm's confidence in the parameter estimates—remain relatively large in some cases. Future work should confirm this in a formal parameter identifiability study [16]. The errors in the final estimates for the five LPN components \(R_{a}\), \(C_{a}\), \(R_{p}\), \(C_{im}\) and \(R_{d}\) were 5.8%, 15.7%, 31.8%, 1.1% and 27.1%, respectively. At face value, this indicates some substantial errors, but this would seem at odds with the high-accuracy recovery of the pressure recording shown in Fig. 17b. Looking more carefully, we see that the filter has recovered the correct total resistance; the sum of the estimates is \(R_{a}+R_{p}+R_{d}=2.184\), while the true total resistance is 2.122, giving an overall error of 2.9% in the resistance. For the two more distal resistors, the sum of the estimates is 1.781, and the true sum is 1.740; this is an error of 2.4%. Similarly, for the total compliance, the sum of the estimates is \(C_{a}+C_{im}=0.416\), and the true sum is 0.389, giving an error of 6.9%. This supports the hypothesis that the parameters are not uniquely identifiable with the available data. The impact of constrained least squares Log plot of the \(\ell ^{2}\) norm of the error between the constrained least squares solution and the desired right-hand side for the coronary LPN case, as given by \(\left\| L_{LPN}^{over} {\mathbf {x}} - {\mathbf {b}}\right\| \) (cf. Eq. 18).
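The lumped-sum error arithmetic quoted above can be checked directly (all numbers are taken from the text):

```python
# Direct check of the lumped-parameter error arithmetic: relative error
# (in percent, rounded to one decimal) between estimated and true sums.
def rel_err_pct(est, true):
    return round(100.0 * abs(est - true) / true, 1)

total_R = rel_err_pct(2.184, 2.122)    # R_a + R_p + R_d
distal_R = rel_err_pct(1.781, 1.740)   # R_p + R_d
total_C = rel_err_pct(0.416, 0.389)    # C_a + C_im
```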
The data is presented over time, demonstrating how the magnitude of the perturbations required to enforce LPN consistency reduces over time, as the parameters converge. The actual parameter values over time for this simulation are shown in Fig. 17. Figure 18 shows the \(\ell ^{2}\) error (Eq. 18) between the solution to the CLS system and the overdetermined target for the LPN coronary model results of Fig. 17. This measures the extent to which CLS is adjusting the system to maintain LPN consistency as the parameters are adjusted. As the parameters converge, the CLS system is solved nearly exactly, and therefore little adjustment of the system takes place. Upper panel: the absolute error in each of the three Windkessel parameters after 4 s of estimation simulation, at different values of initial parameter estimate covariances \(s_{i}\) (set uniformly for all three parameters). Lower three images: the parameter convergence history with the initial parameter estimate variances set to 0.1, 1.0 and 2.0 (left to right). See "The choice of initial covariance values for the estimated parameters" section for further discussion. An empirical strategy for resolving convergence failures In general, we would expect that good initial guesses for the values of most parameters are possible, based on an understanding of the parameters in models of similar patients. When initial guesses are poor, there is a chance that convergence will fail. However, as seen in "Estimation of Windkessel parameters from pressure and flow" and "ROUKF-CLS applied to a coronary LPN model" sections, we found that poor convergence could be resolved by restarting the estimation simulation, using the final estimated parameters from the previous simulation. We speculate that this resolution is due to the following.
Whilst the filter can adjust the parameters, it is constrained by the time history of the simulation, and—regardless of further adjustments to the parameters—it may have reached a state (i.e. pressure, velocity and wall displacement in the 3D domain) too far from the expected pressure and flow limit cycle to ever converge back to it. Restarting the simulation resets the state, and so permits convergence of the parameters. This could be done systematically, for example by choosing to always reset the state, keep the parameters, and restart the estimation at the end of every cardiac cycle. The choice of initial covariance values for the estimated parameters We generally used initial covariance values of \(s_{i}=0.2\) for the parameter estimates—with some variation between cases—regardless of the parameter or its order of magnitude. The values chosen gave good convergence, whether assessed by the recovery of a known synthetic value, or in terms of recovery of the data itself with the converged parameters. To explore this in more detail, Fig. 19 shows the impact of different initial parameter covariance choices on the parameter estimates recovered after 4 s of estimation, for the same problem as was presented in "ROUKF-CLS verification against ROUKF using a Windkessel model" section using the ROUKF-CLS method. To evaluate longer-term behaviour, these estimation simulations were run for longer, and the results demonstrate that for some variance choices, convergence is slower. The minimal variation in the ultimate results (see Fig. 19, lower plots) indicates that the initial variance choice is not of critical importance, and that the values used in the present work represent reasonable choices. The impact of the choice of measurement error covariance The measurement error covariances for the pressure and flow data, \(w_P^{(i)}\) and \(w_Q^{(i)}\) respectively, are chosen according to how much we trust the measurements.
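This trust trade-off can be illustrated with a deliberately simple scalar filter (a toy, not the ROUKF itself): a constant parameter is observed directly and without noise, so a smaller asserted measurement variance gives a larger Kalman gain and faster convergence. All values here are illustrative.

```python
def scalar_filter_error(w, theta_true=1.0, theta0=0.0, s0=0.6, n_steps=10):
    """Toy scalar Kalman update for a constant parameter observed directly
    (noise-free) with asserted measurement variance w. Returns the absolute
    estimation error after n_steps updates."""
    theta, s = theta0, s0
    for _ in range(n_steps):
        k = s / (s + w)                     # Kalman gain
        theta += k * (theta_true - theta)   # measurement update
        s *= (1.0 - k)                      # posterior variance
    return abs(theta - theta_true)

# Scale a baseline variance of 0.01 by factors 0.1, 1.0, 10.0, mirroring
# the multiplicative covariance-scaling study described in the text.
errors = {scale: scalar_filter_error(w=0.01 * scale) for scale in (0.1, 1.0, 10.0)}
```

Smaller `w` (more trust in the data) yields a smaller error after the same number of updates, consistent with the convergence behaviour reported for the scaled-covariance experiments.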
In the present work, we chose these such that the square root of the variance was around 10% of the total variation in the measured quantity. To explore the impact of this choice on the convergence of the parameter estimates, we multiplicatively scaled the covariances by various values between 0.1 and 10.0, running an estimation simulation for each. The ROUKF-CLS Windkessel case of "ROUKF-CLS verification against ROUKF using a Windkessel model" section is used as a baseline (scaling factor = 1.0). The errors in the parameter estimates after 4 s of estimation are plotted against the scaling factor in the upper panel of Fig. 20. Three of the resulting parameter convergence plots are shown in the lower portion of the figure. We observe that increasing the trust in the data (reducing the variance that we assert for the data) increases the speed of convergence. This is unsurprising in this case, as the measurement data is synthetic. We expect that optimisation of the measurement covariance for clinical data would be dependent on the particular data source, and likely needs to be determined once for each type of data source and device model. Upper panel: the absolute error in each of the three Windkessel parameters after 4 s of estimation simulation, at different scalings of measurement (pressure and flow) covariances (scaled multiplicatively and uniformly for both). Baseline values are as given in "ROUKF-CLS verification against ROUKF using a Windkessel model" section. Lower three images: the parameter convergence history with the measurement variances, \(w_P^{(i)}\) and \(w_Q^{(i)}\), scaled by 0.1, 4.8 and 10.0 (left to right). See "The impact of the choice of measurement error covariance" section for further discussion. Computational cost The total computational cost of the ROUKF procedure scales linearly with the number, \(N\), of parameters to be estimated; specifically, the cost is \(N+1\) times the cost of a forward simulation.
It is easily parallelized: once the \(N+1\) particles are generated for each time-step, they represent \(N+1\) independent simulations. An additional strategy for reducing the computational cost is to use a low-fidelity model (a coarse simulation mesh, or a reduced model such as 1D Navier–Stokes) to obtain reasonable parameter estimates rapidly, which can then be fine-tuned via short simulations using the high-resolution model.

Conclusions

This work studied the application of Kalman filtering techniques for the automatic determination of parameters in computational hemodynamics models. The three main contributions of this paper are:

1. A demonstration of automatic determination of boundary condition parameters in a subject-specific, 3D FSI model of the human aorta and its main branches. An efficient sequential filtering technique, the ROUKF, was used to assimilate blood flow and pressure waveform data. We found that successful estimation of all three Windkessel parameters at every outlet is possible when the data consists of one flow observation at each outlet together with at least a single pressure observation. A key aspect is that the data used can be readily recorded in the clinic. The most powerful demonstration of this is given in Fig. 13.

2. We demonstrated that the ROUKF can also be used to simultaneously estimate regional wall material properties and Windkessel parameters, by using a distance operator to incorporate wall motion data, in conjunction with a pressure observation. The key results were given in Fig. 6.

3. For cases where the boundary condition models are more complex than three-element Windkessel LPNs, we introduced a modified filtering procedure, the ROUKF-CLS, which augments the ROUKF to enable estimation of parameters in arbitrary LPN boundary condition circuits. We demonstrated this in the case of a five-element coronary LPN boundary condition. The key result here was shown in Fig. 15.
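The particle independence noted above suggests a simple parallelisation pattern for each filter time-step. The sketch below uses a stand-in forward model; `forward_model` is a hypothetical placeholder, not the CRIMSON solver interface.

```python
from concurrent.futures import ThreadPoolExecutor

def forward_model(particle):
    """Stand-in for one forward simulation advanced with this particle's
    parameter values (hypothetical placeholder, not the CRIMSON solver)."""
    return sum(p * p for p in particle)

def propagate_particles(particles):
    """Propagate the N+1 sigma-point particles of one filter time-step;
    since the particles are independent, they can run concurrently."""
    with ThreadPoolExecutor() as pool:
        # Executor.map preserves input ordering in its results
        return list(pool.map(forward_model, particles))
```

In a real deployment each call would launch a full forward hemodynamics step, so the wall-clock cost of a filter time-step approaches that of a single forward simulation given \(N+1\) workers.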
ROUKF-CLS greatly expands the class of computational hemodynamics problems to which the ROUKF can be applied. An implementation of ROUKF-CLS was developed and tested as part of the Netlist arbitrary boundary condition design system in the computational hemodynamics software package CRIMSON. For a normal (non-filtering) simulation, this system could be immediately used as part of an iterative scheme to determine the coefficients \({\tilde{R}}\) and \({\tilde{S}}\) for Equation . This is the core of the Netlist boundary condition system. This is why ROUKF-CLS is not required for a three-element Windkessel; it has exactly one such row, so requires no CLS augmentation.

List of symbols

\(\vec {v}\) : Fluid velocity
\(\vec {u}\) : Solid displacement
\(\vec {\dot{v}}\) : Time derivative of fluid velocity
\(p\) : Fluid pressure
\(\rho _\text {f}\) : Fluid density
\(\rho _\text {s}\) : Solid density
\(\mu \) : Fluid dynamic viscosity
\(\vec {w} \) : Test function for the momentum conservation equation
\(q\) : Test function for the continuity equation
\(\Omega ^\text {f}\) : Fluid domain
\(\Gamma _{\text {in}}\) : Inflow boundary
\(\Gamma _{\text {out}}\) : Outflow boundary
\(\Gamma _{\text {w}}\) : Wall boundary/interface
\(\vec {n}_\text {f}\) : Outward normal vector on the fluid domain boundary
\(\varvec{\tau }_\text {f}\) : Fluid viscous stress
\(\varvec{\sigma }_\text {s}\) : Solid domain Cauchy stress
\(\varvec{\tilde{K}}\) : Tensor of material parameters
\(\varvec{P}\) : Wall pre-stress tensor
\(\varvec{\epsilon }\) : Wall strain tensor
\(\mathbf {E}\) : Linearized wall stiffness
\(\mathbf {k}\) : Wall transverse shear factor
\(\nu \) : Poisson's ratio
\(\mathcal {S}\) : Solution space for velocity
\(\mathcal {P}\) : Solution space for pressure
\(\mathcal {W}\) : Momentum test function space
\(\hat{X}\) : State estimate
\(\hat{\theta }\) : Estimation error covariance
\(W\) : Measurement error covariance
\(-\) : Denotes a priori estimate
\(+\) : Denotes a posteriori estimate
\(Z\) :
The measurement term in Kalman filtering
\(L\) : ROUKF covariance factor L
\(U\) : ROUKF covariance factor U
\(H\) : Observation operator
\(A\) : Model forward operator
The innovation term in Kalman filtering
\(\{HL\}\) : Observed state sensitivity
\(N\) : Size of the reduced state (i.e. the parameters)
\(\sigma \) : Sigma point vector
\([\sigma _{(*)}]\) : Matrix whose columns are the sigma point vectors
\(\alpha _\text {s}\) : Sigma point weight
\(D_\alpha \) : Diagonal matrix of sigma point weights
\(v\) : Fluid velocity nodal solution vector
\(\dot{v}\) : Fluid acceleration nodal solution vector
Pressure numerical nodal solution vector
Displacement numerical nodal solution vector

References

Grinberg L, Karniadakis GE. Outflow boundary conditions for arterial networks with multiple outlets. Ann Biomed Eng. 2008;36(9):1496–514.
Troianowski G, Taylor CA, Feinstein JA, Vignon-Clementel IE. Three-dimensional simulations in Glenn patients: clinically based boundary conditions, hemodynamic results and sensitivity to input data. J Biomech Eng. 2011;133(11):111006.
Spilker RL, Taylor CA. Tuning multidomain hemodynamic simulations to match physiological measurements. Ann Biomed Eng. 2010;38(8):2635–48.
Blanco PJ, Watanabe SM, Feijóo RA. Identification of vascular territory resistances in one-dimensional hemodynamics simulations. J Biomech. 2012;45(12):2066–73.
Ismail M, Wall WA, Gee MW. Adjoint-based inverse analysis of Windkessel parameters for patient-specific vascular models. J Comput Phys. 2013;244(2013):113–30.
Alimohammadi M, Agu O, Balabani S, Díaz-Zuccarini V. Development of a patient-specific simulation tool to analyse aortic dissections: assessment of mixed patient-specific flow and pressure boundary conditions. Med Eng Phys. 2014;36(3):275–84.
Xiao N, Alastruey J, Figueroa CA. A systematic comparison between 1-D and 3-D hemodynamics in compliant arterial models. Int J Numer Meth Biomed Eng. 2014;30(2):204–31.
Perdikaris P, Karniadakis GE. Model inversion via multi-fidelity Bayesian optimization: a new paradigm for parameter estimation in haemodynamics, and beyond. J R Soc Interface. 2016;13:
DeVault K, Gremaud P, Novak V. Blood flow in the circle of Willis: modeling and calibration. Multiscale Model Sim. 2008;4006:888–909.
Lal R, Mohammadi B, Nicoud F. Data assimilation for identification of cardiovascular network characteristics. Int J Numer Meth Biomed Eng. 2016;33:2824.
Pant S, Fabrèges B, Gerbeau J-F, Vignon-Clementel I. A multiscale filtering-based parameter estimation method for patient-specific coarctation simulations in rest and exercise. In: STACOM, MICCAI, Nagoya, Japan; 2013.
Pant S, Fabrèges B, Gerbeau J-F, Vignon-Clementel I. A methodological paradigm for patient-specific multi-scale CFD simulations: from clinical measurements to parameter estimates for individual analysis.
Lombardi D. Inverse problems in 1D hemodynamics on systemic networks: a sequential approach.
Müller LO, Caiazzo A, Blanco PJ. Reduced-order unscented Kalman filter with observations in the frequency domain: application to computational hemodynamics. IEEE Trans Biomed Eng. 2019;66:1269–76.
Moireau P, Chapelle D. Reduced-order Unscented Kalman Filtering with application to parameter identification in large-dimensional systems. ESAIM: Contr Op Ca Va. 2010;17(2):380–405.
Caiazzo A, Caforio F, Montecinos G, Müller LO, Blanco PJ, Toro EF. Assessment of reduced-order unscented Kalman filter for parameter identification in 1-dimensional blood flow models using experimental data. Int J Num Method Biomed Eng. 2017;33:2843.
CRIMSON: the Cardiovascular Integrated Modelling and Simulation software package. http://www.crimson.software (2020).
Chapelle D, Fragu M, Mallet V, Moireau P. Fundamental principles of data assimilation underlying the Verdandi library: applications to biophysical model personalization within euHeart. Med Biol Eng Comput. 2012;51(11):1221–33.
Vignon-Clementel IE, Figueroa CA, Jansen KE, Taylor CA. Outflow boundary conditions for three-dimensional finite element modeling of blood flow and pressure in arteries. Comput Method Appl M. 2006;195(29–32):3776–96.
Vignon-Clementel IE, Figueroa CA, Jansen KE, Taylor CA. Outflow boundary conditions for 3D simulations of non-periodic blood flow and pressure fields in deformable arteries. Comput Method Appl M. 2010;13(5):625–40.
Figueroa CA, Vignon-Clementel IE, Jansen KE, Hughes TJR, Taylor CA. A coupled momentum method for modeling blood flow in three-dimensional deformable arteries. Comput Method Appl M. 2006;195(41–43):5685–706.
Moireau P, Xiao N, Astorino M, Figueroa CA, Chapelle D, Taylor CA, Gerbeau J-F. External tissue support and fluid structure simulation in blood flows. Biomech Model Mechanobiol. 2011;11(1–2):1–8.
Figueroa CA, Baek S, Taylor CA, Humphrey JD. A computational framework for fluid-solid-growth modeling in cardiovascular simulations. Comput Method Appl M. 2009;198(45–46):3583–602.
Baek S, Gleason RL, Rajagopal KR, Humphrey JD. Theory of small on large: potential utility in computations of fluid-solid interactions in arteries. Comput Method Appl M. 2007;196(31–32):3070–8.
Arthurs CJ, Lau KD, Asrress K, Redwood SR, Figueroa CA. A mathematical model of coronary blood flow control: simulation of patient-specific three-dimensional hemodynamics during exercise. 2016;310:1242–58. https://doi.org/10.1152/ajpheart.00517.2015.
Silva Vieira M, Arthurs CJ, Hussain T, Razavi R, Figueroa CA. Patient-specific modeling of right coronary circulation vulnerability post-liver transplant in Alagille's syndrome. PLoS ONE. 2018;13:0205829. https://doi.org/10.1371/journal.pone.0205829.
Arthurs CJ, Agarwal P, John AV, Dorfman AL, Grifka RG, Figueroa CA. Reproducing patient-specific hemodynamics in the Blalock-Taussig circulation using a flexible multi-domain simulation framework: applications for optimal shunt design. Front Pediatr. 2017;5:78. https://doi.org/10.3389/fped.2017.00078.
Brooks AN, Hughes TJR. Streamline upwind/Petrov-Galerkin formulations for convection dominated flows with particular emphasis on the incompressible Navier-Stokes equations. Comput Method Appl M. 1982;32(1–3):199–259.
Franca LP, Frey SL. Stabilized finite element methods: II. The incompressible Navier-Stokes equations. Comput Method Appl M. 1992;99(2–3):209–33.
Taylor CA, Hughes TJR, Zarins CK. Finite element modeling of blood flow in arteries. Comput Method Appl M. 1998;7825(97):
Whiting CH, Jansen KE. A stabilized finite element method for the incompressible Navier-Stokes equations using a hierarchical basis. Int J Numer Meth Fl. 2001;35(1):93–116.
Chung J, Hulbert GM. A time integration algorithm for structural dynamics with improved numerical dissipation: the generalized-alpha method. J Appl Mech. 1993;6(June):371–5.
Jansen KE, Whiting CH, Hulbert GM. A generalized-\(\alpha \) method for integrating the filtered Navier-Stokes equations with a stabilized finite element method. Comput Method Appl M. 2001;190(31):305–19.
Julier SJ, Uhlmann JK, Durrant-Whyte H. A new approach for filtering nonlinear systems. P Amer Contr Conf. 1995;3(3):1628–32.
Julier SJ, Uhlmann JK, Durrant-Whyte H. A new method for the nonlinear transformation of means and covariances in filters and estimators. IEEE T Automat Contr. 2000;45(3):477–82.
Julier SJ. The spherical simplex unscented transformation. P Amer Contr Conf. 2003;3:2430–4.
Simon D. Kalman filtering with state constraints: a survey of linear and nonlinear algorithms. IET Control Theory Appl. 2010;4:1303–18.
Simon D, Li Chia T. Kalman filtering with state equality constraints. IEEE T Aero Electronic Sys. 2002;38:128–36.
Moireau P, Bertoglio C, Xiao N, Figueroa CA, Taylor CA, Chapelle D, Gerbeau J-F. Sequential identification of boundary support parameters in a fluid-structure vascular model using patient image data. Biomech Model Mechan. 2013.
Moireau P, Chapelle D, Le Tallec P. Filtering for distributed mechanical systems using position measurements: perspectives in medical imaging. Inverse Probl. 2009;25(3):035010.
Chabiniok R, Moireau P, Lesault P-F, Rahmouni A, Deux J-F, Chapelle D. Estimation of tissue contractility from cardiac cine-MRI using a biomechanical heart model. Biomech Model Mechan. 2012;11(5):609–30.
Reymond P, Merenda F, Perren F, Rüfenacht D, Stergiopulos N. Validation of a one-dimensional model of the systemic arterial tree. Am J Physiol-heart C. 2009;297(1):208–22.
Alastruey J, Xiao N, Fok H, Schaeffter T, Figueroa CA. On the impact of modelling assumptions in multi-scale, subject-specific models of aortic haemodynamics. J R Soc Interface. 2016;13(119):20160073. https://doi.org/10.1098/rsif.2016.0073.
Alastruey J, Parker KH, Peiró J, Sherwin SJ. Lumped parameter outflow models for 1-D blood flow simulations: effect on pulse waves and parameter estimation. Commun Comput Phys. 2008;4:317–36.
Alastruey J, Parker KH, Peiró J, Sherwin SJ. Analysing the pattern of pulse waves in arterial networks: a time-domain study. J Eng Math. 2009;64(4):331–51.
Alastruey J, Passerini T, Formaggia L, Peiró J. Physical determining factors of the arterial pulse waveform: theoretical analysis and calculation using the 1-D formulation. J Eng Math. 2012;77(1):19–37.
Swalen MJP, Khir AW. Resolving the time lag between pressure and flow for the determination of local wave speed in elastic tubes and arteries. J Biomech. 2009;42(10):1574–7.

Acknowledgements

The authors gratefully acknowledge Dr. Jordi Alastruey for discussions related to boundary conditions, Dr. Christoph Kolbitsch for his invaluable assistance in acquiring the MR data, Dr. Henry Fok for sharing his experience in applanation tonometry, and Simmetrix, Inc. (http://www.simmetrix.com/) for their MeshSim mesh generation library.
The authors gratefully acknowledge support from the United States National Institutes of Health (NIH) grants R01 HL105297 and U01 HL135842, the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013)/ERC Grant Agreement No. 307532, the United Kingdom Department of Health via the National Institute for Health Research (NIHR) comprehensive Biomedical Research Centre award to Guy's & St. Thomas' NHS Foundation Trust in partnership with King's College London and King's College Hospital NHS Foundation Trust, and funding from the Wellcome Trust Institutional Strategic Support Fund grant to King's College London under Wellcome Trust grant [204823/Z/16/Z].

Christopher J. Arthurs and Nan Xiao contributed equally to this work.

Author affiliations:
Dept. of Biomedical Engineering, King's College London, London, UK (Christopher J. Arthurs & Nan Xiao)
Inria, Inria Saclay-Ile de France, 91128, Palaiseau, France (Philippe Moireau)
LMS, Ecole Polytechnique, CNRS, Institut Polytechnique de Paris, 91128, Palaiseau, France
Physikalisch-Technische Bundesanstalt, Berlin, Germany (Tobias Schaeffter)
Technical University Berlin, Berlin, Germany
Depts. of Surgery and Biomedical Engineering, University of Michigan, 2800 Plymouth Rd, Ann Arbor, MI, 48109, USA (C. Alberto Figueroa)

CA and NX contributed to the conception, design, analysis, interpretation, software engineering, and the drafting and revising of the manuscript. PM and AF contributed to the conception, design, analysis, and the drafting and revising of the manuscript. TS contributed to the data acquisition, and the drafting and revising of the manuscript. All authors read and approved the final manuscript.

Correspondence to Christopher J. Arthurs or Nan Xiao or C. Alberto Figueroa.

CA and AF have an interest in CRIMSON Technologies LLC. There are no further competing interests.
MRI data acquisition and geometric segmentation

A free-breathing SSFP sequence with fat-suppression was used to acquire anatomy images of the aorta covering its full extent from the arch down to the aortic bifurcation. Two separate image volumes were obtained in parasagittal orientation, one above and one below the diaphragm. ECG-triggering and respiratory navigation were employed to avoid motion artifacts and to ensure a good spatial resolution of \(1.2 \times 1.2 \times 1.8\) mm. To extract the subject-specific geometry of the aorta, the above-diaphragm and below-diaphragm images were segmented separately to generate 2D contours perpendicular to the centerlines of the aorta and its first generation of branches. To spatially align the two sets of contours, a rigid body transformation was defined using visible elements common to both images as fiducial markers: the center of the inferior ECG electrodes (connection fluid) and the centers of two intervertebral discs (located between the thoracic vertebrae T11 and T10, and between T10 and T9) were used to generate an orthonormal set of vectors for each image volume (Fig. 21) and thus a transformation for the below-diaphragm contours. The combined set of 2D contours was used to generate a 3D parametric surface, which in turn was used to create the finite element meshes.

Fig. 21. a The upper thoracic image, containing the aortic arch and neck vessels. b The lower thoracic image, containing the aortic bifurcation. Two-dimensional segmentations (red) perpendicular to vessel centerlines. Normal vectors (cyan), common to both images, were defined using the chest ECG electrode and two intervertebral discs as fiducial markers. c The 3D geometry was created by interpolating a parametric surface from the 2D segmentations.

PC-MRI data was obtained with a temporal resolution of 35 ms and spatial resolution of \(1.04 \times 1.04\) mm. The acquisition planes were placed orthogonal to the direction of flow.
Velocity encoding was adapted to the maximum velocity in the different vessels (80–200 cm/s) to ensure accurate measurement of velocity profiles. Static tissue phase correction was employed to correct for potential baseline errors using the vendor's software. The area in each individual vessel segment was computed and the phase images were processed to obtain time-resolved flow waveforms using the commercial software package GTFlow (GyroTools LLC). A cubic spline interpolation was then performed to resample the flow data with a 2 ms resolution.

Pressure data acquisition

Pressure data was obtained at the carotid arteries using applanation tonometry. The sampling rate was 128 Hz and the pressure measurements were calibrated using systolic and diastolic pressures obtained from a pressure cuff (Omron Healthcare). The pressure and PC-MRI flow data were not acquired simultaneously; however, the phase lag between pressure and flow was derived by time-shifting the flow and pressure waveforms relative to each other so that a linear relationship between pressure and flow during early systole is achieved [47].

Sigma point generation

The \(N+1\) spherical simplex sigma point directions \(\sigma \) are generated recursively as follows, using the procedure given by [36].
We initialize the \(\sigma \) vectors as follows:
$$\begin{aligned} \sigma _{(1)}^{1} = \frac{-1}{\sqrt{2\alpha _{\mathrm{s}}^{(1)}}}, \qquad \sigma _{(2)}^{1} = \frac{1}{\sqrt{2\alpha _{\mathrm{s}}^{(2)}}}, \end{aligned}$$
where the sigma point weights, \(\alpha _{\mathrm{s}}^{(i)}\), are given by:
$$\begin{aligned} \alpha _{\mathrm{s}}^{(i)} = \frac{1}{N+1}, \quad i = 1,\ldots ,N+1. \end{aligned}$$
Then, for \(j=2,\ldots ,N\) and \(1 \le i \le j+1\),
$$\begin{aligned} \sigma _{(i)}^{j} = \left\{ \begin{array}{ll} \left[ \begin{array}{c} \sigma _{(i)}^{j-1} \\ \frac{-1}{\sqrt{j(j+1)\alpha _{\mathrm{s}}^{(i)}}} \end{array} \right] , &{} i = 1,\ldots ,j, \\ \left[ \begin{array}{c} 0_{j-1} \\ \frac{j}{\sqrt{j(j+1)\alpha _{\mathrm{s}}^{(i)}}} \end{array} \right] , &{} i = j + 1. \end{array} \right. \end{aligned}$$
Here, \(0_{j}\) is a column vector containing \(j\) zeros. Note that the \(j=1\) initialisation uses \(1/\sqrt{2\alpha _{\mathrm{s}}^{(i)}}\), so that the weighted covariance of the sigma points is the identity, consistent with the square-root form of the \(j \ge 2\) terms.

Measurement error covariance matrix

In the examples presented, the measurement error covariance matrix at time step \(k\), \(W_k\), was assumed to be a constant diagonal matrix of the form:
$$\begin{aligned} W_k = \mathrm {diag}\left( w_Q^{(1)}, \ldots , w_Q^{(n_\text {obs-Q})}, w_P^{(1)}, \ldots , w_P^{(n_\text {obs-P})} \right) , \end{aligned}$$
where \(w_Q^{(i)}\) corresponds to the i-th flow observation location and \(w_P^{(j)}\) corresponds to the j-th pressure observation location. Diagonal entries for other observations, such as wall motion, are included as necessary.

Parameter errors with and without pressure data

Table 3 compares the percentage errors in each estimated parameter in the case where an initial pressure error is present in the model, and in the presence (Case A-PE) or absence (Case B-PE) of pressure data recorded at a single location on the vasculature. The results are greatly superior in the presence of the pressure measurement.
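The recursion can be transcribed directly. The sketch below (Python) assumes equal weights \(\alpha = 1/(N+1)\) and the square-root form \(1/\sqrt{2\alpha }\) for the \(j=1\) initialisation, which makes the weighted mean of the sigma points zero and their weighted covariance the identity.

```python
import math

def spherical_simplex_sigma_points(n):
    """Spherical simplex sigma point directions for a reduced state of size n,
    following Julier's recursion with equal weights alpha = 1/(n+1).
    Returns the n+1 direction vectors and the common weight."""
    alpha = 1.0 / (n + 1)
    # j = 1 initialisation: two scalar points
    sigma = [[-1.0 / math.sqrt(2.0 * alpha)], [1.0 / math.sqrt(2.0 * alpha)]]
    for j in range(2, n + 1):
        c = 1.0 / math.sqrt(j * (j + 1) * alpha)
        # points 1..j keep their previous coordinates, extended by -c
        new = [s + [-c] for s in sigma]
        # point j+1 is j-1 zeros followed by a single balancing entry j*c
        new.append([0.0] * (j - 1) + [j * c])
        sigma = new
    return sigma, alpha
```

For \(n = 3\) this returns four 3-vectors whose weighted mean is the zero vector and whose weighted covariance is the identity, the properties the unscented transform requires of its sigma point set.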
This study was described in "Subject-specific aorta with synthetic data" section.

Table 3: Estimated Windkessel parameters (at the end of two cardiac cycles) in the cases with an initial error in the pressure field (corresponding to Fig. 10).

Initial parameter guesses

Table 4: Initial guesses for the Windkessel parameters for the full aorta estimation problem with real data.
Table 5: Windkessel parameters that are fixed and not estimated in the full aorta estimation problem with real data.

Tables 4 and 5 list the initial parameter values used in the aorta estimation with real data, presented in "Subject-specific aorta with PC-MRI and applanation tonometry data" section.

Arthurs, C.J., Xiao, N., Moireau, P. et al. A flexible framework for sequential estimation of model parameters in computational hemodynamics. Adv. Model. and Simul. in Eng. Sci. 7, 48 (2020). https://doi.org/10.1186/s40323-020-00186-x

Accepted: 06 November 2020

Keywords: Kalman filtering, computational hemodynamics, boundary conditions, patient-specific modeling
Machine that uses steam to rotate a shaft

The rotor of a modern steam turbine used in a power plant

A steam turbine is a device that extracts thermal energy from pressurized steam and uses it to do mechanical work on a rotating output shaft. Its modern manifestation was invented by Charles Parsons in 1884.[1][2] The steam turbine is a form of heat engine that derives much of its improvement in thermodynamic efficiency from the use of multiple stages in the expansion of the steam, which results in a closer approach to the ideal reversible expansion process. Because the turbine generates rotary motion, it is particularly suited to be used to drive an electrical generator: about 85% of all electricity generation in the United States in the year 2014 was by use of steam turbines.[3] A steam turbine connected to an electric generator is called a turbo generator.

A 250 kW industrial steam turbine from 1910 (right) directly linked to a generator (left).

The first device that may be classified as a reaction steam turbine was little more than a toy, the classic Aeolipile, described in the 1st century by Hero of Alexandria in Roman Egypt.[4][5] In 1551, Taqi al-Din in Ottoman Egypt described a steam turbine with the practical application of rotating a spit. Steam turbines were also described by the Italian Giovanni Branca (1629)[6] and John Wilkins in England (1648).[7][8] The devices described by Taqi al-Din and Wilkins are today known as steam jacks. In 1672 an impulse steam turbine driven car was designed by Ferdinand Verbiest. A more modern version of this car was produced some time in the late 18th century by an unknown German mechanic.
In 1775 at Soho James Watt designed a reaction turbine that was put to work there.[9] In 1827 the Frenchmen Real and Pichon patented and constructed a compound impulse turbine.[10]

The modern steam turbine was invented in 1884 by Charles Parsons, whose first model was connected to a dynamo that generated 7.5 kilowatts (10.1 hp) of electricity.[11] The invention of Parsons' steam turbine made cheap and plentiful electricity possible and revolutionized marine transport and naval warfare.[12] Parsons' design was a reaction type. His patent was licensed and the turbine scaled up shortly after by an American, George Westinghouse. The Parsons turbine also turned out to be easy to scale up. Parsons had the satisfaction of seeing his invention adopted for all major world power stations, and the size of generators had increased from his first 7.5 kilowatts (10.1 hp) set up to units of 50,000 kilowatts (67,000 hp) capacity. Within Parsons' lifetime, the generating capacity of a unit was scaled up by about 10,000 times,[13] and the total output from turbo-generators constructed by his firm C. A. Parsons and Company and by their licensees, for land purposes alone, had exceeded thirty million horse-power.[11]

Other variations of turbines have been developed that work effectively with steam. The de Laval turbine (invented by Gustaf de Laval) accelerated the steam to full speed before running it against a turbine blade. De Laval's impulse turbine is simpler and less expensive and does not need to be pressure-proof. It can operate with any pressure of steam, but is considerably less efficient. Auguste Rateau developed a pressure compounded impulse turbine using the de Laval principle as early as 1896,[14] obtained a US patent in 1903, and applied the turbine to a French torpedo boat in 1904. He taught at the École des mines de Saint-Étienne for a decade until 1897, and later founded a successful company that was incorporated into the Alstom firm after his death.
One of the founders of the modern theory of steam and gas turbines was Aurel Stodola, a Slovak physicist and engineer and professor at the Swiss Polytechnical Institute (now ETH) in Zurich. His work Die Dampfturbinen und ihre Aussichten als Wärmekraftmaschinen (English: The Steam Turbine and its Prospective Use as a Heat Engine) was published in Berlin in 1903. A further book, Dampf und Gas-Turbinen (English: Steam and Gas Turbines), was published in 1922.[15]

The Brown-Curtis turbine, an impulse type, which had been originally developed and patented by the U.S. company International Curtis Marine Turbine Company, was developed in the 1900s in conjunction with John Brown & Company. It was used in John Brown-engined merchant ships and warships, including liners and Royal Navy warships.

A steam turbine without its top cover

The present-day manufacturing industry for steam turbines is dominated by Chinese power equipment makers. Harbin Electric, Shanghai Electric, and Dongfang Electric, the top three power equipment makers in China, collectively hold a majority stake in the worldwide market share for steam turbines in 2009-10 according to Platts.[16] Other manufacturers with minor market share include Bharat Heavy Electricals Limited, Siemens, Alstom, General Electric, Doosan Škoda Power, Mitsubishi Heavy Industries, and Toshiba.[16] The consulting firm Frost & Sullivan projects that manufacturing of steam turbines will become more consolidated by 2020 as Chinese power manufacturers win increasing business outside of China.[17]

Steam turbines are made in a variety of sizes ranging from small <0.75 kW (<1 hp) units (rare) used as mechanical drives for pumps, compressors and other shaft driven equipment, to 1,500 MW (2,000,000 hp) turbines used to generate electricity. There are several classifications for modern steam turbines.
Blade and stage design

Schematic diagram outlining the difference between an impulse and a 50% reaction turbine

Turbine blades are of two basic types, blades and nozzles. Blades move entirely due to the impact of steam on them and their profiles do not converge. This results in a steam velocity drop and essentially no pressure drop as steam moves through the blades. A turbine composed of blades alternating with fixed nozzles is called an impulse turbine, Curtis turbine, Rateau turbine, or Brown-Curtis turbine. Nozzles appear similar to blades, but their profiles converge near the exit. This results in a steam pressure drop and velocity increase as steam moves through the nozzles. Nozzles move due to both the impact of steam on them and the reaction due to the high-velocity steam at the exit. A turbine composed of moving nozzles alternating with fixed nozzles is called a reaction turbine or Parsons turbine. Except for low-power applications, turbine blades are arranged in multiple stages in series, called compounding, which greatly improves efficiency at low speeds.[18] A reaction stage is a row of fixed nozzles followed by a row of moving nozzles. Multiple reaction stages divide the pressure drop between the steam inlet and exhaust into numerous small drops, resulting in a pressure-compounded turbine. Impulse stages may be either pressure-compounded, velocity-compounded, or pressure-velocity compounded. A pressure-compounded impulse stage is a row of fixed nozzles followed by a row of moving blades, with multiple stages for compounding. This is also known as a Rateau turbine, after its inventor. A velocity-compounded impulse stage (invented by Curtis and also called a "Curtis wheel") is a row of fixed nozzles followed by two or more rows of moving blades alternating with rows of fixed blades.
This divides the velocity drop across the stage into several smaller drops.[19] A series of velocity-compounded impulse stages is called a pressure-velocity compounded turbine.

Diagram of an AEG marine steam turbine circa 1905

By 1905, when steam turbines were coming into use on fast ships (such as HMS Dreadnought) and in land-based power applications, it had been determined that it was desirable to use one or more Curtis wheels at the beginning of a multi-stage turbine (where the steam pressure is highest), followed by reaction stages. This was more efficient with high-pressure steam due to reduced leakage between the turbine rotor and the casing.[20] This is illustrated in the drawing of the German 1905 AEG marine steam turbine. The steam from the boilers enters from the right at high pressure through a throttle, controlled manually by an operator (in this case a sailor known as the throttleman). It passes through five Curtis wheels and numerous reaction stages (the small blades at the edges of the two large rotors in the middle) before exiting at low pressure, almost certainly to a condenser. The condenser provides a vacuum that maximizes the energy extracted from the steam, and condenses the steam into feedwater to be returned to the boilers. On the left are several additional reaction stages (on two large rotors) that rotate the turbine in reverse for astern operation, with steam admitted by a separate throttle. Since ships are rarely operated in reverse, efficiency is not a priority in astern turbines, so only a few stages are used to save cost.

Blade design challenges

A major challenge facing turbine design was reducing the creep experienced by the blades. Because of the high temperatures and high stresses of operation, steam turbine materials become damaged through these mechanisms. As temperatures are increased in an effort to improve turbine efficiency, creep becomes significant.
To limit creep, thermal coatings and superalloys with solid-solution strengthening and grain boundary strengthening are used in blade designs. Protective coatings are used to reduce the thermal damage and to limit oxidation. These coatings are often stabilized zirconium dioxide-based ceramics. Using a thermal protective coating limits the temperature exposure of the nickel superalloy. This reduces the creep mechanisms experienced in the blade. Oxidation coatings limit efficiency losses caused by a buildup on the outside of the blades, which is especially important in the high-temperature environment.[21] The nickel-based blades are alloyed with aluminum and titanium to improve strength and creep resistance. The microstructure of these alloys is composed of different regions of composition. A uniform dispersion of the gamma-prime phase (a combination of nickel, aluminum, and titanium) promotes the strength and creep resistance of the blade due to the microstructure.[22] Refractory elements such as rhenium and ruthenium can be added to the alloy to improve creep strength. The addition of these elements reduces the diffusion of the gamma prime phase, thus preserving the fatigue resistance, strength, and creep resistance.[23]

Steam supply and exhaust conditions

A low-pressure steam turbine in a nuclear power plant. These turbines exhaust steam at a pressure below atmospheric.

Turbine types include condensing, non-condensing, reheat, extraction and induction. Condensing turbines are most commonly found in electrical power plants. These turbines receive steam from a boiler and exhaust it to a condenser. The exhausted steam is at a pressure well below atmospheric, and is in a partially condensed state, typically of a quality near 90%. Non-condensing or back pressure turbines are most widely used for process steam applications, in which the steam will be used for additional purposes after being exhausted from the turbine.
The exhaust pressure is controlled by a regulating valve to suit the needs of the process steam pressure. These are commonly found at refineries, district heating units, pulp and paper plants, and desalination facilities where large amounts of low-pressure process steam are needed. Reheat turbines are also used almost exclusively in electrical power plants. In a reheat turbine, steam flow exits from a high-pressure section of the turbine and is returned to the boiler, where additional superheat is added. The steam then re-enters an intermediate-pressure section of the turbine and continues its expansion. Using reheat in a cycle increases the work output from the turbine and also allows the expansion to be completed before the steam condenses, minimizing erosion of the blades in the last rows. In most cases, at most two reheats are used in a cycle, since the cost of further superheating the steam negates the gain in work output from the turbine. Extracting-type turbines are common in all applications. In an extracting-type turbine, steam is released from various stages of the turbine and used for industrial process needs, or sent to boiler feedwater heaters to improve overall cycle efficiency. Extraction flows may be controlled with a valve, or left uncontrolled. Extracted steam results in a loss of power in the downstream stages of the turbine. Induction turbines introduce low-pressure steam at an intermediate stage to produce additional power.

Casing or shaft arrangements
These arrangements include single-casing, tandem-compound and cross-compound turbines. Single-casing units are the most basic style, where a single casing and shaft are coupled to a generator. Tandem-compound units are used where two or more casings are directly coupled together to drive a single generator. A cross-compound turbine arrangement features two or more shafts not in line driving two or more generators that often operate at different speeds.
A cross-compound turbine is typically used for many large applications. A typical 1930s-1960s naval installation is illustrated below; this shows high- and low-pressure turbines driving a common reduction gear, with a geared cruising turbine on one high-pressure turbine.

Starboard steam turbine machinery arrangement of Japanese Furutaka- and Aoba-class cruisers.

Two-flow rotors
A two-flow turbine rotor. The steam enters in the middle of the shaft, and exits at each end, balancing the axial force.
The moving steam imparts both a tangential and an axial thrust on the turbine shaft, but the axial thrust in a simple turbine is unopposed. To maintain the correct rotor position and balance, this force must be counteracted by an opposing force. Thrust bearings can be used for the shaft bearings, the rotor can use dummy pistons, it can be double-flow (the steam enters in the middle of the shaft and exits at both ends), or a combination of any of these. In a double-flow rotor, the blades in each half face opposite ways, so that the axial forces cancel each other while the tangential forces act together. This design of rotor is also called two-flow, double-axial-flow, or double-exhaust. This arrangement is common in low-pressure casings of a compound turbine.[24]

Principle of operation and design
An ideal steam turbine is considered to be an isentropic process, or constant-entropy process, in which the entropy of the steam entering the turbine is equal to the entropy of the steam leaving the turbine. No steam turbine is truly isentropic, however, with typical isentropic efficiencies ranging from 20 to 90% based on the application of the turbine. The interior of a turbine comprises several sets of blades or buckets. One set of stationary blades is connected to the casing and one set of rotating blades is connected to the shaft.
The sets intermesh with certain minimum clearances, with the size and configuration of sets varying to efficiently exploit the expansion of steam at each stage. Practical thermal efficiency of a steam turbine varies with turbine size, load condition, gap losses and friction losses. It reaches peak values up to about 50% in a 1,200 MW (1,600,000 hp) turbine; smaller turbines have lower efficiencies. To maximize turbine efficiency the steam is expanded, doing work, in a number of stages. These stages are characterized by how the energy is extracted from them and are known as either impulse or reaction turbines. Most steam turbines use a mixture of the reaction and impulse designs: each stage behaves as either one or the other, but the overall turbine uses both. Typically, lower-pressure sections are reaction type and higher-pressure stages are impulse type.

Impulse turbines
A selection of impulse turbine blades
An impulse turbine has fixed nozzles that orient the steam flow into high-speed jets. These jets contain significant kinetic energy, which is converted into shaft rotation by the bucket-shaped rotor blades as the steam jet changes direction. A pressure drop occurs across only the stationary blades, with a net increase in steam velocity across the stage. As the steam flows through the nozzle its pressure falls from inlet pressure to the exit pressure (atmospheric pressure or, more usually, the condenser vacuum). Due to this high ratio of expansion of steam, the steam leaves the nozzle with a very high velocity. The steam leaving the moving blades retains a large portion of the maximum velocity it had on leaving the nozzle. The loss of energy due to this higher exit velocity is commonly called the carry-over velocity or leaving loss.
The law of moment of momentum states that the sum of the moments of external forces acting on a fluid which is temporarily occupying the control volume is equal to the net time rate of change of angular momentum flux through the control volume. The swirling fluid enters the control volume at radius r_1 with tangential velocity V_w1 and leaves at radius r_2 with tangential velocity V_w2.

Velocity triangle
A velocity triangle paves the way for a better understanding of the relationship between the various velocities. In the adjacent figure we have:

V_1 and V_2 are the absolute velocities at the inlet and outlet respectively.
V_f1 and V_f2 are the flow velocities at the inlet and outlet respectively.
V_w1 and V_w2 are the swirl velocities at the inlet and outlet respectively, in the moving reference.
V_r1 and V_r2 are the relative velocities at the inlet and outlet respectively.
U_1 and U_2 are the velocities of the blade at the inlet and outlet respectively.
α is the guide vane angle and β is the blade angle.

Then by the law of moment of momentum, the torque on the fluid is given by:

T = ṁ (r_2 V_w2 − r_1 V_w1)

For an impulse steam turbine, r_2 = r_1 = r. Therefore, the tangential force on the blades is F_u = ṁ (V_w1 − V_w2). The work done per unit time, or power developed, is W = Tω. With ω the angular velocity of the turbine, the blade speed is U = ωr. The power developed is then

W = ṁ U ΔV_w
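The torque and power relations above can be sketched in Python. This is an illustrative sketch only: the function name `impulse_power` and all numeric values are our own assumptions, not part of any turbine design code.

```python
import math

def impulse_power(m_dot, U, V1, alpha1_deg, k=1.0, c=1.0):
    """Power developed by an impulse stage, W = m_dot * U * dVw.

    m_dot: steam mass flow (kg/s); U: blade speed (m/s);
    V1: absolute steam velocity at blade inlet (m/s);
    alpha1_deg: guide vane angle (degrees);
    k = Vr2/Vr1 (blade friction coefficient); c = cos(beta2)/cos(beta1).
    """
    Vw1 = V1 * math.cos(math.radians(alpha1_deg))  # inlet swirl velocity
    # From the velocity triangle, Vr1*cos(beta1) = Vw1 - U, so for an
    # impulse stage (r1 = r2 = r): dVw = (Vw1 - U) * (1 + k*c).
    dVw = (Vw1 - U) * (1 + k * c)
    return m_dot * U * dVw  # power in watts

# Example: 10 kg/s of steam at 600 m/s onto blades moving at 150 m/s
power = impulse_power(m_dot=10.0, U=150.0, V1=600.0, alpha1_deg=20.0, k=0.9)
```

The friction coefficient k and the cosine ratio c anticipate the blade-efficiency derivation that follows; with smooth blades (k = 1) the stage develops slightly more power.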
Blade efficiency
Blade efficiency (η_b) can be defined as the ratio of the work done on the blades to the kinetic energy supplied to the fluid:

η_b = Work done / Kinetic energy supplied = U ΔV_w / (V_1²/2) = 2U ΔV_w / V_1²

Stage efficiency
Convergent-divergent nozzle
Graph depicting efficiency of impulse turbine
A stage of an impulse turbine consists of a nozzle set and a moving wheel. The stage efficiency defines a relationship between the enthalpy drop in the nozzle and the work done in the stage:

η_stage = Work done on blade / Energy supplied per stage = U ΔV_w / Δh

where Δh = h_1 − h_2 is the specific enthalpy drop of steam in the nozzle. By the first law of thermodynamics:

h_1 + V_1²/2 = h_2 + V_2²/2

Assuming that V_1 is appreciably less than V_2, we get Δh ≈ V_2²/2. Furthermore, stage efficiency is the product of blade efficiency and nozzle efficiency: η_stage = η_b η_N. Nozzle efficiency is given by η_N = V_2² / (2(h_1 − h_2)), where h_1 is the enthalpy (in J/kg) of steam at the entrance of the nozzle and h_2 is the enthalpy of steam at the exit of the nozzle.
ΔV_w = V_w1 − (−V_w2) = V_w1 + V_w2 = V_r1 cos β_1 + V_r2 cos β_2 = V_r1 cos β_1 (1 + (V_r2 cos β_2)/(V_r1 cos β_1))

The ratio of the cosines of the blade angles at the outlet and inlet is denoted c = cos β_2 / cos β_1. The ratio of the steam velocity relative to the rotor at the outlet to that at the inlet of the blade is defined by the friction coefficient k = V_r2 / V_r1. Here k < 1, representing the loss in relative velocity due to friction as the steam flows around the blades (k = 1 for smooth blades).

η_b = 2U ΔV_w / V_1² = (2U/V_1)(cos α_1 − U/V_1)(1 + kc)

The ratio of the blade speed to the absolute steam velocity at the inlet is termed the blade speed ratio, ρ = U/V_1. η_b is maximum when dη_b/dρ = 0, that is, when

d/dρ [2(ρ cos α_1 − ρ²)(1 + kc)] = 0

This implies ρ = (1/2) cos α_1, and therefore U/V_1 = (1/2) cos α_1. Thus ρ_opt = U/V_1 = (1/2) cos α_1 (for a single-stage impulse turbine). The maximum value of stage efficiency is obtained by substituting this value of U/V_1 into the expression for η_b.
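The optimum ρ = (1/2) cos α_1 can be checked numerically with a short Python sketch; the 20° vane angle and the scan granularity are arbitrary illustrative choices.

```python
import math

def eta_b(rho, alpha1_deg, k=1.0, c=1.0):
    """Impulse-stage blade efficiency as a function of blade speed ratio rho."""
    cos_a = math.cos(math.radians(alpha1_deg))
    return 2.0 * (rho * cos_a - rho**2) * (1.0 + k * c)

alpha1 = 20.0
# Brute-force scan of rho over (0, 1) to locate the maximum.
rhos = [i / 1000.0 for i in range(1, 1000)]
rho_best = max(rhos, key=lambda r: eta_b(r, alpha1))
rho_analytic = 0.5 * math.cos(math.radians(alpha1))  # ~0.470
```

The scanned maximum lands on the analytic optimum to within the grid spacing, and at that point (with k = c = 1) the efficiency equals cos²α_1, matching the result derived below.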
We get:

η_b,max = 2(ρ cos α_1 − ρ²)(1 + kc) = (1/2) cos² α_1 (1 + kc)

For equiangular blades, β_1 = β_2, therefore c = 1, and we get η_b,max = (1/2) cos² α_1 (1 + k). If the friction due to the blade surface is neglected (k = 1), then η_b,max = cos² α_1.

Conclusions on maximum efficiency
η_b,max = cos² α_1
For a given steam velocity, the work done per kg of steam would be maximum when cos² α_1 = 1, i.e. α_1 = 0. As α_1 increases, the work done on the blades is reduced, but at the same time the surface area of the blade is reduced, so there are lower frictional losses.

Reaction turbines
In the reaction turbine, the rotor blades themselves are arranged to form convergent nozzles. This type of turbine makes use of the reaction force produced as the steam accelerates through the nozzles formed by the rotor. Steam is directed onto the rotor by the fixed vanes of the stator. It leaves the stator as a jet that fills the entire circumference of the rotor. The steam then changes direction and increases its speed relative to the speed of the blades. A pressure drop occurs across both the stator and the rotor, with steam accelerating through the stator and decelerating through the rotor, with no net change in steam velocity across the stage but with a decrease in both pressure and temperature, reflecting the work performed in the driving of the rotor. The energy input to the blades in a stage, E = Δh, is equal to the kinetic energy supplied to the fixed blades (f) plus the kinetic energy supplied to the moving blades (m).
Or, E = enthalpy drop over the fixed blades, Δh_f, plus enthalpy drop over the moving blades, Δh_m. The effect of the expansion of steam over the moving blades is to increase the relative velocity at the exit. Therefore, the relative velocity at the exit, V_r2, is always greater than the relative velocity at the inlet, V_r1. In terms of velocities, the enthalpy drop over the moving blades is given by:

Δh_m = (V_r2² − V_r1²)/2

(it contributes to a change in static pressure).

Velocity diagram
The enthalpy drop in the fixed blades, with the assumption that the velocity of the steam entering the fixed blades is equal to the velocity of the steam leaving the previously moving blades, is given by:

Δh_f = (V_1² − V_0²)/2

where V_0 is the inlet velocity of steam in the nozzle. V_0 is very small and hence can be neglected. Therefore,

Δh_f = V_1²/2

E = Δh_f + Δh_m = V_1²/2 + (V_r2² − V_r1²)/2

A very widely used design has half-degree of reaction, or 50% reaction, and is known as Parson's turbine. It consists of symmetrical rotor and stator blades.
For this turbine the velocity triangle is similar and we have:

α_1 = β_2, β_1 = α_2
V_1 = V_r2, V_r1 = V_2

Assuming Parson's turbine and obtaining all the expressions, we get

E = V_1² − V_r1²/2

From the inlet velocity triangle we have V_r1² = V_1² + U² − 2UV_1 cos α_1, so

E = V_1² − V_1²/2 − U²/2 + (2UV_1 cos α_1)/2 = (V_1² − U² + 2UV_1 cos α_1)/2

Work done (for unit mass flow per second): W = U ΔV_w = U(2V_1 cos α_1 − U)

Therefore, the blade efficiency is given by

η_b = 2U(2V_1 cos α_1 − U) / (V_1² − U² + 2UV_1 cos α_1)

Condition of maximum blade efficiency
Comparing efficiencies of impulse and reaction turbines
If ρ = U/V_1, then dividing numerator and denominator by V_1² gives

η_b = 2ρ(2 cos α_1 − ρ) / (1 − ρ² + 2ρ cos α_1)

For maximum efficiency, dη_b/dρ = 0, which gives

(1 − ρ² + 2ρ cos α_1)(4 cos α_1 − 4ρ) − 2ρ(2 cos α_1 − ρ)(−2ρ + 2 cos α_1) = 0

and this finally gives ρ_opt = U/V_1 = cos α_1. Therefore, η_b,max is found by substituting ρ = cos α_1 into the expression for blade efficiency.
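A minimal Python sketch of the Parson's-stage blade efficiency just derived; the helper name is ours and the 20° vane angle is only an example.

```python
import math

def eta_b_reaction(rho, alpha1_deg):
    """Blade efficiency of a 50% reaction (Parson's) stage; rho = U/V1."""
    cos_a = math.cos(math.radians(alpha1_deg))
    return 2.0 * rho * (2.0 * cos_a - rho) / (1.0 - rho**2 + 2.0 * rho * cos_a)

# The efficiency peaks where rho = cos(alpha1), per the derivation above.
rho_opt = math.cos(math.radians(20.0))
eta_peak = eta_b_reaction(rho_opt, 20.0)
```

At ρ = cos α_1 the expression collapses to 2cos²α_1/(1 + cos²α_1), the maximum quoted in the next passage.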
η_b,reaction = 2 cos² α_1 / (1 + cos² α_1)
η_b,impulse = cos² α_1

A modern steam turbine generator installation
Because of the high pressures used in the steam circuits and the materials used, steam turbines and their casings have high thermal inertia. When warming up a steam turbine for use, the main steam stop valves (after the boiler) have a bypass line to allow superheated steam to slowly bypass the valve and proceed to heat up the lines in the system along with the steam turbine. Also, a turning gear is engaged when there is no steam, slowly rotating the turbine to ensure even heating and prevent uneven expansion. After first rotating the turbine by the turning gear, allowing time for the rotor to assume a straight plane (no bowing), the turning gear is disengaged and steam is admitted to the turbine, first to the astern blades and then to the ahead blades, slowly rotating the turbine at 10-15 RPM (0.17-0.25 Hz) to warm it gradually. The warm-up procedure for large steam turbines may exceed ten hours.[25] During normal operation, rotor imbalance can lead to vibration, which, because of the high rotation velocities, could lead to a blade breaking away from the rotor and through the casing. To reduce this risk, considerable effort is spent on balancing the turbine. Also, turbines are run with high-quality steam: either superheated (dry) steam, or saturated steam with a high dryness fraction. This prevents the rapid impingement and erosion of the blades which occurs when condensed water is blasted onto the blades (moisture carry-over). Also, liquid water entering the blades may damage the thrust bearings for the turbine shaft. To prevent this, along with controls and baffles in the boilers to ensure high-quality steam, condensate drains are installed in the steam piping leading to the turbine.
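The two maximum-efficiency expressions quoted earlier, η_b,reaction = 2cos²α_1/(1 + cos²α_1) and η_b,impulse = cos²α_1, can be compared numerically; this sketch uses arbitrary sample angles.

```python
import math

def eta_max_impulse(alpha1_deg):
    """Maximum blade efficiency of a frictionless impulse stage."""
    return math.cos(math.radians(alpha1_deg))**2

def eta_max_reaction(alpha1_deg):
    """Maximum blade efficiency of a 50% reaction (Parson's) stage."""
    c2 = math.cos(math.radians(alpha1_deg))**2
    return 2.0 * c2 / (1.0 + c2)

for a in (10.0, 20.0, 30.0):
    assert eta_max_reaction(a) > eta_max_impulse(a)  # reaction peaks higher
```

Both expressions tend to 1 as α_1 → 0, and for any nonzero vane angle the reaction curve lies above the impulse curve, since 2c²/(1 + c²) > c² whenever c² < 1.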
Maintenance requirements of modern steam turbines are simple and incur low costs (typically around $0.005 per kWh);[25] their operational life often exceeds 50 years.[25] Speed regulation Diagram of a steam turbine generator system The control of a turbine with a governor is essential, as turbines need to be run up slowly to prevent damage and some applications (such as the generation of alternating current electricity) require precise speed control.[26] Uncontrolled acceleration of the turbine rotor can lead to an overspeed trip, which causes the governor and throttle valves that control the flow of steam to the turbine to close. If these valves fail then the turbine may continue accelerating until it breaks apart, often catastrophically. Turbines are expensive to make, requiring precision manufacture and special quality materials. During normal operation in synchronization with the electricity network, power plants are governed with a five percent droop speed control. This means the full load speed is 100% and the no-load speed is 105%. This is required for the stable operation of the network without hunting and drop-outs of power plants. Normally the changes in speed are minor. Adjustments in power output are made by slowly raising the droop curve by increasing the spring pressure on a centrifugal governor. Generally this is a basic system requirement for all power plants because the older and newer plants have to be compatible in response to the instantaneous changes in frequency without depending on outside communication.[27] Thermodynamics of steam turbines T-s diagram of a superheated Rankine cycle The steam turbine operates on basic principles of thermodynamics using the part 3-4 of the Rankine cycle shown in the adjoining diagram. Superheated steam (or dry saturated steam, depending on application) leaves the boiler at high temperature and high pressure. 
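The five percent droop arithmetic can be sketched in a few lines of Python; the function name and the 3,000 RPM rated speed are illustrative assumptions.

```python
def droop_speed(load_fraction, droop=0.05, rated_rpm=3000.0):
    """Steady-state governor speed under droop control.

    With 5% droop the no-load speed is 105% of rated and the
    full-load speed is 100%: speed = rated * (1 + droop*(1 - load)).
    """
    return rated_rpm * (1.0 + droop * (1.0 - load_fraction))

# Full load -> 3000 RPM; no load -> 3150 RPM; half load -> 3075 RPM
```

Because speed falls linearly as load rises, many plants sharing a grid divide load changes among themselves without needing to communicate, which is the stability property described above.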
At entry to the turbine, the steam gains kinetic energy by passing through a nozzle (a fixed nozzle in an impulse-type turbine or the fixed blades in a reaction-type turbine). When the steam leaves the nozzle it is moving at high velocity towards the blades of the turbine rotor. A force is created on the blades due to the pressure of the vapor on the blades, causing them to move. A generator or other such device can be placed on the shaft, and the energy that was in the steam can now be stored and used. The steam leaves the turbine as a saturated vapor (or liquid-vapor mix depending on application) at a lower temperature and pressure than it entered with, and is sent to the condenser to be cooled.[28] The first law enables us to find a formula for the rate at which work is developed per unit mass. Assuming there is no heat transfer to the surrounding environment and that the changes in kinetic and potential energy are negligible compared to the change in specific enthalpy, we arrive at the following equation:

Ẇ/ṁ = h_3 − h_4

where
Ẇ is the rate at which work is developed per unit time
ṁ is the rate of mass flow through the turbine

Isentropic efficiency
To measure how well a turbine is performing we can look at its isentropic efficiency. This compares the actual performance of the turbine with the performance that would be achieved by an ideal, isentropic turbine.[29] When calculating this efficiency, heat lost to the surroundings is assumed to be zero. The steam's starting pressure and temperature are the same for both the actual and the ideal turbines, but at turbine exit, the steam's energy content ('specific enthalpy') for the actual turbine is greater than that for the ideal turbine because of irreversibility in the actual turbine. The specific enthalpy is evaluated at the same steam pressure for the actual and ideal turbines in order to give a good comparison between the two.
The isentropic efficiency is found by dividing the actual work by the ideal work:[29]

η_t = (h_3 − h_4) / (h_3 − h_4s)

where
h_3 is the specific enthalpy at state three
h_4 is the specific enthalpy at state 4 for the actual turbine
h_4s is the specific enthalpy at state 4s for the isentropic turbine (but note that the adjacent diagram does not show state 4s: it is vertically below state 3)

A direct-drive 5 MW steam turbine fuelled with biomass
Electrical power stations use large steam turbines driving electric generators to produce most (about 80%) of the world's electricity. The advent of large steam turbines made central-station electricity generation practical, since reciprocating steam engines of large rating became very bulky and operated at slow speeds. Most central stations are fossil fuel power plants and nuclear power plants; some installations use geothermal steam, or use concentrated solar power (CSP) to create the steam. Steam turbines can also be used directly to drive large centrifugal pumps, such as feedwater pumps at a thermal power plant. The turbines used for electric power generation are most often directly coupled to their generators. As the generators must rotate at constant synchronous speeds according to the frequency of the electric power system, the most common speeds are 3,000 RPM for 50 Hz systems and 3,600 RPM for 60 Hz systems. Since nuclear reactors have lower temperature limits than fossil-fired plants, with lower steam quality, the turbine generator sets may be arranged to operate at half these speeds, but with four-pole generators, to reduce erosion of turbine blades.[30]

Marine propulsion
Turbinia, 1894, the first steam turbine-powered ship
High- and low-pressure turbines for SS Maui.
Parsons turbine from the 1928 Polish destroyer Wicher.
In steamships, advantages of steam turbines over reciprocating engines are smaller size, lower maintenance, lighter weight, and lower vibration.
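A brief Python sketch of the isentropic-efficiency and synchronous-speed relations above; the enthalpy values are made-up round numbers, not steam-table data.

```python
def isentropic_efficiency(h3, h4, h4s):
    """eta_t = actual work / ideal work = (h3 - h4) / (h3 - h4s)."""
    return (h3 - h4) / (h3 - h4s)

def synchronous_rpm(freq_hz, poles=2):
    """Synchronous generator speed: 120 * f / poles, in RPM."""
    return 120.0 * freq_hz / poles

# Illustrative: inlet 3350 kJ/kg, ideal exit 2200 kJ/kg, actual exit 2372.5 kJ/kg
eta_t = isentropic_efficiency(3350e3, 2372.5e3, 2200e3)  # 0.85
```

Here `synchronous_rpm(50)` gives 3,000 RPM and `synchronous_rpm(60)` gives 3,600 RPM, while a four-pole set on a 60 Hz system turns at 1,800 RPM, the half-speed arrangement mentioned above for nuclear plants.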
A steam turbine is efficient only when operating at thousands of RPM, while the most effective propeller designs are for speeds less than 300 RPM; consequently, precise (thus expensive) reduction gears are usually required, although numerous early ships through World War I, such as Turbinia, had direct drive from the steam turbines to the propeller shafts. Another alternative is turbo-electric transmission, in which an electrical generator run by the high-speed turbine is used to run one or more slow-speed electric motors connected to the propeller shafts; precision gear cutting may be a production bottleneck during wartime. Turbo-electric drive was most used in large US warships designed during World War I and in some fast liners, and was used in some troop transports and mass-production destroyer escorts in World War II. The higher cost of turbines and the associated gears or generator/motor sets is offset by lower maintenance requirements and the smaller size of a turbine in comparison with a reciprocating engine of equal power, although the fuel costs are higher than those of a diesel engine because steam turbines have lower thermal efficiency. To reduce fuel costs, the thermal efficiency of both types of engine has been improved over the years. The development of steam turbine marine propulsion from 1894 to 1935 was dominated by the need to reconcile the high speed at which a turbine is efficient with the low speed (less than 300 rpm) at which a ship's propeller is efficient, at an overall cost competitive with reciprocating engines. In 1894, efficient reduction gears were not available for the high powers required by ships, so direct drive was necessary. In Turbinia, which has direct drive to each propeller shaft, the efficient speed of the turbine was reduced after initial trials by directing the steam flow through all three direct-drive turbines (one on each shaft) in series, probably totaling around 200 turbine stages operating in series.
Also, there were three propellers on each shaft for operation at high speeds.[31] The high shaft speeds of the era are represented by one of the first US turbine-powered destroyers, USS Smith, launched in 1909, which had direct drive turbines and whose three shafts turned at 724 rpm at 28.35 knots (52.50 km/h; 32.62 mph).[32] The use of turbines in several casings exhausting steam to each other in series became standard in most subsequent marine propulsion applications, and is a form of cross-compounding. The first turbine was called the high pressure (HP) turbine, the last turbine was the low pressure (LP) turbine, and any turbine in between was an intermediate pressure (IP) turbine. A much later arrangement than Turbinia can be seen on RMS Queen Mary in Long Beach, California, launched in 1934, in which each shaft is powered by four turbines in series connected to the ends of the two input shafts of a single-reduction gearbox. They are the HP, 1st IP, 2nd IP, and LP turbines. Cruising machinery and gearing The quest for economy was even more important when cruising speeds were considered. Cruising speed is roughly 50% of a warship's maximum speed and 20-25% of its maximum power level. This would be a speed used on long voyages when fuel economy is desired. Although this brought the propeller speeds down to an efficient range, turbine efficiency was greatly reduced, and early turbine ships had poor cruising ranges. A solution that proved useful through most of the steam turbine propulsion era was the cruising turbine. This was an extra turbine to add even more stages, at first attached directly to one or more shafts, exhausting to a stage partway along the HP turbine, and not used at high speeds. As reduction gears became available around 1911, some ships, notably the battleship USS Nevada, had them on cruising turbines while retaining direct drive main turbines. 
Reduction gears allowed turbines to operate in their efficient range at a much higher speed than the shaft, but were expensive to manufacture. Cruising turbines competed at first with reciprocating engines for fuel economy. An example of the retention of reciprocating engines on fast ships was the famous RMS Titanic of 1911, which along with her sisters RMS Olympic and HMHS Britannic had triple-expansion engines on the two outboard shafts, both exhausting to an LP turbine on the center shaft. After adopting turbines with the Delaware-class battleships launched in 1909, the United States Navy reverted to reciprocating machinery on the New York-class battleships of 1912, then went back to turbines on Nevada in 1914. The lingering fondness for reciprocating machinery was because the US Navy had no plans for capital ships exceeding 21 knots (39 km/h; 24 mph) until after World War I, so top speed was less important than economical cruising. The United States had acquired the Philippines and Hawaii as territories in 1898, and lacked the British Royal Navy's worldwide network of coaling stations. Thus, the US Navy in 1900-1940 had the greatest need of any nation for fuel economy, especially as the prospect of war with Japan arose following World War I. This need was compounded by the US not launching any cruisers 1908-1920, so destroyers were required to perform long-range missions usually assigned to cruisers. So, various cruising solutions were fitted on US destroyers launched 1908-1916. These included small reciprocating engines and geared or ungeared cruising turbines on one or two shafts. However, once fully geared turbines proved economical in initial cost and fuel they were rapidly adopted, with cruising turbines also included on most ships. Beginning in 1915 all new Royal Navy destroyers had fully geared turbines, and the United States followed in 1917. 
In the Royal Navy, speed was a priority until the Battle of Jutland in mid-1916 showed that in the battlecruisers too much armour had been sacrificed in its pursuit. The British used exclusively turbine-powered warships from 1906. Because they recognized that a long cruising range would be desirable given their worldwide empire, some warships, notably the Queen Elizabeth-class battleships, were fitted with cruising turbines from 1912 onwards following earlier experimental installations. In the US Navy, the Mahan-class destroyers, launched 1935-36, introduced double-reduction gearing. This further increased the turbine speed above the shaft speed, allowing smaller turbines than single-reduction gearing. Steam pressures and temperatures were also increasing progressively, from 300 psi (2,100 kPa)/425 °F (218 °C) [saturated steam] on the World War I-era Wickes class to 615 psi (4,240 kPa)/850 °F (454 °C) [superheated steam] on some World War II Fletcher-class destroyers and later ships.[33][34] A standard configuration emerged of an axial-flow high-pressure turbine (sometimes with a cruising turbine attached) and a double-axial-flow low-pressure turbine connected to a double-reduction gearbox. This arrangement continued throughout the steam era in the US Navy and was also used in some Royal Navy designs.[35][36] Machinery of this configuration can be seen on many preserved World War II-era warships in several countries.[37] When US Navy warship construction resumed in the early 1950s, most surface combatants and aircraft carriers used 1,200 psi (8,300 kPa)/950 °F (510 °C) steam.[38] This continued until the end of the US Navy steam-powered warship era with the Knox-class frigates of the early 1970s. Amphibious and auxiliary ships continued to use 600 psi (4,100 kPa) steam post-World War II, with USS Iwo Jima, launched in 2001, possibly the last non-nuclear steam-powered ship built for the US Navy. 
Turbo-electric drive

[Image: NS 50 Let Pobedy, a nuclear icebreaker with nuclear-turbo-electric propulsion]

Turbo-electric drive was introduced on the battleship USS New Mexico, launched in 1917. Over the next eight years the US Navy launched five additional turbo-electric-powered battleships and two aircraft carriers (initially ordered as Lexington-class battlecruisers). Ten more turbo-electric capital ships were planned, but cancelled due to the limits imposed by the Washington Naval Treaty. Although New Mexico received geared turbines in a 1931-1933 refit, the remaining turbo-electric ships retained the system throughout their careers. This system used two large steam turbine generators to drive an electric motor on each of four shafts. It was initially less costly than reduction gears and made the ships more maneuverable in port, with the shafts able to reverse rapidly and deliver more reverse power than most geared systems. Some ocean liners were also built with turbo-electric drive, as were some troop transports and mass-production destroyer escorts in World War II. However, when the US designed the "treaty cruisers", beginning with USS Pensacola launched in 1927, geared turbines were used to conserve weight, and remained in use for all fast steam-powered ships thereafter.

Current usage

Since the 1980s, steam turbines have been replaced by gas turbines on fast ships and by diesel engines on other ships; the exceptions are nuclear-powered ships and submarines and LNG carriers.[39] Some auxiliary ships continue to use steam propulsion. In the U.S. Navy, the conventionally powered steam turbine is still in use on all but one of the Wasp-class amphibious assault ships. The Royal Navy decommissioned its last conventional steam-powered surface warship class, the Fearless-class landing platform dock, in 2002, and the Italian Navy followed in 2006 by decommissioning its last conventional steam-powered surface warships, the Audace-class destroyers. 
In 2013, the French Navy ended its steam era with the decommissioning of its last Tourville-class frigate. Amongst the other blue-water navies, the Russian Navy currently operates steam-powered Kuznetsov-class aircraft carriers and Sovremenny-class destroyers. The Indian Navy currently operates INS Vikramaditya, a modified Kiev-class aircraft carrier; it also operates three Brahmaputra-class frigates commissioned in the early 2000s and one Godavari-class frigate scheduled for decommissioning. The Chinese Navy currently operates steam-powered Kuznetsov-class aircraft carriers and Sovremenny-class destroyers, along with Luda-class destroyers and the lone Type 051B destroyer. Most other naval forces have either retired or re-engined their steam-powered warships. As of 2020, the Mexican Navy operates four steam-powered former U.S. Knox-class frigates. The Egyptian Navy and the Republic of China Navy respectively operate two and six former U.S. Knox-class frigates. The Ecuadorian Navy currently operates two steam-powered Condell-class frigates (modified Leander-class frigates).

Today, propulsion steam turbine cycle efficiencies have yet to break 50%, whereas diesel engines routinely exceed that figure, especially in marine applications.[40][41][42] Diesel power plants also have lower operating costs, since fewer operators are required. Thus, conventional steam power is used in very few new ships. One exception is LNG carriers, which often find it more economical to burn boil-off gas in a steam plant than to re-liquefy it. Nuclear-powered ships and submarines use a nuclear reactor to create steam for turbines. Nuclear power is often chosen where diesel power would be impractical (as in submarine applications) or where the logistics of refuelling pose significant problems (for example, icebreakers). 
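The sub-50% efficiency figure above can be put in context with the Carnot limit, the theoretical ceiling for any heat engine operating between the quoted late-era US Navy steam temperature and a condenser temperature. The condenser value here is an assumption for illustration; real Rankine-cycle plants fall well short of this ideal bound:

```python
# Carnot limit for a plant with turbine inlet steam at 950 F (about 510 C),
# assuming a ~30 C condenser. Real Rankine cycles achieve much less than
# this bound, consistent with propulsion plants staying below ~50%.
T_HOT = 510 + 273.15    # turbine inlet temperature, K
T_COLD = 30 + 273.15    # condenser temperature, K (assumed)

carnot_limit = 1 - T_COLD / T_HOT
print(f"Carnot limit: {carnot_limit:.3f}")  # ~0.613, i.e. ~61%
```

Irreversibilities, non-ideal expansion, and the shape of the Rankine cycle account for the gap between this ~61% ceiling and real-world figures below 50%.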
It has been estimated that the reactor fuel for the Royal Navy's Vanguard-class submarines is sufficient for 40 circumnavigations of the globe, potentially enough for the vessel's entire service life. Nuclear propulsion has been applied to only a very few commercial vessels due to the expense of maintenance and the regulatory controls required on nuclear systems and fuel cycles.

Locomotives

A steam turbine locomotive is a steam locomotive driven by a steam turbine. The first steam turbine rail locomotive was built in 1908 for the Officine Meccaniche Miani Silvestri Grodona Comi, Milan, Italy. In 1924 Krupp built the steam turbine locomotive T18 001, operational in 1929, for Deutsche Reichsbahn. The main advantages of a steam turbine locomotive are better rotational balance and reduced hammer blow on the track. A disadvantage, however, is less flexible output power, so turbine locomotives were best suited to long-haul operations at a constant output power.[43]

Testing

British, German, other national and international test codes are used to standardize the procedures and definitions used to test steam turbines. Selection of the test code to be used is an agreement between the purchaser and the manufacturer, and has some significance to the design of the turbine and associated systems. In the United States, ASME has produced several performance test codes on steam turbines. These include ASME PTC 6-2004, Steam Turbines; ASME PTC 6.2-2011, Steam Turbines in Combined Cycles; and PTC 6S-1988, Procedures for Routine Performance Test of Steam Turbines. These ASME performance test codes have gained international recognition and acceptance for testing steam turbines. The single most important and differentiating characteristic of ASME performance test codes, including PTC 6, is that the test uncertainty of the measurement indicates the quality of the test and is not to be used as a commercial tolerance.[44]

See also

Balancing machine
Mercury vapour turbine
Tesla turbine

References

^ Stodola 1927. 
^ "Sir Charles Algernon Parsons". Encyclopædia Britannica. n.d. Retrieved .
^ "Electricity Net Generation" (PDF). US EIA. March 2015.
^ Keyser 1992, pp. 107-124.
^ O'Connor & Robertson 1999.
^ Nag 2002, pp. 432-.
^ "Taqi al-Din and the First Steam Turbine, 1551 A.D." History of Science and Technology in Islam. Archived from the original on 2008-02-18.
^ Hassan 1976, pp. 34-35.
^ "James Watt". www.steamindex.com. Archived from the original on 2017-09-06.
^ Stodola & Loewenstein 1945.
^ a b The Steam Turbine at the Wayback Machine (archived May 13, 2010)
^ Charles Parsons at the Wayback Machine (archived May 5, 2010)
^ Parsons 1911.
^ Giampaolo 2014, p. 9.
^ a b Capital Goods: China Losing Its Shine at the Wayback Machine (archived December 23, 2015)
^ Bayar 2014.
^ Parsons 1911, pp. 7-8.
^ Parsons 1911, pp. 20-22.
^ Tamarin 2002, p. 5-.
^ Bhadeshia 2003.
^ Latief & Kakehi 2013.
^ "Steam Turbines (Course No. M-3006)" (PDF). PhD Engineer. Archived (PDF) from the original on 2012-04-02. Retrieved .
^ a b c "Technology Characterization: Steam Turbines" (PDF). U.S. Environmental Protection Agency. December 2008. p. 13. Archived from the original (PDF) on 18 November 2012. Retrieved 2013.
^ Whitaker 2006, p. 35.
^ "Speed Droop and Power Generation. Application Note 01302" (PDF). Woodward. 1991.
^ "Thermodynamics Steam Turbine". www.roymech.co.uk. Archived from the original on 2011-01-08.
^ a b Moran et al. 2010.
^ Leyzerovich 2005, p. 111.
^ Friedman 2004, pp. 23-24.
^ "1,500-ton destroyers in World War II". destroyerhistory.org. Archived from the original on 2013-11-05.
^ Friedman 2004, p. 472.
^ Bowie 2010.
^ "Steam Turbines". www.leander-project.homecall.co.uk. Archived from the original on 2013-11-22.
^ "Historic Naval Ships Association". Archived from the original on 2013-06-22.
^ "Mitsubishi Heavy starts construction of first Sayaendo series LNG carrier". December 2012. Archived from the original on 2014-08-07.
^ Deckers 2003, pp. 14-15.
^ Leyzerovich 2002. 
^ Takaishi, Tatsuo; Numata, Akira; Nakano, Ryouji; Sakaguchi, Katsuhiko (March 2008). "Approach to High Efficiency Diesel and Gas Engines" (PDF). Technical Review. Mitsubishi Heavy Industries. Retrieved 2019.
^ Streeter 2007, p. 85.
^ Sanders 2004, p. 292.

Sources

Bayar, Tildy (July 31, 2014). "Global gas and steam turbine market to reach $43.5bn by 2020". Power Engineering International.
Bhadeshia, H. K. D. H. (2003). "Nickel Based Superalloys". University of Cambridge. Retrieved .
Bowie, David (2010). "Cruising Turbines of the Y-100 Naval Propulsion Machinery" (PDF).
Deckers, Matthias (Summer 2003). "CFX Aids Design of World's Most Efficient Steam Turbine" (PDF). CFX Update (23). Archived from the original (PDF) on 2005-10-24.
Giampaolo, Tony (2014). Gas Turbine Handbook: Principles and Practices. Digital Designs.
Friedman, Norman (2004). U.S. Destroyers: An Illustrated Design History. Annapolis: Naval Institute Press. ISBN 978-1-55750-442-5.
Hassan, Ahmad Y (1976). Taqi al-Din and Arabic Mechanical Engineering. Institute for the History of Arabic Science, University of Aleppo.
Keyser, Paul (1992). "A new look at Heron's Steam Engine". Archive for History of Exact Sciences. 44 (2): 107-124. doi:10.1007/BF00374742. ISSN 0003-9519.
Latief, Fahamsyah H.; Kakehi, Koji (2013). "Effects of Re content and crystallographic orientation on creep behavior of aluminized Ni-base single crystal superalloys". Materials & Design. 49: 485-492. doi:10.1016/j.matdes.2013.01.022. ISSN 0261-3069.
Leyzerovich, Alexander S. (August 1, 2002). "New Benchmarks for Steam Turbine Efficiency". Power Engineering. Archived from the original on 2009-09-18. Retrieved .
Leyzerovich, Alexander (2005). Wet-steam Turbines for Nuclear Power Plants. PennWell Books. ISBN 978-1-59370-032-4.
Moran, Michael J.; Shapiro, Howard N.; Boettner, Daisie D.; Bailey, Margaret B. (2010). Fundamentals of Engineering Thermodynamics. John Wiley & Sons. ISBN 978-0-470-49590-2.
Nag, P. K. (2002). Power Plant Engineering. Tata McGraw-Hill Education. ISBN 978-0-07-043599-5.
Parsons, Charles A. (1911). The Steam Turbine. University Press, Cambridge.
O'Connor, J.J.; Robertson, E.F. (1999). "Heron of Alexandria". The MacTutor History of Mathematics.
Sanders, William P. (2004). Turbine Steam Path: Mechanical Design and Manufacture. Vol. IIIa. PennWell.
Stodola, A. (2013) [1924]. Dampf- und Gasturbinen. Mit einem Anhang über die Aussichten der Wärmekraftmaschinen [Steam and Gas Turbines: With an appendix on the prospective use as heat engines] (in German) (Supplement to the 5th ed.). Springer-Verlag. ISBN 978-3-642-50854-7.
Stodola, Aurel (1927). Steam and Gas Turbines: With a Supplement on The Prospects of the Thermal Prime Mover. McGraw-Hill.
Stodola, Aurel; Loewenstein, Louis Centennial (1945). Steam and Gas Turbines: With a Supplement on The Prospects of the Thermal Prime Mover. P. Smith.
Streeter, Tony (2007). "Testing the Limit". Steam Railway Magazine (336).
Tamarin, Y. (2002). Protective Coatings for Turbine Blades. ASM International. ISBN 978-1-61503-070-5.
Whitaker, Jerry C. (2006). AC Power Systems Handbook (Third ed.). Taylor & Francis. ISBN 978-0-8493-4034-5.

Further reading

Cotton, K.C. (1998). Evaluating and Improving Steam Turbine Performance.
Johnston, Ian (2019). "The Rise of the Brown-Curtis Turbine". In Jordan, John (ed.). Warship 2019. Oxford, UK: Osprey Publishing. pp. 58-68. ISBN 978-1-4728-3595-6.
Traupel, W. (1977). Thermische Turbomaschinen (in German).
Thurston, R. H. (1878). A History of the Growth of the Steam Engine. D. Appleton and Co.
Waliullah, Noushad (2017). "An overview of Concentrated Solar Power (CSP) technologies and its opportunities in Bangladesh". 2017 International Conference on Electrical, Computer and Communication Engineering (ECCE). CUET. pp. 844-849. doi:10.1109/ECACE.2017.7913020. ISBN 978-1-5090-5627-9. 
External links

Steam Turbines: A Book of Instruction for the Adjustment and Operation of the Principal Types of this Class of Prime Movers by Hubert E. Collins
Steam Turbine Construction at Mike's Engineering Wonders
Tutorial: "Superheated Steam"
Flow Phenomenon in Steam Turbine Disk-Stator Cavities Channeled by Balance Holes
Guide to the Test of a 100 K.W. De Laval Steam Turbine with an Introduction on the Principles of Design circa 1920
Extreme Steam - Unusual Variations on The Steam Locomotive
Interactive Simulation of 350MW Steam Turbine with Boiler developed by The University of Queensland, in Brisbane Australia
"Super-Steam...An Amazing Story of Achievement" Popular Mechanics, August 1937
Modern Energetics - The Steam Turbine
Tuesday, 30 May 2017

The Value of a Closer Look http://ift.tt/2skXN4I

Endoscopic treatment of superficial squamous cell carcinomas of the esophagus http://ift.tt/2r93yVi

Exploring an adapted Risk Behaviour Diagnosis Scale among Indigenous Australian women who had experiences of smoking during pregnancy: a cross-sectional survey in regional New South Wales, Australia

Objective: Explore Aboriginal women's responses to an adapted Risk Behaviour Diagnosis (RBD) Scale about smoking in pregnancy.
Methods and design: An Aboriginal researcher interviewed women and completed a cross-sectional survey including 20 Likert scales.
Setting: Aboriginal Community Controlled Health Services, community groups and playgroups and Aboriginal Maternity Services in regional New South Wales, Australia.
Participants: Aboriginal women (n=20) who were pregnant or gave birth in the preceding 18 months; included if they had experiences of smoking or quitting during pregnancy.
Primary and secondary outcome measures: Primary outcomes: RBD constructs of perceived threat and perceived efficacy, dichotomised into high versus low. Women who had quit smoking answered retrospectively. Secondary outcome measures: smoking status, intentions to quit smoking (danger control), protection responses (to babies/others) and fear control responses (denial/refutation). Scales were assessed for internal consistency. A chart plotted responses from low to high efficacy and low to high threat.
Results: RBD Scales had moderate-to-good consistency (0.67-0.89 Cronbach's alpha). Nine women had quit and 11 were smoking; 6 were currently pregnant and 14 recently pregnant. Mean efficacy level was 3.9 (SD=0.7); mean threat was 4.3 (SD=0.7). On inspection, a scatter plot revealed a cluster of 12 women in the high efficacy-high threat quadrant; of these, 11 had quit or had a high intention of quitting. Conversely, a group with low threat-low efficacy (5 women) were all smokers and had high fear control responses; of these, 4 had low protection responses. 
Pregnant women had a non-significant trend for higher threat and lower efficacy than those previously pregnant. Findings were consistent with a previously validated RBD Scale showing that Aboriginal smokers with high efficacy-high threat had greater intentions to quit smoking. The RBD Scale could have diagnostic potential to tailor health messages. Longitudinal research with a larger sample is required to explore associations between the RBD Scale and quitting.

from #AlexandrosSfakianakis via Alexandros G.Sfakianakis on Inoreader http://ift.tt/2sksozy

Healthcare costs of asthma comorbidities: a systematic review protocol

Asthma is associated with many comorbid conditions that can affect its management, control and outcomes. These conditions may also affect healthcare expenditure. We plan to undertake a systematic review to synthesise the evidence on the healthcare costs associated with asthma comorbidity.

Methods and analysis: We will systematically search the following electronic databases between January 2000 and January 2017: National Health Service (NHS) Economic Evaluation Database, Google Scholar, Allied and Complementary Medicine Database (AMED), Global Health, PsychINFO, Medline, Embase, Institute for Scientific Information Web of Science and Cumulative Index to Nursing and Allied Health Literature. We will search the references in the identified studies for additional potential papers. Additional literature will be identified by contacting experts in the field and through searching of registers of ongoing studies. The review will include cost-effectiveness and economic modelling/evaluation studies and analytical observational epidemiology studies that have investigated the healthcare costs of asthma comorbidity. Two reviewers will independently screen studies and extract relevant data from included studies. 
Methodological quality of epidemiological studies will be assessed using the Effective Public Health Practice Project tool, while that of economic evaluation studies will be assessed using the Drummond checklist. This protocol has been published in the International Prospective Register of Systematic Reviews (PROSPERO) database (No. CRD42016051005).

Ethics and dissemination: As there are no primary data collected, formal NHS ethical review is not necessary. The findings of this systematic review will be disseminated in a peer-reviewed journal and presented at relevant conferences. PROSPERO registration number: CRD42016051005.

Physical pain is common and associated with nonmedical prescription opioid use among people who inject drugs

People who inject drugs (PWID) often have poor health and lack access to health care. The aim of this study was to examine whether PWID engage in self-treatment through nonmedical prescription opioid use (NMPO...

Gender differences in discharge dispositions of emergency department visits involving drug misuse and abuse, 2004-2011

Drug use-related visits to the emergency department (ED) can undermine discharge planning and lead to recurrent use of acute services. Yet, little is known about where patients go post discharge. We explored t...

Structural Basis for Regulation of ESCRT-III Complexes by Lgd

Source: Cell Reports, Volume 19, Issue 9
Author(s): Brian J. McMillan, Christine Tibbe, Andrew A. Drabek, Tom C.M. Seegar, Stephen C. Blacklow, Thomas Klein

The ESCRT-III complex induces outward membrane budding and fission through homotypic polymerization of its core component Shrub/CHMP4B. 
Shrub activity is regulated by its direct interaction with a protein called Lgd in flies, or CC2D1A or B in humans. Here, we report the structural basis for this interaction and propose a mechanism for regulation of polymer assembly. The isolated third DM14 repeat of Lgd binds Shrub, and an Lgd fragment containing only this DM14 repeat and its C-terminal C2 domain is sufficient for in vivo function. The DM14 domain forms a helical hairpin with a conserved, positively charged tip that, in the structure of a DM14 domain-Shrub complex, occupies a negatively charged surface of Shrub that is otherwise used for homopolymerization. Lgd mutations at this interface disrupt its function in flies, confirming functional importance. Together, these data argue that Lgd regulates ESCRT activity by controlling access to the Shrub self-assembly surface.

Shrub is a Drosophila ESCRT-III protein that self-associates to promote membrane budding and fission. Its activity is modulated by binding to Lgd, which suppresses self-association. McMillan et al. report the structural basis for masking of the self-association surface of Shrub by Lgd, which, together with functional studies in flies, suggests models for modulation of Shrub activity by Lgd.

The Histone Variant MacroH2A1 Is a BRCA1 Ubiquitin Ligase Substrate

Author(s): Beom-Jun Kim, Doug W. Chan, Sung Yun Jung, Yue Chen, Jun Qin, Yi Wang

The breast- and ovarian-cancer-specific tumor suppressor BRCA1 and its heterodimeric partner BARD1 contain RING domains that implicate them as E3 ubiquitin ligases. Despite extensive efforts, the bona fide substrates of BRCA1/BARD1 remain elusive. Here, we used recombinant GST fused to four UBA domains to enrich ubiquitinated proteins, followed by a Lys-ε-Gly-Gly (diGly) antibody to enrich ubiquitinated tryptic peptides. 
This tandem affinity purification method coupled with mass spectrometry identified 101 putative BRCA1/BARD1 E3 substrates. We identified the histone variant macroH2A1 from the screen and showed that BRCA1/BARD1 ubiquitinates macroH2A1 at lysine 123 in vitro and in vivo. Primary human fibroblasts stably expressing a ubiquitination-deficient macroH2A1 mutant were defective in cellular senescence compared to their wild-type counterpart. Our study demonstrates that BRCA1/BARD1 is a macroH2A1 E3 ligase and implicates a role for macroH2A1 K123 ubiquitination in cellular senescence.

Using a tandem affinity purification method coupled with mass spectrometry, Kim et al. identified 101 putative substrates of the BRCA1/BARD1 E3 ubiquitin ligase. They report that, among these substrates, ubiquitination at Lys123 of macroH2A1 plays an important role in replicative senescence.

Synaptic Remodeling Depends on Signaling between Serotonin Receptors and the Extracellular Matrix

Author(s): Monika Bijata, Josephine Labus, Daria Guseva, Michał Stawarski, Malte Butzlaff, Joanna Dzwonek, Jenny Schneeberg, Katrin Böhm, Piotr Michaluk, Dmitri A. Rusakov, Alexander Dityatev, Grzegorz Wilczyński, Jakub Wlodarczyk, Evgeni Ponimaskin

Rewiring of synaptic circuitry pertinent to memory formation has been associated with morphological changes in dendritic spines and with extracellular matrix (ECM) remodeling. Here, we mechanistically link these processes by uncovering a signaling pathway involving the serotonin 5-HT7 receptor (5-HT7R), matrix metalloproteinase 9 (MMP-9), the hyaluronan receptor CD44, and the small GTPase Cdc42. We highlight a physical interaction between 5-HT7R and CD44 (identified as an MMP-9 substrate in neurons) and find that 5-HT7R stimulation increases local MMP-9 activity, triggering dendritic spine remodeling, synaptic pruning, and impairment of long-term potentiation (LTP). 
The underlying molecular machinery involves 5-HT7R-mediated activation of MMP-9, which leads to CD44 cleavage followed by Cdc42 activation. One important physiological consequence of this interaction is an increase in neuronal outgrowth and elongation of dendritic spines, which might have a positive effect on complex neuronal processes (e.g., reversal learning and neuronal regeneration).

Bijata et al. examine a signaling module involving the 5-HT7 receptor (5-HT7R), matrix metalloproteinase 9 (MMP-9), the hyaluronan receptor CD44, and the small GTPase Cdc42. Stimulation of 5-HT7R results in MMP-9 activation, which, in turn, cleaves CD44. This results in local detachment from the ECM, thus facilitating spine elongation via 5-HT7R/Cdc42 signaling.

A Glio-Protective Role of mir-263a by Tuning Sensitivity to Glutamate

Author(s): Sherry Shiying Aw, Isaac Kok Hwee Lim, Melissa Xue Mei Tang, Stephen Michael Cohen

Glutamate is a ubiquitous neurotransmitter, mediating information flow between neurons. Defects in the regulation of glutamatergic transmission can result in glutamate toxicity, which is associated with neurodegeneration. Interestingly, glutamate receptors are expressed in glia, but little is known about their function, and the effects of their misregulation, in these non-neuronal cells. Here, we report a glio-protective role for Drosophila mir-263a mediated by its regulation of glutamate receptor levels in glia. mir-263a mutants exhibit a pronounced movement defect due to aberrant overexpression of CG5621/Grik, Nmdar1, and Nmdar2. mir-263a mutants exhibit excitotoxic death of a subset of astrocyte-like and ensheathing glia in the CNS. Glial-specific normalization of glutamate receptor levels restores cell numbers and suppresses the movement defect. 
Therefore, microRNA-mediated regulation of glutamate receptor levels protects glia from excitotoxicity, ensuring CNS health. Chronic low-level glutamate receptor overexpression due to mutations affecting microRNA (miRNA) regulation might contribute to glial dysfunction and CNS impairment.

Excessive glutamatergic signaling can cause neurodegeneration. Aw et al. report that Drosophila mir-263a limits glutamate receptor levels in a subset of glia, protecting them from excitotoxicity. mir-263a mutants exhibit severe movement defects. This study reveals a mechanism by which glia protect themselves from excess glutamate signaling to maintain CNS health.

ER Stress Inhibits Liver Fatty Acid Oxidation while Unmitigated Stress Leads to Anorexia-Induced Lipolysis and Both Liver and Kidney Steatosis

Author(s): Diane DeZwaan-McCabe, Ryan D. Sheldon, Michelle C. Gorecki, Deng-Fu Guo, Erica R. Gansemer, Randal J. Kaufman, Kamal Rahmouni, Matthew P. Gillum, Eric B. Taylor, Lynn M. Teesch, D. Thomas Rutkowski

The unfolded protein response (UPR), induced by endoplasmic reticulum (ER) stress, regulates the expression of factors that restore protein folding homeostasis. However, in the liver and kidney, ER stress also leads to lipid accumulation, accompanied at least in the liver by transcriptional suppression of metabolic genes. The mechanisms of this accumulation, including which pathways contribute to the phenotype in each organ, are unclear. We combined gene expression profiling, biochemical assays, and untargeted lipidomics to understand the basis of stress-dependent lipid accumulation, taking advantage of enhanced hepatic and renal steatosis in mice lacking the ER stress sensor ATF6α. 
We found that impaired fatty acid oxidation contributed to the early development of steatosis in the liver but not the kidney, while anorexia-induced lipolysis promoted late triglyceride and free fatty acid accumulation in both organs. These findings provide evidence for both direct and indirect regulation of peripheral metabolism by ER stress.

The mechanisms by which the liver and kidney become steatotic when challenged by ER stress are not known. DeZwaan-McCabe et al. show that ER stress inhibits fatty acid oxidation in the liver and that unmitigated stress causes anorexia and promotes adipose lipolysis and further steatosis in the liver and kidney.

Progressive Motor Neuron Pathology and the Role of Astrocytes in a Human Stem Cell Model of VCP-Related ALS

Author(s): Claire E. Hall, Zhi Yao, Minee Choi, Giulia E. Tyzack, Andrea Serio, Raphaelle Luisier, Jasmine Harley, Elisavet Preza, Charlie Arber, Sarah J. Crisp, P. Marc D. Watson, Dimitri M. Kullmann, Andrey Y. Abramov, Selina Wray, Russell Burley, Samantha H.Y. Loh, L. Miguel Martins, Molly M. Stevens, Nicholas M. Luscombe, Christopher R. Sibley, Andras Lakatos, Jernej Ule, Sonia Gandhi, Rickie Patani

Motor neurons (MNs) and astrocytes (ACs) are implicated in the pathogenesis of amyotrophic lateral sclerosis (ALS), but their interaction and the sequence of molecular events leading to MN death remain unresolved. Here, we optimized directed differentiation of induced pluripotent stem cells (iPSCs) into highly enriched (> 85%) functional populations of spinal cord MNs and ACs. We identify significantly increased cytoplasmic TDP-43 and ER stress as primary pathogenic events in patient-specific valosin-containing protein (VCP)-mutant MNs, with secondary mitochondrial dysfunction and oxidative stress. Cumulatively, these cellular stresses result in synaptic pathology and cell death in VCP-mutant MNs. 
We additionally identify a cell-autonomous VCP-mutant AC survival phenotype, which is not attributable to the same molecular pathology occurring in VCP-mutant MNs. Finally, through iterative co-culture experiments, we uncover non-cell-autonomous effects of VCP-mutant ACs on both control and mutant MNs. This work elucidates molecular events and cellular interplay that could guide future therapeutic strategies in ALS.

Hall et al. use iPSCs to examine the sequence of events by which motor neurons degenerate in a genetic form of ALS. They find that astrocytes, a type of supportive cell, also degenerate under these conditions. The ALS-causing mutation disrupts the ability of astrocytes to promote survival of motor neurons.

Oxysterol-Binding Protein-Related Protein 1L Regulates Cholesterol Egress from the Endo-Lysosomal System

Author(s): Kexin Zhao, Neale D. Ridgway

Lipoprotein cholesterol is delivered to the limiting membrane of late endosomes/lysosomes (LELs) by Niemann-Pick C1 (NPC1). However, the mechanism of cholesterol transport from LELs to the endoplasmic reticulum (ER) is poorly characterized. We report that oxysterol-binding protein-related protein 1L (ORP1L) is necessary for this stage of cholesterol export. CRISPR-mediated knockout of ORP1L in HeLa and HEK293 cells reduced esterification of cholesterol to the level in NPC1 knockout cells, and it increased the expression of sterol-regulated genes and de novo cholesterol synthesis, indicative of a block in cholesterol transport to the ER. In the absence of this transport pathway, cholesterol-enriched LELs accumulated in the Golgi/perinuclear region. Cholesterol delivery to the ER required the sterol-, phosphatidylinositol 4-phosphate-, and vesicle-associated membrane protein-associated protein (VAP)-binding activities of ORP1L, as well as NPC1 expression. 
These results suggest that ORP1L-dependent membrane contacts between LELs and the ER coordinate cholesterol transfer with the retrograde movement of endo-lysosomal vesicles.

Cholesterol is delivered to the limiting membrane of late endosomes/lysosomes (LELs) by Niemann-Pick C1 (NPC1), but how cholesterol is subsequently transported to organelles is poorly characterized. Zhao and Ridgway find that OSBP-related protein 1L (ORP1L) mediates cholesterol transfer from LELs to the endoplasmic reticulum (ER) at contacts between these organelles.

Replication-Coupled Dilution of H4K20me2 Guides 53BP1 to Pre-replicative Chromatin

Author(s): Stefania Pellegrino, Jone Michelena, Federico Teloni, Ralph Imhof, Matthias Altmeyer

The bivalent histone modification reader 53BP1 accumulates around DNA double-strand breaks (DSBs), where it dictates repair pathway choice decisions by limiting DNA end resection. How this function is regulated locally and across the cell cycle to channel repair reactions toward non-homologous end joining (NHEJ) in G1 and promote homology-directed repair (HDR) in S/G2 is insufficiently understood. Here, we show that the ability of 53BP1 to accumulate around DSBs declines as cells progress through S phase and reveal that the inverse relationship between 53BP1 recruitment and replicated chromatin is linked to the replication-coupled dilution of 53BP1's target mark H4K20me2. Consistently, premature maturation of post-replicative chromatin restores H4K20me2 and rescues 53BP1 accumulation on replicated chromatin. The H4K20me2-mediated chromatin association of 53BP1 thus represents an inbuilt mechanism to distinguish DSBs in pre- versus post-replicative chromatin, allowing for localized repair pathway choice decisions based on the availability of replication-generated template strands for HDR.

Pellegrino et al. report how replication-dependent differences in chromatin inform the DNA damage response machinery about the replication status of broken genomic loci. This chromatin-embedded information affects 53BP1 binding and directs the choice of repair pathway used to fix DNA double-strand breaks.

Mad2 Overexpression Uncovers a Critical Role for TRIP13 in Mitotic Exit

Author(s): Daniel Henry Marks, Rozario Thomas, Yvette Chin, Riddhi Shah, Christine Khoo, Robert Benezra

The mitotic checkpoint ensures proper segregation of chromosomes by delaying anaphase until all kinetochores are bound to microtubules. This inhibitory signal is composed of a complex containing Mad2, which inhibits anaphase progression. The complex can be disassembled by p31comet and TRIP13; however, TRIP13 knockdown has been shown to cause only a mild mitotic delay. Overexpression of checkpoint genes, as well as TRIP13, is correlated with chromosomal instability (CIN) in cancer, but the initial effects of Mad2 overexpression are prolonged mitosis and decreased proliferation. Here, we show that TRIP13 overexpression significantly reduced, and TRIP13 reduction significantly exacerbated, the mitotic delay associated with Mad2 overexpression, but not that induced by microtubule depolymerization. The combination of Mad2 overexpression and TRIP13 loss reduced the ability of checkpoint complexes to disassemble and significantly inhibited the proliferation of cells in culture and tumor xenografts. These results identify an unexpected dependency on TRIP13 in cells overexpressing Mad2.

TRIP13 is a putative mitotic checkpoint silencing protein. However, depletion of TRIP13 causes only mild mitotic exit phenotypes. Marks et al. find that TRIP13 becomes critical for mitotic exit in Mad2-overexpressing cells. Both proteins are co-overexpressed in cancer, and TRIP13 may be a therapeutic target in Mad2-overexpressing tumors. 
from #AlexandrosSfakianakis via Alexandros G.Sfakianakis on Inoreader http://ift.tt/2rD5FlS 2-HG Inhibits Necroptosis by Stimulating DNMT1-Dependent Hypermethylation of the RIP3 Promoter Author(s): Zhentao Yang, Bin Jiang, Yan Wang, Hengxiao Ni, Jia Zhang, Jinmei Xia, Minggang Shi, Li-Man Hung, Jingsong Ruan, Tak Wah Mak, Qinxi Li, Jiahuai Han 2-hydroxyglutarate (2-HG)-mediated inhibition of TET2 activity influences DNA hypermethylation in cells harboring mutations of isocitrate dehydrogenases 1 and 2 (IDH1/2). Here, we show that 2-HG also regulates DNA methylation mediated by DNA methyltransferase 1 (DNMT1). DNMT1-dependent hypermethylation of the RIP3 promoter occurred in both IDH1 R132Q knockin mutant mouse embryonic fibroblasts (MEFs) and 2-HG-treated wild-type (WT) MEFs. We found that 2-HG bound to DNMT1 and stimulated its association with the RIP3 promoter, inducing hypermethylation that reduces RIP3 protein and consequently impairs RIP3-dependent necroptosis. In human glioma samples, RIP3 protein levels correlated negatively with IDH1 R132H levels. Furthermore, ectopic expression of RIP3 in transformed IDH1-mutated MEFs inhibited the growth of tumors derived from these cells following transplantation into nude mice. Thus, our research sheds light on a mechanism of 2-HG-induced DNA hypermethylation and suggests that impaired necroptosis contributes to the tumorigenesis driven by IDH1/2 mutations. Yang et al. report that the oncometabolite 2-HG produced by tumor-associated IDH1 mutation physically binds to DNMT1 and stimulates its association with the RIP3 promoter, inducing hypermethylation that reduces RIP3 protein and consequently impairs RIP3-dependent necroptosis. Loss of RIP3-mediated necroptosis contributes to tumorigenesis driven by 2-HG. from #AlexandrosSfakianakis via Alexandros G.Sfakianakis on Inoreader http://ift.tt/2rCUg5M Cancer-Associated IDH1 Promotes Growth and Resistance to Targeted Therapies in the Absence of Mutation Author(s): Andrea E.
Calvert, Alexandra Chalastanis, Yongfei Wu, Lisa A. Hurley, Fotini M. Kouri, Yingtao Bi, Maureen Kachman, Jasmine L. May, Elizabeth Bartom, Youjia Hua, Rama K. Mishra, Gary E. Schiltz, Oleksii Dubrovskyi, Andrew P. Mazar, Marcus E. Peter, Hongwu Zheng, C. David James, Charles F. Burant, Navdeep S. Chandel, Ramana V. Davuluri, Craig Horbinski, Alexander H. Stegh Oncogenic mutations in two isocitrate dehydrogenase (IDH)-encoding genes (IDH1 and IDH2) have been identified in acute myelogenous leukemia, low-grade glioma, and secondary glioblastoma (GBM). Our in silico and wet-bench analyses indicate that non-mutated IDH1 mRNA and protein are commonly overexpressed in primary GBMs. We show that genetic and pharmacologic inactivation of IDH1 decreases GBM cell growth, promotes a more differentiated tumor cell state, increases apoptosis in response to targeted therapies, and prolongs the survival of animal subjects bearing patient-derived xenografts (PDXs). On a molecular level, diminished IDH1 activity results in reduced α-ketoglutarate (αKG) and NADPH production, paralleled by deficient carbon flux from glucose or acetate into lipids, exhaustion of reduced glutathione, increased levels of reactive oxygen species (ROS), and enhanced histone methylation and differentiation marker expression. These findings suggest that IDH1 upregulation represents a common metabolic adaptation by GBMs to support macromolecular synthesis, aggressive growth, and therapy resistance. Calvert et al. demonstrate that wild-type IDH1 is overexpressed in glioblastoma and that genetic or pharmacological suppression of IDH1 activity reduces tumor cell growth through effects on lipid production, redox homeostasis, and the regulation of cellular differentiation.
from #AlexandrosSfakianakis via Alexandros G.Sfakianakis on Inoreader http://ift.tt/2rCV36r A TLR3-Specific Adjuvant Relieves Innate Resistance to PD-L1 Blockade without Cytokine Toxicity in Tumor Vaccine Immunotherapy Author(s): Yohei Takeda, Keisuke Kataoka, Junya Yamagishi, Seishi Ogawa, Tsukasa Seya, Misako Matsumoto Cancer patients whose tumors are unresponsive to anti-programmed cell death 1 (PD-1)/PD-ligand 1 (PD-L1) therapy may benefit from advanced immunotherapy. Double-stranded RNA triggers dendritic cell (DC) maturation to cross-prime antigen-specific cytotoxic T lymphocytes (CTLs) via Toll-like receptor 3 (TLR3). The TLR3-specific RNA agonist, ARNAX, can induce anti-tumor CTLs without systemic cytokine/interferon (IFN) production. Here, we have developed a safe vaccine adjuvant for cancer that effectively implements anti-PD-L1 therapy. Co-administration of ARNAX with a tumor-associated antigen facilitated tumor regression in mouse models, and in combination with anti-PD-L1 antibody, activated tumor-specific CTLs in lymphoid tissues, enhanced CTL infiltration, and overcame anti-PD-1 resistance without cytokinemia. The TLR3-TICAM-1-interferon regulatory factor (IRF)3-IFN-β axis in DCs exclusively participated in CD8+ T cell cross-priming. ARNAX therapy established Th1 immunity in the tumor microenvironment, upregulating genes involved in DC/T cell/natural killer (NK) cell recruitment and functionality. Human ex vivo studies disclosed that ARNAX plus antigen induced antigen-specific CTL priming and proliferation in peripheral blood mononuclear cells (PBMCs), supporting the feasibility of ARNAX for potentiating anti-PD-1/PD-L1 therapy in human vaccine immunotherapy. PD-1 blockade benefits a small proportion of cancer patients with pre-existing anti-tumor CTLs. Takeda et al. show that the TLR3-specific ligand, ARNAX, and tumor-associated antigens (TAAs) induce anti-tumor CTLs, establish Th1-type anti-tumor immunity, and lead to tumor regression without inflammation.
Combination therapy using ARNAX/TAA and anti-PD-L1 Ab overcomes PD-1 blockade unresponsiveness. from #AlexandrosSfakianakis via Alexandros G.Sfakianakis on Inoreader http://ift.tt/2rDwN46 Genome-wide Analysis of STAT3-Mediated Transcription during Early Human Th17 Cell Differentiation Author(s): Subhash K. Tripathi, Zhi Chen, Antti Larjo, Kartiek Kanduri, Kari Nousiainen, Tarmo Äijo, Isis Ricaño-Ponce, Barbara Hrdlickova, Soile Tuomela, Essi Laajala, Verna Salo, Vinod Kumar, Cisca Wijmenga, Harri Lähdesmäki, Riitta Lahesmaa The development of therapeutic strategies to combat immune-associated diseases requires the molecular mechanisms of human Th17 cell differentiation to be fully identified and understood. To investigate transcriptional control of Th17 cell differentiation, we used primary human CD4+ T cells in small interfering RNA (siRNA)-mediated gene silencing and chromatin immunoprecipitation followed by massively parallel sequencing (ChIP-seq) to identify both the early direct and indirect targets of STAT3. The integrated dataset presented in this study confirms that STAT3 is critical for transcriptional regulation of early human Th17 cell differentiation. Additionally, we found that a number of SNPs from loci associated with immune-mediated disorders were located at sites where STAT3 binds to induce Th17 cell specification. Importantly, introduction of such SNPs alters STAT3 binding in DNA affinity precipitation assays. Overall, our study provides important insights for modulating Th17-mediated pathogenic immune responses in humans. Tripathi et al. show that STAT3 is critical for transcriptional regulation of early human Th17 cell differentiation. A number of SNPs from loci associated with immune-mediated disorders occur at STAT3-binding sites. Introduction of such SNPs alters STAT3 binding in DNA affinity precipitation assays.
from #AlexandrosSfakianakis via Alexandros G.Sfakianakis on Inoreader http://ift.tt/2rCZka9 Specification and Diversification of Pericytes and Smooth Muscle Cells from Mesenchymoangioblasts Author(s): Akhilesh Kumar, Saritha Sandra D'Souza, Oleg V. Moskvin, Huishi Toh, Bowen Wang, Jue Zhang, Scott Swanson, Lian-Wang Guo, James A. Thomson, Igor I. Slukvin Elucidating the pathways that lead to vasculogenic cells, and being able to identify their progenitors and lineage-restricted cells, is critical to the establishment of human pluripotent stem cell (hPSC) models for vascular diseases and development of vascular therapies. Here, we find that mesoderm-derived pericytes (PCs) and smooth muscle cells (SMCs) originate from a clonal mesenchymal progenitor, the mesenchymoangioblast (MB). In clonogenic cultures, MBs differentiate into primitive PDGFRβ+CD271+CD73− mesenchymal progenitors, which give rise to proliferative PCs, SMCs, and mesenchymal stem/stromal cells. MB-derived PCs can be further specified to CD274+ capillary and DLK1+ arteriolar PCs with a proinflammatory and contractile phenotype, respectively. SMC maturation was induced using a MEK inhibitor. Establishing the vasculogenic lineage tree, along with identification of stage- and lineage-specific markers, provides a platform for interrogating the molecular mechanisms that regulate vasculogenic cell specification and diversification and for manufacturing well-defined mural cell populations for vascular engineering and cellular therapies from hPSCs. Kumar et al. find that mesodermal pericytes and smooth muscle cells in human pluripotent stem cell cultures originate from a common endothelial and mesenchymal cell precursor, the mesenchymoangioblast. They show how different lineages of mural cells are specified from mesenchymoangioblasts and define stage- and lineage-specific markers for vasculogenic cells.
from #AlexandrosSfakianakis via Alexandros G.Sfakianakis on Inoreader http://ift.tt/2rDhl82 Structural Basis of the Human Endoglin-BMP9 Interaction: Insights into BMP Signaling and HHT1 Author(s): Takako Saito, Marcel Bokhove, Romina Croci, Sara Zamora-Caballero, Ling Han, Michelle Letarte, Daniele de Sanctis, Luca Jovine Endoglin (ENG)/CD105 is an essential endothelial cell co-receptor of the transforming growth factor β (TGF-β) superfamily, mutated in hereditary hemorrhagic telangiectasia type 1 (HHT1) and involved in tumor angiogenesis and preeclampsia. Here, we present crystal structures of the ectodomain of human ENG and its complex with the ligand bone morphogenetic protein 9 (BMP9). BMP9 interacts with a hydrophobic surface of the N-terminal orphan domain of ENG, which adopts a new duplicated fold generated by circular permutation. The interface involves residues mutated in HHT1 and overlaps with the epitope of the tumor-suppressing anti-ENG monoclonal antibody TRC105. The structure of the C-terminal zona pellucida module suggests how two copies of ENG embrace homodimeric BMP9, whose binding is compatible with ligand recognition by type I but not type II receptors. These findings shed light on the molecular basis of the BMP signaling cascade, with implications for future therapeutic interventions in this fundamental pathway. Endoglin (ENG)/CD105, a key player in angiogenesis and vascular homeostasis, is mutated in the genetic disorder HHT1 and implicated in tumor angiogenesis and preeclampsia. Saito et al. determine structures of human ENG alone and in complex with the physiological ligand BMP9, shedding light onto the molecular basis of BMP signaling. from #AlexandrosSfakianakis via Alexandros G.Sfakianakis on Inoreader http://ift.tt/2rCUfie Sequential Steps of CRAC Channel Activation Author(s): Raz Palty, Zhu Fu, Ehud Y.
Isacoff Interaction between the endoplasmic reticulum protein STIM1 and the plasma membrane channel ORAI1 generates calcium signals that are central for diverse cellular functions. How STIM1 binds and activates ORAI1 remains poorly understood. Using electrophysiological, optical, and biochemical techniques, we examined the effects of mutations in the STIM1-ORAI1 activating region (SOAR) of STIM1. We find that SOAR mutants that are deficient in binding to resting ORAI1 channels are able to bind to and boost activation of partially activated ORAI1 channels. We further show that the STIM1 binding regions on ORAI1 undergo structural rearrangement during channel activation. The results suggest that activation of ORAI1 by SOAR occurs in multiple steps. In the first step, SOAR binds to ORAI1, partially activates the channel, and induces a rearrangement in the SOAR-binding site of ORAI1. That rearrangement of ORAI1 then permits sequential steps of SOAR binding, via distinct molecular interactions, to fully activate the channel. How the interaction between STIM1 and the ORAI1 CRAC channel activates ORAI1 is poorly understood. Palty et al. identify mutations in STIM1 that disrupt partial activation but support the transition from partial to full activation. The study reveals the existence of sequential modes of STIM1-ORAI1 interaction. from #AlexandrosSfakianakis via Alexandros G.Sfakianakis on Inoreader http://ift.tt/2rDhh8i Effect of Human Genetic Variability on Gene Expression in Dorsal Root Ganglia and Association with Pain Phenotypes Author(s): Marc Parisien, Samar Khoury, Anne-Julie Chabot-Doré, Susana G. Sotocinal, Gary D. Slade, Shad B. Smith, Roger B. Fillingim, Richard Ohrbach, Joel D. Greenspan, William Maixner, Jeffrey S. Mogil, Inna Belfer, Luda Diatchenko Dorsal root ganglia (DRG) relay sensory information to the brain, giving rise to the perception of pain, disorders of which are prevalent and burdensome. 
Here, we mapped expression quantitative trait loci (eQTLs) in a collection of human DRGs. DRG eQTLs were enriched within untranslated regions of coding genes of low abundance, with some overlapping with other brain regions and blood cell cis-eQTLs. We confirm functionality of identified eQTLs through their significant enrichment within open chromatin and highly deleterious SNPs, particularly at the exon level, suggesting substantial contribution of eQTLs to alternative splicing regulation. We illustrate pain-related genetic association results explained by DRG eQTLs, with the strongest evidence for contribution of the human leukocyte antigen (HLA) locus, confirmed using a mouse inflammatory pain model. Finally, we show that DRG eQTLs are found among hits in numerous genome-wide association studies, suggesting that this dataset will help address pain components of non-pain disorders. Parisien et al. present a database of expression quantitative trait loci in human dorsal root ganglia. The dataset represents a tool for interpreting human GWAS with sensory components. Its analysis demonstrates contributions of the HLA locus to pain phenotypes. from #AlexandrosSfakianakis via Alexandros G.Sfakianakis on Inoreader http://ift.tt/2rDhgBg Quantitative Cell Cycle Analysis Based on an Endogenous All-in-One Reporter for Cell Tracking and Classification Author(s): Thomas Zerjatke, Igor A. Gak, Dilyana Kirova, Markus Fuhrmann, Katrin Daniel, Magdalena Gonciarz, Doris Müller, Ingmar Glauche, Jörg Mansfeld Cell cycle kinetics are crucial to cell fate decisions. Although live imaging has provided extensive insights into this relationship at the single-cell level, the limited number of fluorescent markers that can be used in a single experiment has hindered efforts to link the dynamics of individual proteins responsible for decision making directly to cell cycle progression. 
Here, we present fluorescently tagged endogenous proliferating cell nuclear antigen (PCNA) as an all-in-one cell cycle reporter that allows simultaneous analysis of cell cycle progression, including the transition into quiescence, and the dynamics of individual fate determinants. We also provide an image analysis pipeline for automated segmentation, tracking, and classification of all cell cycle phases. Combining the all-in-one reporter with labeled endogenous cyclin D1 and p21 as prime examples of cell-cycle-regulated fate determinants, we show how cell cycle and quantitative protein dynamics can be simultaneously extracted to gain insights into G1 phase regulation and responses to perturbations. Zerjatke et al. present endogenously tagged PCNA as an all-in-one cell cycle reporter for living cells to classify all cell cycle phases and quiescence using a single fluorescent channel. Visualizing endogenous cyclin D1 in living cells, they show that cyclin D1 maintains G1 phase and prevents the transition into quiescence. from #AlexandrosSfakianakis via Alexandros G.Sfakianakis on Inoreader http://ift.tt/2rD4nHH Smoking not only burns lungs, it affects your ENT health too - Times of India "Tobacco, apart from being a leading causative factor for cancer, also causes multiple ENT-related diseases. The lining of the nose and sinuses is similar to the lining in the lungs. There are cilia, or tiny hair-like structures, that clean the nose ... from #AlexandrosSfakianakis via Alexandros G.Sfakianakis on Inoreader http://ift.tt/2skr6oa Development of a 12-item short version of the HIV stigma scale Valid and reliable instruments for the measurement of enacted, anticipated and internalised stigma in people living with HIV are crucial for mapping trends in the prevalence of HIV-related stigma and tracking ...
from #AlexandrosSfakianakis via Alexandros G.Sfakianakis on Inoreader http://ift.tt/2skmDlb "Fibromyalgia and quality of life: mapping the revised fibromyalgia impact questionnaire to the preference-based instruments" The revised version of the Fibromyalgia Impact Questionnaire (FIQR) is one of the most widely used specific questionnaires in FM studies. However, this questionnaire does not allow calculation of QALYs as it i... from #AlexandrosSfakianakis via Alexandros G.Sfakianakis on Inoreader http://ift.tt/2rm4s1g Cross-cultural measurement invariance in the satisfaction with food-related life scale in older adults from two developing countries Nutrition is one of the major determinants of successful aging. The Satisfaction with Food-related Life (SWFL) scale measures a person's overall assessment regarding their food and eating habits. The SWFL scal... from #AlexandrosSfakianakis via Alexandros G.Sfakianakis on Inoreader http://ift.tt/2skmEWh Targeting of super-enhancers and mutant BRAF can suppress growth of BRAF-mutant colon cancer cells via repression of MAPK signaling pathway Bromodomain and extra-terminal (BET) inhibitors suppress super-enhancers and show cytotoxicity against multiple types of tumors. However, early clinical trials with BET inhibitors showed severe hematopoietic toxicities, highlighting the need to identify sensitive tumors and rational combination strategies to enhance their therapeutic potential. Here, we identified colon cancer-specific super-enhancers that were associated with multiple oncogenic pathways, including the mitogen-activated protein kinase (MAPK) signaling pathway. from #AlexandrosSfakianakis via Alexandros G.Sfakianakis on Inoreader http://ift.tt/2rCNkph BZML, a novel colchicine binding site inhibitor, overcomes multidrug resistance in A549/Taxol cells by inhibiting P-gp function and inducing mitotic catastrophe Multidrug resistance (MDR) interferes with the efficiency of chemotherapy.
Therefore, developing novel anti-cancer agents that can overcome MDR is necessary. Here, we screened a series of colchicine binding site inhibitors (CBSIs) and found that 5-(3, 4, 5-trimethoxybenzoyl)-4-methyl-2-(p-tolyl) imidazol (BZML) displayed potent cytotoxic activity against both A549 and A549/Taxol cells. We further explored the underlying mechanisms and found that BZML caused mitotic arrest by inhibiting tubulin polymerization in A549 and A549/Taxol cells. from #AlexandrosSfakianakis via Alexandros G.Sfakianakis on Inoreader http://ift.tt/2sc90FK Response to "Circular RNA profile identifies circPVT1 as a proliferative factor and prognostic marker in gastric cancer," Cancer Lett. 2017 Mar 1; 388(2017): 208-219 We greatly appreciate the comment on our manuscript [1]. However, there are some misunderstandings in this comment. Firstly, the authors concluded that circPVT1 played a cancer suppressor role in vivo (from the clinical correlation), while it was shown to act as an oncogene in vitro (from the functional assays). However, the clinical correlation (biomarker) may have no functional implication. We were also very curious about the contradiction between clinical implication and cellular function of circPVT1 in GC. from #AlexandrosSfakianakis via Alexandros G.Sfakianakis on Inoreader http://ift.tt/2sc4kQ2 A combinatorial strategy using YAP and pan-RAF inhibitors for treating KRAS-mutant pancreatic cancer KRAS mutation is the most common genetic event in pancreatic cancer. Whereas KRAS itself has proven difficult to inhibit, agents that target key downstream signals of KRAS, such as RAF, are possibly effective for pancreatic cancer treatment. Because selective BRAF inhibitors paradoxically induce downstream signaling activation, a pan-RAF inhibitor, LY3009120, is a better alternative for KRAS-mutant tumor treatment. Here we explored a new combinatorial strategy using a YAP inhibitor and LY3009120 in pancreatic cancer treatment.
from #AlexandrosSfakianakis via Alexandros G.Sfakianakis on Inoreader http://ift.tt/2rCEx6J Tamoxifen acceptance and adherence among patients with ductal carcinoma in situ (DCIS) treated in a multidisciplinary setting. Tamoxifen and other endocrine agents have proven benefits for women with ductal carcinoma in situ (DCIS), but low patient acceptance is widely reported. We examined factors associated with tamoxifen acceptance and adherence among DCIS patients who received a recommendation for therapy in a multidisciplinary setting. Using our institutional database, we identified women diagnosed with DCIS (1998-2009) who were offered tamoxifen. We recorded data on demographics, tumor and therapy variables, tamoxifen acceptance, and adherence to therapy for > 4 years. Univariable and multivariable analyses were conducted using logistic regression to identify factors specific to each group that were related to acceptance and adherence. 555 eligible women were identified, of whom 369 were offered tamoxifen; 298 (81%) accepted, among whom 214 (72%) were adherent, 59/298 (20%) were nonadherent, and for 25 (8%) adherence was undetermined. After stepwise elimination in adjusted logistic regression models, acceptance of breast radiotherapy was associated with acceptance of tamoxifen (odds ratio [OR] 2.22, 95% confidence interval [CI] 1.26-3.90, p<0.01), as was a medical oncology consultation (OR 1.76, 95% CI 0.99-3.15, p=0.05). Insured patients were more likely to adhere to tamoxifen (OR 6.03, 95% CI 2.60-13.98, p<0.01). The majority of nonadherent women (n=38/56, 68%) discontinued the drug during the first year of treatment, with 48 (86%) citing adverse effect(s) as the reason. In a multidisciplinary, tertiary care setting, we observed relatively high rates of acceptance of and adherence to tamoxifen. Acceptance of tamoxifen and radiotherapy were associated, and adherence was influenced by insurance status.
from #AlexandrosSfakianakis via Alexandros G.Sfakianakis on Inoreader http://ift.tt/2qEhraG Development of a cancer risk prediction tool for use in the UK Primary Care and community settings. Several multivariable risk prediction models have been developed to assess an individual's risk of developing specific cancers. Such models can be used in a variety of settings for prevention, screening and guiding investigations and treatments. Models aimed at predicting future disease risk that contain lifestyle factors may be of particular use for targeting health promotion activities at an individual level. This type of cancer risk prediction is not yet available in the UK. We have adopted the approach used by the well-established U.S.-derived "YourCancerRisk" model for use in the UK population, which allows users to quantify their individual risk of developing individual cancers relative to the population average risk. The UK version of "YourCancerRisk" computes 10-year cancer risk estimates for 11 cancers utilising UK figures for prevalence of risk factors and cancer incidence. Since the prevalence of risk factors and the incidence rates for cancer are different between the US and the UK populations, this UK model provides more accurate estimates of risks for a UK population. Using an example of breast cancer and data from the UK Biobank cohort, we demonstrate that the individual risk factor estimates are similar for the US and UK populations. Assessment of the performance and validation of the multivariate model predictions based on a binary score confirms the model's applicability. The model can be used to estimate absolute and relative cancer risk for use in Primary Care and community settings and is being used in the community to guide lifestyle change.
from #AlexandrosSfakianakis via Alexandros G.Sfakianakis on Inoreader http://ift.tt/2r9yZPk The impact of a multifaceted intervention including a sepsis electronic alert system and sepsis response team on the outcomes of patients with sepsis and septic shock Compliance with the clinical practice guidelines of sepsis management has been low. The objective of our study was to describe the results of implementing a multifaceted intervention including an electronic al... from #AlexandrosSfakianakis via Alexandros G.Sfakianakis on Inoreader http://ift.tt/2rCYtq2 Kr-POK (ZBTB7c) regulates cancer cell proliferation through glutamine metabolism Publication date: Available online 30 May 2017 Source: Biochimica et Biophysica Acta (BBA) - Gene Regulatory Mechanisms Author(s): Man-Wook Hur, Jae-Hyeon Yoon, Min-Young Kim, Hyeonseok Ko, Bu-Nam Jeon Kr-POK (ZBTB7c) is a kidney cancer-related POK transcription factor that not only represses transcription of CDKN1A but also increases expression of FASN. However, precisely how Kr-POK affects cell metabolism by controlling gene expression in response to an energy source in rapidly proliferating cells remains unknown. In this study, we characterized the molecular and functional features of Kr-POK in the context of tumor growth and glutamine metabolism. We found that cells expressing Kr-POK shRNA exhibited more severe cell death than control cells in glucose-deprived medium, and that knockdown of Kr-POK decreased glutamine uptake. Glutamine is critical for tumor cell proliferation. Glutaminase (GLS1), which is activated by p-STAT1, catalyzes the initial reaction in the pathway of glutaminolysis. Kr-POK interacts with PIAS1 to disrupt the interaction between PIAS1 and p-STAT1, and free p-STAT1 can activate GLS1 transcription through an interaction with p300. Kr-POK can also be sumoylated by PIAS1, facilitating Kr-POK degradation by the ubiquitin-mediated proteasomal pathway.
Finally, we showed that repression of Kr-POK inhibited tumor growth in vivo in a xenograft model by repressing GLS1 expression. Taken together, our data reveal that Kr-POK activates GLS1 transcription and increases glutamine uptake to support rapid cancer cell proliferation. from #AlexandrosSfakianakis via Alexandros G.Sfakianakis on Inoreader http://ift.tt/2rSA7rY Acetylation of MKL1 by PCAF regulates pro-inflammatory transcription Author(s): Liming Yu, Zilong Li, Mingming Fang, Yong Xu Inflammation is considered a fundamental host defense mechanism and, when aberrantly activated, contributes to a host of human diseases. Previously we have reported that the transcriptional regulator megakaryocytic leukemia 1 (MKL1) plays a role in programming the cellular inflammatory response by modulating NF-κB activity. Here we report that MKL1 was acetylated in vivo and that pro-inflammatory stimuli (TNF-α and LPS) augmented MKL1 acetylation, accompanied by increased MKL1 binding to NF-κB target promoters. Further analysis revealed that the lysine acetyltransferase PCAF mediated MKL1 acetylation: TNF-α and LPS promoted the interaction between MKL1 and PCAF, while depletion of PCAF abrogated the induction of MKL1 acetylation by TNF-α and LPS. Acetylation of MKL1 was necessary for MKL1 to activate the transcription of pro-inflammatory genes, because mutation of four conserved lysine residues in MKL1 attenuated its capacity as a trans-activator of NF-κB target genes. Mechanistically, MKL1 acetylation served to promote MKL1 nuclear enrichment, to enhance the MKL1-NF-κB interaction, and to stabilize the binding of MKL1 on target promoters. In conclusion, our data unveil an important pathway that contributes to the transcriptional regulation of the inflammatory response.
from #AlexandrosSfakianakis via Alexandros G.Sfakianakis on Inoreader http://ift.tt/2rTgqAq Woman credits faith and exercise with helping her beat cancer - Gaffney Ledger (subscription) Jeanne Hames has beaten death twice after the nonsmoker was diagnosed with tongue and lung cancer over the past five years. She credits her faith and exercise with helping her in a successful battle against the disease. from #AlexandrosSfakianakis via Alexandros G.Sfakianakis on Inoreader http://ift.tt/2sklpGX A special bra to detect breast cancer - Emilia-Romagna Mamma (Comunicati Stampa) (Blog) With his Higia Technologies, Julian wants to help women prevent breast cancer, and he does so with a bra, named "Eva", that can detect the symptoms of this type of tumor. ... Docetaxel, which can be used by ... An 18-year-old boy has invented a bra that detects breast cancer (The indy100); Teenager invents a bra that could detect BREAST CANCER after watching his mum's battle with disease (The Sun); 18-year-old Mexican student designs bra that can detect breast cancer (The Independent) from #AlexandrosSfakianakis via Alexandros G.Sfakianakis on Inoreader http://ift.tt/2rm5HxB Nicotine is more addictive and deadlier than cocaine - Daily News & Analysis Chewing tobacco in forms such as paan, sopari, etc. causes cancers of the tongue, cheek, various parts of the mouth and the throat. These oral cancers make up almost 25 per cent of all the male cancers in India. Lung cancers make up an additional 8 per ...
from #AlexandrosSfakianakis via Alexandros G.Sfakianakis on Inoreader http://ift.tt/2rmjxjA In vitro quality evaluation of leading brands of ciprofloxacin tablets available in Bangladesh Ciprofloxacin is a broad-spectrum antibiotic that acts against a number of bacterial infections. The study was carried out to examine the in vitro quality control tests for ten leading brands of ciprofloxacin ... from #AlexandrosSfakianakis via Alexandros G.Sfakianakis on Inoreader http://ift.tt/2rm2WMI Molecular dynamics simulations reveal the conformational dynamics of Arabidopsis thaliana BRI1 and BAK1 receptor-like kinases [Protein Structure and Folding] The structural motifs responsible for activation and regulation of eukaryotic protein kinases in animals have been studied extensively in recent years, and a coherent picture of their activation mechanisms has begun to emerge. On the other hand, non-animal eukaryotic protein kinases are not as well understood from a structural perspective, representing a large knowledge gap. To this end, we investigated the conformational dynamics of two key Arabidopsis thaliana receptor-like kinases, brassinosteroid insensitive 1 (BRI1) and BRI1-associated kinase 1 (BAK1), through extensive molecular dynamics (MD) simulations of their fully phosphorylated kinase domains. MD simulations calculate the motion of each atom in a protein based on classical approximations of interatomic forces, giving researchers insight into protein function at unparalleled spatial and temporal resolutions. We found that in an otherwise "active" BAK1, the αC helix is highly disordered, a hallmark of deactivation, while the BRI1 αC helix is moderately disordered and displays swinging behavior similar to numerous animal kinases. An analysis of all known sequences in the A. thaliana kinome found that αC helix disorder may be a common feature of plant kinases.
http://ift.tt/2rm7Yc9

The autism-linked UBE3A T485A mutant E3 ubiquitin ligase activates the Wnt/β-catenin pathway by inhibiting the proteasome [Genomics and Proteomics]
UBE3A is a HECT domain E3 ubiquitin ligase whose dysfunction is linked to autism, Angelman syndrome, and cancer. Recently, we characterized a de novo autism-linked UBE3A mutant (UBE3AT485A) that disrupts phosphorylation control of UBE3A activity. Through quantitative proteomics and reporter assays, we found that the UBE3AT485A protein ubiquitinates multiple proteasome subunits, reduces proteasome subunit abundance and activity, stabilizes nuclear β-catenin, and stimulates canonical Wnt signaling more effectively than wild-type UBE3A. We also found that UBE3AT485A activates Wnt signaling to a greater extent in cells with low levels of ongoing Wnt signaling, suggesting that cells with low basal Wnt activity are particularly vulnerable to the UBE3AT485A mutation. Ligase-dead UBE3A did not stimulate Wnt pathway activation. Overexpression of several proteasome subunits reversed the effect of UBE3AT485A on Wnt signaling. We also observed that subunits that interact with UBE3A and affect Wnt signaling are located along one side of the 19S regulatory particle, indicating a previously unrecognized spatial organization of the proteasome. Altogether, our findings indicate that UBE3A regulates Wnt signaling in a cell context-dependent manner and that an autism-linked mutation exacerbates these signaling effects. Our study has broad implications for human disorders associated with UBE3A gain or loss of function, and suggests that dysfunctional UBE3A might affect additional proteins and pathways that are sensitive to proteasome activity.
http://ift.tt/2skqFKE

Hepatitis C virus induces a pre-diabetic state by directly impairing hepatic glucose metabolism in mice [Metabolism]
Virus-related type 2 diabetes is commonly observed in individuals infected with the hepatitis C virus (HCV). However, the underlying molecular mechanisms remain unknown. Our aim was to unravel these mechanisms using FL-N/35 transgenic mice expressing the full HCV ORF. We observed that these mice displayed glucose intolerance and insulin resistance. We also found that Glut-2 membrane expression was reduced in FL-N/35 mice and that hepatocyte glucose uptake was perturbed, partly accounting for the HCV-induced glucose intolerance in these mice. Early steps of the hepatic insulin signaling pathway, from IRS2 to PDK1 phosphorylation, were constitutively impaired in FL-N/35 primary hepatocytes via deregulation of TNFα/SOCS3. Higher hepatic glucose production was observed in the HCV mice, despite higher fasting insulinemia, concomitantly with decreased expression of hepatic gluconeogenic genes. Akt kinase activity was higher in HCV mice than in WT mice, but Akt-dependent phosphorylation of the forkhead transcription factor FoxO1 at serine 256, which triggers its nuclear exclusion, was lower in HCV mouse livers. These findings indicate an uncoupling of the canonical Akt/FoxO1 pathway in HCV protein-expressing hepatocytes. Thus, the expression of HCV proteins in the liver is sufficient to induce insulin resistance by impairing insulin signaling and glucose uptake. In conclusion, we observed a complete set of events leading to a pre-diabetic state in HCV-transgenic mice, providing a valuable mechanistic explanation for HCV-induced diabetes in humans.
http://ift.tt/2rmlCvE

Uridine monophosphate synthetase enables eukaryotic de novo NAD+ biosynthesis from quinolinic acid [Bioenergetics]
NAD+ biosynthesis is an attractive and promising therapeutic target for influencing healthspan and obesity-related phenotypes as well as tumor growth. Full and effective use of this target for therapeutic benefit requires a complete understanding of NAD+ biosynthetic pathways. Here we report a previously unrecognized role for a conserved phosphoribosyltransferase in NAD+ biosynthesis. Because a required quinolinic acid phosphoribosyltransferase (QPRTase) is not encoded in its genome, Caenorhabditis elegans is reported to lack a de novo NAD+ biosynthetic pathway. However, all the genes of the kynurenine pathway required for quinolinic acid (QA) production from tryptophan are present. Thus, we investigated the presence of de novo NAD+ biosynthesis in this organism. By combining isotope-tracing and genetic experiments, we have demonstrated the presence of an intact de novo biosynthesis pathway for NAD+ from tryptophan via QA, highlighting the functional conservation of this important biosynthetic activity. Supplementation with kynurenine pathway intermediates also boosted NAD+ levels and partially reversed NAD+-dependent phenotypes caused by mutation of pnc-1, which encodes a nicotinamidase required for NAD+ salvage biosynthesis, demonstrating the contribution of de novo synthesis to NAD+ homeostasis. By investigating candidate phosphoribosyltransferase genes in the genome, we determined that the conserved uridine monophosphate phosphoribosyltransferase (UMPS), which acts in pyrimidine biosynthesis, is required for NAD+ biosynthesis in place of the missing QPRTase. We suggest that similar underground metabolic activity of UMPS may function in other organisms.
This mechanism of NAD+ biosynthesis creates novel possibilities for manipulating NAD+ biosynthetic pathways, which will be key for future therapeutics.
http://ift.tt/2skhJof

Crystal structure of the thioesterification conformation of Bacillus subtilis o-succinylbenzoyl-CoA synthetase reveals a distinct substrate binding mode [Protein Structure and Folding]
o-Succinylbenzoyl-CoA (OSB-CoA) synthetase (MenE) is an essential enzyme in bacterial vitamin K biosynthesis and an important target in the development of new antibiotics. It is a member of the adenylating enzyme (ANL) family, whose members reconfigure their active site into two different active conformations, one for the adenylation half-reaction and the other for the thioesterification half-reaction, in a domain-alternation catalytic mechanism. Although several aspects of the adenylating mechanism in MenE have recently been uncovered, its thioesterification conformation remains elusive. Here, using a catalytically competent Bacillus subtilis mutant protein complexed with an OSB-CoA analog, we determined MenE structures at 1.76 and 1.90 Å resolution in a thioester-forming conformation. By comparison with the adenylation conformation, we found that the MenE C-domain rotates around the Ser384 hinge by 139.5° during domain-alternation catalysis. The structures also revealed a thioesterification active site specifically conserved among MenE orthologues and a substrate-binding mode distinct from those of many other acyl/aryl-CoA synthetases. Of note, using site-directed mutagenesis, we identified several residues that specifically contribute to the thioesterification half-reaction without affecting the adenylation half-reaction. Moreover, we observed a substantial movement of the activated succinyl group in the thioesterification half-reaction.
These findings provide new insights into the domain-alternation catalysis of a bacterial enzyme essential for vitamin K biosynthesis and of its adenylating homologues in the ANL enzyme family.
http://ift.tt/2rmpyN2

The higher plant plastid NAD(P)H dehydrogenase-like complex (NDH) is a high efficiency proton pump that increases ATP production by cyclic electron flow [Bioenergetics]
Cyclic electron flow around photosystem I (CEF) is critical for balancing the photosynthetic energy budget of the chloroplast by generating ATP without net production of NADPH. We demonstrate that the chloroplast NADPH dehydrogenase complex (NDH), a homolog of respiratory Complex I, pumps approximately two protons from the chloroplast stroma to the lumen per electron transferred from ferredoxin to plastoquinone, effectively increasing the efficiency of ATP production via CEF two-fold compared with CEF pathways involving non-proton-pumping plastoquinone reductases. By virtue of this proton-pumping stoichiometry, we hypothesise that NDH not only contributes efficiently to ATP production, but operates near thermodynamic reversibility, with potentially important consequences for remediating mismatches in the thylakoid energy budget.
http://ift.tt/2rmbfrJ

The UbiK protein is an accessory factor necessary for bacterial ubiquinone (UQ) biosynthesis and forms a complex with the UQ biogenesis factor UbiJ [Bioenergetics]
Ubiquinone (UQ), also referred to as coenzyme Q, is a widespread lipophilic molecule in both prokaryotes and eukaryotes, in which it primarily acts as an electron carrier. Eleven proteins are known to participate in UQ biosynthesis in Escherichia coli, and we recently demonstrated that UQ biosynthesis requires additional, nonenzymatic factors, some of which are still unknown.
Here, we report on the identification of a bacterial gene, yqiC, which is required for efficient UQ biosynthesis and which we have renamed ubiK. Using several methods, we demonstrated that the UbiK protein forms a complex with the C-terminal part of UbiJ, another UQ biogenesis factor we previously identified. We found that both proteins are likely to contribute to global UQ biosynthesis rather than to a specific biosynthetic step, since both ubiK and ubiJ mutants accumulated octaprenylphenol, an early intermediate of the UQ biosynthetic pathway. Interestingly, we found that both proteins are dispensable for UQ biosynthesis under anaerobiosis, even though they were expressed in the absence of oxygen. We also provide evidence that the UbiK-UbiJ complex interacts with palmitoleic acid, a major lipid in E. coli. Lastly, in Salmonella enterica, ubiK was required for proliferation in macrophages and virulence in mice. We conclude that although the role of the UbiK-UbiJ complex remains unknown, our results support the hypothesis that UbiK is an accessory factor of Ubi enzymes and facilitates UQ biosynthesis by acting as an assembly factor, a targeting factor, or both.
http://ift.tt/2skicqG

The p90 ribosomal S6 kinase-UBR5 pathway controls toll-like receptor signaling via miRNA-induced translational inhibition of TNF receptor-associated factor 3 [Cell Biology]
MicroRNAs (miRNAs) are small, noncoding RNAs that post-transcriptionally regulate gene expression. For example, miRNAs repress gene expression by recruiting the miRNA-induced silencing complex (miRISC), a ribonucleoprotein complex that contains miRNA-engaged Argonaute (Ago) and the scaffold protein GW182. Recently, ubiquitin protein ligase E3 component N-recognin 5 (UBR5) has been identified as a component of miRISC.
UBR5 directly interacts with GW182 proteins and participates in miRNA silencing by recruiting downstream effectors, such as the translation regulator DEAD-box helicase 6 (DDX6) and transducer of ERBB2.1/2 (Tob1/2), to the Ago-GW182 complex. However, the regulation of miRISC-associated UBR5 remains largely elusive. In the present study, we show that UBR5 down-regulates the levels of TNF receptor-associated factor 3 (TRAF3), a key component of toll-like receptor signaling, via the miRNA pathway. We further demonstrate that p90 ribosomal S6 kinase (p90RSK) is an upstream regulator of UBR5. p90RSK phosphorylates UBR5 at Thr637, Ser1227, and Ser2483, and this phosphorylation is required for the translational repression of TRAF3 mRNA. Phosphorylated UBR5 co-localized with GW182 and Ago2 in cytoplasmic speckles, suggesting that miRISC is affected by phospho-UBR5. Collectively, these results indicate that the p90RSK-UBR5 pathway stimulates miRNA-mediated translational repression of TRAF3. Our work adds another layer to the regulation of miRISC.
http://ift.tt/2rma37Q

Cryptococcus neoformans ADS lyase is an enzyme essential for virulence whose crystal structure reveals features exploitable in antifungal drug design [Metabolism]
There is significant clinical need for new antifungal agents to manage infections with pathogenic species such as Cryptococcus neoformans. Because the purine biosynthesis pathway is essential for many metabolic processes, such as synthesis of DNA and RNA and energy generation, it may represent a potential target for developing new antifungals. Within this pathway, the bifunctional enzyme adenylosuccinate (ADS) lyase plays a role in the formation of the key intermediates inosine monophosphate and AMP involved in the synthesis of ATP and GTP, prompting us to investigate ADS lyase in C. neoformans. Here, we report that ADE13 encodes ADS lyase in C. neoformans.
We found that an ade13Δ mutant is an adenine auxotroph and is unable to successfully cause infections in a murine model of virulence. Plate assays revealed that production of a number of virulence factors essential for dissemination and survival of C. neoformans in a host environment was compromised even with the addition of exogenous adenine. Purified recombinant C. neoformans ADS lyase shows catalytic activity similar to that of its human counterpart, and its crystal structure, the first fungal ADS lyase structure determined, shows a high degree of structural similarity to that of human ADS lyase. Two potentially important amino acid differences are identified in the C. neoformans crystal structure, in particular a threonine residue that may serve as an additional point of binding for a fungal enzyme-specific inhibitor. Besides serving as an antimicrobial target, C. neoformans ADS lyase inhibitors may also serve as potential therapeutics for metabolic disease; rather than disrupt ADS lyase, compounds that improve the stability of the enzyme may be used to treat ADS lyase deficiency disease.
http://ift.tt/2skkcix

Promoting Mental Health in Italian Middle and High School: A Pilot Study
Aim. In Italy, a handbook has been developed based on the principles of cooperative learning, life skills, self-effectiveness, and problem-solving at the high school level. Early studies have shown the handbook's effectiveness. It has been hypothesized that the revised handbook could be more effective in middle schools. Method. The study design is a "pre- and posttest" that compares the results obtained from 91 students of the high schools with those of the 38 students from middle schools. The assessment was made through "self-reporting" questionnaires of (a) learning skills including problem-solving and (b) perceived self-efficacy in managing emotions, dysfunctional beliefs, and unhealthy behaviours (i.e., drinking/smoking).
Results. Significant improvements were observed in both groups, with the exception of perceived self-efficacy in managing emotions. The improvement in dysfunctional beliefs and the learning of problem-solving skills were greater in middle schools. Conclusion. The results confirm the authors' hypothesis that the use of this approach is much more promising in middle school.
http://ift.tt/2qzuV8h

In Vivo 3T Magnetic Resonance Imaging Using a Biologically Specific Contrast Agent for Prostate Cancer: A Nude Mouse Model
We characterized in vivo a functional superparamagnetic iron-oxide magnetic resonance contrast agent that shortens the relaxation time in magnetic resonance imaging (MRI) of prostate cancer xenografts. The agent was developed by conjugating Molday ION™ carboxyl-6 (MIC6) with a deimmunized mouse monoclonal antibody (muJ591) targeting prostate-specific membrane antigen (PSMA). This functional contrast agent could be used as a noninvasive method to detect prostate cancer cells that are PSMA positive and to more readily differentiate them from surrounding tissues for treatment. The functional contrast agent was injected intravenously into mice, and its effect was compared to both MIC6 (without conjugated antibody) and phosphate-buffered saline (PBS) injection controls. MR imaging was performed on a clinical 3T MRI scanner using a multiecho spin echo (MESE) sequence to obtain relaxation time values. Inductively coupled plasma atomic emission spectroscopy was used to confirm an increase in elemental iron in injected mouse tumours relative to controls. Histological examination of H&E-stained tissues showed normal morphology of the tissues collected.
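The MESE acquisition described in the MRI study above estimates relaxation times by sampling the signal at several echo times. A minimal sketch (with synthetic, illustrative values, not data from the study) of how T2 can be recovered from such multi-echo data via a log-linear least-squares fit of S(TE) = S0·exp(-TE/T2):

```python
import math

def fit_t2(echo_times_ms, signals):
    """Log-linear least-squares fit of S(TE) = S0 * exp(-TE/T2).

    Taking logs gives ln S = ln S0 - TE/T2, a straight line in TE
    whose slope is -1/T2 and whose intercept is ln S0.
    """
    xs = echo_times_ms
    ys = [math.log(s) for s in signals]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return -1.0 / slope, math.exp(intercept)  # (T2, S0)

# Synthetic 8-echo decay: S0 = 1000, T2 = 80 ms (illustrative values only)
tes = [10.0 * k for k in range(1, 9)]
sig = [1000.0 * math.exp(-te / 80.0) for te in tes]
t2, s0 = fit_t2(tes, sig)
print(round(t2, 1), round(s0, 1))  # noiseless data recovers 80.0 1000.0
```

With noisy in vivo data the same fit is typically done per voxel, and weighted or nonlinear fitting is preferred at low signal-to-noise.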
http://ift.tt/2rT01vu

Fresh Snack Food Channel Evaluation Model for Integrating Customers' Perception of Transaction Costs in Taiwan
The primary purpose of this study was to explore how food dealers develop methods that facilitate transaction efficiency and how they select the optimal food channels. This study establishes a model according to the impact of transaction cost factors on consumers' decision-making regarding the purchase of fresh snack foods. Using fresh snack foods in Taiwan as an example, this study employed a fuzzy analytic network process to solve decision-making problems with multiple criteria by comparing the interaction between each transaction cost factor to obtain the factor weightings as well as the weightings of the transaction costs at each decision stage. This study found that food safety assurance and providing sufficient nutrition information were the most essential topics; thus, the optimal choice for snack food producers is to develop retail outlets. The construction process proposed in this study is innovative and operational, and the results may provide a reference for snack food dealers or micro food enterprises to assist them in developing their food channels.
http://ift.tt/2qzBnMC

Corrigendum to "A Study on the Combustion Performance of Diesel Engines with O2 and CO2 Suction"
http://ift.tt/2rSPgJK

Small Engines as Bottoming Cycle Steam Expanders for Internal Combustion Engines
Heat recovery bottoming cycles for internal combustion engines have opened new avenues for research into small steam expanders (Stobart and Weerasinghe, 2006). Dependable data for small steam expanders will allow us to predict their suitability as bottoming cycle engines and the fuel economy achieved by using them as bottoming cycles.
The present paper is based on the results of experiments carried out on small-scale Wankel and two-stroke reciprocating engines operated as air expanders and as steam expanders. The test facility developed at Sussex comprises torque, power, and speed measurements; electronic actuation of valves; and synchronized data acquisition of pressures and temperatures of the steam and the engine interior for both steam and internal combustion cycles. Results are presented for four engine modes, namely, the reciprocating engine in uniflow steam expansion mode and air expansion mode, and the rotary Wankel engine in steam expansion mode and air expansion mode. The air tests provide base data for friction and motoring effects, whereas the steam tests show how effective the engines are in this mode. Results for power, torque, and diagrams are compared to determine the change in performance from air expansion mode to steam expansion mode.
http://ift.tt/2qzjVIb

Surgical Orthodontic Treatment of a Patient Affected by Type 1 Myotonic Dystrophy (Steinert Syndrome)
Myotonic dystrophy, or Steinert's disease, is the most common form of muscular dystrophy that occurs in adults. This multisystemic form involves the skeletal muscles but also affects the eye, the endocrine system, the central nervous system, and the cardiac system. The weakness of the facial muscles causes a characteristic facial appearance frequently associated with malocclusions. Young people with myotonic dystrophy who also have severe malocclusions have impaired oral functions such as chewing, breathing, and phonation. We present a case report of a 15-year-old boy with anterior open bite, upper and lower dental crowding, bilateral crossbite, and constriction of the upper jaw with a high and narrow palate. The patient's need was to improve his quality of life.
Because of the severity of the skeletal malocclusion, it was necessary to schedule combined orthodontic and surgical therapy in order to achieve the best aesthetic and functional result. Although the therapy improved the patient's quality of life, the clinical management of the case was difficult. The article weighs the costs and benefits of a therapy that tackles the patient's main problem, and it is useful for identifying the most appropriate course of treatment for similar cases.
http://ift.tt/2rSPfWc

Combining Carcinoembryonic Antigen and Platelet to Lymphocyte Ratio to Predict Brain Metastasis of Resected Lung Adenocarcinoma Patients
We aimed to evaluate the role of pretreatment carcinoembryonic antigen (CEA) and platelet to lymphocyte ratio (PLR) in predicting brain metastasis after radical surgery for lung adenocarcinoma patients. The records of 103 patients with completely resected lung adenocarcinoma between 2013 and 2014 were reviewed. Clinicopathologic characteristics of these patients were assessed in the Cox proportional hazards regression model. Brain metastasis occurred in 12 patients (11.6%). On univariate analysis, N2 stage (P = 0.013), stage III (P = 0.016), increased CEA level (P = 0.006), and higher PLR value (P = 0.020) before treatment were associated with an increased risk of developing brain metastasis. In multivariate analysis, CEA above 5.2 ng/mL (P = 0.014) and PLR ≥ 120 (P = 0.036) remained risk factors for brain metastasis. The combination of CEA and PLR was superior to CEA or PLR alone in predicting brain metastasis according to receiver operating characteristic (ROC) curve analysis (area under the ROC curve, AUC 0.872 versus 0.784 versus 0.704). Pretreatment CEA and PLR are independent and significant risk factors for the occurrence of brain metastasis in resected lung adenocarcinoma patients.
Combining these two factors may improve the predictability of brain metastasis.
http://ift.tt/2qzr7nB

Agroforestry and Management of Trees in Bunya County, Mayuge District, Uganda
Woody plant resources continue to disappear in anthropogenic landscapes in Uganda. Slowing further loss of these resources requires the collaboration of farmers in tree planting in agroforestry systems. Tree planting interventions with the collaboration of farmers require a good understanding of tree management practices as well as of the trees that best satisfy farmers' needs. We carried out this research to determine (1) the most preferred tree species and the reasons why they are preferred, (2) the species' conservation statuses, and (3) existing tree management practices and challenges to tree planting. Fourteen priority species, valued because they yield edible fruits and timber, were identified in this study. Farmers are interested in managing trees but are constrained by many factors, key among which is scarcity of land and of financial capital to manage tree planting. Trees are managed in crop fields and around the homestead. From farmers' reports, the highly valued species are increasing in the landscape. In conclusion, the potential to manage trees in agroforestry systems exists but is hampered by many challenges. Secondly, the preference for trees that supply edible fruits seems to support the welfare maximisation theory, which states that rural people manage trees with the aim of having regular access to products that satisfy their household needs and not for income generation.
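Returning to the CEA/PLR study above: the AUC values it reports have a simple rank-based (Mann-Whitney) interpretation, namely the probability that a randomly chosen patient who developed brain metastasis scores higher than one who did not. A minimal sketch of that computation, using made-up scores for illustration (the study's patient-level data are not available):

```python
def roc_auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney interpretation:
    P(positive score > negative score) + 0.5 * P(tie)."""
    wins = ties = 0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))

# Hypothetical combined CEA+PLR risk scores for patients with and
# without brain metastasis (illustrative numbers, not study data)
met = [0.9, 0.8, 0.75, 0.6]
no_met = [0.7, 0.5, 0.4, 0.3, 0.2]
print(roc_auc(met, no_met))  # 0.95 for these illustrative scores
```

Comparing the AUC of CEA alone, PLR alone, and the combined score in this way reproduces the kind of three-way comparison (0.872 versus 0.784 versus 0.704) the study reports.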
http://ift.tt/2rSJqbt

Femoral nerve and lumbar plexus injury after minimally invasive lateral retroperitoneal transpsoas approach: electrodiagnostic prognostic indicators and a roadmap to recovery
Injury to the lumbosacral (LS) plexus is a well-described complication of lateral retroperitoneal transpsoas approaches to the spine. The prognosis for functional recovery after lumbosacral plexopathy or femoral/obturator neuropathy is unclear. We designed a retrospective case-control study of patients undergoing one-level lateral retroperitoneal transpsoas lumbar interbody fusion (LLIF) between January 2011 and June 2016 to correlate electrodiagnostic assessments (EDX) with physiologic concepts of nerve injury and reinnervation, and to build a timeline for patient evaluation and recovery. Cases with post-operative obturator or femoral neuropathy were identified. Post-operative MRI, nerve conduction studies (NCS), electromyography (EMG), and physical examinations were performed at intervals to assess clinical and electrophysiologic recovery of function. Two hundred thirty patients underwent LLIF. Six patients (2.6%) suffered severe femoral or femoral/obturator neuropathy. Five patients (2.2%) had immediate post-operative weakness. One of the six patients developed delayed weakness due to a retroperitoneal hematoma. Five of the six patients (83%) demonstrated EDX findings at 6 weeks consistent with axonotmesis. All patients improved to at least MRC 4/5 within 12 months of injury. In conclusion, neurapraxia is the most common LS plexus injury, and complete recovery is expected after 3 months. Most severe nerve injuries are a combination of neurapraxia and variable degrees of axonotmesis. EDX performed at 6 weeks and at 3, 6, and 9 months provides prognostic information for recovery.
In severe injuries of the proximal femoral and obturator nerves, observation of proximal-to-distal progression of small-amplitude, short-duration (SASD) motor unit potentials may be the most significant prognostic indicator.
http://ift.tt/2r9lmja

Navigation-guided clipping of a de novo aneurysm associated with superficial temporal artery-middle cerebral artery bypass combined with indirect pial synangiosis in a patient with moyamoya disease
De novo aneurysms associated with superficial temporal artery (STA)-middle cerebral artery (MCA) bypass are an extremely rare complication of direct revascularization surgery for moyamoya disease (MMD). The basic pathology of MMD includes fragility of the intracranial arterial wall, characterized by thinness of the medial layer and waving of the internal elastic lamina. However, the incidence of newly formed aneurysms at the site of anastomosis currently remains unknown. Among 317 consecutive direct/indirect combined revascularization surgeries performed for MMD, we encountered a 52-year-old woman manifesting a de novo aneurysm adjacent to the site of anastomosis 11 years after successful STA-MCA bypass with encephalo-duro-myo-synangiosis (EDMS). Although the patient remained asymptomatic, the aneurysm gradually increased in diameter to more than 6 mm with the formation of a daughter sac, and a computational fluid dynamics study revealed low wall shear stress at the aneurysm dome. The patient underwent microsurgical clipping of the aneurysm using a neuro-navigation system that permitted the minimally invasive dissection of the temporal muscle flap used for EDMS at the site of the aneurysm without affecting pial synangiosis. The aneurysm was successfully occluded using a titanium clip without complications. The postoperative course was uneventful, and the patient was discharged without neurological deficits.
De novo aneurysms associated with STA-MCA bypass for MMD may be safely treated with microsurgical clipping, even in cases initially managed by a combined revascularization procedure that includes complex pial synangiosis. We recommend the application of a neuro-navigation system for maximum preservation of the pial synangiosis during this procedure.
http://ift.tt/2saRYYy

Improved performance of organic light-emitting diode with vanadium pentoxide layer on the FTO surface
A vanadium pentoxide layer deposited on a fluorine-doped tin oxide (FTO) anode by vacuum deposition has been investigated in an organic light-emitting diode (OLED). With the optimal V₂O₅ thickness of 12 nm, the luminance efficiency is increased 1.66-fold compared with the single FTO-based OLED. The improvement in current efficiency implies better charge injection and better control of the hole current. To investigate the effect of the buffer layer on OLED performance, V₂O₅ films of different thicknesses were deposited on the FTO anode and their J–V and L–V characteristics were studied. Further analysis was carried out by measuring sheet resistance, optical transmittance, and surface morphology with FE-SEM images. This result indicates that the V₂O₅ (12 nm) buffer layer is a good choice for increasing the efficiency of FTO-based OLED devices within the tunnelling region. Here the maximum value of current efficiency is found to be 2.83 cd/A.
http://ift.tt/2qxZkIc

Importance of polaron effects for charge carrier mobility above and below pseudogap temperature in superconducting cuprates
Polaron effects and charge carrier mobility in high-Tc cuprate superconductors (HTSCs) have been investigated theoretically.
The appropriate Boltzmann transport equations under the relaxation time approximation were used to calculate the mobility of polaronic charge carriers and bosonic Cooper pairs above and below the pseudogap (PG) temperature T*. It is shown that the scattering of polaronic charge carriers and bosonic Cooper pairs at acoustic and optical phonons is responsible for the charge carrier mobility above and below the PG temperature. We show that the energy scales of the binding energies of large polarons and polaronic Cooper pairs can be identified by the PG cross-over temperature on the cuprate phase diagram.
http://ift.tt/2ri4N3h

Dynamics of Nth-order rogue waves in the (2+1)-dimensional Hirota equation
Inspired by the works of Ohta and Yang, we construct general high-order rogue wave solutions for the (2+1)-dimensional Hirota equation using the bilinear transformation method. The formula of the solutions can be represented in terms of determinants. It is shown that the order of the rogue waves depends on the roots of the determinants. These rogue waves are line rogue waves, which arise from the constant background with a line profile and then disappear into the constant background again. In addition, some interesting dynamic patterns of rogue waves are exhibited in the (x, y) and (x, t) planes.
http://ift.tt/2qy2AU7

Measurement-based local quantum filters and their ability to transform quantum entanglement
We introduce local filters as a means to detect the entanglement of bound entangled states which do not yield to detection by witnesses based on positive maps that are not completely positive. We demonstrate how such non-detectable bound entangled states can be locally filtered into detectable bound entangled states. Specifically, we show that a bound entangled state in the orthogonal complement of the unextendible product bases (UPB) can be locally filtered into another bound entangled state that is detectable by the Choi map.
We reinterpret these filters as local measurements on locally extended Hilbert spaces. We give explicit constructions of a measurement-based implementation of these filters for 2 ⊗ 2 and 3 ⊗ 3 systems. This provides us with a physical mechanism for implementing such local filters.
http://ift.tt/2rhVgsF

Nonplanar electrostatic shock waves in an opposite polarity dust plasma with nonextensive electrons and ions
A rigorous theoretical investigation has been carried out on the propagation of nonplanar (cylindrical and spherical) dust-acoustic shock waves (DASHWs) in a collisionless four-component unmagnetized dusty plasma system containing massive, micron-sized, positively and negatively charged inertial dust grains along with q-(nonextensive) distributed electrons and ions. The well-known reductive perturbation technique has been used to derive the modified Burgers equation (which describes the shock wave properties) and its numerical solution. It has been observed that the effects of charged dust grains of opposite polarity, the nonextensivity of electrons and ions, and different dusty plasma parameters significantly modify the fundamental properties (viz., polarity, amplitude, width, etc.) of the shock waves. The properties of DASHWs in nonplanar geometry are found to be significantly different from those in one-dimensional planar geometry. The findings of this theoretical investigation may be useful in understanding the nonlinear features of localized electrostatic disturbances in both space and laboratory dusty plasmas.
http://ift.tt/2qxGqRW

Quark number density and susceptibility calculation under one-loop correction in the mean-field potential
We calculate the quark number density and susceptibility under one-loop correction in the mean-field potential. The calculation shows a continuous increase in the number density and susceptibility up to the temperature T = 0.4 GeV.
The number density and susceptibility then vary only weakly at higher temperatures. The calculated values agree well with lattice QCD simulations of the same quantities as the temperature increases. http://ift.tt/2rigIho Flexible Multiferroic Bulk Heterojunction with Giant Magnetoelectric Coupling via van der Waals Epitaxy ACS Nano from #AlexandrosSfakianakis via Alexandros G.Sfakianakis on Inoreader http://ift.tt/2rmnGUx Nanomechanically Visualizing Drug–Cell Interaction at the Early Stage of Chemotherapy http://ift.tt/2rm8O8T Berberine regulates the protein expression of multiple tumorigenesis-related genes in hepatocellular carcinoma cell lines Hepatocellular carcinoma (HCC) is the seventh most common malignancy and the third leading cause of cancer-related death worldwide, with an extremely grim prognosis. Berberine (BBR) has been found to inhibit pr... http://ift.tt/2rCJtIE Similar cardiometabolic effects of high- and moderate-intensity training among apparently healthy inactive adults: a randomized clinical trial Metabolic syndrome (MetS) increases the risk of morbidity and mortality from cardiovascular disease, and exercise training is an important factor in the treatment and prevention of the clinical components of M... http://ift.tt/2rD36QT Benefits of local tumor excision and pharyngectomy on the survival of nasopharyngeal carcinoma patients: a retrospective observational study based on SEER database There is ongoing debate about surgery of the primary site in nasopharyngeal carcinoma patients.
http://ift.tt/2sc7F1q Non-caloric sweetener provides magnetic resonance imaging contrast for cancer detection Image contrast enhanced by exogenous contrast agents plays a crucial role in the early detection, characterization, and determination of the precise location of cancers. Here, we investigate the feasibility of... http://ift.tt/2rCZJJz Differential expression of microRNAs following cardiopulmonary bypass in children with congenital heart diseases Children with congenital heart defects (CHDs) are at high risk for myocardial failure after operative procedures with cardiopulmonary bypass (CPB). Recent studies suggest that microRNAs (miRNA) are involved in... http://ift.tt/2sccvfd Basal subtype is predictive for response to cetuximab treatment in patient derived xenografts of squamous cell head and neck cancer Cetuximab is the single targeted therapy approved for the treatment of head and neck cancer (HNSCC). Predictive biomarkers have not been established, and patient stratification based on molecular tumor profiles has not been possible. Since EGFR pathway activation is pronounced in the basal subtype, we hypothesized that this activation could be a predictive signature for an EGFR-directed treatment. From our patient-derived xenograft platform of HNSCC, 28 models were subjected to Affymetrix gene expression studies on HG U133+ 2.0. Based on the expression of 821 genes, the subtype of each of the 28 models was determined by integrating gene expression profiles through centroid clustering with gene expression data previously published by Keck et al. The models were treated in groups of 5-6 animals with docetaxel, cetuximab, everolimus, cis- or carboplatin and 5-fluorouracil.
Response was evaluated by comparing tumor volume at treatment initiation and after three weeks of treatment (RTV). Tumors were distributed over the three signature-defined subtypes: 5 mesenchymal/inflamed phenotype (MS), 15 basal type (BA), 8 classical type (CL). Cluster analysis revealed a strong correlation between response to cetuximab and the basal subtype. RTV MS 3.32 vs BA 0.78 (MS vs BA, unpaired t test, p = 0.0002). Cetuximab responders were distributed as follows: 1/5 in MS, 5/8 in CL and 13/15 in the BA group. Activity of classical chemotherapies did not differ between the subtypes. In conclusion, the basal subtype was associated with response to EGFR-directed therapy in head and neck squamous cell cancer patient-derived xenografts. This article is protected by copyright. All rights reserved. http://ift.tt/2rSO21g Risk Stratification and Long-term Risk Prediction of E6 Oncoprotein in a Prospective Screening Cohort in China E6 oncoprotein is a necessary agent of HPV-driven oncogenic transformation. This study is aimed at evaluating the risk stratification potency of HPV 16/18 E6 oncoprotein (E6) as a triage method for HPV positivity and as a predictor of cervical intraepithelial neoplasia grade 3 or worse (CIN3+). The screening cohort of 1,997 women was followed for a 15-year period at approximately five-year intervals. Participants were concurrently screened by HPV DNA testing (HC2), liquid based cytology (LBC), and visual inspection with acetic acid (VIA), and were referred to colposcopy and biopsy if any test was positive. E6 was performed on cervical samples collected from this cohort in 2005 and 2014. The ability of E6 to predict CIN3+ risk after five- and ten-year intervals was evaluated.
Among HPV-positive women in 2005, E6 showed the lowest positive rate (9.9%) compared to LBC (48.4%) and VIA (28.0%); however, a higher prevalence rate (10.3%) and ten-year cumulative incidence rate (53.0%) of CIN3+ were detected among women who were E6 positive. Meanwhile, only approximately 4.2% and 2.9% of women with abnormal LBC and positive VIA, respectively, were diagnosed with prevalent CIN3+ in 2005, while 23.0% and 16.5% had developed CIN3+ by year ten. Strong associations were found between precedent and subsequent HPV persistence and E6 oncoprotein expression (ORadjusted = 40.0 and 21.2, respectively). E6 oncoprotein can serve as a low-cost, highly specific, strongly indicative point-of-care method in the triage and treatment of HPV-positive women. This article is protected by copyright. All rights reserved. http://ift.tt/2rSLbFq What has preoperative radio(chemo)therapy brought to localized rectal cancer patients in terms of perioperative and long-term outcomes over the past decades? A systematic review and meta-analysis based on 41,121 patients We asked what preoperative radiotherapy/chemoradiotherapy (PRT/PCRT) has brought to patients in terms of perioperative and long-term outcomes over the past decades. A systematic review and meta-analysis was conducted using the PubMed, Embase and Web of Science databases. All original comparative studies published in English that were related to PRT/PCRT and surgical resection and that analyzed survival, postoperative and quality of life outcomes were included. Data synthesis and statistical analysis were carried out using Stata software. Data from 106 comparative studies based on 80 different trials enrolling 41,121 patients were included in our study. Based on our overall analyses, PRT/PCRT significantly improved patients' local recurrence-free survival (LRFS), but neither overall survival (OS) nor metastasis-free survival (MFS) showed improvement.
In addition, PRT significantly increased postoperative morbidity and mortality, but PCRT did not have a significant effect. Furthermore, PRT/PCRT significantly increased the risk of postoperative wound complications but not anastomotic leakage and bowel obstruction. Our comprehensive subgroup analyses further supported the aforementioned results. Meanwhile, long-term anorectal symptoms (impaired squeeze pressures, use of pads, incontinence and urgency) and erectile dysfunction were also significantly increased in patients after PRT/PCRT. The benefits of PRT/PCRT as applied over the last several decades have not been sufficient to improve OS. Metastasis of the primary tumor and postoperative adverse effects were the two primary obstacles to an improved OS. In fact, the greatest advantage of PRT/PCRT is still local tumor control and a significantly improved LRFS. This article is protected by copyright. All rights reserved. http://ift.tt/2qzfY6j Combination of chemotherapy and gefitinib as first-line treatment for patients with advanced lung adenocarcinoma and sensitive EGFR mutations: A randomized controlled trial To explore the optimal treatment strategy for patients who harbor sensitive EGFR mutations, a head-to-head study was performed to compare chemotherapy and gefitinib in combination or with either agent alone as first-line therapy, in terms of efficacy and safety. A total of 121 untreated patients with advanced lung adenocarcinoma who harbored sensitive EGFR mutations were randomly assigned to receive gefitinib combined with pemetrexed and carboplatin, pemetrexed plus carboplatin, or gefitinib alone. The progression-free survival (PFS) of patients in the combination group (17.5 months, 95% CI, 15.3–19.7) was longer than that of patients in the chemotherapy group (5.7 months, 95% CI, 5.2–6.3) or the gefitinib group (11.9 months, 95% CI, 9.1–14.6).
The hazard ratios (HRs) of PFS for the combination group versus the chemotherapy and gefitinib groups were 0.16 (95% CI, 0.09–0.29, P < 0.001) and 0.48 (95% CI, 0.29–0.78, P = 0.003), respectively. The overall response rate (ORR) in the combination therapy group, chemotherapy group, and gefitinib group was 82.5%, 32.5%, and 65.9%, respectively. The combinational strategy resulted in longer overall survival (OS) than chemotherapy (HR = 0.46, P = 0.016) or gefitinib (HR = 0.36, P = 0.001) alone. Our findings suggest that treatment with pemetrexed plus carboplatin combined with gefitinib could provide better survival benefits for patients with lung adenocarcinoma harboring sensitive EGFR mutations. This article is protected by copyright. All rights reserved. http://ift.tt/2rSVQA4
Proceedings of the 29th International Conference on Genome Informatics (GIW 2018): genomics A partially function-to-topic model for protein function prediction Lin Liu1, Lin Tang2, Mingjing Tang3 & Wei Zhou4 BMC Genomics volume 19, Article number: 883 (2018) Proteins are macromolecules that form the main component of a cell, making them the most essential and versatile material of life. Research on protein function is of great significance in decoding the secrets of life. In recent years, researchers have introduced multi-label supervised topic models such as Labeled Latent Dirichlet Allocation (Labeled-LDA) into protein function prediction, which can produce more accurate and more explainable predictions. However, Labeled-LDA associates each label (GO term) directly with a corresponding topic, which causes the latent topics to degenerate completely and ignores the differences between labels and latent topics. To achieve more accurate probabilistic modeling of function labels, we propose a Partially Function-to-Topic Prediction (PFTP) model that introduces a local topic subset corresponding to each function label. Meanwhile, PFTP supports not only a latent topic subset within a given function label but also a background topic corresponding to a 'fake' function label, which represents the common semantics of protein function. Related definitions and the topic modeling process of PFTP are described in this paper. In a 5-fold cross validation experiment on yeast and human datasets, PFTP significantly outperforms five widely adopted methods for protein function prediction. The impact of model parameters on prediction performance and the latent topics discovered by PFTP are also discussed in this paper. All of the experimental results provide evidence that PFTP is effective and has potential value for predicting protein function.
Based on its ability to discover a more refined latent sub-structure of function labels, we anticipate that PFTP is a potential method to reveal a deeper biological explanation for protein functions. Proteins are the main component of a cell and underlie the basic activities of cellular life. Research on protein function is of great significance in elucidating the phenomena of life [1]. Although a large number of protein sequences have accumulated in biological databases in recent years [2, 3], only a small percentage of these proteins have experimental function annotations because of the high cost of biochemical experiments. In comparison with biochemical experiments, computational methods predict the functional annotations of proteins by using known information, such as sequence, structure, and functional behavior, which reduces time and effort; such methods have become an important long-standing research area in the post-genomic era [4]. The earliest computational approaches for predicting protein function, such as BLAST [5], utilize protein sequence or structure similarity to transfer functional information. With the rapid development of computational algorithms, an increasing variety of algorithms has been introduced into the study of protein function prediction. At present, computational methods for protein function prediction can be classified into two types: classification-based approaches and graph-based approaches. In classification-based approaches, proteins are viewed as instances to be classified, and function annotations (such as Gene Ontology (GO) [6] terms) are regarded as labels. Each protein has a feature space composed of classification features extracted from the amino acid sequence, textual repositories, and so on. Based on these annotated proteins and their attribute features, a classifier can be trained on the training dataset and then used to predict function labels for unannotated proteins.
In graph-based approaches, the network structure information of proteins is used to compute distances between proteins, and closely related proteins are considered to have similar functional annotations [7, 8]. In classification-based approaches, since each protein is annotated with several functions, various multi-label classifiers can be adopted. Yu et al. [9] proposed a multiple kernels (ProMK) method to process multiple heterogeneous protein data sources for predicting protein functions; Fodeh et al. [10] used binary relevance with different classifiers to automatically assign molecular functions to genes; a new ant colony optimization algorithm is proposed in reference [11], which has been applied to protein function datasets; Wang et al. [12] applied a new multi-label linear discriminant analysis approach to the protein function prediction problem; Liu et al. [4] introduced a multi-label supervised topic model called Labeled-LDA into protein function prediction, whose experimental results on yeast and human datasets demonstrated the effectiveness of Labeled-LDA for protein function prediction. That research was the first effort to apply a multi-label supervised topic model to protein function prediction. Besides, Pinoli et al. [13,14,15] applied two standard topic models, latent Dirichlet allocation (LDA) and probabilistic latent semantic analysis (PLSA) [16, 17], to predict GO terms of proteins on the basis of available GO annotations. In the topic modeling process of reference [4], each protein is viewed as a mixture of 'topics', where each 'topic' is in turn viewed as a mixture of amino acid blocks. In comparison with a discriminative model such as a support vector machine (SVM), a multi-label supervised topic model can transform the word-level statistics of each document into its label-level distribution, and model all labels simultaneously rather than treating each label independently.
Specifically, a topic model can provide the function label probability distribution over proteins as an output, and each function label is explained as a probability distribution over amino acid blocks. Nonetheless, in the study of Liu et al. [4], Labeled-LDA associates each label (GO term) directly with a corresponding topic, which causes the latent topics to degenerate completely and ignores the differences between labels and latent topics. Therefore, Labeled-LDA is unable to discover topics that represent the common semantics of protein functions. For interpretable text mining, Ramage et al. [18] proposed a partially labeled LDA (PLDA), which associates each label with a topic subset partitioned from the global topic set. PLDA overcame these shortfalls of Labeled-LDA and improved the precision of text classification in experimental research. Inspired by the application of multi-label topic models in protein function prediction and by the PLDA model, we introduce a Partially Function-to-Topic Prediction model (called PFTP). First, we describe the related definitions by contrasting text data and protein function data. Then the topic modeling process of PFTP is described in detail, including the generative process and parameter estimation of PFTP. In a 5-fold cross validation experiment on protein function prediction, PFTP significantly outperforms the five compared algorithms. All of the experimental results provide evidence that PFTP is effective and has potential value for predicting protein function. Related definitions and notations To better understand the related objects of the topic model, the corresponding relationship between protein function prediction and multi-label classification of text is first depicted in Fig. 1. The corresponding relationship between protein function prediction and multi-label supervised topic modeling of text.
The protein function data is shown on the left side, and the text data is shown on the right side Several topic modeling concepts for protein function data and text data are displayed in Fig. 1, one on the left and the other on the right. First of all, the text dataset is composed of several documents numbered D1 to Dn, and the protein function dataset is composed of several protein sequences numbered P1 to Pn. Obviously, words are the main component of a document, such as the words 'table' and 'database'. For a protein, we consider its sequence to be a text string defined on a fixed alphabet of 20 amino acids (G,A,V,L,I,F,P,Y,S,C,M,N,Q,T,D,E,K,R,H,W). Amino acid blocks are then the main component of a protein sequence, such as 'MS' and 'TS'. Besides, a protein annotated by GO terms is equivalent to a document labeled by tags, so each GO term or tag can be viewed as a label, such as 'GO0003673' and 'language'. According to the above statements, there are three types of equivalence relations between protein function data and text data: protein sequence and document, amino acid block and word, and GO term and document tag. In general, the GO terms (document tags), protein sequences (documents) and amino acid blocks (words) are the observable data of a dataset. As the input to the topic model, the bag of words (BoW) is constructed by computing the word-document matrix, where each matrix element is obtained by counting the occurrences of a word in each document. For instance, the word 'table' appears twice in document D1. Likewise, for protein function data, an amino acid block - protein sequence matrix is computed to construct the protein BoW. As an example, the amino acid block 'MS' appears once in protein P1. Besides, the fixed set of amino acid blocks or words is also called the 'vocabulary'. For a topic model, a 'topic' is viewed as a probability distribution over a fixed vocabulary.
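The BoW construction described above (counting the occurrences of each amino acid block in each protein sequence) can be sketched as follows; this is a minimal illustration, where the block length of 2 and the toy sequences are assumptions rather than values from the paper:

```python
# Sketch of the BoW construction: each protein sequence is split into
# overlapping amino acid blocks (k-mers), and the per-protein block counts
# form the amino acid block - protein sequence matrix. The block length of 2
# and the toy sequences are illustrative assumptions, not values from the paper.
from collections import Counter

def sequence_to_blocks(seq, k=2):
    """Split a protein sequence into its overlapping amino acid blocks of length k."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def build_bow(sequences, k=2):
    """Return (vocabulary, counts): counts[d] maps each block to its frequency in protein d."""
    counts = [Counter(sequence_to_blocks(s, k)) for s in sequences]
    vocabulary = sorted({b for c in counts for b in c})
    return vocabulary, counts

proteins = ["MSTSAV", "TSAVMS"]  # toy sequences (hypothetical)
vocab, bow = build_bow(proteins)
# e.g. the block 'TS' occurs once in the first toy protein: bow[0]['TS'] == 1
```

In a real pipeline the count matrix would be built over the full training set, with the fixed vocabulary shared between training and prediction.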
Taking the text data as an example, the probability of the word 'table' under 'topic 1' is 0.05. For the protein data, the probability of the amino acid block 'MS' under 'topic 1' is 0.21. Obviously, topics are latent and need to be inferred by topic modeling. Finally, in order to establish the connection between labels and topics, the latent topics discovered by our PFTP are divided into several non-overlapping subsets, each of which is associated with a label. As can be seen in Fig. 1, we split the whole topic set into several groups: 'label1' connects with 'topic1' to 'topic3'; 'label2' connects with 'topic 4' to 'topic 5', and so on. It is worth noting that our PFTP defines a special type of topic, the background topic. The background topics are split off from the whole latent topic set and are not associated with any observable label; they express the common semantics of the documents. For instance, a background topic on a text dataset may be a topic with high probability on several universal words, such as 'text', 'other' and so on. To formalize the above description, the related notations are given below. Suppose there are D proteins in the protein set, which compose the protein space \( \mathbb{D}=\left\{1,\dots, D\right\} \), and the vocabulary of amino acid blocks is in a space \( \mathbb{W}=\left\{1,\dots, W\right\} \), where W is the size of the vocabulary. The topic space including K topics is represented by \( \mathbb{K}=\left\{1,\dots, K\right\} \), which is shared by the whole protein set. Therefore, \( \mathbb{K} \) is also called the global topic space. The protein function label space is expressed as \( \mathbb{L}=\left\{1,\dots, L\right\} \). In the PFTP model, the global topic space \( \mathbb{K} \) is divided into L groups without overlap, and each group corresponds to a topic subspace \( {\mathbb{K}}_l \). Besides, there is a 'background subspace of topics' \( {\mathbb{K}}_B \).
$$ {\displaystyle \begin{array}{l}\mathbb{K}=\left({\cup}_{l\in \mathbb{L}}{\mathbb{K}}_l\right)\cup {\mathbb{K}}_B,\kern1.5em {\mathbb{K}}_l,{\mathbb{K}}_B\subset \mathbb{K},\kern1em \\ {}\kern0.5em {\mathbb{K}}_l\ne \varnothing \kern1em \left(l\in \mathbb{L}\right),\kern1.5em {\mathbb{K}}_B\ne \varnothing, \\ {}\forall {\mathbb{K}}_i,{\mathbb{K}}_j\subset \mathbb{K},\kern1em i,j\in \mathbb{L},\kern1em i\ne j\kern1.5em \Rightarrow \\ {}{\mathbb{K}}_i\cap {\mathbb{K}}_j=\varnothing, \kern1em {\mathbb{K}}_i\cap {\mathbb{K}}_B=\varnothing \end{array}} $$ Then, each label is assigned a topic subspace \( {\mathbb{K}}_l \), and the background topic subspace \( {\mathbb{K}}_B \) is associated with a background label lB. In this case, the label space is expanded to L + 1 dimensions and expressed as \( {\mathbb{L}}^{\prime } \). Similar to the topic modeling of text in Labeled-LDA, each topic can be represented as a multinomial distribution with parameter \( {\boldsymbol{\uptheta}}_k={\left\{{\theta}_{kw}\right\}}_{w=1}^W \) (the equivalent of the topic-word matrix in Fig. 1) on the vocabulary \( \mathbb{W} \), and θk obeys a Dirichlet prior distribution with hyperparameter \( \boldsymbol{\uplambda} ={\left\{{\lambda}_w\right\}}_{w\in \mathbb{W}} \). What is different about our PFTP is that each label l is represented as a multinomial distribution with parameter \( {\boldsymbol{\uppi}}_l={\left\{{\pi}_{lk}\right\}}_{k\in {\mathbb{K}}_l} \) (the equivalent of the label-topics probability in Fig. 1) on its topic subspace, where πlk is the probability of topic k within the topic subspace \( {\mathbb{K}}_l \) corresponding to label l. Suppose πl obeys a Dirichlet prior distribution with hyperparameter α.
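The disjoint partition of the global topic space into per-label subsets plus a background subset can be sketched as follows; the numbers of topics per label and background topics are illustrative assumptions, not values from the paper:

```python
# Minimal sketch of the topic-space partition: K global topic ids are split
# into disjoint per-label subsets K_l plus one background subset K_B, so that
# the subsets are pairwise disjoint and together cover the whole topic space.
def partition_topics(num_topics, topics_per_label, num_background):
    """Split topic ids 0..num_topics-1 into per-label subsets and a background subset."""
    assert sum(topics_per_label) + num_background == num_topics
    subsets, start = [], 0
    for size in topics_per_label:
        subsets.append(set(range(start, start + size)))
        start += size
    background = set(range(start, start + num_background))
    return subsets, background

# 10 global topics, three labels with 3/2/3 topics each, and 2 background topics
label_subsets, background = partition_topics(10, [3, 2, 3], 2)
# label 1 gets topics {0, 1, 2}, label 2 gets {3, 4}, background gets {8, 9}
```

Contiguous id ranges are just one convenient way to realize the partition; any assignment satisfying the disjointness constraints above would do.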
$$ {\boldsymbol{\uppi}}_l\sim \mathrm{Dir}\left(\boldsymbol{\upalpha} \right),\kern0.5em \boldsymbol{\upalpha} ={\left\{{\alpha}_k\right\}}_{k\in {\mathbb{K}}_l},\kern0.5em \mid \boldsymbol{\upalpha} \mid =\left|{\mathbb{K}}_l\right|={K}_l $$ We utilize a binary vector Λd to map the global label space \( {\mathbb{L}}^{\prime } \) to \( {\mathbb{L}}_d \): $$ {\displaystyle \begin{array}{l}{\mathbb{L}}_d={\left\{l{\Lambda}_{dl}\right\}}_{l=1}^{L+1}=\mathbb{L}{\boldsymbol{\Lambda}}_d\kern1em \\ {}{\boldsymbol{\Lambda}}_d={\left\{{\Lambda}_{dl}\right\}}_{l\in {\mathbb{L}}^{\prime }}=\left\{{\Lambda}_{d1},{\Lambda}_{d2},\dots, {\Lambda}_{dL},1\right\},\kern1em \\ {}{\Lambda}_{dl}=\left\{\begin{array}{l}1,\kern1em l\in {\mathbb{L}}_d\\ {}0,\kern1em l\notin {\mathbb{L}}_d\end{array}\right.\end{array}} $$ Λd,L+1 = 1 indicates that the latent background label lB is assigned to every protein d. Then, the probabilities of the \( {L}_d=\left|{\mathbb{L}}_d\right| \) labels of protein d are represented by a protein-label weight vector \( {\boldsymbol{\uppsi}}_d={\left\{{\psi}_{dl}\right\}}_{l\in {\mathbb{L}}_d}={\left\{{\psi}_{dl}{\Lambda}_{dl}\right\}}_{l\in {\mathbb{L}}^{\prime }} \), and ψd obeys a Dirichlet prior distribution with hyperparameter βd constrained by β and Λd: $$ {\boldsymbol{\upbeta}}_d={\left\{{\beta}_l\right\}}_{l\in {\mathbb{L}}_d}={\left\{{\beta}_l{\Lambda}_{dl}\right\}}_{l\in {\mathbb{L}}^{\prime }}={\boldsymbol{\upbeta} \boldsymbol{\Lambda}}_d,\kern0.5em \boldsymbol{\upbeta} ={\left\{{\beta}_l\right\}}_{l\in {\mathbb{L}}^{\prime }} $$ The parameters shared by the whole protein set are called global parameters in this paper, and the parameters specific to one protein are called local parameters.
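As a small illustration of the binary mask Λd and the restricted prior βd = βΛd, the following sketch uses hypothetical prior values and label assignments (the background label, indexed L + 1 in the paper, is the last entry here):

```python
# Sketch of the binary label mask Lambda_d and the restricted Dirichlet
# hyperparameter beta_d = beta * Lambda_d. The prior values and the observed
# labels are illustrative assumptions, not values from the paper.
def label_mask(observed_labels, num_labels):
    """Binary vector over L+1 labels; the background label (last index) is always on."""
    mask = [0] * (num_labels + 1)
    for l in observed_labels:
        mask[l] = 1
    mask[num_labels] = 1  # Lambda_{d,L+1} = 1: the background label is assigned to every protein
    return mask

def restricted_prior(beta, mask):
    """Element-wise product beta_d = beta * Lambda_d: zero out priors of unobserved labels."""
    return [b * m for b, m in zip(beta, mask)]

beta = [0.1, 0.1, 0.1, 0.1, 0.5]  # hypothetical prior over 4 labels + background
mask = label_mask([0, 2], 4)      # a protein annotated with labels 0 and 2 (0-based here)
beta_d = restricted_prior(beta, mask)
```

Masking the Dirichlet hyperparameter in this way restricts each protein's label weight vector ψd to its observed labels plus the background label.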
The topic modeling process of PFTP Based on the above expressions, the process of PFTP topic modeling is divided into three steps: BoW construction, the description of the model (the generative process or graphic model) and parameter estimation (model training and predicting). These steps are depicted in Fig. 2. The topic modeling process of PFTP. The process of PFTP topic modeling is divided into three steps: BoW construction, the description of the model and parameter estimation As shown in Fig. 2, the PFTP model takes a BoW as input. Since we construct the protein BoW in exactly the same way as reference [4], this step is not repeated in this paper. There are two ways to describe our topic model: the generative process and the graphic model. After identifying the model structure, the joint distribution of the whole model is obtained. Based on this joint distribution, we can learn and infer the unknown parameters of our model, which are also the output of PFTP. In fact, the unknown parameters represent several matrices. For instance, \( {\boldsymbol{\uptheta}}_k={\left\{{\theta}_{kw}\right\}}_{w=1}^W \) represents the topic-word matrix in Fig. 2, and \( {\boldsymbol{\uppi}}_l={\left\{{\pi}_{lk}\right\}}_{k\in {\mathbb{K}}_l} \) represents the label-topics matrix in Fig. 2. The second and third steps are discussed in the next sections. It is worth noting that the third step includes two sub-steps for realizing function prediction: model training and predicting. Both sub-steps need to adopt learning and inference algorithms to estimate the parameters of the model, and are described in more detail as follows. The process of model training PFTP takes a training protein set with known functions as the input for model training. The unknown parameters include πl, θk and ψd. The local hidden variables include the label number and topic number of each word sample. The unknown parameters and local hidden variables can be estimated by the inference algorithm during model training.
The process of model predicting For unannotated proteins, based on the estimated parameters and local hidden variables, the unknown local parameter ψd and hidden variables are updated while the global parameters πl and θk are held fixed. Then, the label probabilities over the protein are obtained. The description of PFTP model According to the above definitions, the whole word sample set x is composed of proteins, where \( {x}_d={\left\{{\mathbf{x}}_{dn}\right\}}_{n=1}^{N_d} \). That is, there are Nd word samples in protein d, and xdn represents one word sample. At this point, word sample xdn not only associates a word number wdn(\( {\mathbf{w}}_{dn}\in \mathbb{W} \)), but also is assigned a label number ldn(\( {\mathbf{l}}_{dn}\in \mathbb{L} \)) and a topic number \( {\mathbf{z}}_{dn}\left({\mathbf{z}}_{dn}\in \mathbb{K}\right) \). The generative process of a word sample can be described as follows. The corresponding graphical model is shown in Fig. 3. For each global label \( l\in {\mathbb{L}}^{\prime }=\left\{1,\dots, L,L+1\right\} \) The graphical model of PFTP.
Boxes indicate repeated contents, and the number in the bottom right corner is the number of repetitions Sample the multinomial parameter vector πl from a Kl-dimensional Dirichlet distribution: $$ {\boldsymbol{\uppi}}_l={\left\{{\pi}_{lk}\right\}}_{k\in \mathbb{K}}\sim \mathrm{Dir}\left(\boldsymbol{\upalpha} \right),\kern0.5em \boldsymbol{\upalpha} ={\left\{{\alpha}_k\right\}}_{k\in \mathbb{K}} $$ For each global topic \( k\in \mathbb{K}=\left\{1,\dots, K\right\} \) Sample the multinomial parameter vector θk from a W-dimensional Dirichlet distribution: $$ {\boldsymbol{\uptheta}}_k={\left\{{\theta}_{kw}\right\}}_{w\in \mathbb{W}}\sim \mathrm{Dir}\left(\boldsymbol{\uplambda} \right),\kern0.5em \boldsymbol{\uplambda} ={\left\{{\lambda}_w\right\}}_{w\in \mathbb{W}} $$ For each local protein \( d\in \mathbb{D}=\left\{1,\dots, D\right\} \) Sample the label weight vector of protein d from an Ld-dimensional Dirichlet distribution: $$ {\boldsymbol{\uppsi}}_d={\left\{{\psi}_{dl}\right\}}_{l\in {\mathbb{L}}_d}\sim \mathrm{Dir}\left({\boldsymbol{\upbeta}}_d\right),\kern0.5em {\boldsymbol{\upbeta}}_d={\left\{{\beta}_l\right\}}_{l\in {\mathbb{L}}_d}={\left\{{\beta}_l{\Lambda}_{dl}\right\}}_{l\in {\mathbb{L}}^{\prime }}={\boldsymbol{\upbeta} \boldsymbol{\Lambda}}_d $$ $$ \boldsymbol{\upbeta} ={\left\{{\beta}_l\right\}}_{l\in {\mathbb{L}}^{\prime }},{\boldsymbol{\Lambda}}_d={\left\{{\Lambda}_{dl}\right\}}_{l\in {\mathbb{L}}^{\prime }},{\Lambda}_{dl}=\left\{\begin{array}{l}1,\kern1em l\in {\mathbb{L}}_d\\ {}0,\kern1em l\notin {\mathbb{L}}_d\end{array}\right.,{\Lambda}_{d,L+1}\equiv 1 $$ For each word sample xdn, Sample the label number ldn of xdn from the Ld-dimensional multinomial distribution with parameter ψd: $$ {\mathbf{l}}_{dn}\sim {\boldsymbol{\uppsi}}_d\kern0.5em \mathrm{or}\kern0.5em {L}_d={\left\{{\mathbf{l}}_{dn}\right\}}_{n=1}^{N_d}\sim \mathrm{Mul}\left({\boldsymbol{\uppsi}}_d,{N}_d\right) $$ Sample the topic number zdn of xdn from the K-dimensional multinomial distribution with parameter \(
{\boldsymbol{\uppi}}_{{\mathbf{l}}_{dn}} \): $$ {\mathbf{z}}_{dn}\sim {\boldsymbol{\uppi}}_{{\mathbf{l}}_{dn}}\kern0.5em \mathrm{or}\kern0.5em {\mathrm{Z}}_d={\left\{{\mathbf{z}}_{dn}\right\}}_{n=1}^{N_d}\sim \mathrm{Mul}\left({\boldsymbol{\uppi}}_{{\mathbf{l}}_{dn}},{N}_d\right) $$ Sample the word number wdn of xdn from the W-dimensional multinomial distribution with parameter \( {\boldsymbol{\uptheta}}_{{\mathbf{z}}_{dn}} \): $$ {\mathbf{w}}_{dn}\sim {\boldsymbol{\uptheta}}_{{\mathbf{z}}_{dn}}\kern0.5em \mathrm{or}\kern0.5em {W}_d={\left\{{\mathbf{w}}_{dn}\right\}}_{n=1}^{N_d}\sim \mathrm{Mul}\left({\boldsymbol{\uptheta}}_{Z_d},{N}_{d{\mathbf{z}}_{dn}}\right) $$ In the PFTP model, the unknown parameters to be estimated are the global label multinomial parameters \( \boldsymbol{\uppi} ={\left\{{\boldsymbol{\uppi}}_l\right\}}_{l\in {\mathbb{L}}^{\prime }}={\left\{{\pi}_{lk}\right\}}_{l\in {\mathbb{L}}^{\prime },k\in {\mathbb{K}}_l} \), the global topic multinomial parameters \( \boldsymbol{\uptheta} ={\left\{{\boldsymbol{\uptheta}}_k\right\}}_{k\in \mathbb{K}}={\left\{{\theta}_{kw}\right\}}_{k\in \mathbb{K},w\in \mathbb{W}} \) and the local document label weight \( {\boldsymbol{\uppsi}}_d={\left\{{\psi}_{dl}\right\}}_{l\in {\mathbb{L}}_d} \); the local hidden variables are the document labels \( {L}_d={\left\{{\mathbf{l}}_{dn}\right\}}_{n=1}^{N_d} \) and topics \( {Z}_d={\left\{{\mathbf{z}}_{dn}\right\}}_{n=1}^{N_d} \); the known quantities are the observed label vector Λd, the word samples \( {W}_d={\left\{{\mathbf{w}}_{dn}\right\}}_{n=1}^{N_d} \) and their joint distribution. As shown in Eq.
(11): $$ {\displaystyle \begin{array}{l}\kern1em p\left(\boldsymbol{\uppi}, \boldsymbol{\uptheta}, \boldsymbol{\uppsi}, L,Z,W|\boldsymbol{\Lambda}, \boldsymbol{\upalpha}, \boldsymbol{\uplambda}, \boldsymbol{\upbeta} \right)\\ {}=p\left(\boldsymbol{\uppi} |\boldsymbol{\upalpha} \right)p\left(\boldsymbol{\uptheta} |\boldsymbol{\uplambda} \right)\prod \limits_{d\in \mathbb{D}}\kern0em p\left({\boldsymbol{\uppsi}}_d|{\boldsymbol{\Lambda}}_d,{\boldsymbol{\upbeta}}_d\right)\\ {}\kern1em p\left({L}_d|{\boldsymbol{\uppsi}}_d\right)p\left({Z}_d|{L}_d,\boldsymbol{\uppi} \right)p\left({W}_d|{Z}_d,\boldsymbol{\uptheta} \right)\\ {}=\prod \limits_{l\in {\mathbb{L}}^{\prime }}\kern0em p\left({\boldsymbol{\uppi}}_l|\boldsymbol{\upalpha} \right)\prod \limits_{k\in \mathbb{K}}\kern0em p\left({\boldsymbol{\uptheta}}_k|\boldsymbol{\uplambda} \right)\prod \limits_{d\in \mathbb{D}}\kern0em p\left({\boldsymbol{\uppsi}}_d|{\boldsymbol{\Lambda}}_d,{\boldsymbol{\upbeta}}_d\right)\\ {}\kern1em \prod \limits_{n=1}^{N_d}\kern0em p\left({\mathbf{l}}_{dn}|{\boldsymbol{\uppsi}}_d\right)p\left({\mathbf{z}}_{dn}|{\boldsymbol{\uppi}}_{{\mathbf{l}}_{dn}}\right)p\left({\mathbf{w}}_{dn}|{\boldsymbol{\uptheta}}_{{\mathbf{z}}_{dn}}\right)\end{array}} $$ Based on the joint distribution, several quantities can be estimated, including the posterior distribution of the unknown model parameters and hidden variables, p(π, θ, ψ, L, Z| W, Λ, α, λ, β). In this paper, we use Collapsed Gibbs Sampling (CGS) to train the PFTP model. By marginalizing the model parameters (π, θ, ψ) out of the joint distribution (11), the collapsed joint distribution of (L, Z, W) is obtained. The collapsed inference is as follows. In the joint distribution Eq.
(11), the function label weight ψd only appears in p(ψd| Λd, βd) and p(Ld| ψd): $$ {\displaystyle \begin{array}{c}p\left({\boldsymbol{\uppsi}}_d,{L}_d|{\boldsymbol{\Lambda}}_d,{\boldsymbol{\upbeta}}_d\right)=p\left({\boldsymbol{\uppsi}}_d|{\boldsymbol{\Lambda}}_d,{\boldsymbol{\upbeta}}_d\right)p\left({L}_d|{\boldsymbol{\uppsi}}_d\right)\\ {}=\frac{\Gamma \left({\sum}_{l\in {\mathbb{L}}^{\prime }}\kern0em {\beta}_l{\Lambda}_{dl}\right)}{\prod_{l\in {\mathbb{L}}^{\prime }}\kern0em \Gamma \left({\beta}_l{\Lambda}_{dl}\right)}\prod \limits_{l\in {\mathbb{L}}^{\prime }}\kern0em {\left({\psi}_{dl}{\Lambda}_{dl}\right)}^{\beta_l{\Lambda}_{dl}-1}\\ {}\kern0.62em \cdot \frac{\left({\sum}_{l\in {\mathbb{L}}^{\prime }}\kern0em {N}_{dl}{\Lambda}_{dl}\right)!}{\prod_{l\in {\mathbb{L}}^{\prime }}\kern0em \left({N}_{dl}{\Lambda}_{dl}\right)!}\prod \limits_{l\in {\mathbb{L}}^{\prime }}\kern0em {\left({\psi}_{dl}{\Lambda}_{dl}\right)}^{N_{dl}{\Lambda}_{dl}}\\ {}={\mathrm{C}}_1\frac{\Gamma \left({\sum}_{l\in {\mathbb{L}}^{\prime }}\kern0em {\beta}_l{\Lambda}_{dl}\right)}{\prod_{l\in {\mathbb{L}}^{\prime }}\kern0em \Gamma \left({\beta}_l{\Lambda}_{dl}\right)}\prod \limits_{l\in {\mathbb{L}}^{\prime }}\kern0em {\left({\psi}_{dl}{\Lambda}_{dl}\right)}^{\Lambda_{dl}\left({\beta}_l+{N}_{dl}\right)-1}\end{array}} $$ Ndl is the number of samples assigned to observed label \( l\in {\mathbb{L}}_d \) of protein d; C1 is the constant multinomial coefficient: $$ {\mathrm{C}}_1=\frac{\left({\sum}_{l\in {\mathbb{L}}^{\prime }}\kern0em {N}_{dl}{\Lambda}_{dl}\right)!}{\prod_{l\in {\mathbb{L}}^{\prime }}\kern0em \left({N}_{dl}{\Lambda}_{dl}\right)!}=\frac{\left({\sum}_{l\in {\mathbb{L}}^{\prime }}\kern0em {N}_{dl}\right)!}{\prod_{l\in {\mathbb{L}}_d}\kern0em {N}_{dl}!} $$ Suppose \( {\widehat{\beta}}_{dl}={\Lambda}_{dl}\left({\beta}_l+{N}_{dl}\right) \), \( {\widehat{\psi}}_{dl}={\psi}_{dl}{\Lambda}_{dl} \). This parameter is eliminated by integrating over ψd in Eq.
(11), the marginal distribution of local hidden variable Ld is shown below: $$ {\displaystyle \begin{array}{c}p\left({L}_d|{\boldsymbol{\Lambda}}_d,\boldsymbol{\upbeta} \right)={\int}_{\Psi_d}\kern0em p\left({\boldsymbol{\uppsi}}_d,{L}_d|{\boldsymbol{\Lambda}}_d,\boldsymbol{\upbeta} \right)\mathrm{d}{\boldsymbol{\uppsi}}_d\\ {}={\int}_{\Psi_d}\kern0em {\mathrm{C}}_1\frac{\Gamma \left({\sum}_{l\in {\mathbb{L}}^{\prime }}\kern0em {\beta}_l{\Lambda}_{dl}\right)}{\prod_{l\in {\mathbb{L}}^{\prime }}\kern0em \Gamma \left({\beta}_l{\Lambda}_{dl}\right)}\prod \limits_{l\in {\mathbb{L}}^{\prime }}\kern0em {\left({\psi}_{dl}{\Lambda}_{dl}\right)}^{\Lambda_{dl}\left({\beta}_l+{N}_{dl}\right)-1}\mathrm{d}{\boldsymbol{\uppsi}}_d\\ {}={\mathrm{C}}_1\frac{\Gamma \left({\sum}_{l\in {\mathbb{L}}^{\prime }}\kern0em {\beta}_l{\Lambda}_{dl}\right)}{\prod_{l\in {\mathbb{L}}^{\prime }}\kern0em \Gamma \left({\beta}_l{\Lambda}_{dl}\right)}{\left(\frac{\Gamma \left({\sum}_{l\in {\mathbb{L}}^{\prime }}\kern0em {\widehat{\beta}}_{dl}\right)}{\prod_{l\in {\mathbb{L}}^{\prime }}\kern0.1em \Gamma \left({\widehat{\beta}}_{dl}\right)}\right)}^{-1}\\ {}\kern0.6em \cdot {\int}_{{\widehat{\Psi}}_d}\kern0em \frac{\Gamma \left({\sum}_{l\in {\mathbb{L}}^{\prime }}\kern0em {\widehat{\beta}}_{dl}\right)}{\prod_{l\in {\mathbb{L}}^{\prime }}\kern0.1em \Gamma \left({\widehat{\beta}}_{dl}\right)}\prod \limits_{l\in {\mathbb{L}}^{\prime }}\kern0em {{\widehat{\psi}}_{dl}}^{{\widehat{\beta}}_{dl}-1}\mathrm{d}{\widehat{\boldsymbol{\uppsi}}}_d\\ {}\propto \frac{\Gamma \left({\sum}_{l\in {\mathbb{L}}^{\prime }}\kern0em {\beta}_l{\Lambda}_{dl}\right)}{\Gamma \left({\sum}_{l\in {\mathbb{L}}^{\prime }}\kern0em {\beta}_l{\Lambda}_{dl}+{N}_d\right)}\prod \limits_{l\in {\mathbb{L}}^{\prime }}\kern0em \frac{\Gamma \left({\beta}_l{\Lambda}_{dl}+{N}_{dl}{\Lambda}_{dl}\right)}{\Gamma \left({\beta}_l{\Lambda}_{dl}\right)}\end{array}} $$ \( {N}_d={\sum}_{l\in {\mathbb{L}}^{\prime }}\kern0em
{N}_{dl}{\Lambda}_{dl}={\sum}_{l\in {\mathbb{L}}_d}\kern0em {N}_{dl} \) is the number of observed samples of protein d. The integral of Eq. (14) satisfies probabilistic completeness: $$ {\int}_{{\widehat{\Psi}}_d}\kern0em \frac{\Gamma \left({\sum}_{l\in {\mathbb{L}}^{\prime }}\kern0em {\widehat{\beta}}_{dl}\right)}{\prod_{l\in {\mathbb{L}}^{\prime }}\kern0.1em \Gamma \left({\widehat{\beta}}_{dl}\right)}\prod \limits_{l\in {\mathbb{L}}^{\prime }}\kern0em {{\widehat{\psi}}_{dl}}^{{\widehat{\beta}}_{dl}-1}\mathrm{d}{\widehat{\boldsymbol{\uppsi}}}_d={\int}_{{\widehat{\Psi}}_d}\kern0em p\left({\widehat{\boldsymbol{\uppsi}}}_d|{\widehat{\boldsymbol{\upbeta}}}_d\right)\mathrm{d}{\widehat{\boldsymbol{\uppsi}}}_d=1 $$ Therefore, deducing from Eq. (14), the predictive probability distribution for the label-assignment ldn = l of sample xdn is: $$ p\left({\mathbf{l}}_{dn}=l|{L}_d^{\left(\backslash dn\right)},{\boldsymbol{\Lambda}}_d,\boldsymbol{\upbeta} \right)\propto \frac{\left({\beta}_l+{N}_{dl}^{\left(\backslash dn\right)}\right){\Lambda}_{dl}}{\sum_{l\in {\mathbb{L}}^{\prime }}\kern0em {\beta}_l{\Lambda}_{dl}+{N}_d^{\left(\backslash dn\right)}} $$ \( {N}_{dl}^{\left(\backslash dn\right)} \) is the number of samples assigned to label l in protein d, excluding the current sample xdn. In the same way, in the joint distribution Eq. (11), the global label parameter π only appears in p(π| α) and p(Zd| Ld, π).
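In code, Eq. (16) is a one-line count ratio; the sketch below (variable names are ours) normalizes over the active labels, since the denominator of Eq. (16) does not depend on l and therefore cancels:

```python
import numpy as np

def label_probs(N_dl, Lambda_d, beta):
    """p(l_dn = l | ...) of Eq. (16), up to normalization: (beta_l + N_dl^{\\dn}) * Lambda_dl.
    The denominator is constant in l, so it cancels when we normalize.
    N_dl must already exclude the current sample x_dn."""
    p = (beta + N_dl) * Lambda_d
    return p / p.sum()

Lambda_d = np.array([1.0, 0.0, 1.0, 1.0])  # active labels of protein d (last = background)
N_dl = np.array([3.0, 0.0, 1.0, 2.0])      # per-label counts excluding the current sample
probs = label_probs(N_dl, Lambda_d, beta=0.5)
# inactive label 2 gets probability 0; active labels are proportional to beta + count
```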
$$ {\displaystyle \begin{array}{c}p\left(\boldsymbol{\uppi}, Z|L,\boldsymbol{\upalpha} \right)=p\left(\boldsymbol{\uppi} |\boldsymbol{\upalpha} \right)p\left(Z|L,\boldsymbol{\uppi} \right)\\ {}=p\left(\boldsymbol{\uppi} |\boldsymbol{\upalpha} \right)\prod \limits_{d\in \mathbb{D}}\kern0em p\left({Z}_d|{L}_d,\boldsymbol{\uppi} \right)\\ {}=\prod \limits_{l\in {\mathbb{L}}^{\prime }}\kern0em p\left({\boldsymbol{\uppi}}_l|\boldsymbol{\upalpha} \right)\prod \limits_{d\in \mathbb{D}}\kern0em {\prod}_{n=1}^{N_d}\kern0em p\left({\mathbf{z}}_{dn}|{\mathbf{l}}_{dn}=l,{\boldsymbol{\uppi}}_l\right)\\ {}=\prod \limits_{l\in {\mathbb{L}}^{\prime }}\kern0em \frac{\Gamma \left({\sum}_{k\in \mathbb{K}}\kern0.1em {\alpha}_k\right)}{\prod_{k\in \mathbb{K}}\kern0.1em \Gamma \left({\alpha}_k\right)}\prod \limits_{k\in \mathbb{K}}\;{\pi_{lk}}^{\alpha_k-1}\frac{\left({\sum}_{k\in \mathbb{K}}\kern0em {N}_{lk}\right)!}{\prod_{k\in \mathbb{K}}\kern0em {N}_{lk}!}\prod \limits_{k\in \mathbb{K}}\;{\pi_{lk}}^{N_{lk}}\\ {}=\prod \limits_{l\in {\mathbb{L}}^{\prime }}\;{\mathrm{C}}_2\frac{\Gamma \left({\sum}_{k\in \mathbb{K}}\kern0.1em {\alpha}_k\right)}{\prod_{k\in \mathbb{K}}\kern0.1em \Gamma \left({\alpha}_k\right)}\prod \limits_{k\in \mathbb{K}}\;{\pi_{lk}}^{\alpha_k+{N}_{lk}-1}\end{array}} $$ Nlk represents the number of samples assigned to topic k of global label l; C2 is the constant multinomial coefficient: $$ {\mathrm{C}}_2=\frac{\left({\sum}_{k\in \mathbb{K}}\kern0em {N}_{lk}\right)!}{\prod_{k\in \mathbb{K}}\kern0em {N}_{lk}!} $$ Suppose \( {\widehat{\alpha}}_{lk}={\alpha}_k+{N}_{lk} \). This parameter is eliminated by integrating over π in Eq.
(17), the marginal distribution of local hidden variable Z is shown below: $$ {\displaystyle \begin{array}{c}p\left(Z|L,\boldsymbol{\upalpha} \right)=\prod \limits_{l\in {\mathbb{L}}^{\prime }}\kern0em {\int}_{\Pi_l}\kern0em p\left({\boldsymbol{\uppi}}_l|\boldsymbol{\upalpha} \right)\prod \limits_{d\in \mathbb{D}}\kern0em {\prod}_{n=1}^{N_d}\kern0em p\left({\mathbf{z}}_{dn}|{\mathbf{l}}_{dn}=l,{\boldsymbol{\uppi}}_l\right)\mathrm{d}{\boldsymbol{\uppi}}_l\\ {}=\prod \limits_{l\in {\mathbb{L}}^{\prime }}\kern0em {\int}_{\Pi_l}\kern0em {\mathrm{C}}_2\frac{\Gamma \left({\sum}_{k\in \mathbb{K}}\kern0em {\alpha}_k\right)}{\prod_{k\in \mathbb{K}}\kern0em \Gamma \left({\alpha}_k\right)}\prod \limits_{k\in \mathbb{K}}\;{\pi_{lk}}^{\alpha_k+{N}_{lk}-1}\mathrm{d}{\boldsymbol{\uppi}}_l\\ {}=\prod \limits_{l\in {\mathbb{L}}^{\prime }}\;{\mathrm{C}}_2\frac{\Gamma \left({\sum}_{k\in \mathbb{K}}\kern0em {\alpha}_k\right)}{\prod_{k\in \mathbb{K}}\kern0em \Gamma \left({\alpha}_k\right)}{\left(\frac{\Gamma \left({\sum}_{k\in \mathbb{K}}\kern0em {\widehat{\alpha}}_{lk}\right)}{\prod_{k\in \mathbb{K}}\kern0em \Gamma \left({\widehat{\alpha}}_{lk}\right)}\right)}^{-1}\\ {}\cdot {\int}_{\Pi_l}\kern0em \frac{\Gamma \left({\sum}_{k\in \mathbb{K}}\kern0em {\widehat{\alpha}}_{lk}\right)}{\prod_{k\in \mathbb{K}}\kern0em \Gamma \left({\widehat{\alpha}}_{lk}\right)}\prod \limits_{k\in \mathbb{K}}\;{\pi_{lk}}^{{\widehat{\alpha}}_{lk}-1}\mathrm{d}{\boldsymbol{\uppi}}_l\\ {}\propto \prod \limits_{l\in {\mathbb{L}}^{\prime }}\kern0em \frac{\Gamma \left({\sum}_{k\in \mathbb{K}}\kern0em {\alpha}_k\right)}{\Gamma \left({\sum}_{k\in \mathbb{K}}\kern0em {\alpha}_k+{N}_l\right)}\prod \limits_{k\in \mathbb{K}}\kern0em \frac{\Gamma \left({\alpha}_k+{N}_{lk}\right)}{\Gamma \left({\alpha}_k\right)}\end{array}} $$ \( {N}_l={\sum}_{k\in \mathbb{K}}\kern0em {N}_{lk} \) is the number of observed samples assigned to global label l in the protein set. The integral of Eq.
(19) satisfies probabilistic completeness: $$ {\int}_{\Pi_l}\kern0em \frac{\Gamma \left({\sum}_{k\in \mathbb{K}}\kern0em {\widehat{\alpha}}_{lk}\right)}{\prod_{k\in \mathbb{K}}\kern0em \Gamma \left({\widehat{\alpha}}_{lk}\right)}\prod \limits_{k\in \mathbb{K}}\;{\pi_{lk}}^{{\widehat{\alpha}}_{lk}-1}\mathrm{d}{\boldsymbol{\uppi}}_l={\int}_{\Pi_l}\kern0em p\left({\boldsymbol{\uppi}}_l|{\widehat{\boldsymbol{\upalpha}}}_l\right)\mathrm{d}{\boldsymbol{\uppi}}_l=1 $$ Therefore, deducing from Eq. (19), the predictive probability distribution for the topic-assignment k of sample xdn in label l is: $$ p\left({\mathbf{z}}_{dn}=k|{\mathbf{l}}_{dn}=l,{L}^{\left(\backslash dn\right)},{Z}^{\left(\backslash dn\right)},\boldsymbol{\upalpha} \right)\propto \frac{\alpha_k+{N}_{lk}^{\left(\backslash dn\right)}}{\sum_{k\in \mathbb{K}}\kern0.1em {\alpha}_k+{N}_l^{\left(\backslash dn\right)}} $$ \( {N}_{lk}^{\left(\backslash dn\right)} \) represents the number of samples assigned to topic k of global label l, excluding the current sample xdn, where \( {N}_l^{\left(\backslash dn\right)}={\sum}_{k\in \mathbb{K}}\kern0em {N}_{lk}^{\left(\backslash dn\right)} \). The integral over θ is the same as in LDA, applied to Eq.
(11): $$ {\displaystyle \begin{array}{c}p\left(W|Z,\boldsymbol{\uplambda} \right)=\prod \limits_{k\in \mathbb{K}}\kern0em {\int}_{\Theta_k}\kern0em p\left({\boldsymbol{\uptheta}}_k|\boldsymbol{\uplambda} \right)\prod \limits_{d\in \mathbb{D}}\kern0em {\prod}_{n=1}^{N_d}\kern0em p\left({\mathbf{w}}_{dn}|{\mathbf{z}}_{dn}=k,{\boldsymbol{\uptheta}}_k\right)\mathrm{d}{\boldsymbol{\uptheta}}_k\\ {}\propto \prod \limits_{k\in \mathbb{K}}\kern0em {\int}_{\Theta_k}\kern0em \frac{\Gamma \left({\sum}_{w\in \mathbb{W}}\kern0.1em {\lambda}_w\right)}{\prod_{w\in \mathbb{W}}\kern0.1em \Gamma \left({\lambda}_w\right)}\prod \limits_{w\in \mathbb{W}}\kern0em {\theta}_{kw}^{\lambda_w+{N}_{kw}-1}\mathrm{d}{\boldsymbol{\uptheta}}_k\\ {}\propto \prod \limits_{k\in \mathbb{K}}\kern0em \frac{\Gamma \left({\sum}_{w\in \mathbb{W}}\kern0.1em {\lambda}_w\right)}{\Gamma \left({\sum}_{w\in \mathbb{W}}\kern0.1em {\lambda}_w+{N}_k\right)}\prod \limits_{w\in \mathbb{W}}\kern0em \frac{\Gamma \left({\lambda}_w+{N}_{kw}\right)}{\Gamma \left({\lambda}_w\right)}\end{array}} $$ Then the predictive probability distribution over the word-assignment w of topic k for observed sample xdn is: $$ p\left({\mathbf{w}}_{dn}=w|{\mathbf{z}}_{dn}=k,{Z}^{\left(\backslash dn\right)},{W}^{\left(\backslash dn\right)},\boldsymbol{\uplambda} \right)\propto \frac{\lambda_w+{N}_{kw}^{\left(\backslash dn\right)}}{\sum_{w\in \mathbb{W}}\kern0.1em {\lambda}_w+{N}_k^{\left(\backslash dn\right)}} $$ \( {N}_{kw}^{\left(\backslash dn\right)} \) is the number of samples assigned to word w of topic k, excluding the current sample xdn, where \( {N}_k^{\left(\backslash dn\right)}={\sum}_{w\in \mathbb{W}}\kern0em {N}_{kw}^{\left(\backslash dn\right)} \). Given the above, the collapsed joint distribution of (L, Z, W) is obtained by integrating out (π, θ, ψ), as in Eqs. (14), (19) and (22).
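The three collapsed predictive distributions of Eqs. (16), (21) and (23) combine into a single block update of (l, z) for one word sample. The sketch below is our own minimal implementation, not the paper's code; the count arrays and names are ours, and a full sampler would loop this step over all samples and iterations:

```python
import numpy as np

rng = np.random.default_rng(1)

def gibbs_step(w, d, l_old, z_old, N_dl, N_lk, N_kw, Lambda, beta, alpha, lam):
    """Resample (l, z) for one word sample with word id w in protein d."""
    # remove the current sample from the counts (the "\dn" statistics)
    N_dl[d, l_old] -= 1; N_lk[l_old, z_old] -= 1; N_kw[z_old, w] -= 1

    p_l = (beta + N_dl[d]) * Lambda[d]                       # Eq. (16), up to a constant
    p_zl = (alpha + N_lk) / (alpha * N_lk.shape[1]           # Eq. (21)
                             + N_lk.sum(axis=1, keepdims=True))
    p_wz = (lam + N_kw[:, w]) / (lam * N_kw.shape[1]         # Eq. (23)
                                 + N_kw.sum(axis=1))

    joint = p_l[:, None] * p_zl * p_wz[None, :]              # table over (l, z)
    joint /= joint.sum()
    flat = rng.choice(joint.size, p=joint.ravel())
    l_new, z_new = divmod(flat, joint.shape[1])

    # add the sample back with its new assignment
    N_dl[d, l_new] += 1; N_lk[l_new, z_new] += 1; N_kw[z_new, w] += 1
    return l_new, z_new

# toy counts: 2 proteins, 3 labels, 4 topics, 6 words
N_dl = np.ones((2, 3)); N_lk = np.ones((3, 4)); N_kw = np.ones((4, 6))
Lambda = np.array([[1.0, 0.0, 1.0], [1.0, 1.0, 1.0]])   # label 1 inactive for protein 0
l, z = gibbs_step(2, 0, 0, 1, N_dl, N_lk, N_kw, Lambda, beta=0.5, alpha=0.5, lam=0.5)
```

Because the mask zeroes out inactive labels, the sampler can never assign a word to a label the protein does not carry, which is the effect of the Λdl factor in Eq. (16).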
$$ {\displaystyle \begin{array}{l}\kern1em p\left(L,Z,W|\boldsymbol{\Lambda}, \boldsymbol{\upbeta}, \boldsymbol{\upalpha}, \boldsymbol{\uplambda} \right)=p\left(L|\boldsymbol{\Lambda}, \boldsymbol{\upbeta} \right)p\left(Z|L,\boldsymbol{\upalpha} \right)p\left(W|Z,\boldsymbol{\uplambda} \right)\\ {}\propto \prod \limits_{d\in \mathbb{D}}\kern0em \frac{\Gamma \left({\sum}_{l\in {\mathbb{L}}^{\prime }}\kern0.1em {\beta}_l{\Lambda}_{dl}\right)}{\Gamma \left({\sum}_{l\in {\mathbb{L}}^{\prime }}\kern0.1em {\beta}_l{\Lambda}_{dl}+{N}_d\right)}\prod \limits_{l\in {\mathbb{L}}^{\prime }}\kern0em \frac{\Gamma \left({\beta}_l{\Lambda}_{dl}+{N}_{dl}{\Lambda}_{dl}\right)}{\Gamma \left({\beta}_l{\Lambda}_{dl}\right)}\\ {}\kern1.12em \cdot \prod \limits_{l\in {\mathbb{L}}^{\prime }}\kern0em \frac{\Gamma \left({\sum}_{k\in \mathbb{K}}\kern0.1em {\alpha}_k\right)}{\Gamma \left({\sum}_{k\in \mathbb{K}}\kern0.1em {\alpha}_k+{N}_l\right)}\prod \limits_{k\in \mathbb{K}}\kern0em \frac{\Gamma \left({\alpha}_k+{N}_{lk}\right)}{\Gamma \left({\alpha}_k\right)}\\ {}\kern1em \cdot \prod \limits_{k\in \mathbb{K}}\kern0em \frac{\Gamma \left({\sum}_{w\in \mathbb{W}}\kern0.2em {\lambda}_w\right)}{\Gamma \left({\sum}_{w\in \mathbb{W}}\kern0.2em {\lambda}_w+{N}_k\right)}\prod \limits_{w\in \mathbb{W}}\kern0em \frac{\Gamma \left({\lambda}_w+{N}_{kw}\right)}{\Gamma \left({\lambda}_w\right)}\end{array}} $$ To simplify computation, the Dirichlet prior distributions are symmetric Dirichlet distributions: $$ {\displaystyle \begin{array}{l}\boldsymbol{\upbeta} ={\left\{{\beta}_l\right\}}_{l\in {\mathbb{L}}^{\prime }}=\left\{\underset{\left|{\mathbb{L}}^{\prime}\right|=L+1}{\underbrace{\beta, \dots, \beta }}\right\}\\ {}\boldsymbol{\upalpha} ={\left\{{\alpha}_k\right\}}_{k\in \mathbb{K}}=\left\{\underset{\mid \mathbb{K}\mid =K}{\underbrace{\alpha, \dots, \alpha }}\right\}\\ {}\boldsymbol{\uplambda} ={\left\{{\lambda}_w\right\}}_{w\in \mathbb{W}}=\left\{\underset{\mid \mathbb{W}\mid =W}{\underbrace{\lambda, 
\dots, \lambda }}\right\}\end{array}} $$ \( {\sum}_{l\in {\mathbb{L}}^{\prime }}\kern0em {\beta}_l{\Lambda}_{dl}={\sum}_{l\in {\mathbb{L}}_d}\kern0em {\beta}_l=\beta {L}_d \), \( {\sum}_{k\in \mathbb{K}}\kern0.1em {\alpha}_k=\alpha K \) and \( {\sum}_{w\in \mathbb{W}}\kern0.1em {\lambda}_w=\lambda W \) can be substituted into Eq. (24): $$ {\displaystyle \begin{array}{l}\kern1em p\left(L,Z,W|\boldsymbol{\Lambda}, \boldsymbol{\upbeta}, \boldsymbol{\upalpha}, \boldsymbol{\uplambda} \right)=p\left(L|\boldsymbol{\Lambda}, \boldsymbol{\upbeta} \right)p\left(Z|L,\boldsymbol{\upalpha} \right)p\left(W|Z,\boldsymbol{\uplambda} \right)\\ {}\kern10em \propto \prod \limits_{d\in \mathbb{D}}\kern0em \frac{\Gamma \left(\beta {L}_d\right)}{\Gamma \left(\beta {L}_d+{N}_d\right)}\prod \limits_{l\in {\mathbb{L}}_d}\kern0em \frac{\Gamma \left(\beta +{N}_{dl}\right)}{\Gamma \left(\beta \right)}\\ {}\kern10.5em \cdot \prod \limits_{l\in {\mathbb{L}}^{\prime }}\kern0em \frac{\Gamma \left(\alpha K\right)}{\Gamma \left(\alpha K+{N}_l\right)}\prod \limits_{k\in \mathbb{K}}\kern0em \frac{\Gamma \left(\alpha +{N}_{lk}\right)}{\Gamma \left(\alpha \right)}\\ {}\kern10em \cdot \prod \limits_{k\in \mathbb{K}}\kern0em \frac{\Gamma \left(\lambda W\right)}{\Gamma \left(\lambda W+{N}_k\right)}\prod \limits_{w\in \mathbb{W}}\kern0em \frac{\Gamma \left(\lambda +{N}_{kw}\right)}{\Gamma \left(\lambda \right)}\end{array}} $$ Then, the prediction probability distribution of the hidden variables zdn and ldn can be computed from this collapsed joint distribution as the transition probability of the state space in the Markov chain. Through Gibbs sampling iterations, the Markov chain converges to the target stationary distribution after the burn-in time. Finally, by collecting sufficient statistics from the converged Markov chain state space and averaging over the samples, we obtain posterior estimates of the corresponding parameters. Deducing from Eqs.
(16), (21) and (23), the predictive probability distribution for the word-assignment w of topic k in label l for sample xdn is: $$ {\displaystyle \begin{array}{l}\kern1em p\left({\mathbf{l}}_{dn}=l,{\mathbf{z}}_{dn}=k,{\mathbf{x}}_{dn}=w|{L}^{\left(\backslash dn\right)},{Z}^{\left(\backslash dn\right)},{W}^{\left(\backslash dn\right)},{\boldsymbol{\Lambda}}_d,\boldsymbol{\upbeta}, \boldsymbol{\upalpha}, \boldsymbol{\uplambda} \right)\\ {}\propto p\left({\mathbf{l}}_{dn}=l|{L}_d^{\left(\backslash dn\right)},{\boldsymbol{\Lambda}}_d,\boldsymbol{\upbeta} \right)\cdot p\left({\mathbf{z}}_{dn}=k|{\mathbf{l}}_{dn}=l,{L}^{\left(\backslash dn\right)},{Z}^{\left(\backslash dn\right)},\boldsymbol{\upalpha} \right)\cdot \\ {}\kern1em p\left({\mathbf{w}}_{dn}=w|{\mathbf{z}}_{dn}=k,{Z}^{\left(\backslash dn\right)},{W}^{\left(\backslash dn\right)},\boldsymbol{\uplambda} \right)\\ {}\propto \frac{\left({\beta}_l+{N}_{dl}^{\left(\backslash dn\right)}\right){\Lambda}_{dl}}{\sum_{l\in {\mathbb{L}}^{\prime }}\kern0em {\beta}_l{\Lambda}_{dl}+{N}_d^{\left(\backslash dn\right)}}\cdot \frac{\alpha_k+{N}_{lk}^{\left(\backslash dn\right)}}{\sum_{k\in \mathbb{K}}\kern0.1em {\alpha}_k+{N}_l^{\left(\backslash dn\right)}}\cdot \frac{\lambda_w+{N}_{kw}^{\left(\backslash dn\right)}}{\sum_{w\in \mathbb{W}}\kern0.1em {\lambda}_w+{N}_k^{\left(\backslash dn\right)}}\end{array}} $$ To investigate the performance of the proposed method, we utilize two types of datasets. The first is the S.cerevisiae dataset (S.C) proposed in [19], and the second is a human dataset constructed by ourselves. The S.C dataset contains several sub-datasets constructed from different characteristics of the yeast genome. Meanwhile, each sub-dataset uses two function annotation standards, FunCat and GO. We mainly use the sub-dataset based on the amino acid sequence of the protein and GO.
What's more, to compare the performance of PFTP across different label numbers, we construct a dataset named S.C-CC from S.C, which only includes GO terms belonging to the cellular component. Thus, two datasets are constructed from S.C. The human dataset is constructed from the Universal Protein Resource (UniProt) databank [2] in a similar way to reference [4]. Meanwhile, we construct two human datasets with different word lengths, where the maximum word length of the Human1 dataset is two letters and that of the Human2 dataset is three letters. Due to the large number of GO terms in the protein function dataset, we adopted a label space dimension reduction (LSDR) method to reduce the classification difficulty. Boolean Matrix Decomposition (BMD) has recently been studied for LSDR, as it can conveniently recover the label space after classification. Therefore, the BMD method proposed in reference [20] is applied to the S.C and Human datasets. The statistics of the above datasets are displayed in Table 1. 'L' represents the number of GO terms after BMD; 'D' denotes the number of proteins in each dataset; 'W' denotes the size of the vocabulary. Table 1 The statistics of the four datasets Parameter settings The PFTP model involves three parameters: α, λ and K. α and λ are the parameters of the two Dirichlet distributions, where the larger the value of λ, the more balanced the word probabilities within a topic. Based on experience, we set α = 50/K, λ = 200/W. The settings and impact of the K value are explained later. In the Gibbs sampling process of model training, we use a single Markov chain with a maximum of 2000 iterations, of which the first 1000 are burn-in. We record the state space every 50 iterations on the converged Markov chain, collecting 20 records in total. In the process of model predicting, we set the number of iterations to 1000.
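Collected in one place, the settings above amount to a small configuration. The helper below is ours, not code from the paper; the vocabulary size W is dataset-dependent, so the value in the example call is purely illustrative:

```python
def make_config(L, W, K_mult=3):
    """Hyperparameters and sampling schedule as stated in the text."""
    K = K_mult * L                         # topics: a fixed multiple of the label count
    return {
        "K": K,
        "alpha": 50 / K,                   # Dirichlet prior on label->topic distributions
        "lambda": 200 / W,                 # Dirichlet prior on topic->word distributions
        "train_iters": 2000, "train_burn_in": 1000,
        "predict_iters": 1000, "predict_burn_in": 500,
        "record_interval": 50, "n_records": 20,
        "n_chains": 1,
    }

cfg = make_config(L=319, W=400)            # L as in S.C-CC; W is an illustrative value
```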
After a burn-in of 500 iterations, we record the state space every 50 iterations. Evaluation criteria In all of our experiments, we use three representative multi-label learning evaluation criteria: Hamming loss (HL), Average precision (AP) and One-Error. Besides, we also use three kinds of area under the Precision-Recall curve proposed in reference [19]: \( \overline{AUPRC} \), \( AU\left(\overline{PRC}\right) \) and \( \overline{AUPRCw} \). Meanwhile, 5-fold cross-validation is adopted to assess the performance of PFTP and the contrast methods. The average results of 5 independent rounds are reported in the following sections. The impact of topic number on experimental results K denotes the number of global topics. The impact of K on model performance is analyzed in this section. According to the description of Section 2, as PFTP allocates one or more latent topics to each GO term, the value of K can in theory range from L to infinity. Specifically, if we allocate only one topic to each GO term (K = L), the model reduces to Labeled-LDA. Obviously, setting K < L deprives PFTP of the ability to discover the sub-structure of functions. In our experiment, each function is assigned exactly the same number of topics for simplicity of computation. For example, if we set K = 3L, each GO term corresponds to a topic set with three topics. For the above reason, the lower bound of the K value is set to 2L. On the other hand, although in theory a larger K value corresponds to a more refined sub-structure of labels, incorporating more latent topics per function increases the computational load. In reference [18], the impact of the K value on the effectiveness of the PLDA model has been discussed on several text collections. With a growing number of topics, the performance of the PLDA model approaches a fixed value obtained by a non-parametric model.
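For reference, the three generic multi-label criteria have simple definitions. The sketch below is our own numpy implementation of the standard formulas (it assumes every instance has at least one relevant label), not the evaluation code used in the experiments:

```python
import numpy as np

def hamming_loss(Y_true, Y_pred):
    """Fraction of label slots predicted incorrectly (lower is better)."""
    return float(np.mean(Y_true != Y_pred))

def one_error(Y_true, scores):
    """Fraction of instances whose top-ranked label is not relevant (lower is better)."""
    top = np.argmax(scores, axis=1)
    return float(np.mean(Y_true[np.arange(len(top)), top] == 0))

def average_precision(Y_true, scores):
    """Mean over instances of the average precision of the label ranking (higher is better)."""
    ap = []
    for y, s in zip(Y_true, scores):
        order = np.argsort(-s)          # labels sorted by decreasing score
        rel = y[order]
        hits = np.cumsum(rel)
        ranks = np.arange(1, len(y) + 1)
        ap.append(np.mean(hits[rel == 1] / ranks[rel == 1]))
    return float(np.mean(ap))

Y = np.array([[1, 0, 1], [0, 1, 0]])              # toy ground-truth label matrix
S = np.array([[0.9, 0.1, 0.8], [0.2, 0.7, 0.1]])  # toy prediction scores
```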
In other words, an ever larger number of topics does not yield ever greater performance, only an unbearable running time. Therefore, we set the upper bound of the K value to 5L based on our empirical experience and an acceptable level of time overhead. In sum, the K value should be set to an integer between 2L and 5L. The performance of PFTP under different K values is shown in Fig. 4. The performance comparison of different K settings. For AP, \( AU\left(\overline{PRC}\right) \), \( \overline{AUPRC} \) and \( \overline{AUPRCw} \), the larger the value, the better the performance; for HL and One-Error, the smaller the value, the better the performance; the red background represents the best value range As shown in Fig. 4, all evaluation criteria are relatively stable when K is set to 2L~4L. Nonetheless, when the K value is greater than 4L, the values of AP, \( \overline{AUPRC} \), \( AU\left(\overline{PRC}\right) \) and \( \overline{AUPRCw} \) decrease with increasing K, while the values of Hamming loss and One-Error slowly increase. These results suggest that the optimum range of K is 2L to 4L. This is because a lower K value allocates fewer topics to each label, while a higher K value makes the word distributions of different topics less distinguishable. What's more, the problem of a huge label space is particularly obvious in protein function datasets, even though a BMD method has been applied to reduce the label dimension. Therefore, we set K to 3L in our experiments. Evaluation against widely adopted methods Firstly, we compare PFTP with Labeled-LDA [4] and the multi-label K-nearest neighbor classifier (MLKNN) [21] on the four datasets. MLKNN is a representative multi-label classifier and is available in an open-source tool called Mulan [22].
Figure 5 shows the HL, AP, One-Error, \( AU\left(\overline{PRC}\right) \), \( \overline{AUPRC} \) and \( \overline{AUPRCw} \) values of these three models on the S.C, S.C-CC, Human1 and Human2 datasets, respectively. For AP, \( AU\left(\overline{PRC}\right) \), \( \overline{AUPRC} \) and \( \overline{AUPRCw} \), the larger the value, the better the performance. Conversely, for HL and One-Error, the smaller the value, the better the performance. The red asterisk in Fig. 5 represents the best result on each dataset. The comparison results of PFTP and Labeled-LDA. For AP, \( \overline{AUPRC} \), \( AU\left(\overline{PRC}\right) \) and \( \overline{AUPRCw} \), the larger the value, the better the performance; for HL and One-Error, the smaller the value, the better the performance; the red asterisk on a bar represents the best result on each dataset As shown in Fig. 5, we can observe that PFTP shows more advantages than Labeled-LDA and MLKNN on all four datasets. A concrete analysis follows. For the Human1 dataset, PFTP obtains a better performance on all evaluation criteria. On HL, PFTP achieves 9.7 and 2% improvements over Labeled-LDA and MLKNN. On One-Error, PFTP achieves 80 and 99% improvements over Labeled-LDA and MLKNN. On AP, \( AU\left(\overline{PRC}\right) \), \( \overline{AUPRC} \) and \( \overline{AUPRCw} \), PFTP achieves 2.5, 0.2, 47 and 18% improvements over Labeled-LDA, and achieves 48, 40, 43 and 41% improvements over MLKNN. Obviously, the improvements on \( \overline{AUPRC} \) and \( \overline{AUPRCw} \) are more significant than on \( AU\left(\overline{PRC}\right) \). For the Human2 dataset, PFTP obtains a better performance on the four evaluation criteria other than \( AU\left(\overline{PRC}\right) \) and \( \overline{AUPRC} \). On HL, PFTP achieves 30 and 7.9% improvements over Labeled-LDA and MLKNN. On One-Error, PFTP achieves 66 and 99% improvements over Labeled-LDA and MLKNN.
On AP and \( \overline{AUPRCw} \), PFTP achieves 3.3 and 0.2% improvements over Labeled-LDA, and achieves 40 and 29% improvements over MLKNN. Nevertheless, on \( AU\left(\overline{PRC}\right) \) and \( \overline{AUPRC} \), MLKNN and Labeled-LDA get better results, respectively. For the S.C dataset, PFTP obtains a better performance on the four evaluation criteria other than HL and One-Error. On AP, \( \overline{AUPRC} \) and \( \overline{AUPRCw} \), PFTP achieves 2.8, 22 and 16% improvements over Labeled-LDA, and achieves 48, 17 and 32% improvements over MLKNN; on \( AU\left(\overline{PRC}\right) \), the results of Labeled-LDA and PFTP are almost the same. Nevertheless, on HL, MLKNN gets better results than PFTP; on One-Error, almost identical results are obtained by the three methods. For the S.C-CC dataset, PFTP obtains a better performance on AP, \( \overline{AUPRC} \) and \( \overline{AUPRCw} \). On AP, PFTP achieves 2.6 and 27% improvements over Labeled-LDA and MLKNN. On \( \overline{AUPRC} \), PFTP achieves 14 and 32% improvements over Labeled-LDA and MLKNN. On \( \overline{AUPRCw} \), PFTP achieves 7.8 and 41% improvements over Labeled-LDA and MLKNN. Besides, we compare PFTP with three hierarchical multi-label classification (HMC) algorithms based on decision trees, namely CLUS-HMC/SC (single-label classification)/HSC (hierarchical single-label classification) [19]. These three algorithms have been studied on protein function prediction datasets and have proven to be multi-label classifiers with great performance. Since the results of CLUS-HMC/SC/HSC in reference [19] are only on the S.C dataset, the comparison with our PFTP is also on the S.C dataset, and is plotted in Fig. 6. The comparison results of PFTP and CLUS-HMC/SC/HSC.
In Fig. 6, larger values indicate better performance for all three evaluation criteria, and the red asterisk on each bar marks the best result. On \( \overline{AUPRC} \), our method shows a dominant advantage over all three comparison methods, with improvements of 85%, 85% and 84% over CLUS-SC, CLUS-HSC and CLUS-HMC, respectively. On \( AU\left(\overline{PRC}\right) \), PFTP achieves 65%, 51% and 32% improvements over CLUS-SC, CLUS-HSC and CLUS-HMC. Nonetheless, on \( \overline{AUPRCw} \), CLUS-HMC obtains better results than PFTP.

The topics discovered by PFTP

The greatest strength of our protein function topic modeling is that it not only provides the function label probability distribution over proteins as an output, but also allows each function label to be explained as a probability distribution over a topic subset, where each topic is in turn represented as a probability distribution over amino acid blocks. To better understand this topic modeling process, we take the GO term 'GO0016020' as an example; its corresponding topics are shown in Table 2 (the topics discovered by the two models). As shown in Table 2, the 2-mer BoW is used in this example. For Labeled-LDA, the one-to-one correspondence between labels and topics is the key design consideration. Therefore, 'GO0016020' corresponds to a single topic, numbered 288, and hence to a single probability distribution over words; the top 20 words are listed in descending order of probability. For the PFTP model, the global topic set is partitioned across GO terms. For example, for the S.C-CC dataset, the number of function labels is 319 while the number of global topics is three times the number of labels plus one background topic, for a total of 958 topics. Therefore, each GO term corresponds to four topics (three local topics and the one background topic). Topics numbered 863, 864, 865 and 1 are the four topics corresponding to 'GO0016020', where topic 1 is the background topic.
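The label-to-topic bookkeeping just described (each of the 319 S.C-CC labels owning three disjoint local topics, plus one background topic shared by all labels, 958 topics in total) can be sketched as follows. This is our own minimal reconstruction: the function name and the exact numbering scheme are illustrative assumptions, not code from the paper, although this particular numbering happens to reproduce the topic ids 863, 864, 865 and 1 quoted above if 'GO0016020' is taken to be the 288th label.

```python
# Sketch of the PFTP label-to-topic partition (illustrative numbering, not the
# authors' code): every function label owns `topics_per_label` disjoint local
# topics, and all labels additionally share one global background topic.

def build_topic_partition(n_labels, topics_per_label=3, background_topic=1):
    """Map label index (1..n_labels) to its local topic ids plus the background topic."""
    partition = {}
    next_topic = background_topic + 1  # local topic ids start after the background topic
    for label in range(1, n_labels + 1):
        local = list(range(next_topic, next_topic + topics_per_label))
        partition[label] = local + [background_topic]
        next_topic += topics_per_label
    return partition

# S.C-CC setup from the text: 319 labels -> 319 * 3 local topics + 1 background = 958
partition = build_topic_partition(n_labels=319)
```

With this numbering the 288th label maps to topics [863, 864, 865, 1], matching the topic ids listed for 'GO0016020' in Table 2.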
Likewise, the top 20 words of each of these four topics are listed in descending order of probability. The results in Figs. 5 and 6 indicate that PFTP has a significant advantage over several widely adopted multi-label classifiers. Compared with traditional (non-topic-model) multi-label classifiers, our method further improves the accuracy of protein function prediction by introducing topic subsets into a supervised topic model, which can discover topics that represent the common semantics of documents and reflect the differences between labels and latent topics. The advantage over CLUS-HMC/SC/HSC on \( \overline{AUPRC} \) is especially pronounced. We attribute this success to the use of the BMD method on the dataset: the computation of \( \overline{AUPRC} \) is not biased toward the accuracy of function labels annotating more proteins, but instead reflects the average accuracy over all labels. GO terms annotating few proteins are removed by BMD processing and recovered after prediction, without a reduction in prediction accuracy. In other words, the combination of PFTP and BMD improves the average accuracy of protein function prediction. Compared with Labeled-LDA, PFTP is able to discover a more refined latent sub-structure of function labels. By introducing a topic subset for each label, PFTP discloses the relationships between functions and words, and between labels and latent topics. We therefore anticipate that PFTP is a potential method to reveal a deeper biological explanation of protein functions. Meanwhile, the performance comparison across datasets is also shown in Fig. 4. For the S.C-CC dataset, the six evaluation criteria vary relatively smoothly. This may be due to the fewer labels of the S.C-CC dataset, so that changing the K value does not greatly affect prediction performance.
Comparing the S.C and S.C-CC datasets, we find that the values of AP, \( AU\overline{(PRC)} \), \( \overline{AUPRC} \) and \( \overline{AUPRCw} \) on S.C are lower than on S.C-CC, while the values of One-Error and HL are almost equal between the two. This is due to the shared word space and different label numbers of these two datasets: the fewer labels of S.C-CC allow higher classification performance. Comparing the Human1 and Human2 datasets, we find that the values of \( \overline{AUPRC} \) and \( \overline{AUPRCw} \) on Human1 are higher than on Human2; the value of AP on Human1 is lower than on Human2; and the values of One-Error, HL and \( AU\overline{(PRC)} \) are almost equal. These results show that the classification performance of PFTP on Human1 and Human2 is almost the same, which suggests that a larger word space does not necessarily yield better classification performance.

In this paper, we introduced an improved multi-label supervised topic model for predicting protein function. In our previous study, a multi-label supervised topic model, Labeled-LDA, was applied to protein function prediction; it associates each label (GO term) directly with a corresponding topic. This design makes the latent topics completely degenerate and ignores the differences between labels and latent topics. To address this limitation, we proposed the Partially Function-to-Topic Prediction (PFTP) model, which introduces a local topic subset corresponding to each function label. PFTP supports not only latent topic subsets within given function labels but also a background topic corresponding to a 'fake' function label. In a 5-fold cross-validation experiment on predicting protein function, PFTP significantly outperformed the compared methods. Owing to its more refined modeling of function labels, PFTP shows effectiveness and potential value in predicting protein function in our experimental studies.
Meanwhile, several aspects of topic modeling for protein function prediction remain to be improved, such as the incorporation of additional protein features and of the hierarchical function label structure. Overall, the multi-label topic model is a promising method for many applications in bioinformatics.

Abbreviations: BMD: Boolean Matrix Decomposition; BoW: Bag of Words; CGS: Collapsed Gibbs Sampling; HL: Hamming Loss; AP: Average Precision; HMC: Hierarchical Multi-label Classification; HSC: Hierarchical Single-label Classification; LSDR: Label Space Dimension Reduction; MLKNN: Multi-label K-Nearest Neighbor; PFTP: Partially Function-to-Topic Prediction; PLDA: Partially Labeled LDA; PLSA: Probabilistic Latent Semantic Analysis; S.C: S. cerevisiae; SC: Single-label Classification; UniProt: Universal Protein Resource

Weaver RF. Molecular biology (WCB Cell & Molecular Biology). 5th ed. New York: McGraw-Hill Education; 2011. Consortium UP. UniProt: the universal protein knowledgebase. Nucleic Acids Res. 2016;45(D1):D158–69. Berman HM, Battistuz T, Bhat TN. The Protein Data Bank. Berlin: Atomic evidence: Springer International Publishing; 2016. p. 218–22. Liu L, Tang L, He L, Wei Z, Shaowen Y. Predicting protein function via multi-label supervised topic model on gene ontology. Biotechnol Biotechnol Equip. 2017;31(1):1–9. Altschul SF, Madden TL, Schäffer AA, Zhang J, Zhang Z, Miller W, Lipman DJ. Gapped BLAST and PSI-BLAST: a new generation of protein database search programs. Nucleic Acids Res. 1997;25:3389–402. Gene Ontology Consortium. The gene ontology (GO) database and informatics resource. Nucleic Acids Res. 2004;32(Suppl 1):D258–61. Cao R, Cheng J. Integrated protein function prediction by mining function associations, sequences, and protein–protein and gene–gene interaction networks. Methods. 2016;93:84–91. Erdin S, Venner E, Lisewski AM, Lichtarge O. Function prediction from networks of local evolutionary similarity in protein structure. BMC Bioinformatics. 2013;14(3):S6.
Yu G, Rangwala H, Domeniconi C, Zhang G, Zhang Z. Predicting protein function using multiple kernels. IEEE/ACM Trans Comput Biol Bioinform. 2015;12(1):219–33. Fodeh S, Tiwari A, Yu H. Exploiting PubMed for protein molecular function prediction via NMF based multi-label classification. In: Proceedings of the International Conference on Data Mining Workshops; 2017. p. 446–51. Orderly roulette selection based ant colony algorithm for hierarchical multi-label protein function prediction. Math Probl Eng. 2017;2017(2):1–15. Wang H, Yan L, Huang H, Ding C. From protein sequence to protein function via multi-label linear discriminant analysis. IEEE/ACM Trans Comput Biol Bioinform. 2017;14(3):503–13. Pinoli P, Chicco D, Masseroli M. Enhanced probabilistic latent semantic analysis with weighting schemes to predict genomic annotations. In: Proceedings of the 13th International Conference on Bioinformatics and Bioengineering (BIBE); 2013. p. 1–4. Masseroli M, Chicco D, Pinoli P. Probabilistic latent semantic analysis for prediction of gene ontology annotations. In: Proceedings of the International Joint Conference on Neural Networks (IJCNN); 2012. p. 1–8. Pinoli P, Chicco D, Masseroli M. Latent Dirichlet allocation based on Gibbs sampling for gene function prediction. In: Proceedings of the International Conference on Computational Intelligence in Bioinformatics and Computational Biology; 2014. p. 1–8. Dumais ST. Latent semantic analysis. Ann Rev Inf Sci Technol. 2004;38(1):188–230. Blei DM, Ng AY, Jordan MI. Latent Dirichlet allocation. J Mach Learn Res. 2003;3:993–1022. Ramage D, Manning CD, Dumais S. Partially labeled topic models for interpretable text mining. In: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; 2011. p. 457–65. Vens C, Struyf J, Schietgat L, Džeroski S, Blockeel H. Decision trees for hierarchical multi-label classification.
Mach Learn. 2008;73(2):185–214. Sun Y, Ye S, Sun Y, Kameda T. Improved algorithms for exact and approximate Boolean matrix decomposition. In: Proceedings of the International Conference on Data Science and Advanced Analytics; 2015. p. 1–10. Zhang M, Zhou Z. ML-KNN: a lazy learning approach to multi-label learning. Pattern Recogn. 2007;40(7):2038–48. Tsoumakas G, Katakis I, Vlahavas I. Mining multi-label data. In: Maimon O, Rokach L, editors. Data mining and knowledge discovery handbook. New York: Springer US; 2009. p. 667–85.

We would like to thank the researchers in the State Key Laboratory of Conservation and Utilization of Bio-resources, Yunnan University, Kunming, China. Their very helpful comments and suggestions have led to an improved version of the paper. This research was supported by the National Natural Science Foundation of China (no. 61862067, no. 61363021) and the Doctor Science Foundation of Yunnan Normal University (no. 01000205020503090, no. 2016zb009). Publication costs are funded by the Doctor Science Foundation of Yunnan Normal University (no. 2016zb009). The data and source code are available upon request. About this supplement: This article has been published as part of BMC Genomics Volume 19 Supplement 10, 2018: Proceedings of the 29th International Conference on Genome Informatics (GIW 2018): genomics. The full contents of the supplement are available online at https://bmcgenomics.biomedcentral.com/articles/supplements/volume-19-supplement-10. School of Information, Yunnan Normal University, Kunming, 650500, Yunnan, China: Lin Liu. Key Laboratory of Educational Informatization for Nationalities, Ministry of Education, Yunnan Normal University, Kunming, 650500, Yunnan, China: Lin Tang. President's Office, Yunnan Normal University, Kunming, 650500, Yunnan, China: Mingjing Tang. School of Software, Yunnan University, Kunming, 650091, Yunnan, China: Wei Zhou. LT and WZ conceived the study and revised the manuscript.
LL analyzed materials and literature, and drafted the manuscript. LT and MT participated in the literature analyses. All authors have read and approved the final manuscript. Correspondence to Lin Tang or Wei Zhou. Liu, L., Tang, L., Tang, M. et al. A partially function-to-topic model for protein function prediction. BMC Genomics 19, 883 (2018). https://doi.org/10.1186/s12864-018-5276-7 Keywords: Multi-label classification; Topic model
Number Theory & Analysis
Session code: nta
Organizers: Harald Andres Helfgott (Universität Göttingen), Roberto Miatello (Universidad Nacional de Córdoba), Maksym Radziwill (McGill University)

Thursday, Jul 27 [McGill U., Rutherford Physics Building, Room 115]
11:45 Henryk Iwaniec (Rutgers University), Critical zeros of L-functions
12:15 John Friedlander (University of Toronto), On Dirichlet $L$-functions
14:15 Kannan Soundararajan (Stanford University), Value distribution of L-functions
14:45 Emanuel Carneiro (IMPA), Bandlimited approximations and estimates for the Riemann zeta-function
15:45 Lola Thompson (Oberlin College), Bounded gaps between primes and the length spectra of arithmetic hyperbolic $3$-orbifolds
16:15 Henry Kim (University of Toronto), The least prime in a conjugacy class
17:00 Emilio Lauret (Universidad Nacional de Córdoba, Argentina), One-norm spectrum of a lattice
17:30 Alex Kontorovich (Rutgers University), Beyond Expansion and Arithmetic Chaos

Friday, Jul 28 [McGill U., Rutherford Physics Building, Room 115]
12:15 Amir Mohammadi (University of California at San Diego), Effective equidistribution of certain adelic periods
14:15 Misha Belolipetsky (IMPA), Lehmer's problem and triangulations of arithmetic hyperbolic 3-orbifolds
14:45 Clara Aldana (Université du Luxembourg), Determinants of Laplacians on surfaces with singularities
15:45 Alireza Golsefidy (University of California at San Diego), Super-approximation
16:15 Corentin Perret-Gentil (CRM, Montreal), Quotients of elliptic curves over finite fields
17:00 Anthony Varilly-Alvarado (Rice University), On a uniform boundedness conjecture for Brauer groups of K3 surfaces

Henryk Iwaniec, Critical zeros of L-functions. I will discuss various issues related to the zeros on the critical line of families of L-functions. These include the Riemann zeta function, the Dirichlet L-functions, the Hecke L-functions of quadratic number fields, and the L-functions of elliptic curves.
Location: McGill U., Rutherford Physics Building, Room 115

John Friedlander, On Dirichlet $L$-functions. We discuss some relations among their values at $s=1$, their character values at primes, and their zero-free regions. (Joint with H. Iwaniec.)

Kannan Soundararajan, Value distribution of L-functions. I will discuss some work with Radziwill motivated by the Keating-Snaith conjectures that central values of L-functions should be log-normal.

Emanuel Carneiro, Bandlimited approximations and estimates for the Riemann zeta-function. We provide explicit upper and lower bounds for the argument of the Riemann zeta-function and its antiderivatives on the critical line and in the critical strip, under the assumption of the Riemann hypothesis. Our tools come not only from number theory, but also from Fourier analysis and approximation theory. An important element in our strategy is the ability to solve a Fourier optimization problem with constraints, namely, the problem of majorizing certain real-valued even functions by bandlimited functions, optimizing the $L^1(\mathbb{R})$-error. Deriving explicit formulae for the Fourier transforms of such optimal approximations plays a crucial role in our approach. The most recent works are joint with A. Chirre and M. Milinovich.

Lola Thompson, Bounded gaps between primes and the length spectra of arithmetic hyperbolic $3$-orbifolds. In 1992, Reid posed the question of whether hyperbolic $3$-manifolds with the same geodesic length spectra are necessarily commensurable. While this is known to be true for arithmetic hyperbolic $3$-manifolds, the non-arithmetic case is still open. Building towards a negative answer to Reid's question, Futer and Millichap have recently constructed infinitely many pairs of non-commensurable, non-arithmetic hyperbolic $3$-manifolds which have the same volume and whose length spectra begin with the same first $n$ geodesic lengths.
In the present talk, we show that this phenomenon is surprisingly common in the arithmetic setting. In particular, given any arithmetic hyperbolic $3$-orbifold derived from a quaternion algebra and any finite subset $S$ of its geodesic length spectrum, we produce, for any $k\geq 2$, infinitely many $k$-tuples of arithmetic hyperbolic $3$-orbifolds which are pairwise non-commensurable, have geodesic length spectra containing $S$, and have volumes lying in an interval of (universally) bounded length. The main technical ingredient in our proof is a bounded gaps result for prime ideals in number fields lying in Chebotarev sets. This talk is based on joint work with B. Linowitz, D. B. McReynolds, and P. Pollack.

Henry Kim, The least prime in a conjugacy class. Let $K$ be a number field with discriminant $d_K$ and Galois closure $\hat K$. Let $C$ be a conjugacy class of $Gal(\hat K/\Bbb Q)$. Let $n_{K,C}$ be the least prime $p$ which is ramified or whose Frobenius automorphism Frob$_p$ does not belong to $C$. Then under GRH, $n_{K,C}$ is $O((\log |d_K|)^2)$. We prove two unconditional results regarding $n_{K,C}$. First, the average of $n_{K,C}$ in a family of $S_n$-fields ($n=3,4,5$) is a constant. Second, in a family of $S_n$-fields ($n=3,4,5$), except for a density zero set, $n_{K,C}=O(\log |d_K|)$.

Emilio Lauret (Universidad Nacional de Córdoba, Argentina), One-norm spectrum of a lattice. In 1964, John Milnor gave the first example of isospectral non-isometric compact Riemannian spaces. To do this, he related the spectrum of the Laplace operator on a torus to the (Euclidean) norms of the vectors of the corresponding dual lattice. Consequently, a pair of lattices with the same theta function induces a pair of isospectral tori. In this talk, we will introduce a new relation between the spectrum of a lens space (a quotient of a sphere by a cyclic group) and the one-norm (sum of the absolute values of the entries) of the vectors in an associated lattice.
We associate to each lattice the one-norm generating function, defined as the power series whose $k$-th coefficient is the number of vectors in the lattice with one-norm equal to $k$. We will show that two lens spaces are isospectral if and only if their corresponding lattices have the same one-norm generating function. Furthermore, we will prove that the generating function is a rational function; consequently, a finite part of the spectrum determines the whole spectrum. This is joint work with Roberto Miatello and Juan Pablo Rossetti.

Alex Kontorovich, Beyond Expansion and Arithmetic Chaos. We will describe recent progress in our ongoing program with Jean Bourgain to understand a number of different problems through the lens of thin orbits. An important role will be played by the production of levels of distribution (in certain Affine Sieves) which go "beyond expansion."

Amir Mohammadi, Effective equidistribution of certain adelic periods. We will present a quantitative equidistribution result for adelic homogeneous subsets whose stabilizer is maximal and semisimple. Some number theoretic applications will also be discussed. This is based on joint work with Einsiedler, Margulis and Venkatesh.

Misha Belolipetsky, Lehmer's problem and triangulations of arithmetic hyperbolic 3-orbifolds. A triangulation of a hyperbolic orbifold is called good if all the simplices are geodesic and the $l$-dimensional skeleton of the singular set is contained in the $l$-skeleton of the triangulation for every $l$. The purpose of the talk is to show how the known quantitative results towards Lehmer's problem on the Mahler measure of non-cyclotomic polynomials can be applied to produce good triangulations of arithmetic hyperbolic 3-orbifolds with a small number of simplices.
More precisely, we show that for any $\epsilon > 0$, there is a constant $V_0 = V_0(\epsilon)$ such that any closed orientable arithmetic hyperbolic $3$-orbifold of volume $V_{hyp} \ge V_0$ has a good triangulation with at most $V_{hyp}^{1+\epsilon}$ simplices and vertex degree bounded above by an absolute constant.

Clara Aldana (Université du Luxembourg), Determinants of Laplacians on surfaces with singularities. I will talk about certain aspects of determinants of Laplace operators on surfaces. I will consider two settings: surfaces with cusps and funnels, and surfaces with conical singularities. I will mention some of the results that we obtained for determinants of Laplacians in these two cases, the technical difficulties that appeared there, and how we solved them. The results for surfaces with cusps and funnels are joint work with Pierre Albin and Frédéric Rochon, and the part about surfaces with conical singularities is joint work with Julie Rowlett.

Alireza Golsefidy, Super-approximation. Suppose G is a finitely generated linear group over a global field. Super-approximation results imply that, under some conditions on the Zariski-closure of G, the Cayley graphs of (certain) finite congruence quotients of G are expanders, meaning, roughly, that they are highly connected. In this talk I will present some of the best known super-approximation results. If time permits, some applications of super-approximation will be mentioned.

Corentin Perret-Gentil (CRM, Montreal), Quotients of elliptic curves over finite fields. In fixed characteristic $p>0$, there are only finitely many supersingular elliptic curves. Given a finite subgroup of such a curve, we can form the quotient, which is still a supersingular elliptic curve, isogenous to the base curve. For a family of subgroups of growing size (for example, all cyclic subgroups of given cardinality), we would like to know how these quotients distribute among the isomorphism classes of supersingular elliptic curves.
This question is related to the study of the security of recent cryptographic schemes using isogenies. The techniques involve applying the Riemann hypothesis over finite fields (in a general version) to exponential sums of high degree or to Jacobians of modular curves. Similar questions can be addressed for ordinary curves.

Anthony Varilly-Alvarado, On a uniform boundedness conjecture for Brauer groups of K3 surfaces. Brauer groups of K3 surfaces behave in many ways like torsion points of elliptic curves. In 1996, Merel showed that torsion groups of elliptic curves are uniformly bounded across elliptic curves defined over number fields of fixed degree. I will discuss a conjecture pointing towards an analogous statement for K3 surfaces, and survey recent mounting evidence for it.
A next generation sequencing based approach to identify extracellular vesicle mediated mRNA transfers between cells
Methodology article
Jialiang Yang ORCID: orcid.org/0000-0003-4689-86721,2, Jacob Hagen1,2, Kalyani V. Guntur3, Kimaada Allette1,2, Sarah Schuyler1,2, Jyoti Ranjan3, Francesca Petralia1,2, Stephane Gesta3, Robert Sebra1,2, Milind Mahajan1,2, Bin Zhang1,2, Jun Zhu1,2, Sander Houten1,2, Andrew Kasarskis1,2, Vivek K. Vishnudas3, Viatcheslav R. Akmaev3, Rangaprasad Sarangarajan3, Niven R. Narain3, Eric E. Schadt1,2, Carmen A. Argmann1,2 & Zhidong Tu1,2

Exosomes and other extracellular vesicles (EVs) have emerged as an important mechanism of cell-to-cell communication. However, previous studies either did not fully resolve what genetic materials were shuttled by exosomes or focused only on specific sets of miRNAs and mRNAs. A more systematic method is required to identify, in an unbiased manner, the genetic materials that are potentially transferred during cell-to-cell communication through EVs. In this work, we present a novel next generation sequencing (NGS) based approach to identify EV mediated mRNA exchanges between co-cultured adipocyte and macrophage cells. We performed molecular and genomic profiling and jointly considered data from RNA sequencing (RNA-seq) and genotyping to track the "sequence varying mRNAs" transferred between cells. We identified 8 mRNAs being transferred from macrophages to adipocytes and 21 mRNAs being transferred in the opposite direction. These mRNAs represented biological functions including extracellular matrix, cell adhesion, glycoprotein, and signal peptides. Our study sheds new light on EV mediated RNA communication between adipocyte and macrophage cells, which may play a significant role in the development of insulin resistance in diabetic patients.
This work establishes a new method that is applicable to examining genetic material exchanges in many cellular systems and has the potential to be extended to in vivo studies as well.

Cell-to-cell communication plays a key role in maintaining the integrity of multicellular systems. The most studied cell-to-cell communication mechanisms include chemical or hormone-mediated signaling and direct cell-to-cell contacts. In the late 1990s, exchange of cellular information was also demonstrated to occur via the release of intracellular contents packaged in lipid bilayer vesicles called extracellular vesicles (EVs) or exosomes, as reviewed by Colombo et al. [1]. Various types of EVs exist and are generally classified according to their sub-cellular origins. Microvesicles (MVs) are EVs formed and released by budding from the cell's plasma membrane and display a diverse range of sizes (100–1000 nm in diameter). Exosomes, in contrast, are of endosomal origin, are released when multivesicular endosomes fuse with the plasma membrane, and are generally smaller (30–150 nm). Importantly, EV secretion appears to be conserved throughout evolution and is a characteristic of most cell types, including adipocytes, macrophages, hematopoietic, neuronal, fibroblastic, and various tumour cells [2, 3]. Furthermore, EVs have been found to contain various types of cargo, such as mRNA, microRNA, proteins, lipids, and DNA [4,5,6], representing diverse ways of mediating cell-to-cell communication. As EVs carry surface molecules that can be recognized by recipient cells, their cargo can be readily shuttled from one cell to another and thereby influence the biological state of the recipient cell in multiple ways [1]. Given the potential of EVs as critical mediators of cell-to-cell communication, they have become a key research focus in numerous pathological settings, both as biomarkers and as mediators of disease.
A role for EVs has been subsequently identified in various disease settings, including diabetes, cardiovascular disease, inflammation and pain, degenerative brain disorders and cancer, to name a few. For example, Deng et al. demonstrated that exosome-like vesicles obtained from obese mice were sufficient to induce insulin resistance when injected into wild-type C57BL/6 mice [2]. Ibrahim et al. pinpointed exosomes secreted by human cardiosphere-derived cells (CDCs) as critical agents of regeneration of injured heart muscle and cardioprotection. They found that the injection of exosomes into injured mouse hearts recapitulated the regenerative and functional effects produced by CDC transplantation, whereas inhibition of exosome production by CDCs blocked these benefits [7]. With respect to cancers, exosomal microRNAs and RNA are in fact being used as diagnostic biomarkers for several cancers, such as ovarian cancer [8]. Moreover, exosomes have been adopted as carriers to load MHC class I and class II peptides for vaccinating metastatic melanoma patients [9], and to deliver the antitumor microRNA let-7a to treat breast cancer in mice [10]. The readers are referred to recent review articles for detailed roles of exosomes in disease processes [11, 12]. Despite the seemingly extensive characterization of EVs to date, however, limitations exist. First, although several studies have identified many genetic materials within EVs, it was not demonstrated that these materials are actually transported into and released within recipient cells [13]. Second, there is a lack of estimates of the transfer rate, e.g., the proportion of total genetic material (in the form of mRNA, miRNA, protein, etc.) being transferred from one cell to another. Mittelbrunn et al. demonstrated the existence of antigen-driven unidirectional miRNA transfer from T cells to antigen-presenting cells [14]. However, it is not clear whether such unidirectional transfer also holds between other cell types.
For these reasons, a more systematic method is needed to identify, in an unbiased manner, the genetic materials that are potentially transferred during cell-to-cell communication through EVs. In this study, we jointly consider data from RNA sequencing and genotyping arrays to systematically discover mRNA exchanges between two co-cultured cell lines of different genetic backgrounds. We relate those exchanges to potential EV mediated transfer by verifying them with mRNA sequencing of purified EVs. Given the precedent for potential crosstalk between macrophages and adipocytes in adipose tissue under obese conditions leading to insulin resistance [15], we chose to apply our novel methodology to co-cultures of human differentiated adipocytes and macrophages.

Experimental design and data generation

Our main experiment was performed on an in vitro co-culture cellular system, in which two types of human cells, namely adipocytes and macrophages, were cultured in transwell plates with porous membrane inserts (pore size of 0.4 μm) to prevent them from being mixed together (Corning, Inc. Costar, NY, USA, see Additional file 1). The porous membrane allows small particles (size less than 0.4 μm) to pass through, making EV mediated mRNA exchanges between the two cell lines possible, as demonstrated by Garcia et al. [16] and Zheng et al. [17] (Fig. 1a and Additional file 1: Figure S1). The two cell lines were derived from donors with no known familial relationships. Therefore, a large number of single nucleotide polymorphism (SNP) markers distinguishing the two cell lines were expected, allowing us to easily determine cell origins. With this experimental setting, we planned to address: whether mRNA exchanges between the two cell lines are detectable; which genes are transferred and what their functions are; and finally whether any transferred mRNAs are likely mediated by EVs in the media.
Fig. 1: The experimental workflow used to identify mRNA transfers between adipocytes and macrophages in a co-culture system. (a) A schema illustrating the experimental design for cell culture and sample collection, including (a) adipocyte cells cultured alone, from which two cell pellet samples ADaloneN1 and ADaloneN2 were retrieved for RNA-seq, Adipocyte B1 and B2 were technical replicates for genotyping using the Illumina Omni2.5 SNP array, and an exosome extraction ADexosome was prepared for RNA-seq; (b) the same as (a) but for macrophage cells; (c) co-culture of the two cell lines, from which three adipocyte cell pellet samples ADcoN1-N3, three macrophage cell pellet samples MOcoN1-N3, and an exosome sample ADMOexosome were profiled by RNA-seq. (b) The analytical pipeline, including the pre-processing of (a) genotype data and (b) RNA-seq data, (c) the Bayesian model to call mRNA transfers between the two cell lines, and (d) the final output of the pipeline.

We performed genotype profiling using the Illumina HumanOmni2.5Exome-8 BeadChip for the two cell lines, with two technical replicates for each cell line (Adipocyte (AD) B1/B2 and Macrophage (MO) B1/B2). We also performed RNA-seq profiling for both single-cultured and co-cultured cells. For single-cultured cells, we generated RNA-seq data from two replicate cell pellet samples each (ADaloneN1/N2 and MOaloneN1/N2). For the co-cultured cells, we obtained RNA-seq data from triplicate cell pellet samples in the adipocyte layer (ADcoN1-N3) and triplicate cell pellet samples in the macrophage layer (MOcoN1-N3). EVs were isolated using sequential ultracentrifugation, and western blotting demonstrated enrichment for CD9 expression, a marker of endosomal origin. Considering the precedent set in the field, where a mixture of exosomes and MVs is likely, we conservatively refer to what we isolated from the media broadly as EVs, also referred to as 'exosome-like vesicles'.
We performed RNA sequencing on EVs isolated from the media of single-cultured and co-cultured cells, yielding the ADexosome, MOexosome, and ADMOexosome profiles (Fig. 1a). The reader is referred to Additional file 1 for details regarding genotyping and RNA sequencing of both cells and EVs.
Data processing and quality control
NGS data processing
We first conducted multi-step quality control (QC). Briefly, we filtered out ribosomal RNAs (rRNAs) using SortMeRNA [18] and trimmed Illumina adaptors using Trimmomatic [19] on the raw paired-end RNA-seq reads (of length 100 bp) for each sample. Between 11 M and 90 M read pairs were obtained after these two QC steps for all samples except ADexosome, which yielded far fewer reads (528 K read pairs). These reads were then mapped to the human reference genome (hg38) with an annotation file (GENCODE V21) by STAR (2.4.0.1) [20]. The mapping rates were over 90% for all cell samples and over 80% for exosomes (Additional file 1: Table S1). The uniquely mapped reads were further processed by several steps, such as marking duplicates, Split'N'Trim, reassigning mapping quality, and base call recalibration, similar to the protocol used in [21] (Fig. 1b). SAMtools mpileup [22] was used to retrieve the base profile at each locus (see Additional file 1 for more details). We disregarded mapped indel, placeholder, and reference-skip bases and considered only the remaining bases for our analysis. We adopted the TopHat, HTSeq, and edgeR/DESeq protocols [23] to identify differentially expressed genes between single-cultured and co-cultured samples. The Cufflinks protocol [24] was used to quantify gene expression, and GATK [21] to call variants in each sample. For cross-sample quality control, we applied Principal Component Analysis (PCA); results based on the first two PCs of gene expression are plotted in Additional file 1: Figure S2.
We also verified the consistency of genotypes at common SNPs across the 10 cell samples and 2 exosome samples in Additional file 1: Figure S3. Both gene expression levels and variants were consistent with sample annotations, suggesting that no sample mislabelling occurred. To maximize the power of detecting mRNA transfer, we also merged RNA-seq reads from the same cell lines and denoted them ADalone, ADco, MOalone, and MOco, respectively.
Analytical procedures for identifying transferred mRNAs
At a SNP locus, five typical scenarios of nucleotide base composition exist, as illustrated in Fig. 1b (c). We define a SNP as a "tag" SNP if it satisfies two criteria: (1) it shows a polymorphism between the two cell lines; and (2) it has a homozygous genotype in the recipient cell. Tag SNPs allow us to infer from RNA-seq data whether donor cells have provided copies of their mRNAs to the recipient cells. Two candidate tag SNPs are highlighted by grey background shading. Using the first highlighted case as an example, the genotype at this locus is "CT" for macrophages and "CC" for adipocytes. For adipocytes under the co-culture condition, the mapped read data contain both "C" and "T" bases, in which the "T" bases may come from macrophages during in vitro co-culture. Alternatively, the "T"s observed in the adipocytes may originate from other mechanisms such as sequencing error, mapping error, or RNA editing. We developed a Bayesian model to distinguish mRNA transfers from these alternative mechanisms. Considering adipocytes co-cultured with macrophages, for a given locus the Bayesian model takes the following as inputs: (1) the genotype information of the donor (macrophage) and recipient (adipocyte) cell lines; (2) the counts of the nucleotides at the locus; and (3) the base qualities of the reads mapped to the locus.
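The two tag-SNP criteria can be captured in a few lines. This is a minimal sketch; representing genotypes as allele pairs and the function name are our illustration, not code from the paper's pipeline:

```python
def is_tag_snp(recipient_genotype, donor_genotype):
    """A tag SNP must (1) be polymorphic between the two cell lines and
    (2) be homozygous in the recipient cell line."""
    homozygous_in_recipient = recipient_genotype[0] == recipient_genotype[1]
    polymorphic = set(recipient_genotype) != set(donor_genotype)
    return homozygous_in_recipient and polymorphic

# The first highlighted example from the text: adipocyte "CC", macrophage "CT"
print(is_tag_snp(("C", "C"), ("C", "T")))  # True: usable to detect transfer into adipocytes
```

The same locus is not a tag SNP in the reverse direction, because the macrophage genotype "CT" is heterozygous.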
Specifically, as shown in Additional file 1: Figure S4, we denote the mapped RNA-seq read data at a particular genomic position from the four profiles by A_I, A_C, M_I, and M_C, corresponding to adipocyte alone, adipocyte co-cultured, macrophage alone, and macrophage co-cultured cells, respectively. Let G_d and G_r be the genotypes of the donor and recipient cells, respectively. When examining the sequence data of adipocytes co-cultured with macrophages, G_d is the genotype of the macrophage (donor cell) and G_r is the genotype of the adipocyte (recipient cell). We assume that the read depth of A_C is N and denote by t_r (0 ≤ t_r ≤ N) the number of reads in A_C that were transferred from donor cells. A genetic material transfer happens at the position under consideration if t_r ≥ 1. We frame the question as a Bayesian inference and calculate the Bayes factor r, the confidence ratio of observing the data under two hypotheses, i.e., that there exists at least one transfer vs. that there is no transfer (the null hypothesis):
$$ r=\frac{\sum_{i=1}^{N} P\left[t_r=i \mid A_C, G_r, G_d\right]}{P\left[t_r=0 \mid A_C, G_r, G_d\right]} $$
We reject the null hypothesis and claim a positive genetic material transfer if r is greater than a predefined threshold. We assume that the true genotypes of the donor and recipient cells are known, i.e., G_r and G_d are predefined constants (based on the genotype array data). We next calculate P[t_r = i | A_C, G_r, G_d] for 0 ≤ i ≤ N, that is, the posterior probability of exactly i nucleotides being transferred given A_C.
To calculate P[t_r = i | A_C, G_r, G_d], by Bayes' rule,
$$ P\left[t_r=i \mid A_C, G_r, G_d\right]=\frac{P\left[A_C \mid t_r=i, G_r, G_d\right] P\left[t_r=i, G_r, G_d\right]}{P\left[A_C, G_r, G_d\right]}=\frac{P\left[A_C \mid t_r=i, G_r, G_d\right] P\left[t_r=i\right] P\left[G_r, G_d\right]}{P\left[A_C, G_r, G_d\right]} $$
where the denominator P[A_C, G_r, G_d] does not depend on i and is often omitted in Bayesian calculations. The identity P[t_r = i, G_r, G_d] = P[t_r = i] P[G_r, G_d] holds because we assume that the number of reads being transferred is independent of the genotypes of the donor and recipient cells. Note that we derive G_r and G_d from the genotype array data; since we also have RNA-seq data, an alternative approach is to estimate P[G_r] and P[G_d] by P[G_r | A_I] and P[G_d | M_I], which can be calculated in the same way as in McKenna et al. [25]. This alternative is useful when genotype information is not available. P[t_r = i] is the prior "belief" that i reads at the position under consideration came from transfer. Without much prior knowledge of transfer, we assume a uniform prior:
$$ P\left[t_r=i\right]=\begin{cases}\frac{1}{2}, & i=0\\ \frac{1}{2N}, & 1 \le i \le N\end{cases} $$
That is, we assume equal probabilities of having and not having a genetic material transfer, and further assume that the probability of transferring i nucleotides is equal for every i with 1 ≤ i ≤ N. To calculate P[A_C | t_r = i, G_r, G_d], we assume that reads are independent of each other, so that
$$ P\left[A_C \mid t_r=i, G_r, G_d\right]=\prod_{j=1}^{N} P\left[b_j \mid t_r=i, G_r, G_d\right] $$
where b_j is the nucleotide observed for read j.
For the N reads in A_C, if i of them were from transfer, each read has probability i/N of coming from a transfer and probability (N − i)/N of not coming from a transfer. Thus,
$$ P\left[b_j \mid t_r=i, G_r, G_d\right]=\frac{i}{N} P\left[b_j \mid G_d\right]+\frac{N-i}{N} P\left[b_j \mid G_r\right] $$
In addition, each transferred read comes from one of the two parental chromosomes of the donor cell. Assuming that transfer from the maternal chromosome is as likely as transfer from the paternal chromosome (i.e., ignoring allele-specific expression), we get
$$ P\left[b_j \mid G_d\right]=P\left[b_j \mid \left\{A_1, A_2\right\}\right]=\frac{1}{2} P\left[b_j \mid A_1\right]+\frac{1}{2} P\left[b_j \mid A_2\right] $$
where the genotype G_d = {A_1, A_2} is decomposed into its two alleles. The probability of observing a base given an allele is
$$ P\left[b_j \mid A\right]=\begin{cases}1-10^{-Q/10}, & b_j=A\\ \frac{1}{3}\cdot 10^{-Q/10}, & b_j \ne A\end{cases} $$
where Q is the Phred-scaled recalibrated quality score of the base. P[b_j | G_r] is calculated in the same way. Based on the method described above, we calculate the Bayes factor for each locus and call a genetic material transfer at a locus if the Bayes factor is greater than a predefined threshold β (β = 20). We performed the analysis on all ~605,000 tag SNPs (Fig. 2) with read depth of at least 10 in the co-cultured samples. Finally, we evaluated the possibility of mRNA transfers being mediated through exosomes by cross-referencing the identity of mRNAs in the EVs (Fig. 1b). Genotyping quality control and comparison between adipocyte and macrophage cell lines: There are two cell lines, each with two technical replicates for genotype profiling (Adipocyte B1/B2 and Macrophage B1/B2).
The number in each ellipse denotes the number of SNPs kept at that step for the corresponding sample. There are 604,540 "tag" SNPs showing polymorphisms between the two cell lines.
It is of note that a very small fraction of loci showed inconsistency between the genotyping data and the RNA-seq read profiles. For example, at chr1:145,746,933 the genotype of the adipocyte cell line is "GG", yet all 56 reads in ADaloneN1 and all 38 reads in ADaloneN2 mapped to this locus are "C"s (Additional file 2: Dataset S1). Such inconsistent loci were likely caused by genotyping errors, alignment errors, errors in the reference genome, etc., and could be problematic for our downstream analysis. Therefore, we filtered out genotypes if, in the single-cultured recipient cells, the proportion of reads different from the genotype was larger than a predefined threshold γ. With γ = 0.005, 9861 (2.26%) and 14,028 (3.22%) loci were filtered out for adipocytes and macrophages, respectively (Additional file 2: Dataset S1).
Estimating FDR
We calculated the Bayes factors for the triplicate co-cultured samples (i.e., ADcoN1-N3: adipocytes co-cultured with macrophages, and MOcoN1-N3: macrophages co-cultured with adipocytes) and ranked the target loci accordingly (loci with larger Bayes factors rank higher) in each sample. We then adopted a robust rank aggregation method [26] to identify loci ranked consistently better than expected under the null hypothesis of uncorrelated inputs, assigning a significance score (p-value) to each locus. Finally, we adjusted the p-values for multiple comparisons to estimate the false discovery rate (FDR).
Identifying mRNA transfers potentially mediated by EVs in co-cultured adipocytes and macrophages
We explored mRNA transfers in two directions, i.e., from macrophage to adipocyte and from adipocyte to macrophage, respectively.
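A minimal, stdlib-only sketch of the per-locus Bayes factor defined in the analytical procedures above, together with the γ-based filter for genotype/read inconsistencies and a standard Benjamini-Hochberg adjustment (the paper does not name its adjustment method, so BH is our assumption; function names and data structures are ours, and a production implementation would compute likelihoods in log space to avoid underflow at high read depth):

```python
def p_base_given_allele(base, allele, q):
    # P[b_j | A]: 1 - 10^(-Q/10) if the base matches the allele,
    # otherwise the error mass is split evenly over the three other bases.
    err = 10.0 ** (-q / 10.0)
    return 1.0 - err if base == allele else err / 3.0

def p_base_given_genotype(base, genotype, q):
    # Average over the two alleles (allele-specific expression is ignored).
    a1, a2 = genotype
    return 0.5 * p_base_given_allele(base, a1, q) + 0.5 * p_base_given_allele(base, a2, q)

def bayes_factor(reads, g_recipient, g_donor):
    """reads: list of (base, phred_quality) mapped to the locus in the
    co-cultured recipient sample; genotypes are allele pairs, e.g. ("C", "T")."""
    n = len(reads)

    def likelihood(i):
        # P[A_C | t_r = i, G_r, G_d], reads treated as independent.
        p = 1.0
        for base, q in reads:
            p *= (i / n) * p_base_given_genotype(base, g_donor, q) \
               + ((n - i) / n) * p_base_given_genotype(base, g_recipient, q)
        return p

    # Uniform prior: P[t_r = 0] = 1/2, P[t_r = i] = 1/(2N) for i >= 1.
    numerator = sum(likelihood(i) / (2.0 * n) for i in range(1, n + 1))
    return numerator / (likelihood(0) * 0.5)

def filter_inconsistent_loci(loci, gamma=0.005):
    """Drop loci where, in the single-cultured recipient cells, the fraction of
    reads disagreeing with the homozygous array genotype exceeds gamma."""
    kept = {}
    for pos, (genotype, base_counts) in loci.items():
        total = sum(base_counts.values())
        discordant = total - base_counts.get(genotype[0], 0)
        if total > 0 and discordant / total <= gamma:
            kept[pos] = (genotype, base_counts)
    return kept

def benjamini_hochberg(pvalues):
    """BH step-up adjustment, turning per-locus p-values into FDR estimates."""
    n = len(pvalues)
    order = sorted(range(n), key=lambda k: pvalues[k])
    adjusted = [0.0] * n
    running_min = 1.0
    for rank_from_end, k in enumerate(reversed(order)):
        rank = n - rank_from_end  # 1-based rank of p-value k
        running_min = min(running_min, pvalues[k] * n / rank)
        adjusted[k] = running_min
    return adjusted
```

For the first tag-SNP example in the text (adipocyte "CC", macrophage "CT"), eight high-quality "C" reads plus two "T" reads in the co-cultured adipocyte sample give a Bayes factor far above the β = 20 threshold, while ten "C" reads give a Bayes factor below 1; the chr1:145,746,933 example ("GG" genotype, all reads "C") is removed by the filter.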
From macrophage to adipocyte
Ten thousand ninety-five loci survived the filtering steps in the analytical procedures. We further required potential transfer loci to (1) have a Bayes factor of at least 20 in at least one sample (i.e., ADcoN1-N3), and (2) contain at least 2 reads potentially transferred from donor cells (e.g., the "T" allele in the example from the analytical procedures) in ADco (the merged data from ADcoN1-N3). Three hundred twenty-one loci satisfied these requirements (Additional file 3: Dataset S2). Among these loci, we identified 8 (corresponding to 8 unique genes) that are putative mRNA transfers from macrophage to adipocyte with high confidence (FDR < 0.1, Table 1). All 8 genes were likely mediated by EVs, as they were all expressed in both MOexosome and ADMOexosome (Table 1). Of note, the total number of transcripts in GENCODE V21 is 60,566, among which 13,697 have FPKM larger than 0 and only 1966 larger than 1 in the ADMOexosome RNA expression data. Fisher's exact test indicates that the overlap between the inferred transferred genes and those expressed in ADMOexosome is highly significant (p-value < 2.2E-16) (Table 2). We also used the Integrative Genomics Viewer (IGV) [27] to visually verify the 8 genes and show the alignments (of MOaloneN1-N2, ADcoN1-N3, and ADaloneN1-N2) at chr19:10,286,547 (ICAM1) in Additional file 1: Figure S5. As shown there, the read counts summarized by IGV are consistent with the read counts we calculated and listed in Table 1.
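The overlap significance can be computed as a one-sided Fisher's exact test, i.e., a hypergeometric upper tail. Below is a stdlib-only sketch using the transcript counts quoted above; the authors' exact contingency construction may differ, so this will not necessarily reproduce the reported p-value:

```python
from math import comb

def overlap_pvalue(overlap, transferred, exosome_expressed, universe):
    """One-sided Fisher's exact test (hypergeometric upper tail): probability of
    seeing at least `overlap` exosome-expressed genes among `transferred` genes
    drawn from `universe` transcripts, of which `exosome_expressed` are
    detected in the exosome sample."""
    upper = min(transferred, exosome_expressed)
    tail = sum(
        comb(exosome_expressed, k) * comb(universe - exosome_expressed, transferred - k)
        for k in range(overlap, upper + 1)
    )
    return tail / comb(universe, transferred)

# Counts from the text: 60,566 GENCODE transcripts, 13,697 with FPKM > 0 in
# ADMOexosome, and all 8 inferred transfers among them.
p = overlap_pvalue(overlap=8, transferred=8, exosome_expressed=13697, universe=60566)
print(f"{p:.2e}")
```

Even this simplified tail computation makes clear that an 8-of-8 overlap is far beyond chance for these counts.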
Table 1 Top loci identified as transferred from macrophage to adipocyte, possibly mediated through macrophage-derived exosomes Table 2 Number of genes involved in the transfer from adipocytes to macrophages, those possibly mediated by exosomes, and the significance of their overlap A close inspection of the individual genes showed that several of them belong to the extracellular space (GO:0005615; PAPPA, ICAM1, CTSZ and COL6A3) and lysosome (GO:0005764; NPC1 and CTSZ) GO categories. One unifying theme among the top three transfers was their impact on pathways associated with insulin resistance. For example, our top hit ICAM1 (Intercellular Adhesion Molecule 1), transferred from macrophages to adipocytes, encodes an endothelial- and leukocyte-associated transmembrane protein long known for its importance in stabilizing cell-cell interactions and facilitating leukocyte endothelial transmigration [28]. It has also been demonstrated to associate with insulin resistance and diabetic retinopathy in type 2 diabetes (T2D) mellitus [29, 30]. Interestingly, although ICAM-1 is often annotated as a transmembrane protein, two types of extracellular ICAM-1 have also been detected outside of cells or in serum: a soluble form and a membranous form associated with exosomes. In addition to inflammatory mediators like ICAM-1, factors related to extracellular matrix (ECM) components of the adipose tissue have recently emerged as important mediators in obesity-related pathogenesis. In particular, one of the most abundantly expressed collagens in the adipose tissue, forming part of the ECM structure, is COL6, and its alpha 3 chain, COL6A3, has been associated with adipose tissue inflammation and fibrosis. In the collagen VI knockout (KO) mouse on an ob/ob background, for example, adipocytes were larger than in wildtype mice and blood glucose was normalized, suggesting that elements of the ECM restrict expansion of adipocytes during obese insults.
Relevance has also been observed in obese humans, with elevated levels of collagen VI being detected, as well as significant correlations with macrophage infiltration. Observing mRNA for COL6A3 in exosomes from macrophages suggests another level at which the ECM may be influenced by the cells of the adipose tissue depots [31, 32]. Finally, rounding out the top three hits is pregnancy-associated plasma protein-A (PAPP-A), a secreted metalloproteinase. PAPP-A cleaves insulin-like growth factor binding proteins (IGFBPs), thereby functioning as a growth-promoting enzyme by releasing bioactive IGF in close proximity to the IGF receptor. As PAPP-A has been demonstrated to have fat depot-specific expression in humans and mice, and as IGF signalling is known to regulate various adipose tissue processes in part through influencing the insulin/insulin receptor signalling axis, exosome-derived PAPP-A mRNA may serve as another mechanism whereby PAPP-A elicits its autocrine/paracrine actions [33].
From adipocyte to macrophage
Thirty-seven thousand eight hundred fifty-three loci survived the filtering steps in the analytical procedures, among which 599 are potentially involved in mRNA transfer from adipocyte to macrophage (Additional file 4: Dataset S3). There are 21 high-confidence loci with FDR < 0.1, among which 17 are likely mediated by EVs, as listed in Table 3. Fisher's exact test indicates that the overlap between our inferred genes and those expressed in ADMOexosome is also highly significant (p-value < 2.2E-16) (Table 2). Furthermore, GO analysis indicated that several of these genes are annotated under the extracellular vesicular exosome (GO:0070062) category (FCER2, UBL3, AHNAK, DSTN, SOD2, HSPG2 and LAMC1). Similarly, we used the Integrative Genomics Viewer [27] to visually check the 17 genes; an example at chr6:75,666,836 (SENP6) is shown in Additional file 1: Figure S6.
Table 3 Top loci identified as transferred from adipocyte to macrophage A search in PubMed for roles of our top 3 mRNA transfers, namely PIEZO1, RMRP and LAMC1, indicates their relevance in cellular communication processes. PIEZO1 is an ion channel mediating mechanosensory transduction in mammalian cells [34]; RMRP is a lncRNA with multiple RNA targets [35]; and LAMC1 is a member of the laminins (subunit gamma 1), a family of extracellular matrix glycoproteins [36]. Moreover, LAMC1 is also a candidate gene for diabetic nephropathy [37]. Other significant mRNA transfers include HSPG2 and SPTLC2. SPTLC2 encodes an enzyme involved in sphingolipid synthesis, whose heterozygous deficiency has been shown to protect mice from insulin resistance [38]. Finally, the neuroblast differentiation-associated protein AHNAK is highlighted for its important role in the regulation of thermogenesis and lipolysis in WAT via β-adrenergic signalling. AHNAK−/− mice under a high-fat diet had enhanced insulin sensitivity and browning of the WAT depot [39]. Interestingly, AHNAK is a large plasma membrane protein with various functions, including plasma membrane support, calcium signalling, and regulated exocytosis via its key membership in a specialized vesicle called the enlargeosome. Enlargeosomes are non-secretory, cytoplasmic vesicles capable of regulated exocytosis upon a rise in intracellular calcium, and they contribute to plasma membrane repair and vesicle shedding [40, 41]. Finding that AHNAK mRNA is transferred from adipocytes to macrophages in exosomes suggests an additional level of complexity to the role of AHNAK in exocytosis. Although it remains to be ascertained whether these mRNAs become functional proteins, evidence from others suggests that mRNAs transferred via exosomes do become functional proteins [4]. Thus, these 25 genes (from both directions) are viable candidates for further studies of communication between adipocytes and macrophages.
Estimating the mRNA transfer rate between the two cell lines
To estimate the transfer rate and test the performance of our model at different transfer rates, we simulated mRNA transfers at various rates and performed analyses based on the simulated data.
mRNA transfer simulation
The general workflow of the simulation process is shown in Fig. 3. Using simulated mRNA transfer from macrophages to adipocytes as an example, we randomly selected a predefined fraction of reads from MOalone and merged them with the ADalone data. We then ran our previously described pipeline on the merged ADalone data to see how many loci with "transferred" reads could be correctly identified. For each predefined transfer rate, we performed 10 simulation runs (denoted AD_co^Si for 1 ≤ i ≤ 10, with S indicating simulated data). Similarly, we constructed MO_co^Si for 1 ≤ i ≤ 10 to study genetic transfer from adipocytes to macrophages. The simulation study has a unique advantage: we know exactly whether each base mapped to a locus came from the donor cells, which allows us to assess the performance of our pipeline in detecting mRNA transfer at different transfer rates. A pipeline to generate simulated data for adipocytes co-cultured with macrophages: (a) Steps used to generate simulated data. b A schematic illustrating how we sampled reads from one type of cell and merged them with the reads from the other type to simulate a sample with known mRNA transfers
Calling genetic exchange on simulated data and calculating accuracy
We calculated the Bayes factor at each "target" locus for the simulated samples. By merging the results from these samples and using the Bayes factor as a binary classifier, we created receiver operating characteristic (ROC) curves and calculated the areas under the curves (AUCs) of our method at 4 different transfer rates, i.e., 0.0001, 0.001, 0.01, and 0.1 (Fig. 4). As shown in Fig.
4, our method performs very well at identifying mRNA transfers at high transfer rates. At a transfer rate of 0.1, we achieved AUCs of 0.95 for transfer from macrophages to adipocytes and 0.90 for transfer in the opposite direction. The performance of our method decreases as the transfer rate decreases. This is not surprising: as the transfer rate decreases, the number of "alien" reads at each locus decreases, causing the signal-to-noise ratio to drop. Performance of the Bayesian framework on simulated data with different transfer rates: (a) The ROC curves and AUCs for our method on simulated data from macrophages to adipocytes at 4 different transfer rates: 0.1, 0.01, 0.001, and 0.0001. b The ROC curves and AUCs for our method on simulated data from adipocytes to macrophages
Estimation of the mRNA transfer rate in the co-cultured samples
We provide a rough estimate of the transfer rate in our co-culture system by examining the difference between the overall distribution of Bayes factor statistics obtained from the real co-culture data and that from the simulated samples. The rationale underlying this analysis is that when the simulated transfer rate is similar to the real transfer rate, the difference between the two distributions should be minimal. Note that the numbers of reads differ between simulated samples and real co-cultured samples; this may change the distribution of Bayes factor statistics even when the underlying transfer rates are the same. To adjust for variation in total read numbers and achieve a fair comparison, we down-sampled the read data in ADco (MOco) so that it had roughly the same number of reads as the simulated co-cultured samples. We performed 10 runs of down-sampling, generating 10 read profiles denoted AD_co^Ri (MO_co^Ri) (1 ≤ i ≤ 10, with R denoting real data) for each transfer rate, i.e., 0.0001, 0.001, 0.01, and 0.1.
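Both evaluation steps, computing ROC/AUC from Bayes-factor rankings on simulated loci and matching simulated to real Bayes-factor distributions by a distribution-distance (Kolmogorov-Smirnov) criterion, can be sketched with the standard library alone. Function names and data layouts are ours, not the paper's pipeline:

```python
import bisect

def rank_auc(scores, labels):
    """Rank-based AUC (equivalent to the Mann-Whitney U statistic): the
    probability that a true-transfer locus outranks a non-transfer locus."""
    pos = [s for s, l in zip(scores, labels) if l]
    neg = [s for s, l in zip(scores, labels) if not l]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def ks_statistic(xs, ys):
    """Two-sample Kolmogorov-Smirnov statistic: max empirical-CDF gap."""
    xs, ys = sorted(xs), sorted(ys)
    return max(
        abs(bisect.bisect_right(xs, v) / len(xs) - bisect.bisect_right(ys, v) / len(ys))
        for v in xs + ys
    )

def estimate_transfer_rate(real_bayes_factors, simulated_by_rate):
    """Pick the simulated transfer rate whose Bayes-factor distribution is
    closest (smallest KS statistic) to the real co-culture distribution."""
    return min(simulated_by_rate,
               key=lambda rate: ks_statistic(real_bayes_factors, simulated_by_rate[rate]))
```

Here `simulated_by_rate` would map each simulated rate (0.0001, 0.001, 0.01, 0.1) to the pooled Bayes factors from its 10 simulation runs, and `real_bayes_factors` to the pooled values from the down-sampled real profiles.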
We then calculated the Bayes factor at each "target" locus for the down-sampled data. At a specific transfer rate, the overall distribution of Bayes factor statistics in the real co-culture was estimated by merging all the Bayes factors from AD_co^Ri (MO_co^Ri) (1 ≤ i ≤ 10), and the same procedure was applied to the simulated data. The Kolmogorov–Smirnov (KS) test was applied to compare the two distributions, and we estimated the transfer rate as the one with the lowest KS statistic. For both transfer directions, the minimal KS statistic was achieved at a transfer rate close to 0.001 (Table 4), indicating that the transfer rate in our co-culture system under quiescent conditions was around 0.001. Table 4 Estimated mRNA transfer rates in the in vitro co-culture
Differentially expressed genes between cell lines cultured alone and co-cultured
To evaluate the impact of co-culturing on the transcriptome, we used the cell lines cultured alone as controls and identified differentially expressed genes (DEGs) in the co-cultured system. We adopted two commonly used methods, DESeq and edgeR [23, 42], and summarized the results in Table 5. As can be seen in Table 5, the DEGs identified by the two methods were consistent, even though DESeq inferred more differentially expressed genes than edgeR. There were 575 and 142 DEGs (FDR ≤ 0.05) identified by both methods for adipocytes and macrophages, respectively (Additional file 5: Dataset S4). Table 5 Number of DEGs between cell lines cultured alone and co-cultured, by DESeq and edgeR (FDR ≤ 0.05) We performed functional enrichment analysis of the DEGs using the DAVID tool (version 6.7) [43]; the full results are provided in the supplementary materials (Additional file 1: Table S2 for adipocytes and Table S3 for macrophages).
The DEGs for adipocytes were mostly enriched in type 1 repeats of the thrombospondin family, multimeric multidomain glycoproteins that function at cell surfaces and in the extracellular matrix milieu. THBS1, which encodes thrombospondin 1, was one of the genes within this pathway significantly down-regulated in the co-cultured cells relative to the adipocytes cultured alone. THBS1 is interesting with respect to type 2 diabetes, as it has been shown to be elevated in the circulation of obese and insulin-resistant individuals [44], and loss of THBS1 in mice protects them from diet-induced weight gain and adipocyte hypertrophy [45]. Interestingly, the genes ranked 1, 3 and 6 in our transfer study, i.e., ICAM1, PAPPA, and NPC1, are also significantly differentially expressed between ADalone and ADco, with FDRs of 1.02E-7, 1.36E-7, and 1.68E-5, respectively. ICAM1 expression has been shown to correlate with obesity and insulin resistance [46]. NPC1 haploinsufficiency also promotes weight gain and metabolic features associated with insulin resistance [47]. In contrast to the adipose tissue, the DEGs identified by comparing MOco vs. MOalone are mostly enriched for transmembrane proteins, immune response pathways such as leukocyte activation, and EGF-like domains. Most occurrences of the EGF-like domain are found in the extracellular domain of membrane-bound proteins or in proteins known to be secreted. The EGF receptor family is in part involved in Notch signaling, which controls cell-cell communication [48]. Notably, many studies have shown that the ECM modulates epidermal growth factor receptor activation and leukocytes [49, 50]. Thus, the differentially expressed genes could reflect downstream effects of the transcripts transferred from adipocytes to macrophages.
There is accumulating evidence that exosomes, via horizontal transfer of genetic information (i.e., the movement of genetic material between cells), can play a key role in cell-to-cell communication [4,5,6, 51, 52]. Numerous studies have thus focused on providing a comprehensive characterization of the content of EVs, and these efforts have led to the creation of databases such as EVpedia and Vesiclepedia [53, 54], which record the molecules (proteins, mRNAs, microRNAs or lipids) observed within these vesicles. However, identification in exosomes does not necessarily indicate that the RNAs or proteins will be transferred into other cells. Thus, in the present study, we complement these efforts by providing a computational approach to identify genetic material that has been transferred between two in vitro co-cultured cell lines, mediated by EVs. Next-generation sequencing of cellular genetic material that differed between the co-cultured cell types was used as the fingerprint to place donor-derived mRNAs at the scene of the recipient's cellular RNA pool. In comparison to other labelling technologies, using DNA sequence polymorphisms as markers has advantages: they are naturally occurring and introduce no artificial modifications to the biological system. On the other hand, we are limited to loci with polymorphisms for mRNA transfer detection, and therefore this approach, while at genome scale, provides semi-whole-genome coverage. Comparing our 25 identified genetic transfers (in both directions) to those catalogued in the exosome database ExoCarta (~1700 distinct human mRNAs across various tissue sources) [55], we identified 7 mRNAs in common (ITGB1, NFATC3, SOD2, FOS, AHNAK, LAMC1, and SPTLC2), all in the direction from adipocytes to macrophages (p-value 2.45E-3). The overlap is significant despite the diversity seen in exosome cargo depending on the cell type under study as well as the cellular state.
Nonetheless, this serves to further underline the importance of characterizing EV contents. While our method has general applicability, as described below, we purposely chose to apply it to co-cultures of macrophages and adipocytes given the precedent in the literature that crosstalk between adipocytes and immune cells in the adipose tissue contributes to metabolic dysregulation and obesity [56]. The functions of the transferred mRNAs range from protein coding to transcriptional regulation, and thus the impact on the recipient cell could vary greatly if they are translated into functional units. Although we did not assess this in our study, evidence from other studies suggests the transferred mRNA can be functional in the recipient cell. From this perspective, it is of great interest that one of the mRNAs we identify as transferred from adipocyte to macrophage, and also confirmed in ExoCarta, is AHNAK. The recent identification of AHNAK's role in the regulation of thermogenesis and lipolysis in WAT via β-adrenergic signalling makes this observation of AHNAK mRNA transfer in adipocyte exosomes to macrophages highly relevant to the new field of extracellular vesicle biology and to possible new therapeutics for T2D. It also highlights the value of this computational approach in generating novel hypotheses. Currently, our method is optimized for detecting nucleic acid exchanges between two cell lines at loci with known polymorphisms. Exosomal RNA contains not only mRNA but also non-coding RNA species such as small microRNAs. For example, by analysing miRNA expression levels in a variety of cell lines and their derived exosomes, Guduric-Fuchs et al. found that miRNAs like miR-150, miR-142-3p, and miR-451 preferentially enter exosomes [57]. Huang et al. characterized human plasma-derived exosomal RNAs by deep sequencing [58].
Although we did sequence miRNAs from the cells alongside total mRNA, we could not, however, find sufficient genetic diversity among the miRNAs between the adipocytes and macrophages to allow identification of transfers via genetic differences, mainly due to their short length and the relatively small number of miRNA species that could be detected. Although accumulating evidence supports the important role of exosomes/EVs in mediating cell-to-cell communication and the exchange of genetic information, exosomes, or lipid vesicles in general, may not be the only mechanism by which RNAs transfer between cells. For example, it has been demonstrated that miRNA can be protected in the extracellular environment by forming complexes with high-density lipoproteins (HDL) and RNA-binding proteins [59, 60]. Since our experimental design did not exclude other mechanisms of genetic material transfer besides EVs, we consider the identified transfers likely to be mediated by exosomes but possibly also contributed to by other mechanisms. There are a few possibilities that can lead to false discoveries. First, reads from RNA-seq contain errors due to the technical limitations of next-generation sequencing technology [61]. For example, the raw quality scores of Illumina sequencing are calculated from signal intensity and do not always accurately represent the true error rate. Our study is particularly sensitive to this error rate because the transfer rate of transcripts is also quite low, as indicated by our simulation study. When the transfer rate is low (as seen in our experiment), the low signal-to-noise ratio could be the major factor behind the high FDRs. Second, all aligners make some mapping errors. A comparison study of various sequence aligners shows that STAR 2-pass with annotation has the best alignment performance but is still not completely error free [62]. Third, there are also genotyping errors from SNP arrays [63].
Finally, a few biological mechanisms, such as RNA editing, can cause false discoveries in our method. Our simulation study has shown that our method performs best when the transfer rates between the two in vitro co-cultured cell lines are high. Inducing exosome secretion could be achieved via chemical means, such as altering cellular ceramide levels [64]. Our method also requires a good level of genetic diversity between the donor and recipient systems. Genetic diversity can be optimized by using cultures of cells known to come from different genetic individuals, as in our experimental design. Other experimentally relevant systems with ample genetic diversity include investigations of human-derived exosomes (e.g., from plasma) and their function after injection into the mouse [65]. In this case, one could survey the mouse tissues to identify those impacted by the nucleic acid cargo of the donor exosomes. Genetic diversity also arises naturally in cancers. It is known that cancer cells secrete exosomes, including during cell migration [66] and invasion [67]. Importantly, tumour-derived microvesicles have been found to contain mutated and amplified oncogenic DNA sequences, potentially playing a role in genetic communication between cells as well as providing a potential source of tumour biomarkers. Thus, we predict that our approach would be highly amenable to identifying the genetic materials transferred between cancer cells and surrounding or distant normal cells [52], which further highlights the overall potential impact of this methodology. In this study, we present a novel systematic framework to call genes involved in the process of mRNA exchange between two co-cultured cell lines of different genetic backgrounds and to investigate the role of exosomes as a vehicle mediating the exchange.
The systematic framework includes a protocol to perform quality control, alignment, mapping, and base call recalibration on the raw SNP array and RNA sequencing read data, a Bayesian model to evaluate the significance of genotypic variation of a cell line under in vitro co-culture, and a method to estimate the rate of falsely discovered loci involved in the transfer process. By applying the framework to a co-culture between adipocyte and macrophage cell lines, we identified, with high confidence, 8 mRNAs transferred from macrophages to adipocytes and 21 mRNAs transferred in the opposite direction. These mRNAs represent biological functions including extracellular matrix, cell adhesion, glycoproteins, and signal peptides. We also estimate the transfer rate to be 0.001 in both directions. Our work provides a novel solution for studying EV mediated mRNA transfers between cells and can be extended to in vivo studies as well.
Abbreviations:
AUCs: Area under curves
CDCs: Cardiosphere-derived cells
DEGs: Differentially expressed genes
EVs: Extracellular vesicles
exRNA: Extracellular RNA
FDR: False discovery rate
FPKM: Fragments per kilobase of transcript per million
HDL: High-density lipoproteins
IGFBPs: Insulin-like growth factor binding proteins
IGV: Integrative genomics viewer
KS: Kolmogorov–Smirnov
MVs: Microvesicles
NGS: Next generation sequencing
PAPP-A: Pregnancy-associated plasma protein-A
PCA: Principal component analysis
QC: Quality control
rRNAs: Ribosomal RNAs
T2D: Type 2 diabetes
Colombo M, Raposo G, Thery C. Biogenesis, secretion, and intercellular interactions of exosomes and other extracellular vesicles. Annu Rev Cell Dev Biol. 2014;30:255–89. Deng Z-b, Poliakov A, Hardy RW, Clements R, Liu C, Liu Y, Wang J, Xiang X, Zhang S, Zhuang X, et al. Adipose tissue exosome-like vesicles mediate activation of macrophage-induced insulin resistance. Diabetes. 2009;58(11):2498–505. Raposo G, Stoorvogel W. Extracellular vesicles: Exosomes, microvesicles, and friends. J Cell Biol. 2013;200(4):373–83. 
Valadi H, Ekstrom K, Bossios A, Sjostrand M, Lee JJ, Lotvall JO. Exosome-mediated transfer of mRNAs and microRNAs is a novel mechanism of genetic exchange between cells. Nat Cell Biol. 2007;9(6):654–9. Skog J, Wurdinger T, van Rijn S, Meijer DH, Gainche L, Curry WT, Carter BS, Krichevsky AM, Breakefield XO. Glioblastoma microvesicles transport RNA and proteins that promote tumour growth and provide diagnostic biomarkers. Nat Cell Biol. 2008;10(12):1470–6. Mathivanan S, Fahner CJ, Reid GE, Simpson RJ. ExoCarta 2012: database of exosomal proteins, RNA and lipids. Nucleic Acids Res. 2012;40(Database issue):D1241–4. Ibrahim AGE, Cheng K, Marban E. Exosomes as critical agents of cardiac regeneration triggered by cell therapy. Stem Cell Rep. 2014;2(5):606–19. Taylor DD, Gercel-Taylor C. MicroRNA signatures of tumor-derived exosomes as diagnostic biomarkers of ovarian cancer. Gynecol Oncol. 2008;110(1):13–21. Escudier B, Dorval T, Chaput N, Andre F, Caby MP, Novault S, Flament C, Leboulaire C, Borg C, Amigorena S, et al. Vaccination of metastatic melanoma patients with autologous dendritic cell (DC) derived-exosomes: results of the first phase I clinical trial. J Transl Med. 2005;3(1):10. Ohno S, Takanashi M, Sudo K, Ueda S, Ishikawa A, Matsuyama N, Fujita K, Mizutani T, Ohgi T, Ochiya T, et al. Systemically injected exosomes targeted to EGFR deliver antitumor microRNA to breast cancer cells. Mol Ther. 2013;21(1):185–91. Vlassov AV, Magdaleno S, Setterquist R, Conrad R. Exosomes: current knowledge of their composition, biological functions, and diagnostic and therapeutic potentials. Biochim Biophys Acta. 2012;1820(7):940–8. Lawson C, Vicencio JM, Yellon DM, Davidson SM. Microvesicles and exosomes: new players in metabolic and cardiovascular disease. J Endocrinol. 2016;228(2):R57–71. Budnik V, Ruiz-Canada C, Wendler F. Extracellular vesicles round off communication in the nervous system. Nat Rev Neurosci. 
2016;17(3):160–72. Mittelbrunn M, Gutierrez-Vazquez C, Villarroya-Beltri C, Gonzalez S, Sanchez-Cabo F, Gonzalez MA, Bernad A, Sanchez-Madrid F. Unidirectional transfer of microRNA-loaded exosomes from T cells to antigen-presenting cells. Nat Commun. 2011;2:282. Suganami T, Ogawa Y. Adipose tissue macrophages: their role in adipose tissue remodeling. J Leukoc Biol. 2010;88(1):33–9. Garcia NA, Moncayo-Arlandi J, Sepulveda P, Diez-Juan A. Cardiomyocyte exosomes regulate glycolytic flux in endothelium by direct transfer of GLUT transporters and glycolytic enzymes. Cardiovasc Res. 2016;109(3):397–408. Zheng PM, Chen L, Yuan XL, Luo Q, Liu Y, Xie GH, Ma YH, Shen LS. Exosomal transfer of tumor-associated macrophage-derived miR-21 confers cisplatin resistance in gastric cancer cells. J Exp Clin Cancer Res. 2017;36(1):53. Kopylova E, Noe L, Touzet H. SortMeRNA: fast and accurate filtering of ribosomal RNAs in metatranscriptomic data. Bioinformatics (Oxford, England). 2012;28(24):3211–7. Bolger AM, Lohse M, Usadel B. Trimmomatic: a flexible trimmer for Illumina sequence data. Bioinformatics (Oxford, England). 2014;30(15):2114–20. Dobin A, Davis CA, Schlesinger F, Drenkow J, Zaleski C, Jha S, Batut P, Chaisson M, Gingeras TR. STAR: ultrafast universal RNA-seq aligner. Bioinformatics (Oxford, England). 2013;29(1):15–21. Van der Auwera GA, Carneiro MO, Hartl C, Poplin R, Del Angel G, Levy-Moonshine A, Jordan T, Shakir K, Roazen D, Thibault J, et al. From FastQ data to high confidence variant calls: the genome analysis toolkit best practices pipeline. Curr Protoc Bioinformatics. 2013;11(1110):11-10. Li H. A statistical framework for SNP calling, mutation discovery, association mapping and population genetical parameter estimation from sequencing data. Bioinformatics. 2011;27(21):2987–93. Anders S, McCarthy DJ, Chen Y, Okoniewski M, Smyth GK, Huber W, Robinson MD. 
Count-based differential expression analysis of RNA sequencing data using R and bioconductor. Nat Protoc. 2013;8(9):1765–86. McKenna A, Hanna M, Banks E, Sivachenko A, Cibulskis K, Kernytsky A, Garimella K, Altshuler D, Gabriel S, Daly M, et al. The genome analysis toolkit: a MapReduce framework for analyzing next-generation DNA sequencing data. Genome Res. 2010;20(9):1297–303. Kolde R, Laur S, Adler P, Vilo J. Robust rank aggregation for gene list integration and meta-analysis. Bioinformatics (Oxford, England). 2012;28(4):573–80. Robinson JT, Thorvaldsdottir H, Winckler W, Guttman M, Lander ES, Getz G, Mesirov JP. Integrative genomics viewer. Nat Biotechnol. 2011;29(1):24–6. Segura E, Nicco C, Lombard B, Veron P, Raposo G, Batteux F, Amigorena S, Thery C. ICAM-1 on exosomes from mature dendritic cells is critical for efficient naive T-cell priming. Blood. 2005;106(1):216–23. Leinonen E, Hurt-Camejo E, Wiklund O, Hulten LM, Hiukka A, Taskinen M-R. Insulin resistance and adiposity correlate with acute-phase reaction and soluble cell adhesion molecules in type 2 diabetes. Atherosclerosis. 2003;166(2):387–94. Kamiuchi K, Hasegawa G, Obayashi H, Kitamura A, Ishii M, Yano M, Kanatsuna T, Yoshikawa T, Nakamura N. Intercellular adhesion molecule-1 (ICAM-1) polymorphism is associated with diabetic retinopathy in type 2 diabetes mellitus. Diabet Med. 2002;19(5):371–6. Khan T, Muise ES, Iyengar P, Wang ZV, Chandalia M, Abate N, Zhang BB, Bonaldo P, Chua S, Scherer PE. Metabolic Dysregulation and adipose tissue fibrosis: role of collagen VI. Mol Cell Biol. 2009;29(6):1575–91. Dankel SN, Svard J, Mattha S, Claussnitzer M, Kloting N, Glunk V, Fandalyuk Z, Grytten E, Solsvik MH, Nielsen H-J, et al. COL6A3 expression in adipocytes associates with insulin resistance and depends on PPARgamma and adipocyte size. Obesity (Silver Spring). 2014;22(8):1807–13. Conover CA, Harstad SL, Tchkonia T, Kirkland JL. 
Preferential impact of pregnancy-associated plasma protein-a deficiency on visceral fat in mice on high-fat diet. Am J Physiol-Endoc M. 2013;305(9):E1145–53. Coste B, Mathur J, Schmidt M, Earley TJ, Ranade S, Petrus MJ, Dubin AE, Patapoutian A. Piezo1 and Piezo2 are essential components of distinct mechanically activated cation channels. Science. 2010;330(6000):55–60. Maida Y, Yasukawa M, Furuuchi M, Lassmann T, Possemato R, Okamoto N, Kasim V, Hayashizaki Y, Hahn WC, Masutomi K. An RNA-dependent RNA polymerase formed by TERT and the RMRP RNA. Nature. 2009;461(7261):230–U104. Davis GE, Senger DR. Endothelial extracellular matrix - biosynthesis, remodeling, and functions during vascular morphogenesis and neovessel stabilization. Circ Res. 2005;97(11):1093–107. Ewens KG, George RA, Sharma K, Ziyadeh FN, Spielman RS. Assessment of 115 candidate genes for diabetic nephropathy by transmission/disequilibrium test. Diabetes. 2005;54(11):3305–18. Li Z, Zhang H, Liu J, Liang C-P, Li Y, Li Y, Teitelman G, Beyer T, Bui HH, Peake DA, et al. Reducing plasma membrane sphingomyelin increases insulin sensitivity. Mol Cell Biol. 2011;31(20):4205–18. Shin JH, Lee SH, Kim YN, Kim IY, Kim YJ, Kyeong DS, Lim HJ, Cho SY, Choi J, Wi YJ, et al. AHNAK deficiency promotes browning and lipolysis in mice via increased responsiveness to beta-adrenergic signalling. Sci Rep-Uk. 2016;6:23426. Haase H. Ahnak, a new player in beta-adrenergic regulation of the cardiac L-type Ca2+ channel. Cardiovasc Res. 2007;73(1):19–25. Borgonovo B, Cocucci E, Racchetti G, Podini P, Bachi A, Meldolesi J. Regulated exocytosis: a novel, widely expressed system. Nat Cell Biol. 2002;4(12):955–62. Seyednasrollah F, Laiho A, Elo LL. Comparison of software packages for detecting differential expression in RNA-seq studies. Brief Bioinform. 2015;16(1):59–70. Dennis G Jr, Sherman BT, Hosack DA, Yang J, Gao W, Lane HC, Lempicki RA. DAVID: database for annotation, visualization, and integrated discovery. Genome Biol. 
2003;4(5):P3. Matsuo Y, Tanaka M, Yamakage H, Sasaki Y, Muranaka K, Hata H, Ikai I, Shimatsu A, Inoue M, Chun T-H, et al. Thrombospondin 1 as a novel biological marker of obesity and metabolic syndrome. Metabolism. 2015;64(11):1490–9. Inoue M, Jiang Y, Barnes RH 2nd, Tokunaga M, Martinez-Santibanez G, Geletka L, Lumeng CN, Buchner DA, Chun T-H. Thrombospondin 1 mediates high-fat diet-induced muscle fibrosis and insulin resistance in male mice. Endocrinology. 2013;154(12):4548–59. Kent JW Jr, Comuzzie AG, Mahaney MC, Almasy L, Rainwater DL, VandeBerg JL, MacCluer JW, Blangero J. Intercellular adhesion molecule-1 concentration is genetically correlated with insulin resistance, obesity, and HDL concentration in Mexican Americans. Diabetes. 2004;53(10):2691–5. Jelinek D, Millward V, Birdi A, Trouard TP, Heidenreich RA, Garver WS. Npc1 haploinsufficiency promotes weight gain and metabolic features associated with insulin resistance. Hum Mol Genet. 2011;20(2):312–21. Lai EC. Notch signaling: control of cell communication and cell fate. Development. 2004;131(5):965–73. Sorokin L. The impact of the extracellular matrix on inflammation. Nat Rev Immunol. 2010;10(10):712–23. Yarwood SJ, Woodgett JR. Extracellular matrix composition determines the transcriptional response to epidermal growth factor receptor activation. Proc Natl Acad Sci U S A. 2001;98(8):4472–7. Gibbings DJ, Ciaudo C, Erhardt M, Voinnet O. Multivesicular bodies associate with components of miRNA effector complexes and modulate miRNA activity. Nat Cell Biol. 2009;11(9):1143–9. Balaj L, Lessard R, Dai L, Cho Y-J, Pomeroy SL, Breakefield XO, Skog J. Tumour microvesicles contain retrotransposon elements and amplified oncogene sequences. Nat Commun. 2011;2:180. Kim D-K, Lee J, Simpson RJ, Lotvall J, Gho YS. EVpedia: a community web resource for prokaryotic and eukaryotic extracellular vesicles research. Semin Cell Dev Biol. 2015;40:4–7. 
Kalra H, Simpson RJ, Ji H, Aikawa E, Altevogt P, Askenase P, Bond VC, Borras FE, Breakefield X, Budnik V, et al. Vesiclepedia: a compendium for extracellular vesicles with continuous community annotation. PLoS Biol. 2012;10(12):e1001450. Keerthikumar S, Chisanga D, Ariyaratne D, Al Saffar H, Anand S, Zhao K, Samuel M, Pathan M, Jois M, Chilamkurti N, et al. ExoCarta: a web-based compendium of Exosomal cargo. J Mol Biol. 2016;428(4):688–92. Hill AA, Reid Bolus W, Hasty AH. A decade of progress in adipose tissue macrophage biology. Immunol Rev. 2014;262(1):134–52. Guduric-Fuchs J, O'Connor A, Camp B, O'Neill CL, Medina RJ, Simpson DA. Selective extracellular vesicle-mediated export of an overlapping set of microRNAs from multiple cell types. BMC Genomics. 2012;13:357. Huang X, Yuan T, Tschannen M, Sun Z, Jacob H, Du M, Liang M, Dittmar RL, Liu Y, Liang M, et al. Characterization of human plasma-derived exosomal RNAs by deep sequencing. BMC Genomics. 2013;14:319. Etheridge A, Gomes CP, Pereira RW, Galas D, Wang K. The complexity, function and applications of RNA in circulation. Front Genet. 2013;4:115. Wang K, Zhang S, Weber J, Baxter D, Galas DJ. Export of microRNAs and microRNA-protective protein by mammalian cells. Nucleic Acids Res. 2010;38(20):7248–59. Le H-S, Schulz MH, McCauley BM, Hinman VF, Bar-Joseph Z. Probabilistic error correction for RNA sequencing. Nucleic Acids Res. 2013;41(10):e109. Engstrom PG, Steijger T, Sipos B, Grant GR, Kahles A, Ratsch G, Goldman N, Hubbard TJ, Harrow J, Guigo R, et al. Systematic evaluation of spliced alignment programs for RNA-seq data. Nat Methods. 2013;10(12):1185–91. LaFramboise T. Single nucleotide polymorphism arrays: a decade of biological, computational and technological advances. Nucleic Acids Res. 2009;37(13):4181–93. Trajkovic K, Hsu C, Chiantia S, Rajendran L, Wenzel D, Wieland F, Schwille P, Brugger B, Simons M. Ceramide triggers budding of exosome vesicles into multivesicular endosomes. Science. 
2008;319(5867):1244–7. Geiger A, Walker A, Nissen E. Human fibrocyte-derived exosomes accelerate wound healing in genetically diabetic mice. Biochem Biophys Res Commun. 2015;467(2):303–9. Luga V, Zhang L, Viloria-Petit AM, Ogunjimi AA, Inanlou MR, Chiu E, Buchanan M, Hosein AN, Basik M, Wrana JL. Exosomes mediate stromal mobilization of autocrine Wnt-PCP signaling in breast cancer cell migration. Cell. 2012;151(7):1542–56. Higginbotham JN, Demory Beckler M, Gephart JD, Franklin JL, Bogatcheva G, Kremers G-J, Piston DW, Ayers GD, McConnell RE, Tyska MJ, et al. Amphiregulin exosomes increase cancer cell invasion. Curr Biol. 2011;21(9):779–86. JY is supported through Berg postdoc fellowship. ZT receives financial support from Berg Pharma as a consultant. This work is funded by a joint collaboration program between Berg Pharma and Icahn School of Medicine at Mount Sinai. All the data were uploaded to GEO with access https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?token=ozyfywuchjsnzwx&acc=GSE94155. The programs are available for download at http://research.mssm.edu/tulab/software/EVtransfer.html. Institute of Genomics and Multiscale Biology, Icahn School of Medicine at Mount Sinai, New York, NY, 10029, USA Jialiang Yang, Jacob Hagen, Kimaada Allette, Sarah Schuyler, Francesca Petralia, Robert Sebra, Milind Mahajan, Bin Zhang, Jun Zhu, Sander Houten, Andrew Kasarskis, Eric E. Schadt, Carmen A. Argmann & Zhidong Tu Department of Genetics and Genomic Sciences, Icahn School of Medicine at Mount Sinai, New York, NY, 10029, USA BERG, LLC, Framingham, MA, 01701, USA Kalyani V. Guntur, Jyoti Ranjan, Stephane Gesta, Vivek K. Vishnudas, Viatcheslav R. Akmaev, Rangaprasad Sarangarajan & Niven R. Narain Jialiang Yang Jacob Hagen Kalyani V. Guntur Kimaada Allette Sarah Schuyler Jyoti Ranjan Francesca Petralia Stephane Gesta Robert Sebra Milind Mahajan Bin Zhang Sander Houten Andrew Kasarskis Vivek K. Vishnudas Viatcheslav R. Akmaev Rangaprasad Sarangarajan Niven R. Narain Eric E. 
Schadt Carmen A. Argmann Zhidong Tu ES conceived the overall experiment plan. CA, ZT, BZ, JZ, SH, AK, VV, VA, RS and NN designed the experiment. JY, KG, KA, SS, JR, SG, BS and MM performed experiments. JY, ZT, and FP analyzed the data. All authors read and approved the final manuscript. Correspondence to Carmen A. Argmann or Zhidong Tu. A next generation sequencing based approach to identify extracellular vesicle mediated mRNA transfers between cells. (DOCX 1282 kb) Additional file 2: Dataset S1. Information of filtered loci from adipocyte to macrophage and from macrophage to adipocyte. (XLS 3896 kb) Details of mRNA transfer analysis from macrophage to adipocyte. (XLSX 1903 kb) Details of mRNA transfer analysis from adipocyte to macrophage. (XLSX 6308 kb) Differential genes between adipocyte cultured alone and co-cultured with macrophage and between macrophage alone and co-cultured with adipocyte. (XLSX 96 kb) Yang, J., Hagen, J., Guntur, K.V. et al. A next generation sequencing based approach to identify extracellular vesicle mediated mRNA transfers between cells. BMC Genomics 18, 987 (2017). https://doi.org/10.1186/s12864-017-4359-1 Extracellular RNA (exRNA) Exosome Bayesian method
Tychonoff's Theorem Without Choice
Theorem without Axiom of Choice
Let $\displaystyle X = \prod_{i \mathop \in I} X_i$ be a topological product space. From the definition of the Tychonoff topology, a basic open set of the natural basis of $X$ is a set of the form: $\displaystyle \prod_{i \mathop \in I} U_i$
each $U_i$ is a nonempty open subset of $X_i$
$U_i = X_i$ for all but finitely many $i \in I$.
Statement that holds in Zermelo–Fraenkel set theory (without Axiom of Choice)
Let $\left({I, <}\right)$ be a well-ordered set. 
The Well-Ordering Theorem, equivalent to the Axiom of Choice over Zermelo–Fraenkel set theory, states that every set is well-orderable. Let $\left \langle {X_i} \right \rangle_{i \mathop \in I}$ be an indexed family of compact topological spaces. Denote $\displaystyle X := \prod_{i \mathop \in I} X_i$. Let $\left({F, \subset}\right)$ be the set-theoretic tree: $\displaystyle \left({\bigcup_{i \mathop \in I} \prod_{j \mathop < i} X_j}\right) \cup X$ of mappings defined on initial intervals of $I$, where the ordering is that of set inclusion. Consider the set of all subtrees $T \subseteq F$ with the following property: For every $i \in I$ and every $\displaystyle f \in T \cap \prod_{j \mathop < i} X_j$, the set $\left\{{g \left({i}\right): g \in T, f \subsetneq g}\right\}$ is closed in $X_i$. Suppose that every such subtree $T$ of $F$ has a branch. The Hausdorff Maximal Principle, equivalent to the Axiom of Choice over Zermelo–Fraenkel set theory, states that in an ordered set, every chain is contained in a maximal chain, which implies that every tree has a branch. Then $\displaystyle \prod_{i \mathop \in I} X_i$ is compact. Proof of the version without Axiom of Choice To prove that every open cover of $X$ has a finite subcover, it is enough to prove that every open cover by basic open sets of the natural basis has a finite subcover. (If the Axiom of Choice were assumed, it would even be enough to prove that every open cover by subbasic open sets of the natural subbasis has a finite subcover, according to the Alexander Subbase Theorem.) Let $\mathcal O$ be a collection of basic open subsets of $X$ such that no finite subcollection of $\mathcal O$ covers $X$. It is enough to prove that $\mathcal O$ does not cover $X$. Let $T_\mathcal O$ be the set of all elements $f \in F$ such that the set $\left\{{x \in X : f \subseteq x}\right\}$ is not covered by any finite subcollection of $\mathcal O$. Then $T_\mathcal O$ is a subtree of $F$. 
For every $i \in I$ and $\displaystyle f \in T_\mathcal O \cap \prod_{j \mathop < i} X_j$, denote: $C_\mathcal O \left({f}\right) = \left\{{g \left({i}\right): f \subsetneq g \in T_\mathcal O}\right\}$ For every $i \in I$ and every $\displaystyle f \in T_\mathcal O \cap \prod_{j \mathop < i} X_j$: $C_\mathcal O \left({f}\right)$ is closed in $X_i$ and nonempty. Here the compactness of $X_i$ is used similarly to the usual proof: Topological Product of Compact Spaces. To prove this, let $\mathcal W$ be the set of all open subsets $U$ of $X_i$ such that: there exists a finite subset $\mathcal P \subseteq \mathcal O$ such that: $\displaystyle \left\{ {x \in X: f \subseteq x, x \left({i}\right) \in U}\right\} \subseteq \bigcup \mathcal P$ Then: $\displaystyle X_i \setminus C_\mathcal O \left({f}\right) = \bigcup \mathcal W$ and hence $C_\mathcal O \left({f}\right)$ is closed. Aiming for a contradiction, suppose $C_\mathcal O \left({f}\right)$ is empty. Then $\mathcal W$ would be a cover for $X_i$ and, by compactness of $X_i$, it would have a finite subcover. This would yield a finite subset of $\mathcal O$ that covers $\left\{{x \in X: f \subseteq x}\right\}$ in contradiction with the fact that $f \in T_\mathcal O$. So $C_\mathcal O \left({f}\right)$ is nonempty. It remains to be shown that indeed: $\displaystyle X_i \setminus C_{\mathcal O} \left({f}\right) = \bigcup \mathcal W$ Consider an arbitrary $a \in X_i \setminus C_\mathcal O \left({f}\right)$. 
Define $\displaystyle g \in \prod_{j \mathop \le i} X_j$ by: $f \subseteq g$ and $g \left({i}\right) = a$. Since $a \notin C_\mathcal O \left({f}\right)$: $g \notin T_\mathcal O$ and therefore there is a finite set $\mathcal P \subseteq \mathcal O$ such that: $\displaystyle \left\{{x \in X: f \subseteq x, x \left({i}\right) = a}\right\} = \left\{{x \in X: g \subseteq x}\right\} \subseteq \bigcup \mathcal P$ We may assume that: $\forall V \in \mathcal P: \left\{{x \in X: f \subseteq x, x \left({i}\right) = a}\right\} \cap V \ne \varnothing$ Let $\displaystyle U = \bigcap \left\{{\operatorname{pr}_i \left({V}\right): V \in \mathcal P}\right\}$. Then $U$ is an open subset of $X_i$, $a\in U$, and: $\displaystyle \left\{{x \in X: f \subseteq x, x \left({i}\right) \in U}\right\} \subseteq \bigcup \mathcal P$ Hence: $U \cap C_{\mathcal O} \left({f}\right) = \varnothing$ and $a \in U \in \mathcal W$. $\Box$ Every branch of $T_\mathcal O$ has a greatest element. Aiming for a contradiction, suppose $B$ is a branch of $T_\mathcal O$ without such a greatest element. Let $f = \bigcup B$. Let $i$ be the least element of $I$ that is not in the domain of any element of $B$. Then: $\displaystyle f \in \prod_{j \mathop < i} X_j$ Since $B$ has no greatest element, $f \notin B$. Since $B$ is a maximal chain in $T_\mathcal O$: $f \notin T_\mathcal O$. Let $\mathcal P \subseteq \mathcal O$ be a finite collection such that: $\displaystyle \left\{{x \in X: f \subseteq x}\right\} \subseteq \bigcup \mathcal P$ Let $m$ be the greatest element of the finite set: $\left\{ {j \in I: j < i \text{ and } \exists V \in \mathcal P: \left({\operatorname{pr}_j \left({V}\right) \ne X_j}\right)}\right\}$ Let $g$ be any element of $B$ that is defined on $m$. Consider an arbitrary $x \in X$ such that $g \subseteq x$. Let $y \in X$ be defined by $f \subseteq y$ and $y \left({j}\right) = x \left({j}\right)$ for every $j \ge i$, and choose $V \in \mathcal P$ such that $y \in V$. 
Because $V$ does not "take into account" the values of $x \left({j}\right)$ for $m < j < i$: $x \in V$ Since $x$ was arbitrary: $\left\{{x \in X: g \subseteq x}\right\} \subseteq \bigcup \mathcal P$ in contradiction with the fact that $g \in T_\mathcal O$. Every maximal element of $T_\mathcal O$ is an element of $X$: Because $C_\mathcal O \left({f}\right) \ne \varnothing$: an element $f \in T_\mathcal O \setminus X$ cannot be maximal in $T_\mathcal O$. Now it can be shown that there is $f \in X$ such that $f \notin \bigcup \mathcal O$. By hypothesis, $T_\mathcal O$ has a branch $B$. Let $f$ be the greatest element of $B$. Then $f$ is a maximal element of $T_\mathcal O$. Therefore $f \in X$. Therefore the set $\left\{{f}\right\} = \left\{{x \in X: f \subseteq x}\right\}$ is not covered by any finite subcollection of $\mathcal O$. Hence $f \notin \bigcup \mathcal O$. Corollaries without Axiom of Choice Corollary 1 The Cartesian product of a finite indexed family of compact topological spaces is compact. Corollary 2 Let $I$ be a well-orderable set. Let $\left \langle {X_i}\right \rangle_{i \mathop \in I}$ be an indexed family of compact topological spaces. Let the Cartesian product of all nonempty closed subsets of all $X_i$ be nonempty: $\displaystyle \prod \left\{ {C: C \ne \varnothing \text { and } C \text{ is closed in } X_i \text { for some } i \in I}\right\} \ne \varnothing$ Let $<$ be a well-order relation on $I$. Denote $\displaystyle X = \prod_{i \mathop \in I} X_i$. Let $F$ be the tree: $\displaystyle \left({\bigcup_{i \mathop \in I} \prod_{j \mathop < i} X_j}\right) \cup X$ ordered by set inclusion. To apply the theorem, it is enough to verify that: If $T$ is a subtree of $F$ such that: for every $i \in I$ and every $\displaystyle f \in T \cap \prod_{j \mathop < i} X_j$, the set $\left\{{g \left({i}\right): g \in T,\ f \subsetneq g}\right\}$ is closed in $X_i$ then $T$ has a branch. 
Let: $\displaystyle e \in \prod \left\{{C: C \ne \varnothing \text { and } C \text{ is closed in } X_i \text { for some } i \in I}\right\}$ be a choice function. $e \left({C}\right) \in C$ for every nonempty $C$ which is closed in some $X_i$. Let $B_e$ be the minimal tree among all subtrees $S$ of $T$ with the property that: for every $i \in I$ and every $\displaystyle f \in S \cap \prod_{j \mathop < i} X_j$: $e \left({\left\{{g \left({i}\right): g \in T, f \subsetneq g}\right\}}\right) \in \left\{{g \left({i}\right): g \in S, f \subsetneq g}\right\}$ unless: $\left\{{g \left({i}\right): g \in T, f \subsetneq g}\right\} = \varnothing$ Such subtrees of $T$ exist because $T$ itself is one such. The minimal such subtree is the intersection of all such subtrees. It can be shown that $B_e$ is a branch by assuming that it is not, considering the minimal $i\in I$ where it "branches", and arriving at a contradiction with its minimality. The proof that $\left[{0 \,.\,.\, 1}\right]^\Z$ is compact does not require the Axiom of Choice, because the product of all nonempty closed subsets of $\left[{0 \,.\,.\, 1}\right]$ contains, for example, the greatest lower bound function $\inf$ (restricted to the collection of closed subsets of $\left[{0 \,.\,.\, 1}\right]$): $\displaystyle \left\{{(C,\inf C) : C \ne \varnothing \text { and } C \text{ is closed in } \left[{0 \,.\,.\, 1}\right]}\right\} \in\prod \left\{{C: C \ne \varnothing \text { and } C \text{ is closed in } \left[{0 \,.\,.\, 1}\right]}\right\}$ (Here $\displaystyle \left\{ { \left({C, \inf C}\right): C \ne \varnothing \text { and } C \text{ is closed in } \left[{0 \,.\,.\, 1}\right]}\right\}$ is a way to write a function $f$ defined on the collection of all nonempty closed subsets of $[0 \,.\,.\, 1]$ by $f \left({C}\right) = \inf C$.) 
Mappings of Initial Intervals of a Well-Ordered Set Ordered by Inclusion: such mappings form a set-theoretic tree.
Retrieved from "https://proofwiki.org/w/index.php?title=Tychonoff%27s_Theorem_Without_Choice&oldid=353313"
This page was last modified on 3 May 2018, at 02:42.
Periodica Mathematica Hungarica
Complementary Euler numbers
Takao Komatsu
For an integer k, define poly-Euler numbers of the second kind \(\widehat{E}_n^{(k)}\) (\(n=0,1,\ldots \)) by $$\begin{aligned} \frac{{\mathrm{Li}}_k(1-e^{-4 t})}{4\sinh t}=\sum _{n=0}^\infty \widehat{E}_n^{(k)}\frac{t^n}{n!}. \end{aligned}$$ When \(k=1\), \(\widehat{E}_n=\widehat{E}_n^{(1)}\) are Euler numbers of the second kind or complementary Euler numbers defined by $$\begin{aligned} \frac{t}{\sinh t}=\sum _{n=0}^\infty \widehat{E}_n\frac{t^n}{n!}. \end{aligned}$$ Euler numbers of the second kind were introduced as special cases of hypergeometric Euler numbers of the second kind in Komatsu and Zhu (Hypergeometric Euler numbers, 2016, arXiv:1612.06210), so that they would supplement hypergeometric Euler numbers. In this paper, we study generalized Euler numbers of the second kind and give several properties and applications.
Keywords: Euler numbers; Complementary Euler numbers; Euler numbers of the second kind; Poly-Euler numbers of the second kind; Determinant; Zeta functions
The author thanks the anonymous referee for careful reading of the manuscript and for giving the hint to Theorem 4.3.
T. Arakawa, M. Kaneko, Multiple zeta values, poly-Bernoulli numbers, and related zeta functions. Nagoya Math. J. 153, 189–209 (1999). M. Kaneko, Poly-Bernoulli numbers. J. Theor. Nombres Bordx. 9, 221–228 (1997). T. Komatsu, Poly-Cauchy numbers. Kyushu J. Math. 67, 143–153 (2013). T. Komatsu, H. Zhu, Hypergeometric Euler numbers (2016). arXiv:1612.06210. S. Koumandos, H. Laurberg Pedersen, Turán type inequalities for the partial sums of the generating functions of Bernoulli and Euler numbers. Math. Nachr. 285, 2129–2156 (2012). N.J.A. Sloane, The on-line encyclopedia of integer sequences. oeis.org (2017). Y. Ohno, Y. Sasaki, On the parity of poly-Euler numbers. 
RIMS Kokyuroku Bessatsu B32, 271–278 (2012). Y. Ohno, Y. Sasaki, Periodicity on poly-Euler numbers and Vandiver type congruence for Euler numbers. RIMS Kokyuroku Bessatsu B44, 205–211 (2013). Y. Ohno, Y. Sasaki, On poly-Euler numbers. J. Aust. Math. Soc. (2016). doi: 10.1017/S1446788716000495. Y. Sasaki, On generalized poly-Bernoulli numbers and related \(L\)-functions. J. Number Theory 132, 156–170 (2012).
© Akadémiai Kiadó, Budapest, Hungary 2017
School of Mathematics and Statistics, Wuhan University, Wuhan, China
Komatsu, T. Period Math Hung (2017) 75: 302. https://doi.org/10.1007/s10998-017-0199-7
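As a quick check of the reduction stated in the abstract: for \(k=1\), \({\mathrm{Li}}_1(z) = -\ln(1-z)\), so \({\mathrm{Li}}_1(1-e^{-4t}) = 4t\) and the first generating function collapses to \(t/\sinh t\). That defining series can then be inverted term by term to tabulate the first few complementary Euler numbers. The sketch below is not from the paper (the function name and the direct series-inversion approach are ours); it works with exact rational arithmetic, using \(\sinh t/t = \sum_{k \ge 0} t^{2k}/(2k+1)!\):

```python
from fractions import Fraction
from math import factorial

def complementary_euler(N):
    """Return [Ê_0, ..., Ê_{N-1}] defined by t/sinh t = Σ Ê_n t^n / n!.

    Work in the variable u = t^2: invert the power series
    sinh(t)/t = Σ_{k≥0} u^k / (2k+1)!  via the standard recurrence
    for the reciprocal of a power series with constant term 1.
    """
    K = (N + 1) // 2                       # number of u-coefficients needed
    S = [Fraction(1, factorial(2 * k + 1)) for k in range(K)]
    C = [Fraction(1)]                      # coefficients of 1/(sinh t / t)
    for k in range(1, K):
        C.append(-sum(S[j] * C[k - j] for j in range(1, k + 1)))
    # Ê_n = n! * [t^n](t/sinh t); odd-index coefficients vanish.
    return [C[n // 2] * factorial(n) if n % 2 == 0 else Fraction(0)
            for n in range(N)]
```

Consistent with the hand expansion \(t/\sinh t = 1 - t^2/6 + 7t^4/360 - \cdots\), this yields \(\widehat{E}_0 = 1\), \(\widehat{E}_2 = -1/3\) and \(\widehat{E}_4 = 7/15\).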
Image by Bain News Service (Wikimedia Commons), Public Domain Lord Kevin (purportedly a distant relative of Lord Kelvin) is holding a party to celebrate his recent entrance into the British nobility. The exact process whereby Kevin acquired his title is a matter of some dispute, and there are even whispered rumours of identity theft, but fortunately we need not concern ourselves with such unpleasantries here. All the invitees to Kevin's party are British Lords with whom he hopes to ingratiate himself, and since Kevin has detected that these nobles are a rather reserved lot, he has invented a party game to facilitate interactions among his guests. As the Lords arrive at Kevin's manor, they are numbered from $1$ to $n$. Each Lord is given a lapel pin on which is written his unique identifying number, and he is asked to wear this pin throughout the evening. (Kevin, who is organizing the game but not participating in it, does not have an assigned number.) Once all the guests have assembled in the ballroom, Kevin takes a set of $n$ cards on which are written the numbers $1$ through $n$ (one number per card), places them in his top hat, and then draws them out at random and distributes them to his guests, one per Lord. Now the party game is ready to begin. The game consists of a sequence of rounds, each of which has three stages. In Stage $1$, Kevin calls out "Match!", at which point each Lord checks to see if the number on the card in his hand matches the number on his lapel pin. If so, the Lord is freed from the game and can move into the dining room to partake of the fine food (and drink) waiting there. In Stage $2$, Kevin calls out "Hat!", at which point each remaining Lord memorizes the number on the card in his hand, and then places the card in the brim of his top hat. 
In Stage $3$, Kevin calls out "Snatch!", at which point each Lord finds the other guest in the room whose lapel pin contains the number he has just memorized, and then snatches (or, more politely, gently takes) the card from that Lord's hat. (Since British nobles are also called peers, Stage $3$ is an example of a peer-to-peer protocol.) At the end of Stage 3, each Lord once again has a numbered card in his hand, ready for the next round to begin. Note that at the end of Stage $1$, if all Lords have been freed from the game, the game is over, so obviously Kevin will skip the final Stages $2$ and $3$. Kevin thinks that this game he has invented will be very fun for his guests, but he is a little worried that some of the Lords might stay trapped in the game forever, eventually dying of starvation. Given the initial distribution of numbered cards to guests, help Kevin determine whether or not the game will have such a dire outcome. The first line of input contains a single integer $T$ $(1 \leq T \leq 10)$, the number of test cases that follow. Each test case occupies two lines. The first line contains an integer $n$ $(1 \leq n \leq 1\, 200)$, the number of guests at the party. The second line contains $n$ distinct space-separated integers $a_1, a_2, \ldots , a_ n$, where $a_ i$ $(1 \leq i \leq n)$ is the number on the card initially given to the Lord whose lapel pin contains the number $i$. The output consists of $T$ lines, one per case. For each test case, output "All can eat." if the game eventually ends, or "Some starve." if the game never ends. All can eat. Some starve. Problem ID: partygame Author: Liam Keliher Source: 2018 Atlantic Canadian Preliminary Contest
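Each round, the Lord wearing pin $i$ ends up holding the card that was in Lord $a_i$'s hat, so the new arrangement is $a_{a_i}$: one round squares the permutation $a$. Squaring maps a cycle of odd length $L$ to another $L$-cycle, while it splits a cycle of even length $L$ into two cycles of length $L/2$, so everyone is eventually freed exactly when every cycle length is a power of two. A minimal Python sketch of this decision procedure (the function name `game_ends` and the test permutations below are ours, not part of the problem data):

```python
def game_ends(a):
    """Return True iff Kevin's game terminates.

    a[i] is the (1-based) card initially held by the Lord with pin i+1.
    One round replaces each card a[i] by a[a[i]] (the permutation is
    squared), so the game ends iff every cycle length is a power of 2.
    """
    seen = [False] * len(a)
    for start in range(len(a)):
        length, i = 0, start
        while not seen[i]:  # walk the cycle containing `start`
            seen[i] = True
            i = a[i] - 1    # convert 1-based card value to 0-based index
            length += 1
        if length & (length - 1):  # nonzero iff length is not a power of two
            return False
    return True

for perm in ([2, 1], [2, 3, 1]):
    print("All can eat." if game_ends(perm) else "Some starve.")
```

A transposition ($L=2$) frees both Lords after one round, while a $3$-cycle keeps cycling forever, matching the two possible output lines.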
Modern cosmology

George F.R. Ellis and Jean-Philippe Uzan (2017), Scholarpedia, 12(8):32352. doi:10.4249/scholarpedia.32352, revision #183839

Prof. George F.R. Ellis, University of Cape Town, Cape Town, South Africa
Prof. Jean-Philippe Uzan, Institut d'Astrophysique de Paris, Paris, France

Physical cosmology is the study of the properties and evolution of the large scale structure of the universe. It firstly determines, in the light of astronomical observations, what is there when we average out the matter distribution to the largest scales, secondly what its history has been in the light of our understanding of physical forces, and thirdly what its future is likely to be. Thus it has descriptive and dynamic aspects, supported by theory and observation. Modern cosmology is a mathematical physics subject, so we present the mathematics used in these studies. However we try to clearly link this in to the underlying conceptual principles on the one hand, and the observational tests of the resulting models on the other. There is a huge literature on cosmology, and there is no purpose in simply duplicating it here (see e.g. Dodelson, 2003, Durrer, 2008, Mukhanov, 2005, Ellis, Maartens and MacCallum, 2012, Peter and Uzan J-P, 2013 for detailed texts). Our aim is to get the main ideas across to the interested reader in a way that emphasizes key issues, with references to further literature where this can be followed up. To simplify this, we will extensively refer to a recent detailed review by one of us (Uzan J-P, 2016), which is available on line, and where details and further references are given.
The big picture is that remarkably simple models, largely based in known physics, succeed in giving an accurate explanation of a vast array of detailed astronomical observations; and they do so in terms of just a few free parameters that are fixed observationally. However major unknowns remain, in particular the nature of dark matter and dark energy, and the supposed inflaton field in the very early universe. In each of these cases we have good observational evidence for our models but no solid link to tested physics that explains them.

Contents

1 Basic assumptions
1.1 Foundations
1.2 Nature of Spacetime Geometry
1.3 Dynamics: The Einstein Field Equations
1.4 Matter sources
2 Geometry of the Observable Universe
2.1 Background models
2.2 Topology
2.3 Perturbed models
2.4 The gauge issue
2.5 Scalar, vector and tensor perturbations
2.6 Fourier analysis
2.7 Justification
3 Dynamics
3.1 Generic
3.2 Conservation equations
3.3 Field equations
3.4 Background model
3.5 The Main Epochs
3.6 Inflation
3.7 The growth of structure
3.8 Dark energy and Dark matter
3.9 Start of the universe
3.10 Physics Horizons
4 Observations and testing
4.1 The basic observational dilemma
4.2 Observational relations: background model
4.3 Observational relations: perturbed model
5 The concordance model
5.1 The primary data
5.2 The concordance parameters
5.3 A data rich subject
6 Causal and visual horizons
6.1 Particle horizons
6.2 Visual horizons
6.3 Primeval particle horizons
6.4 Small universes
7 More general models
7.1 The Bianchi models and phase planes
7.2 Lemaître-Tolman-Bondi spherically symmetric models
7.3 Other geometries
7.4 Other dynamics
7.5 Multiverses
8 Cosmological successes, puzzles, and limits
8.3 External links
9 Footnotes

Basic assumptions

Foundations

As with all sciences, cosmology has at its base certain philosophical methods and assumptions that order its nature. Broadly, these are the assumptions and processes that generically underlie the scientific method.
In the particular case of cosmology, a mathematical model of the cosmos is proposed, based in our current understanding of the relevant physics and the data so far. This is then tested in detail against observations, and is adopted only if it explains those observations satisfactorily, with suitable values of its free parameters. However because of the particular nature of its object of study - there is only one Universe accessible to observation - the specific form of the assumptions made is particularly vital in the case of cosmology. This uniqueness means we are unable to carry out observations or experiments that will test important aspects of our proposed models. A key issue then - as in all science, but particularly significant here - is what is the domain of validity of our models, and what kinds of conclusions can we legitimately deduce from them.

The core principles underlying standard cosmology are:

1. Physics is the same everywhere in the universe. This is not obvious, because cosmology covers such vast time and distance scales, going right back to the start of the universe and to immensely larger distances than the Solar System, but without this assumption of universality we could not undertake a scientific study of the universe in the large. This assumption is tested by the success and coherence of the resulting models, for example the precise black body spectrum of the Cosmic Blackbody Radiation confirms that statistical physics and quantum physics were the same 13 billion years ago as they are today.

2. The local physics that applies everywhere determines the dynamics of the universe in the large. That is, there is not a `cosmological physics' that applies only on large scales; rather large scale dynamics emerges in a bottom-up way from the combined effects of local dynamics on matter everywhere on small scales.

3. Gravity is the dominant force in the Universe on Solar System and larger scales.
This generalizes to galactic and cosmological scales the finding that gravity dominates the Solar System. This would not be true if astronomical objects were electrically charged, for example if electron and proton charges were not of precisely the same magnitude. In that case electromagnetism (the other long range force) would be the dominant force on large scales, because it is a much stronger force than gravity. [1]

Gravity in the classical regime is well described by General Relativity Theory (Hawking and Ellis, 1973), so this is the theory that best describes the geometry and dynamics of the observable universe. One represents its evolution by using the Einstein Field Equations to determine the effects of gravity, together with models of the local behaviour of matter which are the source terms for the gravitational field. This assumption, because of the equivalence principle, encodes 1 and 2. One consequently has to consider a General Relativistic model with geometry and matter descriptions suitable for a cosmological context, and the Einstein Field Equations (see Eq.(7) below) determining the dynamics of the model.

As to the nature of the universe:

The universe is of vast size and age, with the Solar System (and even our galaxy) a tiny speck in a mostly empty spacetime. Matter is hierarchically arranged, with planets circulating stars imbedded in star clusters that form galaxies which occur in clusters that again occur in large scale structures with a web-like nature. However on the largest scales the universe appears to be spatially homogeneous: as far as we can see, it has no preferred places or centre.

It is dynamic: it had a hot early stage, structures formed out of an initially very smooth state in a way that was modulated by dark matter of an unknown nature, and at present matter is expanding at a rate characterised by the Hubble constant.
It may or may not have had a start, and while it is likely that it will continue expanding forever, there are other possibilities because we do not know the nature of the dark energy presently causing its expansion to accelerate.

Our observational access to the universe is strictly limited by visual horizons and the opaque nature of the very early universe. Our ability to test the relevant physics at very early times is limited by practicalities in terms of what particle accelerators we can construct.

Almost all of this was unexpected. It was assumed by the best physicists in the world who initiated relativistic cosmology in 1917 that the universe was static. Its dynamic nature was generally realised only in 1931, after a famous meeting of the Royal Astronomical Society. Friedmann discovered the expanding $k=+1$ solutions (1922) and the $k = -1$ solutions (1924) of the Einstein equations for a spatially homogeneous and isotropic spacetime. Lemaître independently discovered the $k=+1$ expanding solutions (1927), and related them to the expansion of the Universe via the redshift-distance observations of Slipher, hence offering the first connection to observations. The physical processes of the Hot Big Bang Era and associated existence of Cosmic Blackbody Background Radiation were not generally realised until 1965, even though they had been predicted in 1948 by Gamow, Alpher and Herman. They were not realised by the best minds in the field at the time, even though they are very obvious in hindsight. The existence of the blackbody cosmic background radiation should have been predicted, in terms of physics known at the time, in 1934, when Tolman wrote his book (Tolman, 1934), which includes the radiation dominated version of the Friedmann equation.

Nature of Spacetime Geometry

The space-time geometry is represented on some suitable averaging scale, which is usually implicit rather than explicit.
According to General Relativity (Hawking and Ellis, 1973), spacetime is a 4-dimensional manifold with a Riemannian geometry determined by a symmetric metric tensor \( g_{ab}(x^i) = g_{(ab)}(x^i) (a,b = 0,1,2,3) \) [2] with normal form $g_{ab} = \mathrm{diag}(-1,+1,+1,+1)$ [3] Thus timelike world lines $x^a(s)$ with tangent vector $u^a(s)=dx^a/ds$ have a magnitude $u^a g_{ab} u^b < 0$, and null world lines $x^a(v)$ with tangent vector $k^a(v)=dx^a/dv$ have a magnitude $k^a g_{ab} k^b = 0$. Proper time $\tau$ along a timelike world line is determined by the relation \begin{equation}\tag{1} \tau = \int \sqrt{\left|g_{ab}\frac{dx^a}{ds} \frac{dx^b}{ds}\right|}\,ds \end{equation} and spatial distances along a spacelike curve by the analogous relation. A connection $\Gamma$ determines the covariant derivative $\nabla$ such that $\nabla_{[ab]}f=0$ for all functions $f$ and $\nabla_ag_{bc}=0$, these two relations together determining the connection in terms of the derivatives of the metric. A geodesic curve $x^a(v)$ with tangent vector $K^a=\frac{dx^a}{dv}$ obeys the relation \begin{equation}\tag{2} K^a \nabla_a K^b = 0, \end{equation} and represents freely falling matter if $K^a$ is timelike ($K^aK_a <0$), or the path of light rays if $K^a$ is null ($K^aK_a =0$). The curvature tensor $R_{abcd}$ is given by the Ricci identities that hold for an arbitrary vector field $u^{a}$: \begin{equation}\tag{3} \nabla_{[dc]} u_b =u^{a}R_{abcd}. \end{equation} It satisfies the symmetries \begin{equation}\tag{4} R_{abcd}= R_{[ab][cd]} = R_{cdab}, \,\, R_{a[bcd]}=0 \end{equation} and the integrability conditions \begin{equation}\tag{5} \nabla_{[a}R_{bc]de} = 0 \end{equation} (the Bianchi identities). The curvature tensor has two important contractions, the Ricci tensor $ R_{ab}:=R_{\;acb}^{c}$ and Ricci scalar $R:=R_{\;\;a}^{a}.$ Because of (5), the Einstein tensor $ G_{ab} := R_{ab} - 1/2 R g_{ab}$ satisfies the identity \begin{equation}\tag{6} \nabla_{b}G^{ab} =0.
\end{equation}

Dynamics: The Einstein Field Equations

The matter present determines the geometry, through the Einstein field equations given by \begin{equation} G_{ab} ={8\pi G } T_{ab}-\Lambda \,g_{ab}\ , \tag{7} \end{equation} where $G$ is the gravitational constant, $\Lambda$ the cosmological constant (possibly zero), and $T_{ab}$ the matter stress-energy tensor. In essence, this shows how matter (on the right hand side) curves spacetime (on the left), where $\Lambda$ can be regarded as representing a Lorentz invariant matter field. Geometry in turn determines the motion of the matter because the identities (6) together with the equations (7) guarantee the conservation of total energy-momentum: \begin{equation} \nabla_{b}G^{ab} =0 \Rightarrow \nabla_{b}T^{ab}=0\ , \tag{8} \end{equation} provided the cosmological constant $\Lambda $ satisfies the relation $\nabla _{a}\Lambda =0$, i.e., it is constant in time and space. In essence, this shows how spacetime determines how matter moves because, given suitable equations of state, matter dynamics is controlled by the energy momentum conservation equations (8). In conjunction with suitable equations of state for the matter, relating the components of the stress-energy tensor $T_{ab},$ the Einstein Field Equations (7) determine the combined dynamical evolution of the model and the matter in it, with (8) acting as integrability conditions. It is crucial that in appropriate circumstances, (7) have the Newtonian gravitational equations as a limit, and so are in accordance with all the well-established results of Newtonian gravitational theory in the contexts where that is an accurate theory of gravitation.

Matter sources

Clearly the solutions to the Einstein Field Equations (7) depend on the nature of the matter sources present. The stress tensor on the right hand side of these equations should include all matter and radiation fields present.
In the cosmological context, there are three significant such sources:

Matter, usually represented as a perfect fluid with energy density $\rho$ and pressure $p$: \begin{equation}\tag{9} T_{ab} = (\rho +p) u_au_b + p g_{ab} \end{equation} where $u^a$ ($u^a u_a = -1$) is the 4-velocity, so $T_{ab}u^au^b=\rho$, $T^a_{\,\,\,\,a} = 3p-\rho$. In particular, baryonic matter and Cold Dark Matter (CDM) are represented in this way, with equation of state $p=0$. The equations of motion follow from the conservation equations (8); matter moves on timelike geodesics if $p=0$. Occasionally, at times where irreversible processes are taking place, one may include viscosity and heat flux terms.

Radiation, represented either as a perfect fluid (9) with equation of state $p = \rho/3$, or through a kinetic theory description with distribution function $f(x^i,p^j)$ and stress tensor $T^{ab}(x^i) = \int f(x,p)\, p^ap^b\, \mathrm{d}\pi$, where $\mathrm{d}\pi$ is the momentum space volume element; this will almost always not give the perfect fluid form (9). The dynamical equation for the radiation then is the Boltzmann equation ${ \cal L}f = {\cal C}(f)$ representing its interaction with matter (Durrer, 2008). This reduces to the Liouville equation when no interactions take place: ${\cal C}(f)=0 \Rightarrow {\cal L}f=0$.

A scalar field $\phi$ with potential $V(\phi)$ may be present. This will have a perfect fluid stress tensor (9) with 4-velocity $u^a = \nabla^a \phi/(\left|\nabla^b \phi \nabla_b \phi\right|)^{1/2}$, and \begin{equation}\tag{10} \rho =\frac{1}{2}(d\phi /dt)^{2}+V(\phi), \,\, p=\frac{1}{2}(d\phi /dt)^{2}-V(\phi). \end{equation} The equation of motion is the Klein-Gordon Equation \begin{equation}\tag{11} \nabla^a \nabla_a \phi +dV/d\phi =0. \end{equation} Different kinds of matter dominate at different eras in the history of the universe.

Geometry of the Observable Universe

Realistic cosmological models are perturbed versions of a background Friedmann-Lemaître-Robertson-Walker model.
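The two limiting regimes of the scalar-field fluid description (10) can be illustrated numerically; a minimal sketch, with an illustrative helper name and numbers of our own, for a spatially homogeneous field:

```python
# Energy density and pressure of a homogeneous scalar field, eq. (10),
# and the resulting equation-of-state parameter w = p / rho.
def scalar_field_fluid(phi_dot, V):
    rho = 0.5 * phi_dot**2 + V  # kinetic + potential energy
    p = 0.5 * phi_dot**2 - V    # kinetic - potential energy
    return rho, p

# Kinetic-dominated field: w = +1 (a "stiff" fluid).
rho, p = scalar_field_fluid(phi_dot=1.0, V=0.0)
print(p / rho)  # 1.0

# Potential-dominated, slowly rolling field: w -> -1, mimicking a
# cosmological constant.
rho, p = scalar_field_fluid(phi_dot=0.01, V=1.0)
print(round(p / rho, 3))  # -1.0
```

The slow-roll limit, where the potential term dominates and $w \simeq -1$, is the regime relevant for inflation and dark energy discussed later in the article.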
Background models

Robertson-Walker geometries are spatially homogeneous and isotropic universes, representing the geometry of the universe on the largest observable scales (Robertson, 1933). Being everywhere spatially homogeneous and isotropic, they have no distinguishing feature anywhere. They are characterised by a scale factor $a(t)$ that shows how relative spatial distances between fundamental world lines change as time progresses. In comoving coordinates, the 4-velocity $u^{a}=dx^{a}/dt$ of preferred fundamental observers is \begin{equation}\tag{12} u^{a}=\delta _{0}^{a},\,\, u^{a}u_{a}=-1 \end{equation} and the metric tensor is \begin{equation}\tag{13} ds^{2}=-dt^{2}+a^{2}(t)d\sigma ^{2}, \end{equation} where the 3-space metric $d\sigma ^{2}$ represents a 3-space of constant curvature $k.$ It can be represented by \begin{equation}\tag{14} d\sigma ^{2}=dr^{2}+f^{2}(r)d\Omega^{2},\,\, d\Omega^{2}:=d\theta ^{2}+\sin ^{2}\theta d\phi^{2}, \end{equation} where $f(r)=$ $\left\{ \sin r,r,\sinh r\right\} $ if $k=\left\{ +1,0,-1\right\} $. Spatial homogeneity together with isotropy implies that a multiply transitive group of isometries $G_{6}$ acts on the surfaces $ \left\{ t=constant\right\},$ [4] and consequently all physical and geometric quantities are isotropic and depend only on the coordinate time $t.$ These spacetimes are conformally flat, as the Weyl conformal curvature tensor vanishes: $C_{abcd}=0$. Thus there is no free gravitational field, so no tidal forces or gravitational waves occur in these background models. However for realistic matter the Ricci tensor is non-zero and is not proportional to the metric tensor, so the matter world lines and 4-velocity (12) are uniquely determined by the stress tensor $T_{ab}$, and hence, via the Field Equations (7), also by the geometry: $u^a$ is the unique timelike eigenvector of $R_{ab}$.
Thus the surfaces $\{t = const\}$ are geometrically and physically preferred surfaces.[5] The time parameter $t$ represents proper time (1) measured along the fundamental world lines $x^{a}(t)$ with $u^{a}$ as tangent vector; the distances between these world lines scale with $a(t)$, so volumes scale as $a^{3}(t)$. The relative expansion rate of matter is represented by the Hubble parameter \begin{equation}\tag{15} H(t)=\frac{\dot{a}}{a}, \end{equation} which is positive if the universe is expanding and negative if it is contracting. The rate of slowing down of the expansion is given by the dimensionless deceleration parameter \begin{equation}\tag{16} q(t):=-\frac{\ddot{a}}{a H^2}, \end{equation} positive if the cosmic expansion is slowing down and negative if it is speeding up.

Topology

It is important to note that the spatial topology of the universe is not determined by the Einstein field equations. If $k = +1$, the spatial sections of the universe necessarily are spatially closed (this follows from the geodesic deviation equation): they are finite and contain a finite amount of matter. If simply connected they will be spheres $S^3$. However they need not be simply connected, and there are a variety of other possible topologies, obtained by identifying points in the sphere $S^3$ that are moved into each other by a discrete group of isometries. If $k=0$, then the universe may have simply connected space sections with topology $R^3$. Then there is an infinite amount of matter in the universe. However again they may not have that simply connected topology, and there are a variety of other possibilities, for example a torus topology, resulting in the amount of matter in the universe being finite. The same is true for the case $k=-1$, except then there are an infinite number of possible spatial topologies. Classifying them is a major task. No known principle determines what the spatial topology is.
Wheeler and Einstein have argued strongly that the universe should be spatially finite, which means either $k=+1$, or one of the compact topologies in the cases $k=0$ or $k=-1$. However one key point should be noted: the topology cannot change with time. All the spatial surfaces ${\cal S}(t)$ will necessarily have the same topology, because the fundamental world lines form a continuous map between them.

Perturbed models

To describe structure formation in the universe, one needs to perturb the exactly homogeneous Robertson-Walker geometries. One can perturb cosmological models as in any other case: given a background metric $\bar{g}_{ab}$, define a new metric \begin{equation}\tag{17} g_{ab} = \bar{g}_{ab} + \epsilon h_{ab}, \,\,|\epsilon| \ll 1 \end{equation} and then expand the field equations and conservation equations in powers of $\epsilon$, to obtain the background equations, linear equations, and higher order equations. Thus in order to describe the deviations from homogeneity, one starts by considering the most general form of a perturbed Robertson-Walker metric, \begin{eqnarray}\tag{18} ds^2&=&a^2(\eta)\left[-(1+2A)d\eta^2 + 2B_\nu d x^\nu d\eta +(\gamma_{\mu\nu} + h_{\mu\nu}) d x^\mu d x^\nu\right], \end{eqnarray} where the small quantities $A$, $B_\nu$ and $h_{\mu\nu}$ are unknown functions of space and time to be determined from the Einstein equations (Greek indices, representing spatial dimensions, run from 1 to 3).

The gauge issue

The problem is however not that simple, because in General Relativity Theory, gravitational dynamics is carried by the dynamic spacetime geometry. Hence there is no given fixed background spacetime $\bar{g}_{ab}$ which we can perturb to find the more realistic `lumpy' model $g_{ab}$. Rather, we are faced with modelling a particular almost-Robertson-Walker spacetime $g_{ab}$ which can be represented in the form (17) in many different ways, with different background spacetimes $\bar{g}_{ab}$.
The problem is related to the fact that we can choose coordinates in the perturbed model in many different ways. Thus for example the perturbed energy density at time $t$ is defined to be \begin{equation}\tag{19} \delta\rho(x^\nu,t) = \rho(x^\nu,t) - \bar{\rho}(x^\nu,t) \end{equation} where in the first term on the right the coordinates are chosen in the perturbed model, and in the second term they are chosen in the background model. Now the surfaces of constant time in the perturbed model can be chosen in many different ways. For example, we can define them to be surfaces of constant density, labelled by the background density; then by definition, for any given $t$, $\rho(x^\nu,t) = \bar{\rho}(x^\nu,t)$ for all $x^\nu$, which implies $\delta\rho(x^\nu,t)=0$. This gauge choice has made the density perturbations disappear! There are essentially three ways to handle this.

Careful gauge fixing

One can very carefully fix the gauge, i.e. specify fully how the coordinates are chosen in the perturbed model, calculating at each stage of this coordinate fixing what effect the remaining gauge freedom has on the physical variables, by using Lie derivatives related to this coordinate freedom. This is acceptable if done very carefully. However it has in the past not always been done with due diligence, and many errors have occurred in the literature as a consequence.

Gauge invariant variables

One can combine physical and geometrical variables to give combinations that are invariant when one makes a gauge change (Uzan J-P, 2016, 64-67). This is a highly influential and much used method that has been developed in depth, including covering kinetic theory (Durrer, 2008), but the variables so determined do not have a clear geometrical meaning. They have to be interpreted differently in different coordinate systems.

1+3 gauge invariant and covariant method

In essence, it considers (19) to be defined by a map from the background spacetime into the perturbed spacetime.
The gauge freedom is the freedom in that map. One uses covariantly defined variables with a clear geometric meaning that are invariant under that map. From the vector form of (19), if a variable vanishes in the background it is automatically gauge invariant: \begin{equation}\tag{20} \{\bar{X}^i(x^\nu,t) = 0\} \Rightarrow \delta X^i(x^\nu,t) = X^i(x^\nu,t) \end{equation} for all choices of the surfaces of constant time in the perturbed model. Therefore a covariant and gauge invariant measure of spatial inhomogeneity is the comoving fractional spatial density gradient ${\cal D}_a:= h_a^b \rho_{,b}/\rho$ for an observer with 4-velocity $u^a$ ($u_au^a = -1$), where $h_{ab}:=g_{ab}+u_au_b$ projects orthogonal to $u^a$. The spatial vector ${\cal D}_a$ vanishes in a Robertson-Walker spacetime, and so is gauge invariant by (20). Its dynamic equations follow from the 1+3 covariant equations for generic fluid flows. One can develop this method in detail (Ellis, Maartens and MacCallum, 2012), covering also kinetic theory.

Scalar, vector and tensor perturbations

Because of linearisations, one can split the perturbations up into scalar, vector, and tensor parts that (at linear order) evolve independently and so do not interfere with each other. At non-linear order, scalars can generate vector and tensor perturbations, and vector perturbations can generate tensor perturbations. Scalar perturbations represent density and pressure inhomogeneities, vector perturbations represent rotational effects, and tensor perturbations represent gravitational waves. Scalar perturbations are the most important in cosmology. For structure formation studies, we can use pure scalar perturbations with a metric form \begin{equation}\tag{21} ds^{2}=a^{2}(\tau)[-(1+2\Psi )d\tau ^{2}+(1-2\Phi )\gamma_{\mu\nu}dx^{\mu}dx^{\nu}] \end{equation} with conformal time $\tau$ related to proper time $t$ by the transformation $dt=a(\tau)d\tau$.
The spatial coordinates are not comoving, in general; that is, the matter moves relative to these coordinates. This is called the Conformal Newtonian Gauge because it uses conformal time $\tau$, and the scalar $\Phi$ corresponds to the gravitational potential of Newtonian gravitational theory. For a pressureless fluid, considering sub-Hubble modes much smaller than the curvature scale, the scalar equations reduce to the comoving Poisson equation \begin{equation}\tag{22} \Delta\Phi = 4\pi G\rho a^2\delta \end{equation} as in Newtonian cosmology (see Uzan J-P, 2016: (77)), with the definition $\delta \equiv \delta\rho/\rho$. In a universe with isotropic stress in this frame, the Einstein equations show that $ \Phi=\Psi$.

Fourier analysis

The perturbations are Fourier analysed in terms of wavelength, represented by a wavenumber $k$. This enables spatial power spectra to be defined, and spatial derivative operators are replaced by multiplication by $k$. The perturbation equations become equations relating the spatial harmonics of physical and geometric variables. Thus one ends up with harmonic components of scalar, vector, and tensor perturbations. Their dynamics is then examined via the linearised Einstein Field Equations, which reduce to a series of coupled ordinary differential equations for the Fourier components of the perturbation variables.

Justification

Why is it reasonable to use such a perturbed Robertson-Walker model for the observed universe? The key point concerning the observational context is that we can only view the universe from one space-time event, and cannot easily distinguish spatial variation from time variation in this context. We have to test the supposition of Robertson-Walker geometry in that context. The central point of the argument is that observations of galaxies and radio sources (number counts), as well as of background radiation, show that when averaged on a large enough scale, the universe is isotropic about us.
This shows that either (a) the universe is spherically symmetric but inhomogeneous, and we are near the centre; or (b) the universe is spherically symmetric about every point. However a theorem by Walker shows that isotropy everywhere around a preferred family of observers implies the geometry is that of a Robertson-Walker spacetime. Hence unless we are special, being at a very special place in the universe so that we are the only fundamental observers who see an isotropic universe, its geometry must be Robertson-Walker. If we have good enough standard candles at large distances, we can directly test spatial homogeneity from astronomical observations; and we are close to being able to carry out such tests by using supernovae as standard candles. This argument is greatly strengthened by a theorem of Ehlers, Geren and Sachs, which states that if an expanding family of geodesic observers in a local domain $\cal U$ see isotropic freely moving radiation, the universe is Robertson-Walker in that domain. It can be shown that this result is stable: if the radiation is almost isotropic, the universe is almost Robertson-Walker (Ellis, Maartens and MacCallum, 2012). Now we cannot directly determine whether the background radiation is isotropic as observed from the vantage point of a remote observer, but we can do so indirectly via the kinetic Sunyaev-Zeldovich effect, whereby hot gases in distant objects scatter radiation and so affect the Cosmic Blackbody Radiation spectrum if its temperature is anisotropic at that distant point, also causing polarisation. This currently gives the best limits on large scale inhomogeneity of the universe.

Dynamics

There are some generic relations one can find for cosmological models, without assuming any symmetries. These reduce to simple forms in the case of Friedmann-Lemaître models with Robertson-Walker geometries. The real universe is nowhere empty, in particular because of the Cosmic Blackbody Background radiation that permeates all of spacetime.
Hence a realistic cosmological model always has a geometrically and physically preferred fundamental velocity field $u^a=dx^a/d\tau$ ($u^a u_a = -1$) at each point (cf. eqn.(12)); specifically, the timelike eigenvector of the Ricci tensor. Using a 1+3 covariant approach, there are a number of exact relations that are true for any cosmological model, whether inhomogeneous, anisotropic, or not (Ellis, Maartens and MacCallum, 2012). The key ones are the energy-momentum conservation equations and the Ehlers-Raychaudhuri equation of gravitational attraction. Cosmic time derivatives are denoted by $\dot{f} \equiv f_{,a} u^a$.

Conservation equations

These are the timelike and spacelike components of (8). For a perfect fluid (9) the energy conservation equation takes the form \begin{equation}\tag{23} \dot{\rho}+3\frac{\dot{a}}{a} (\rho +p)=0 \end{equation} where the fluid expansion $\theta:=\nabla^a u_a= 3\dot{a}/{a}$ defines the variation of the average length scale $a$ along the fundamental world lines. The momentum conservation equation is \begin{equation}\tag{24} (\rho+p)a_b +{}^{(3)}\nabla_b p=0, \end{equation} where ${}^{(3)}\nabla_b:=(g_b{}^{\,c}+u_bu^c)\nabla_c$ is the spatial derivative orthogonal to $u^a$, and $a^a:=u^b\nabla_bu^a$ is the relativistic fluid acceleration. This shows that $\rho_{\rm inert}:=\rho+p$ can be interpreted, in Newtonian terms, as the inertial mass density. These equations become determinate when an equation of state $p=p(\rho ,\phi )$ specifies $p$ in terms of $\rho$ and possibly some internal variable $\phi$ such as the temperature $T$ or entropy $S$ (which, if present, will have to have its own dynamical equations). The outcome of these equations depends on the equation of state; for example in the case of pressure free matter, \begin{equation}\tag{25} \{p=0\} \Rightarrow \{\rho = M/{a}^3,\,\, a^a=0\}, \end{equation} where $\dot{M}:=u^a\nabla_a M =0$. This shows that pressure-free matter moves on geodesics.
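The scaling $\rho = M/a^3$ in (25) (and the radiation scaling $\rho = M/a^4$ in (26) below) follows from integrating the energy conservation equation (23): for a barotropic equation of state $p=w\rho$ it can be rewritten as $d\rho/da = -3(1+w)\rho/a$. A small numerical check of this (a sketch; the function name and step count are our own choices):

```python
# Integrate energy conservation, eq. (23), with the scale factor as the
# evolution variable: d(rho)/da = -3 (1 + w) rho / a, for p = w rho.
def evolve_density(rho0, w, a0=1.0, a1=2.0, steps=100_000):
    rho, a = rho0, a0
    h = (a1 - a0) / steps
    for _ in range(steps):  # simple forward-Euler integration
        rho += h * (-3.0 * (1.0 + w) * rho / a)
        a += h
    return rho

# Doubling the scale factor: dust (w = 0) dilutes as a^-3,
# radiation (w = 1/3) as a^-4.
print(evolve_density(1.0, 0.0))        # ~ 1/8  = 0.125
print(evolve_density(1.0, 1.0 / 3.0))  # ~ 1/16 = 0.0625
```

The same integrator handles any constant $w$, e.g. $w=-1$ reproduces the constant energy density of a cosmological-constant-like fluid.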
In the case of radiation, \begin{equation}\tag{26} \{p=\rho/3\} \Rightarrow \{\rho = M/{a}^4,\,\, a_b=- {}^{(3)}\nabla_b U\}, \end{equation} where the acceleration potential $U$ is defined by $\,\,U:= \frac{1}{4} \log \rho$.

Field equations

On using the field equations (7), the trace of the geodesic deviation equation for the timelike vector field $u^a$ is the Ehlers-Raychaudhuri equation \begin{equation} 3\frac{\ddot{a}}{a}=-2(\sigma ^{2}-\omega ^{2})-4 \pi G (\rho +3p)+\Lambda +a_{\,\,\,;b}^{b}, \tag{27} \end{equation} where $\omega$ is the vorticity scalar and $\sigma$ the shear scalar. This is the non-linear exact fundamental equation of gravitational attraction for a fluid flow, showing that vorticity tends to resist collapse whereas shear hastens it, \begin{equation}\tag{28} \rho_{\rm grav}:=(\rho +3p) \end{equation} is the active gravitational mass density that tends to cause matter to collapse (it is positive for all ordinary matter), and a positive cosmological constant $\Lambda$ tends to keep matter apart. The acceleration term can have either sign.

Background model

The background Friedmann-Lemaître models are given by assuming the Einstein Field Equations (7) govern the dynamics of a spacetime with a Robertson-Walker geometry. The behaviour of matter and expansion in these universes is governed by three related equations. First, the energy conservation equation (23) relates the rate of change of the energy density $\rho (t)$ to the pressure $p(t).$ In the case of pressure-free matter, (25) holds; in the case of radiation, (26). The Raychaudhuri equation (27) becomes \begin{equation} \frac{3}{a}\frac{d^{2}a}{dt^{2}}=-4\pi G(\rho +3p)+\Lambda \tag{29} \end{equation} which directly gives the deceleration due to matter.
Its first integral is the Friedmann equation \begin{equation} 3H^{2}=8\pi G \rho +\Lambda -{\frac{3k}{a^{2}}}, \tag{30} \end{equation} where ${{\frac{3k}{a^{2}}}}$ is the curvature of the 3-spaces $\{t=const\}$; this is just the Gauss-Codazzi equation relating the curvature of imbedded 3-spaces to the curvature of the imbedding 4-dimensional spacetime. The evolution of $a(t)$ is determined by any two of (23), (29), and (30), as any two imply the third one. Defining the normalized density parameters \begin{equation}\tag{31} \Omega_m :=\frac{ 8\pi G \rho }{3H^{2}},\,\, \Omega_\Lambda :=\frac{\Lambda }{3H^{2}},\,\, \Omega_k :=-\frac{k }{H^{2}a^2}, \end{equation} the Friedmann equation has the dimensionless form \begin{equation} \Omega_m +\Omega_\Lambda +\Omega_k = 1 .\tag{32} \end{equation} The behaviour of solutions to this equation for ordinary matter has been known since the 1930s (Robertson, 1933, Tolman, 1934): When $\Lambda =0,$ if $k=+1$, then $\Omega_m >1$ and the universe recollapses; if $k=-1$, then $\Omega_m < 1$ and it expands forever; and $k=0$ ($\Omega_m =1)$ is the critical case separating these behaviours, that just succeeds in expanding forever. In the pressure-free case $p=0$, this limiting case is the \textit{Einstein-de Sitter universe} with \begin{equation}\tag{33} a(t) = a_0 t^{2/3}. \end{equation} There is always a singular start to these universes at time $ t_{0}<1/H_{0}$ ago if $\rho +3p>0$, which will be true for ordinary matter plus radiation. One can in principle get a bounce if both $\Lambda >0$ and $k=+1$. However in practice there is still a singular start to the universe if $\rho +3p>0$ ($\Lambda$ cannot avert this because it is constant, and is small today). When $\Lambda >0$ and $k=+1$, a static solution is possible (the Einstein static universe), and the universe can `hover' close to this constant radius before starting to expand (these are the Eddington-Lemaître models).
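As an illustration, the Friedmann equation (30) for a flat ($k=0$, $\Lambda=0$) pressure-free model can be integrated numerically and compared with the Einstein-de Sitter solution (33); a minimal sketch, in units where $8\pi G/3=1$ so that $H=\sqrt{\rho}$ with $\rho=a^{-3}$:

```python
import math

# Sketch: integrate da/dt = a H(a) with H = sqrt(rho), rho = a^-3
# (flat, matter-only Friedmann equation in units 8 pi G / 3 = 1),
# and compare with the exact Einstein-de Sitter form a = (3t/2)^(2/3).
def hubble(a):
    return math.sqrt(a**-3)

def evolve(a0, t0, t1, n=10000):
    # classical fourth-order Runge-Kutta for da/dt = a * H(a)
    dt = (t1 - t0) / n
    a = a0
    f = lambda x: x * hubble(x)
    for _ in range(n):
        k1 = f(a)
        k2 = f(a + 0.5*dt*k1)
        k3 = f(a + 0.5*dt*k2)
        k4 = f(a + dt*k3)
        a += dt * (k1 + 2*k2 + 2*k3 + k4) / 6
    return a

t0, t1 = 1.0, 4.0
a_num = evolve((1.5*t0)**(2/3), t0, t1)
a_exact = (1.5*t1)**(2/3)
print(a_num, a_exact)  # the numerical and exact values agree closely
```

In these units the exact solution is $a=(3t/2)^{2/3}$, the normalisation being fixed by requiring $\dot a = aH$.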
The de Sitter universe occurs in three different FLRW forms with different spatial curvatures, which is possible because it has no unique timelike vector field (as it is a spacetime of constant curvature) (Hawking and Ellis, 1973): [6] \begin{equation}\tag{34} k=0, \,\,\,a(t)= a_0 \exp(H_0 t) \end{equation} (a steady-state solution that is geodesically incomplete in the past), \begin{equation}\tag{35} k=+1,\,\,\,\, a(t) = a_0 \cosh H_0 t \end{equation} (a geodesically complete form with a bounce), and \begin{equation}\tag{36} k=-1, \,\,\,a(t) = a_0 \sinh H_0 t \end{equation} (a geodesically incomplete singular form). [7] They are all solutions with $\rho + p = 0$ (perhaps with a cosmological constant $\Lambda>0$), and so are physically unrealistic as exact solutions. However at early or late times, they may be good approximate solutions for a while. The Friedmann equation for early times in the Hot Big Bang era of the universe, when radiation dominates the dynamics ($p = \rho/3$) and curvature and the cosmological constant are negligible ($\Omega_k \simeq 0$, $\Omega_\Lambda \simeq0$), gives \begin{equation}\tag{37} a(t) = a_0 t^{1/2} \end{equation} (the Tolman solution) and the temperature-time relation \begin{equation} T=\frac{T_{0}}{t^{1/2}},\,\,\,T_{0}:=0.74\left( \frac{10.75}{g_{\ast }}\right) ^{1/2} \tag{38} \end{equation} with no free parameters. Here $t$ is the time in seconds, $T$ is the temperature in MeV, and $g_{\ast }$ is the effective number of relativistic degrees of freedom, which is 10.75 for the standard model of particle physics, where there are contributions of 2 from photons, 7/2 from electron-positron pairs and 7/4 from each neutrino flavor. This relation plays a key role in nucleosynthesis. All these behaviours can be illuminatingly represented by appropriate dynamical systems phase planes (Wainwright, 1997, Uzan J-P, 2016).
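The temperature-time relation (38) is simple enough to encode directly; the sketch below ($T$ in MeV, $t$ in seconds, and $g_*$ held fixed, which is only an approximation since $g_*$ itself varies with temperature):

```python
# Sketch of the radiation-era relation (38): T = T0 / sqrt(t), with
# T in MeV, t in seconds, and T0 = 0.74 * (10.75 / g_star)**0.5.
def temperature_MeV(t_seconds, g_star=10.75):
    T0 = 0.74 * (10.75 / g_star)**0.5
    return T0 / t_seconds**0.5

def time_seconds(T_MeV, g_star=10.75):
    T0 = 0.74 * (10.75 / g_star)**0.5
    return (T0 / T_MeV)**2

print(temperature_MeV(1.0))  # 0.74 MeV at t = 1 s
print(time_seconds(0.1))     # ~ 55 s: T drops to 0.1 MeV about a minute in
```

Inverting the relation in this way shows why nucleosynthesis, which requires temperatures around 0.1 MeV, takes place within the first minutes of the Hot Big Bang era.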
A particularly useful version by Ehlers and Rindler (Ehlers and Rindler, 1989) gives the 3-dimensional phase plane for models with non-interacting pressure free matter and radiation, as well as a cosmological constant. The special solutions mentioned above (the Einstein static universe, Einstein-de Sitter universe, Tolman universe, and de Sitter universe) are all fixed points in these phase planes.

The Main Epochs

The real universe is filled with a complex mix of matter and radiation, leading to changing behaviour at different times because the nature of the matter present (the right hand side of the Einstein Field Equations) drives the dynamics. The main epochs, represented schematically in Figure 1, are as indicated below:

Table 1: The main epochs in the history of the Universe
1. Start: matter source unknown (creation or bounce)
2. Inflation: scalar field dominated (seed perturbations)
3. Hot Big Bang: radiation dominated (nucleosynthesis)
4. Visible universe: dark matter dominated (structure formation)
5. Accelerating universe: dark energy dominated (acceleration)
6. Future: dark energy or curvature dominated (unknown)

Figure 1: Graphic of the history of the universe (credit: NASA). The start of the universe is on the left, time increases towards the right, and the vertical scale indicates the size of the universe at each time.

The start of the inflationary era is difficult to determine and very model-dependent. One can estimate it to take place around $10^{-35}$~s. The hot big-bang phase started before the temperature of the radiation bath dropped below 100~MeV, i.e. typically earlier than $10^{-3}$~s after the big-bang. The universe becomes transparent after the decoupling between photons and matter. This happens at a redshift of order $1100$, corresponding roughly to $380,000$~years. Note that this is after radiation-matter equality, which occurs at a redshift $1+z=\Omega_{m}/\Omega_{r}\sim 3,600$.
The onset of the late time acceleration phase is defined by $\ddot a=0$ which, for a $\Lambda$CDM model, occurs at $1+z=(2\Omega_\Lambda/\Omega_m)^{1/3}\sim 1.7$. Each epoch has different physical processes happening and consequently different cosmological dynamics. They are discussed in detail in Uzan J-P, 2016, so we will just very briefly discuss them here. The big picture is that the whole history of our visible universe consists of an episode of decelerated expansion, during which complex structures can form, sandwiched in between two periods of accelerated expansion, which do not allow such structures to form.

Epoch 1: The start of the universe. There may have been a singular beginning to the universe (as in the Tolman case), or a bounce from a previous era (as in a de Sitter universe in the $k=+1$ frame), or an emergence from a static state (as in the Eddington-Lemaître universe). We don't know the physics applicable in the quantum gravity era that would have presumably occurred when the density was higher than the Planck density, so we don't know which of them occurred. Indeed we can only speculate as to what kind of mechanism can `create a universe from nothing', for any such proposed mechanism is surely not testable.

Epoch 2: Inflation is a supposed extremely rapid but short lived accelerating expansion through many e-foldings before the hot big bang era. By (29) this requires some kind of field $\phi$ such that $\rho_\phi + 3p_\phi < 0$. This is possible for a scalar field $\phi (t)$ with potential $V(\phi )$ obeying the Klein Gordon Equation (11), which in a FLRW universe has the form \begin{equation} \ddot{\phi}+3H\dot{\phi} +dV/d\phi =0. \tag{39} \end{equation} This field will have an energy density $\rho_\phi$ and pressure $p_\phi$ given by (10), so the inertial and gravitational energy densities are \begin{equation}\tag{40} \rho_\phi +p_\phi=(\dot{\phi})^{2}>0,\;\rho_\phi +3p_\phi=2\left[(\dot{\phi})^{2}-V(\phi )\right].
\end{equation} Hence the active gravitational mass $\rho_\phi + 3p_\phi$ can be negative in the `slow roll' case, that is, when $(\dot{\phi})^{2}<V(\phi).$ Scalar fields can thus cause an exponential expansion $a(t) = \exp H_0 t$ when $\phi $ stays approximately at a constant value $\phi _{0}$ because the friction term $3H\dot{\phi}$ in (39) dominates the $\ddot{\phi}$ term, resulting in $\rho_\phi +3p_\phi \approx -2V(\phi _{0})=const$ and an approximately de Sitter spacetime. This is the physical basis of the inflationary universe idea of an extremely rapid exponential expansion at very early times that smooths out and flattens the universe, also causing any matter or radiation content to die away towards zero. The inflaton field itself dies away at the end of inflation when slow rolling comes to an end and the inflaton gets converted to matter and radiation by a process of reheating. This epoch has the important feature of providing the basis for a causal explanation of the origin of seeds for structure formation in the early universe. The nature of the inflaton field is not known (it is not tied into any tested particle physics particles or fields).

Epoch 3: A Hot Big Bang radiation dominated early phase of the universe, with ionised matter and radiation in equilibrium with each other because of tight coupling between electrons and radiation, and a temperature-time relation given by (38) in this Tolman phase ($a(t) = a_0 t^{1/2}$). Baryosynthesis, neutrino decoupling, and nucleosynthesis took place in this era, with neutron capture processes at a temperature of about $T=10^{8}$K leading to formation of Deuterium, Helium, and a trace of Lithium from protons and neutrons. Predictions of light element abundances resulting from these nuclear interactions agree well with primordial element abundances determined by astronomical observations, up to some unresolved worries about Lithium.
It was this theory of nucleosynthesis that established cosmology as a solid branch of physics (Lahav and Liddle, 2015). The Hot Big Bang phase ended when the temperature dropped below the ionisation temperature of the matter, resulting in matter-radiation decoupling and cosmic blackbody radiation being emitted at the Last Scattering Surface at about $T_{emit}=3000K$. This radiation is observed today as cosmic microwave blackbody radiation (CMB) with a temperature of $2.73K$. A major transition is the matter-radiation density equality, which separates the universe into two eras: a matter dominated later era during which structure can grow, and a radiation dominated earlier era during which the radiation pressure prevents this for many wavelengths, instead giving rise to acoustic waves (`baryon acoustic oscillations'). Key aspects are the evolution of the speed of sound with time, and diffusion effects leading to damping of fine-scale structure.

Epoch 4: The Matter dominated phase of the visible universe. After decoupling of matter and radiation, the radiation density drops faster than the matter density (compare (25) with (26)) and the solution becomes approximately Einstein-de Sitter ($a(t) = a_0 t^{2/3}$). Small density perturbations on the Last Scattering Surface grow by gravitational instability to form structures in the universe in a bottom-up way (small structures form first and then coagulate to form larger structures), with stars forming and nuclear reactions starting in their interiors, leading to stellar nucleosynthesis of elements up to iron and subsequent supernovae explosions whereby these get spread through space to allow formation of second generation stars that can be surrounded by rocky planets. The gravitational dynamics of these processes is dominated by pressure-free dark matter, which has a density much larger than that of the visible baryons. Its nature is unknown.
Epoch 5: The accelerating late phase of the visible universe. In recent times the expansion of the universe has been speeding up rather than slowing down. From eqn. (29), this means that either the present expansion is dominated by a cosmological constant $\Lambda > 0$, that is $\Omega_\Lambda> \{\Omega_m, \Omega_k\}$, or some dynamic field $\phi$ ("Dark energy") is present such that $\rho_\phi +3p_\phi < 0$ at recent times (similar to what happens in inflation). The data is compatible with a cosmological constant; if there is instead a dynamical field $\phi$, there is no way to test its nature or dynamics in the solar system. [8] No large scale structure can form in this era, although gravitational collapse (for example to form black holes) can continue in local structures that are already gravitationally bound at the start of this era.

Epoch 6: The Future. If dark energy is a cosmological constant, which is compatible with the data, then the universe will expand forever, getting ever cooler and more diffuse, with all stars dying out, matter decaying, and a final state of everlasting darkness and cold (this is what used to be called a "Heat Death" in the 1930s to 1970s). If the dark energy is rather a dynamical field that can decay away, then the future is uncertain. The universe might still expand forever, which will be the case if $k=0$ or $k=-1$, or it might recollapse, which is possible if $k=+1$. [9] In that case there will probably be a future inhomogeneous state with isolated singularities, but re-expansion in some places is a possibility through re-initiation of the mechanism that led to inflation in the previous epoch, leading to a chaotic cyclic model (if the universe were exactly homogeneous, it could bounce and re-expand everywhere, but that is unrealistic). Various other forms of cyclic universe have been proposed (pre-Big Bang models, ekpyrotic models, conformal cyclic cosmology, and so on), but none are based in well-established physics.
The physics is least well understood at very early times (inflation and before), because it cannot be tested in a laboratory or particle accelerators, and so is speculative. It is best understood during the Hot Big Bang era, governed by particle physics, nuclear physics, and statistical physics, up to the time of decoupling. It is less well known at present times and in the future, because we do not know the nature of either dark matter or dark energy. It is now broadly agreed that there was indeed a period of inflation in the very early universe, but the details are not clear: there are over 100 different variants, including single-field inflation, multiple-field inflation, and models where matter is not described by a scalar field as, for example, vector inflation. As the potential is not tied in to any specific physical field, one can run the field equations backwards to determine the effective inflaton potential from the desired dynamic behaviour. This does not obviously tie the supposed dynamics into any known physical field, unless the inflaton is a non-minimally coupled Higgs field (as was often supposed when the theory was initially introduced). If that were the case, there would be a marvelous link between particle physics and cosmology, whereby collider physics would imply real constraints on cosmological processes in the very early universe. Slow rolling takes place when \begin{equation}\tag{41} \dot{\phi}^2 \ll V,\,\, \ddot{\phi}\ll3 H \dot{\phi} \end{equation} Then the equations of motion reduce to \begin{equation}\tag{42} H^2 \simeq \frac{8\pi G}{3}V, \,\,\dot{H} \simeq - 4 \pi G \dot{\phi}^2, \,\, 3 H\dot{\phi} \simeq - dV/d\phi \end{equation} giving the almost exponential expansion characteristic of inflation.
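To make this concrete, the exact equations (39) plus the Friedmann equation can be integrated for a simple hypothetical quadratic potential $V=\frac{1}{2}m^2\phi^2$, counting the e-folds of expansion $N=\ln(a_f/a_i)$; a sketch in units $8\pi G=1$, with illustrative values for the mass and the initial field:

```python
import math

# Sketch: Klein-Gordon equation (39) together with the Friedmann equation
# H^2 = (phi_dot^2/2 + V)/3 (units 8 pi G = 1), for V = m^2 phi^2 / 2.
m = 1e-6                            # illustrative inflaton mass in these units
phi, phi_dot, N = 16.0, 0.0, 0.0    # illustrative initial field value
dt = 0.01 / m

while phi > 1.0:                    # integrate until the field rolls down to phi ~ 1
    V = 0.5 * m*m * phi*phi
    H = math.sqrt((0.5 * phi_dot*phi_dot + V) / 3.0)
    phi_ddot = -3.0 * H * phi_dot - m*m * phi   # eq. (39)
    phi += phi_dot * dt
    phi_dot += phi_ddot * dt
    N += H * dt                     # dN = H dt counts e-folds

print(N)  # the slow-roll estimate (phi_i^2 - phi_end^2)/4 suggests N near 64
```

The friction term keeps the field nearly constant for thousands of expansion times, producing tens of e-folds of nearly exponential expansion; a larger initial field value gives correspondingly more e-folds.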
Key features of this phase are: slow roll parameters, such as \begin{equation}\tag{43} \epsilon = \frac{m^2_{PL}} {16\pi} \left(\frac{V'}{V}\right)^2,\,\,\,\eta = \frac{m^2_{PL}} {8\pi} \left(\frac{V''}{V}\right) \end{equation} that characterise the dynamics of inflation; the number $N$ of inflationary e-folds occurring between the start $a_i:=a(t_i)$ and end $a_f:=a(t_f)$ of inflation, \begin{equation}\tag{44} N := \ln \left(\frac{a_f}{a_i}\right), \end{equation} which must be large enough to reduce the initial curvature to very close to flat at the end of inflation and also allow the entire Last Scattering Surface to be causally connected, demanding $N \gtrsim 70$ (Uzan J-P, 2016); and the reheating process at the end of inflation, whereby the inflaton field gets converted to radiation. The simplest example is chaotic inflation with a massive scalar field $\phi$, so the potential is $V(\phi) = \frac{1}{2}m^2\phi^2$ and the Klein Gordon equation reduces to that of a damped Simple Harmonic Oscillator. If $\phi$ is initially large, the Friedmann equation implies that $H$ is also large and the friction term dominates the Klein Gordon equation, so the field is in the slow rolling regime. When quantum fluctuations are taken into account, their effects on the dynamics lead to a model in which local inflationary domains occur in a fractal way (`chaotic inflation'). However the latest Planck data does not support such a model. Inflation was initially proposed to solve a series of problems perceived with standard cosmology: the `flatness problem', the homogeneity problem, and the cosmic texture relics problem. However the first two are philosophic in nature, as they are based in naturalness assumptions that may not be true. Furthermore Penrose has strongly argued that inflation does not solve the homogeneity problem, rather it assumes it has been solved before the analysis starts.
That is because almost all inflationary calculations (such as those sketched above) assume a Robertson-Walker geometry at the start. In that case there is no need to solve the homogeneity issue because homogeneity was presumed from the outset. If one allows generic geometries, inflation is extraordinarily unlikely to take place and result in a smooth universe, because the most likely initial state (that with the highest entropy) is one dominated by black holes. The cosmic textures problem does not arise if the relevant fields do not exist. The real importance of inflation is that it provides a causal model for how seeds for structure formation in the expanding universe came into being.

The growth of structure

The real universe is only approximately a Robertson-Walker spacetime. Structure formation in an expanding universe can be studied by using linearly perturbed Friedmann-Lemaître models at early times, plus numerical simulations at later times when the inhomogeneities have gone non-linear. The physical interactions are different in each epoch mentioned above, and impact different wavelengths in crucial ways, leading to a predicted perturbation power spectrum $P(k)$ as a function of comoving wavenumber $k$ as discussed in (Dodelson, 2003, Mukhanov, 2005, Durrer, 2008, Peter and Uzan J-P, 2013). As an example of how this works, the general non-linear dynamic equations for the normalised spatial density gradient ${\cal D}_a$ follow from taking the spatial gradients of the conservation equation (23) and Raychaudhuri equation (27), and substituting the result of the second into the first.
On linearising, when $w = p/\rho = const$, $\Lambda = 0$, and spatial curvature $\Omega_k\simeq 0$, the linearised growth equation for density perturbation modes of wave number $k$ is (Ellis, Maartens and MacCallum, 2012) \begin{equation}\tag{45} {\ddot{\cal D}}_a + (\frac{2}{3}-w)(\frac{3\dot{a}}{a}) \,{\dot{\cal D}}_a -\left((1-w)(1+3w)\, 4\pi G\rho\right) {\cal D}_a - w \frac{k^2}{a^2}{\cal D}_a=0.\end{equation} This directly gives the general relativistic version of the Jeans length: when there is no expansion ($\dot{a}=0$), pressure gradients can halt collapse when \begin{equation}\tag{46} \left|\frac{(1-w)(1+3w)}{w}\, 4\pi G \rho\right| < \frac{k^2}{a^2}\,, \end{equation} otherwise the inhomogeneity increases exponentially. However this changes dramatically when $\dot{a}\neq 0$. When $w=0\Leftrightarrow p=0$, (45) reduces to \begin{equation}\tag{47} {\ddot{\cal D}}_a + (\frac{2\dot{a}}{a})\,{\dot{\cal D}}_a - 4\pi G\rho\, {\cal D}_a =0\,, \end{equation} which directly gives the basic results for pressure free matter: with an Einstein-de Sitter background (33), \begin{equation}\tag{48} {\cal D}_a = c_{1}t^{2/3} + c_{2}t^{-1}, \end{equation} the first term being a power law growing mode and the second a power law decaying mode. The first term does not grow fast enough to generate the structures we see today from statistical fluctuations. In a radiation dominated phase, $w = 1/3$ and \begin{equation}\tag{49} {\ddot{\cal D}}_a + (\frac{\dot{a}}{a}) \,{\dot{\cal D}}_a -\frac{2}{3} \left(8 \pi G \rho - \frac{1}{2} \frac{k^2}{a^2}\right){\cal D}_a=0\,, \end{equation} which gives damped oscillations if $16\pi G\rho \ll \frac{k^2}{a^2}$ and growing and decaying modes similar to (48) if $16\pi G\rho \gg \frac{k^2}{a^2}$.
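On an Einstein-de Sitter background (33) one has $\dot a/a = 2/(3t)$ and $4\pi G\rho = 2/(3t^2)$, so the pressure-free growth equation takes the explicit form $\ddot{\cal D} + \frac{4}{3t}\dot{\cal D} - \frac{2}{3t^2}{\cal D} = 0$; a numerical sketch recovering the $t^{2/3}$ growing mode of (48):

```python
# Sketch: integrate D'' + (4/(3t)) D' - (2/(3 t^2)) D = 0, the pressure-free
# growth equation on an Einstein-de Sitter background, starting on the
# growing mode D = t^(2/3), D' = (2/3) t^(-1/3) at t = 1.
def grow(D, Ddot, t0, t1, n=200000):
    dt = (t1 - t0) / n
    t = t0
    for _ in range(n):
        Dddot = -(4.0/(3.0*t))*Ddot + (2.0/(3.0*t*t))*D
        D += Ddot * dt
        Ddot += Dddot * dt
        t += dt
    return D

D_final = grow(1.0, 2.0/3.0, 1.0, 8.0)
print(D_final)  # close to 8**(2/3) = 4.0, i.e. D has grown as t^(2/3)
```

Substituting ${\cal D}\propto t^p$ into the same equation gives $p(p-1)+\frac{4}{3}p-\frac{2}{3}=0$, with roots $p=2/3$ and $p=-1$, which are exactly the two modes in (48).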
Generating fluctuations

During inflation, the inflaton field $\varphi$ has quantum fluctuations (as does any matter field) and such fluctuations will generate metric perturbations, as they are coupled by the Einstein field equations. This creates an almost scale-free power law spectrum of scalar and tensor perturbations at the end of the inflationary era (Mukhanov, 2005). Simple inflation models have three independent parameters describing the primordial power spectrum: the amplitude of scalar fluctuations $A_S$, determined by a scalar potential which is unknown, the scalar to tensor ratio $r$, relating gravitational waves produced to the scalar perturbations, and the scalar spectral index $n_S$, defining to what degree these fluctuations are not scale invariant (as they would be if the expansion were exactly exponential) (Lahav and Liddle, 2015). Here the power spectrum $P_S(k)$ is defined as the variance per logarithmic interval: $(\delta \rho/\rho)^2 =\int P_S(k) d \log k$, and the primordial power spectrum can be written in the form \begin{equation}\tag{50} P_S(k) = A_S \left(\frac{k}{k_*}\right)^{n_s-1} \end{equation} where $n_s$ is the spectral index. Then a scale-invariant spectrum is given by $n_s=1$; usual inflationary models give $n_s\simeq1$ with $n_s<1$. Similarly tensor modes are generated, with a power spectrum \begin{equation}\tag{51} P_T(k) = A_T \left(\frac{k}{k_*}\right)^{n_t} \end{equation} where $n_t$ is the tensor spectral index. The scalar and tensor spectral indices are given in terms of the inflationary slow roll parameters by \begin{equation}\tag{52} n_s \simeq 1 - 6 \epsilon + 2 \eta, \,\,\,n_t \simeq -2 \epsilon \end{equation} and a consistency condition must hold: \begin{equation}\tag{53} r \simeq 16 \epsilon \simeq -8 n_t .
\end{equation} These fluctuations are then processed by pressure gradients during the hot big bang era, generating baryon acoustic oscillations, and giving a spectrum of primordial perturbations on the Last Scattering Surface with acoustic peaks, that then forms the basis for structure formation through gravitational attraction after decoupling. A transfer function $T(k)$ characterizes how the primordial perturbations of comoving wavenumber $k$ are modified by these processes. As stated above, the real importance of inflation is that it gives a mechanism for the generation of an almost scale-free spectrum of primordial perturbations that lead to structure formation in the universe after decoupling. It is the first theory to give such a causal mechanism for the origin of structure. However one should note that the theory does not fix the amplitude $A_S$ of these fluctuations, because we do not have a unique inflaton field identified which has a potential with a physically determined scale. Thus it has to be determined by observations. Furthermore we do not have a good theory as to how the quantum fluctuations generated during the inflationary epoch become classical by the end of inflation (decoherence does not do the job, as some have suggested, because while it gets rid of entanglement it does not get rid of superpositions). This is a major lacuna in the theory.

Dark energy and Dark matter

Two key realisations arising from detailed studies of structure formation, starting with the linearised theory and then joining the outcomes to numerical simulations, were, firstly, that one needs Cold Dark Matter as well as baryons for this to work in accordance with observations (hot dark matter would wipe out all but the largest initial fluctuations by free streaming); and secondly, that the introduction of Dark Energy, possibly a cosmological constant $\Lambda$, was needed to allow a cold dark matter scenario to match observations in an almost flat universe.
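For a definite illustrative potential, the slow-roll machinery yields concrete numbers. The sketch below assumes the quadratic potential $V=\frac{1}{2}m^2\phi^2$, the standard slow-roll parameters $\epsilon=(m_{PL}^2/16\pi)(V'/V)^2$ and $\eta=(m_{PL}^2/8\pi)(V''/V)$, and the relations $n_s\simeq 1-6\epsilon+2\eta$, $r\simeq 16\epsilon$, evaluated $N$ e-folds before the end of inflation:

```python
import math

# Sketch: slow-roll predictions for V = m^2 phi^2 / 2, with phi in units
# of m_PL.  For this potential V'/V = 2/phi and V''/V = 2/phi^2, so
# epsilon = eta = 1/(4 pi phi^2), and N = 2 pi (phi_i^2 - phi_end^2).
def predictions(N):
    phi_end2 = 1.0 / (4.0*math.pi)        # epsilon = 1 marks the end of inflation
    phi2 = N / (2.0*math.pi) + phi_end2   # field value N e-folds before the end
    eps = 1.0 / (4.0*math.pi*phi2)
    eta = eps
    n_s = 1.0 - 6.0*eps + 2.0*eta
    r = 16.0*eps
    return n_s, r

n_s, r = predictions(60)
print(n_s, r)  # n_s near 0.967, r near 0.13
```

The resulting $n_s$ is close to the observed value, but such a large tensor ratio $r$ is one reason this simplest quadratic model is now disfavoured by the CMB data.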
Start of the universe

A key issue as regards the dynamics of the universe is whether it had a start. This has different answers if we consider General Relativity with classical matter, General Relativity with quantum fields, and quantum gravity.

Classical General Relativity and Friedmann-Lemaître models

Because of the Raychaudhuri equation (29), $a(t) \rightarrow 0$ and a singularity occurs at the start of the current expansion phase of the universe in the standard Friedmann-Lemaître models of cosmology, provided the energy condition \begin{equation}\tag{54} \rho_{\rm grav}:=(\rho +3p)>0 \end{equation} is satisfied; and this will be true for all ordinary matter. Note that the present cosmological constant will not be able to prevent this, as it is constant, but the matter would have had a much higher energy density in the early universe. As $a(t) \rightarrow 0$, by the energy conservation equation (23), the energy density of matter will diverge provided $\rho + p > 0$, which will be true for all ordinary matter; [10] and then the Ricci scalar will also diverge, so this is a spacetime singularity. It represents an edge to spacetime, where space, time, and matter come into existence (all timelike geodesics are incomplete towards the past). Physics as we know it comes to an end then because spacetime did not exist.

Classical General Relativity: singularity theorems

A key issue is whether this prediction is a result of the high symmetry of these spacetimes, and so might disappear in more realistic models with inhomogeneity and anisotropy. Many attempts to prove theorems in this regard by direct analysis of the field equations and examination of exact solutions failed.
The situation was totally transformed by a highly innovative paper by Roger Penrose in 1965 that used global methods and causal analysis to prove that singularities will occur in gravitational collapse situations where closed trapped surfaces occur, a causality condition is satisfied, and suitable energy conditions are satisfied by the matter and fields present. Stephen Hawking then showed that time-reversed versions of this theorem would be valid in an expanding universe, because time-reversed closed trapped surfaces occur in realistic cosmological models; indeed their existence can be shown to be a consequence of the existence of the cosmic microwave blackbody radiation (Hawking and Ellis, 1973). Thus the prediction of a start to the universe is not dependent on the high symmetries of the Robertson-Walker geometries. According to classical general relativity, singularities can be expected to occur in realistic classical cosmological models. John Wheeler emphasized that the existence of spacetime singularities - an edge to spacetime, where not just space, time, and matter cease to exist, but even the laws of physics themselves no longer apply - is a major crisis for physics: "The existence of spacetime singularities represents an end to the principle of sufficient causation and so to the predictability gained by science. How could physics lead to a violation of itself -- to no physics?" It is unclear how to deal with this in physics or in philosophical terms.

Quantum Fields

As is shown by inflationary models, it is no longer necessarily true that the energy condition (54) holds once quantum fields are taken into account (see (40)). This means that despite some contrary claims, it is after all possible that there are singularity-free models when this is taken into account, because such fields are expected to be important in the early universe.
The canonical singularity-free inflationary model is the de Sitter universe in the $k=+1$ frame (35); it is also possible to have models that are asymptotic to the Einstein static universe in the past (`emergent universes').

Quantum gravity

This is of course still a prediction of the classical theory of gravitation, even if the matter source is a quantum field. The real issue is whether there will be a singularity at the start of the universe when we take full quantum gravity into account. It is still not known if quantum gravity solves this issue or not, primarily because we do not know what the true nature of quantum gravity is. There are hints from loop quantum cosmology that quantum gravity might remove the initial singularity, but the issue is not yet resolved. Thus in the end the outcome of the classical singularity theorems is a prediction that there most likely was a quantum gravity era in the very early universe, but not a statement as to whether a physical singularity existed or not.

Physics Horizons

The problem we are running into here is what we will call the physics horizon. For temperatures larger than $10^{11}$~K, the microphysics is less understood and more speculative than during nucleosynthesis. Many phenomena such as baryogenesis and reheating at the end of inflation still need to be understood in better detail, but we can't test the relevant physics in the laboratory or in colliders at present, or perhaps ever. And no matter how we improve colliders, we have no hope of reaching the Planck energy to explore experimentally the nature of quantum gravity. Thus there are energies beyond which we will never be able to test physics, for both experimental and economic reasons. The physics on the other side of this horizon will always be speculative. We will be able to make well-informed guesses as to what its nature is likely to be, but we will never be able to directly test such hypotheses.
We may be able to test some of their implications for cosmology, such as whether they imply an inflationary era or not; but it is extremely unlikely we will ever be able to prove that what we hypothesize about the relevant physics at such very early eras is the only possibility. This applies of course specifically to any theories we may have about creation of the universe. We can hopefully extend our knowledge of what happens once the universe exists into theories of what might happen before it exists, but even that sentence does not make sense, because none of `before' or `exist' or `happens' are applicable then, particularly because `then' does not exist either.

Observations and testing

The core feature of any scientific enterprise is the way we can experimentally or observationally test our proposed theories. Cosmology now has an abundance of extraordinarily sensitive observations with which to test its models; nevertheless, there are strict limits to what such tests can achieve.

The basic observational dilemma

Because of the vast scale of the universe, we can essentially only see it from one spacetime position (`here and now'), see Figure 2.

Figure 2: Astrophysical data are mostly located on our past lightcone, shown here in conformal coordinates, and they fade away with distance (i.e. with larger redshift $z$). Hence we only have observational access to a portion of a 3-dimensional null hypersurface. An astronomical object can be observed only when its worldline (dashed lines) intersects our past lightcone, with all such images being projected onto a 2-dimensional space (`the sky'), leaving us with the task of determining the distances of visible objects from this restricted data.

Some geological type data about the neighborhood of our Solar system is also available in terms of matter and objects we see nearby. Consequently, our access to the universe is a 2-dimensional image of a 4-dimensional spacetime.
Our first key problem is reliably determining distances of objects we see. All observations of distant objects are associated with a lookback time associated with the speed of light, which makes it difficult to distinguish time variation from spatial variation, and means we see them when their properties may have been very different from those of similar objects today (`source evolution'). Visual horizons occur, limiting the part of the universe available for observational examination by means of any kind of radiation, now and for the foreseeable future. However we do have available at our present spacetime position relics of events in the very early universe, such as elements, stars, and galaxies, which enable us to place limits on physical processes near our past world line at very early times. We will call this `geological evidence'.

Observational relations: background model

Observational relations can be worked out for the background FLRW models, based on the fact that (Kristian and Sachs, 1966) photons move on null geodesics $x^{a}(v)$ with tangent vector \begin{equation}\tag{55} k^{a}(v)=dx^{a}/dv: \quad k^{a}k_{a}=0, \quad k^{a}_{\;\;;b}k^{b}=0. \end{equation} This shows that the radial coordinate value of radial null geodesics through the origin (which are generic null geodesics, because of the spacetime symmetry) is given by $\{ ds^{2}=0, \; d\theta = d\phi =0\}$, which implies \begin{eqnarray}\tag{56} u(t_{0},t_{1}):=\int _{0}^{r_{\rm emit}}dr=\int _{t_{\rm emit}}^{t_{\rm obs}}\frac{dt}{a(t)}=\int\nolimits _{a_{\rm emit}}^{a_{\rm obs}}\frac{1}{H(t)}\frac{da}{a^{2}(t)}, \end{eqnarray} where $t_0$ corresponds to the emission time $t_{\rm emit}$ and $t_1$ to the reception time $t_{\rm obs}$. Substitution from the Friedmann equation (30) shows how the cosmological dynamics affects $u(t_{0},t_{1})$. This equation also shows that observational relations will be simplified if we use conformal time $\tau = \int dt/a(t)$ instead of $t$, for then $u=\tau$.
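As a concrete check of (56), here is a minimal numerical sketch (an illustration assumed here, not part of the original text) for an Einstein-de Sitter model, where $H(a)=H_0\,a^{-3/2}$ and the integral has the closed form $u=(2/H_0)(1-\sqrt{a_{\rm emit}})$ in units $c=1$ with $a_{\rm obs}=1$:

```python
import numpy as np

# Comoving distance u from eq. (56), u = ∫ da / (a^2 H(a)), evaluated
# numerically for an Einstein-de Sitter model with H(a) = H0 * a**(-1.5).
# Units: c = 1, H0 = 1, a_obs = 1 today. (Illustrative sketch only.)
H0 = 1.0
a_emit, a_obs = 0.25, 1.0

a = np.linspace(a_emit, a_obs, 100_001)
f = 1.0 / (a**2 * H0 * a**-1.5)          # integrand of eq. (56)
u_numeric = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(a))   # trapezoid rule

# Closed form for this model: u = (2/H0) * (1 - sqrt(a_emit))
u_analytic = (2.0 / H0) * (1.0 - np.sqrt(a_emit))

print(u_numeric, u_analytic)  # both ≈ 1.0
```

The same quadrature works for any expansion history once $H(a)$ is specified from the Friedmann equation (30).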
Key observable variables resulting are observed redshifts $z$, given by \begin{equation}\tag{57} 1+z=\frac{\left( k^{a}u_{a}\right) _{\rm emit}}{\left( k^{b}u_{b}\right) _{\rm obs}}= \frac{a(t_{\rm obs})}{a(t_{\rm emit})}, \end{equation} and the angular size distance $r_{0}$, giving the observed angular size $\alpha$ of objects of physical length $L$ by the relation $L = r_{0}\alpha$. Up to a redshift factor this distance is the same as the luminosity distance $D_{L}$: \begin{equation}\tag{58} D_{L}=r_{0}(1+z). \end{equation} This is the reciprocity theorem, which is true for any geometry. A major theoretical result, consequent on the reciprocity theorem, is that radiation emitted with a blackbody spectrum at a temperature $T_{\rm emit}$ will be observed as blackbody radiation, but with temperature \begin{equation}\tag{59} T_{\rm obs}=T_{\rm emit}/(1+z). \end{equation} This is again true for any geometry; hence the blackbody radiation released from the Last Scattering Surface in the early universe at a temperature of about $3000\,$K is observed today as Cosmic Microwave Blackbody radiation with a temperature of about $2.73\,$K. Number counts $N(z_1,z_2)$ can be made for sources of any class visible in a given solid angle $d\Omega$ and redshift range $(z_1,z_2)$. These will be determined by the area distance $r_0$ (the corresponding surface area being $4 \pi r_0^2$) and the relation between the affine parameter distance $v$ down the past light cone and redshift $z$. One can work out observational relations for galaxy number counts versus magnitude $(n,m)$ and the magnitude-redshift relation $(m,z)$, which determines the present deceleration parameter $q_{0}$ when applied to standard candles. Major observational programmes have examined these relations and determined $H_{0}$ and $q_{0}$. These depend critically on finding suitable standard candles. Galaxies, radio sources, and QSOs do not do the job because of their intrinsic variability.
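The scaling (59) directly links conditions on the Last Scattering Surface to the radiation we observe today; a one-line check (the decoupling redshift $z\simeq 1090$ is a standard approximate value assumed here for illustration):

```python
# Blackbody temperature scaling, eq. (59): T_obs = T_emit / (1 + z).
z_dec = 1090.0          # approximate redshift of the Last Scattering Surface
T_obs = 2.725           # K, CMB temperature measured today
T_emit = T_obs * (1.0 + z_dec)
print(round(T_emit))    # ≈ 2973 K, i.e. about 3000 K at last scattering
```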
However type Ia supernovae (SNIa) have been found to be reliable standard candles, because the rate of decay after the supernova maximum is related to its intrinsic luminosity. These observations show that, assuming the geometry is indeed that of a Robertson-Walker spacetime, the universe is accelerating at recent times ($q_{0}<0$). This means some kind of dark energy is present such that at recent times, $\rho +3p< 0$ (Dodelson, 2003, Peter and Uzan, 2013, Ellis, Maartens and MacCallum, 2012). The simplest interpretation is that this is due to a cosmological constant $\Lambda >0$ (equivalent to a perfect fluid with $p = -\rho$) that dominates the recent dynamics of the universe. Observations of gravitational waves from black hole binary mergers have the potential to provide high quality standard candles in the future.

Observational relations: perturbed model

A power series derivation of observational relations in generic cosmological models is given by Kristian and Sachs (Kristian and Sachs, 1966). A series of further phenomena arise in these cases, resulting both directly from the inhomogeneity of the model, and also from the nature of the structures that arise within, and reflect, the inhomogeneous cosmological context that leads to their existence. Observational anisotropies in redshifts and number counts allow determination of angular power spectra and angular correlation functions for both matter and background radiation. The Cosmic Blackbody Radiation anisotropies characterised by the angular power spectrum allow determination of the radiation power spectrum $P(k)$ with its Sachs-Wolfe plateau and acoustic peaks. They are determined by equations for the hierarchy of angular multipoles, which can be written in terms of angular spherical harmonics (Dodelson, 2003, Durrer, 2008), or an equivalent covariant formalism (Ellis, Maartens and MacCallum, 2012).
Cosmic Blackbody Radiation polarisation measurements allow one to determine the primordial perturbation tensor to scalar ratio, allowing an indirect observation of the effects of gravitational waves. Deep redshift surveys of matter sources allow identification of the matter power spectrum $P(k)$, whose Fourier transform is the two-point correlation function, and its baryon acoustic oscillations. Redshift space distortions result from gravitational attraction caused by inhomogeneities changing the redshift-distance relation along the line of sight, and put constraints on the growth rate of structure. Baryon Acoustic Oscillation features can be measured in both the line-of-sight and transverse directions from number count surveys with redshifts. Inhomogeneities lead to both strong and weak gravitational lensing, changing the apparent position of sources in the sky. These are calculated by using the geodesic equations (55) in the perturbed metric (18). Lensing can be used to detect dark matter and can act as a magnifier for galaxies at very large distances if a strongly lensing galaxy or cluster intervenes. Lensing also affects the Cosmic Blackbody Radiation angular power spectrum, leading to a smoothing of the acoustic peaks and troughs, the conversion of E-mode polarization to B-modes, and the generation of non-Gaussianity. The Sunyaev-Zeldovich effect (scattering of the Cosmic Blackbody Radiation by ionised hot gas in clusters of galaxies) after decoupling acts as a sensitive test of anisotropies at the time of scattering, and constrains perturbation amplitude measures.

The concordance model

The standard model of cosmology is based on nullcone data (electromagnetic radiation) of all wavelengths coming to us up the past light cone, with an associated lookback time, together with geological data deriving from massive particles that originated near our past light cone a very long time ago.

The primary data

The primary data comes from many sources, as follows.
Supernovae

Supernovae provide good standard candles, determining the deceleration parameter and hence showing the universe is accelerating at recent times (Figure 3), and so dark energy must be present, in agreement with the conclusion from structure formation studies.

Figure 3: A Hubble diagram obtained by the joint lightcurve analysis of 740 SNeIa from four different samples. The top panel depicts the Hubble diagram itself with the best fit (black line); the bottom panel shows the residuals. 3(b) (right) Constraints on the cosmological parameters obtained from this Hubble diagram, together with other observations: Cosmic Background Radiation (green), and Baryon Acoustic Oscillations added in (red). The dot-dashed contours correspond to the constraints from earlier supernova data.

Discrete sources and the Lyman-$\alpha$ forest

Number counts of very large numbers of sources with associated redshifts determine the matter power spectrum over a wide range of scales. This can be compared with measurements of 2-point angular correlation functions, as well as Lyman-$\alpha$ forest measurements of the intergalactic medium. These both reveal the Baryon Acoustic Oscillation peaks that were imprinted on the Last Scattering Surface (Figure 4). Their angular size on the LSS constrains the background cosmological model. [11]

Figure 4: 4(a) (left) Matter power spectra measured from the luminous red galaxy sample and the main galaxy sample of the Sloan Digital Sky Survey. Red solid lines indicate the predictions of the linear perturbation theory, while red dashed lines include nonlinear corrections. 4(b) (right) Two-point correlation function for objects aligned with the line of sight, measured with quasars ($2.1\leq z\leq3.5$) and the intergalactic medium traced by their Lyman-$\alpha$ forest, as a function of comoving distance $r$. The effective redshift here is $z=2.34$.
Background radiation

Background radiation at all wavelengths provides important constraints on cosmological models; however, the most important is the spectrum and angular power spectrum of the relic Cosmic Blackbody Radiation, which at present is microwave radiation with a temperature of $2.725\,$K. The spectrum of this radiation is a perfect black body spectrum, confirming the origin of this radiation from a plasma with matter-radiation equilibrium in the early universe. The Cosmic Blackbody Radiation angular power spectrum has a series of acoustic peaks corresponding to the matter acoustic peaks, extremely well modeled by structure formation theories based on an initial almost scale-free primordial spectrum (Durrer, 2008, Peter and Uzan, 2013), see Figure 5(a).

Figure 5: 5(a) Angular power spectrum of the Cosmic Microwave Background temperature (left), and 5(b) $E$-mode polarisation anisotropies of the radiation, as measured by the Planck Satellite (right). From Ref. Ade et al., 2015.

This radiation will be polarised because of interactions with matter, and the Cosmic Blackbody Radiation E-mode polarisation spectrum also has acoustic peaks (Figure 5(b)). These spectra, and particularly the positions and relative heights of the peaks, importantly constrain inflationary universe models (Ade et al., 2015). There will also be B-mode polarisation if gravitational waves are present; the presence or absence of such modes is a key test of inflationary models. The present best values provide evidence against a quadratic inflaton potential, as is required for the simplest version of chaotic inflation.
Relics

The `geological' relics from the early universe, giving information on conditions near our world line in the very distant past, [12] are of various types: Baryons, giving evidence of an epoch of baryogenesis, whose details are not understood; Chiral asymmetry of neutrinos, again for reasons that are not understood; Helium and Lithium abundances, well understood in terms of the process of nucleosynthesis in the early universe, but with some open questions about the details of the abundance of Lithium; Galaxies and clusters of galaxies, giving evidence of the nature of the galaxy formation process. This is broadly understood in terms of a bottom-up process of spontaneous structure formation facilitated by Cold Dark Matter, but detailed issues remain to be sorted out, for example why the density in the cores of galaxies does not have a cusp-like nature and why there are so few dark matter halos.

The concordance parameters

The concordance model is characterised by a set of parameters that are determined by taking all these observations into account. "A simple inflationary model with a slightly tilted, purely adiabatic, scalar fluctuation spectrum fits the Planck data and most other precision astrophysical data" (Ade et al., 2015). Not all groups use precisely the same set, but a very useful comprehensive survey is given by the Particle Data Group (Lahav and Liddle, 2015) with measured values updated annually. The main ones given by the Planck group (Ade et al., 2015), using data from all sources, are shown in Tables 2 and 3.

Table 2: Background model parameters.
Hubble parameter $H_0$: $H_0 = (67.8 \pm 0.9)$ km/s/Mpc
Total matter density $\Omega_m$: $\Omega_m h^2 = 0.134 \pm 0.006$
Baryon density $\Omega_b$: $\Omega_b h^2 = 0.0221 \pm 0.00034$
Radiation density $\Omega_r$: $\Omega_r h^2 = 2.47 \times 10^{-5}$
Neutrino density $\Omega_\nu$: not measured
Cosmological constant $\Omega_\Lambda$: $\Omega_\Lambda = 0.707 \pm 0.010$
Spatial curvature $\Omega_k$: $\Omega_k = 0.0008 \pm 0.0040$
Ionization optical depth $\tau$: $\tau = 0.066 \pm 0.016$
Equation of state of dark energy $w$: $w = -1.019 \pm 0.080$

We know the neutrino background must be present, even though we cannot measure it. The reionization optical depth parameter $\tau$ provides an important constraint on models of early galaxy evolution and star formation. It is determined by the EE spectrum in the multipole range $\ell = 2$-$6$. This is an example of how good limits on background model parameters follow from the observations related to the perturbed model. Other parameters, such as the age of the universe ($t_0 = 13.799 \pm 0.021$ Gyr), follow from those given here.

Table 3: Perturbation parameters.
Density perturbation amplitude $\sigma_8$: $\sigma_8 = 0.8159 \pm 0.0086$
Density perturbation spectral index $n_s$: $n_s = 0.9667 \pm 0.0040$
Tensor to scalar ratio $r$: $r \leq 0.113$
Angular size of the sound horizon at recombination $\theta_*$: $\theta_* = (1.04093 \pm 0.00030) \times 10^{-2}$

Here the perturbation amplitude $A_S$ is represented in terms of a quantity $\sigma_8$, which is the root mean square matter fluctuation today in linear theory on a scale of $8 h^{-1}$ Mpc. These values directly place limits on the slow roll parameters (43) via (52), and hence constrain inflationary models (Ade et al., 2015). In particular they give an upper limit on the energy scale of inflation. The tensor-to-scalar ratio $r$ gives no statistically significant evidence for a primordial gravitational wave signal, hence ruling out the quadratic potentials underlying chaotic inflationary models.
An important future effort will be getting more precise limits on this ratio by observations of B-mode Cosmic Microwave Background polarisation.

Neutrino parameters

Cosmological models with and without neutrino mass have different primary Cold Dark Matter power spectra, so one obtains limits on the number of neutrinos and on neutrino masses. The Planck team state (Ade et al., 2015), "There is no evidence for additional neutrino-like relativistic particles beyond the three families of neutrinos in the standard model. Using Baryon Acoustic Oscillation and Cosmic Microwave Background data, we find $N_{eff} = 3.30 \pm 0.27$ for the effective number of relativistic degrees of freedom, and an upper limit of $0.23$ eV for the sum of neutrino masses". This is a striking demonstration of how cosmological observations can help determine parameters of the standard model of particle physics.

Density Parameters

The density parameters of the concordance model determined by the various observations are shown in Figure 6. The left hand diagram shows how the parameters $\Omega_{\Lambda}$ and $\Omega_m$ are restricted by data, and the right hand diagram shows the resulting proportions of different kinds of matter/energy in the universe.

Figure 6: 6(a) (Left) Constraints on the cosmological density parameters obtained from the Hubble diagram, structure formation studies, and CMB measurements before Planck. Credit: ESO. 6(b) (Right) Fractions of the different energy components after Planck. Credit: https://darkmatterdarkenergy.com/tag/dark-energy/.

The supernova data represent direct measurements of the parameters via the geometry of the background models. The Baryon Acoustic Oscillations and Cosmic Microwave Background data represent measurements of these parameters via the effects of the background model on structure formation, through the coefficients in (45). The data are compatible with each other, but it is the latter that provide the tightest limits.
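Two derived quantities mentioned above follow directly from the background density parameters: the age of the universe, from integrating the Friedmann equation, and the present deceleration parameter $q_0 = \Omega_m/2 - \Omega_\Lambda$ for a flat $\Lambda$CDM model. The sketch below is an illustration only, using rounded Planck-like values ($\Omega_m \simeq 0.308$, $\Omega_\Lambda \simeq 0.692$, assumed here) and neglecting radiation:

```python
import numpy as np

# Age of the universe: t0 = ∫ da / (a H(a)), with H(a) = H0 sqrt(Om/a^3 + OL)
# for a flat model, radiation neglected. Parameter values are rounded
# Planck-like numbers assumed for illustration.
H0 = 67.8                          # km/s/Mpc
Om, OL = 0.308, 0.692              # matter and dark energy density parameters
H0_inv_Gyr = 977.8 / H0            # 1/H0 in Gyr (1 / (km/s/Mpc) = 977.8 Gyr)

a = np.linspace(1e-8, 1.0, 2_000_001)
f = 1.0 / (a * np.sqrt(Om / a**3 + OL))                   # integrand
t0 = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(a)) * H0_inv_Gyr  # trapezoid rule

# Present deceleration parameter: q0 = Om/2 - OL < 0 means acceleration
q0 = Om / 2.0 - OL

print(round(t0, 1), round(q0, 3))  # ≈ 13.8 Gyr, q0 ≈ -0.538
```

The negative $q_0$ is the acceleration inferred from the supernova Hubble diagram, and the age reproduces the $t_0 \approx 13.8$ Gyr quoted with Table 2.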
The amount of baryonic matter is determined by nucleosynthesis theory and observations (Uzan J-P, 2016), which also place limits on the number of neutrinos. The key resulting point is that the dominant dark matter is not ordinary baryonic matter: its nature is unknown. Furthermore the nature of dark energy is also unknown. However these two components dominate the dynamics of the universe (Figure 6(b)).

A data rich subject

The overall conclusion is that due to a great many very advanced large scale observational projects, cosmology is now a very data rich subject, with many observations supporting the concordance model (see Figures 3-6). A variety of different kinds of data all agree on the same basic model, which gives it much more credibility than if there were just a few items supporting the overall model. There are unresolved issues about the physics involved, but that is because this is a work in progress.

Causal and visual horizons

A key finding is the existence in cosmology of limits both to causation, represented by particle horizons, and to observations, represented by visual horizons. Event horizons relate to the ultimate limits of causation in the future universe, that is, whether $u(t_{0},t_{1})$, given by (56), converges as $t_{1}\rightarrow \infty$. They will exist in universe models that accelerate forever, but are irrelevant to observational cosmology.

Particle horizons

The issue is whether $u(t_{0},t_{1})$ given by (56) converges or diverges as $t_{0}\rightarrow 0$. For ordinary matter ($a(t)$ is given by (33)) and radiation ($a(t)$ is given by (37)), it converges. Then \begin{equation}\tag{60} u_{ph}(t_1) := \lim\limits_{t_0\rightarrow 0}u(t_0,t_1) \end{equation} gives the comoving particle horizon size at time $t_1$, representing a limit to how far causal effects can have propagated since the start of the universe.
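The horizon integrals can be made concrete in the Einstein-de Sitter case, where (56) has simple closed forms. The sketch below (an illustration assumed here, units $c=1$, with a Hubble-scale age of 14 Gyr and a decoupling redshift of about 1100) recovers the 42 billion light year horizon scale quoted below, and the small angle subtended today by the particle horizon at decoupling that underlies the horizon problem discussed below:

```python
import math

# Einstein-de Sitter model: a(t) = (t/t0)^(2/3), normalised so a(t0) = 1.
# Then eq. (56) integrates to u(t_e, t0) = 3 t0 (1 - (t_e/t0)^(1/3)),
# so the comoving particle horizon (60) is u_ph(t0) = 3 t0 (units c = 1).
t0 = 14.0                                # Gyr: Hubble-scale age used in the text
u_ph = 3.0 * t0
print(u_ph)                              # 42.0: horizon distance in Gly

# Angle subtended today by the particle horizon at decoupling.
# With 1 + z_dec = a(t0)/a(t_dec) = (t0/t_dec)^(2/3):
z_dec = 1100.0
x = (1.0 + z_dec) ** -0.5                # = (t_dec/t0)^(1/3)
# ratio of comoving horizon at decoupling, 3 t0 x, to comoving
# distance to the Last Scattering Surface, 3 t0 (1 - x):
theta = 3.0 * t0 * x / (3.0 * t0 * (1.0 - x))   # radians, small-angle
print(round(math.degrees(theta), 1))     # ≈ 1.8 degrees
```

Patches on the Last Scattering Surface separated by more than roughly this angle were never in causal contact in such a model, which is the horizon problem in quantitative form.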
Matter at a greater comoving distance $r$ cannot have had any causal contact whatever with the particle at the origin, because this is the furthest that any causal effect travelling at the speed of light can have moved since the start of the universe. Thus it is an absolute limit to causal effects. Much confusion about their nature was cleared up by Rindler in a classic paper (Rindler, 1956), with further clarity coming from the use of Penrose causal diagrams for these models (Hawking and Ellis, 1973). These show that particle horizons occur if and only if the initial singularity is spacelike. There are many statements in the literature that such horizons represent motion of galaxies away from us at the speed of light, but that is not the case; they occur due to the integrated behaviour of light from the start of the universe to the present day.

Visual horizons

We cannot see all the way to the particle horizon, because the early universe was opaque. The comoving visual horizon is the most distant matter we can detect by electromagnetic radiation of any kind (Ellis, Maartens and MacCallum, 2012). It is given by the comoving coordinate value \begin{equation}\tag{61} u_{vh}(t_1) := u(t_{dec},t_1) \end{equation} at time $t_1$, where $t_{dec}$ is the time of decoupling of matter and radiation (corresponding to the Last Scattering Surface). It necessarily lies inside the particle horizon. Its physical size at time $t_1$ is $d_{vh}(t_1) = a(t_1)u_{vh}(t_1)$. The visual horizon size can be 42 billion light years in an Einstein-de Sitter model with a Hubble scale of 14 billion years, because of the changing expansion rate of the universe given by (33). There will be corresponding horizons for neutrino observations, arising from the neutrino decoupling time, and for gravitational waves, corresponding to the time when gravitational wave equilibrium with other matter and radiation ended in the very early universe.
However cosmological observations to those distances by directly detecting neutrinos and primordial gravitational waves appear very unlikely. For practical purposes the observational limit is given by the visual horizon, the comoving matter comprising that horizon being the matter seen by the COBE, WMAP, and Planck satellites (Figure 7).

Primeval particle horizons

The size of the particle horizon at the time of decoupling represents the largest scale at which matter (seen in Figure 7) can have been causally connected at that time. It is given by \begin{equation}\tag{62} u_{pph} := \lim\limits_{t_0\rightarrow 0}u(t_0,t_{dec}), \quad d_{pph} = a(t_{dec})u_{pph}. \end{equation} It is much smaller than the Last Scattering Surface as a whole if we have a Tolman model (37) at very early times all the way back to the start of the universe, which is the horizon problem: what can have caused the matter on the Last Scattering Surface to be so uniform, when it consists of a large number of causally unrelated regions? A partial solution is provided by inflationary universe models, in which the expansion at these very early times was almost exponential, as in (34). However one must be cautious here: (62) is valid only in a Robertson-Walker geometry, or approximately in an almost Robertson-Walker geometry, and will be inapplicable if the universe is not spatially homogeneous to begin with. Furthermore, having causal contact is necessary but not sufficient: one also needs a mechanism to create uniformity. So this `solution' largely assumes the result it wants. In a genuinely inhomogeneous model, the problem remains open, and inflation does not provide a solution.

Small universes

As mentioned above, complex topologies are possible for all three spatial curvatures, resulting in altered number counts and Cosmic Microwave Background Radiation anisotropy patterns if the smallest identification scale is less than the size of the visual horizon.
Then we live in a "small universe" where we have seen right round the universe since last scattering. This would result in several observational signatures; in particular there would be identical circles of temperature fluctuations in the Cosmic Microwave Background Radiation sky that would depend explicitly on the specific spatial topology.

Figure 7: The image of the Last Scattering Surface obtained by the Planck Satellite, representing fluctuations at the level of one part in $10^{5}$. The matter seen here comprises our visual horizon (we cannot see any more distant matter). Harmonic analysis of this data leads to the angular power spectra shown in Figure 5. Note that we cannot see this matter as it is today: we see it as it was $13.8$ billion years ago. Credit: Planck.

The simplest such models have been ruled out by the Planck observations, but some complex topologies might still be viable. Checking all such possibilities is a massive observational task. The Scholarpedia article "Cosmic Topology" by J-P Luminet discusses these possibilities. One should note that if the size of the universe is 15% larger than the size of the observable universe, one cannot distinguish observationally a large universe from an infinite universe.

More general models

The Robertson-Walker models have an exceptionally simple geometry. Other geometries have been explored, and this is an important exercise, for one cannot put limits on anisotropy and inhomogeneity if one does not have anisotropic and inhomogeneous models in which one can compute observational relations to compare with the data. One also needs to explore the outcomes of alternatives to standard General Relativity Theory to see if they can do away with the need for dark energy or dark matter.
The Bianchi models and phase planes

Following the work of Gödel, Taub, and Heckmann and Schücking, there is a large literature examining the properties of spatially homogeneous anisotropically expanding models (Ellis, Maartens and MacCallum, 2012), generically invariant under 3-dimensional continuous Lie groups of symmetries. Special cases (Locally Rotationally Symmetric models) allow higher symmetries. This family of models allows a rich variety of non-linear behaviour, including highly anisotropic expansion at early and late times, even if the present behaviour is nearly isotropic; different expansion rates at the time of nucleosynthesis than in Friedmann-Lemaître models, leading to different primordial element abundances; complex anisotropy patterns in the Cosmic Microwave Background sky; much more complex singularity behaviour than in Friedmann-Lemaître models, including cigar singularities, pancake singularities (where particle horizons may be broken in specific directions), chaotic (`mixmaster') type behaviour characterised by `billiard ball' dynamics, and non-scalar singularities if the models are tilted. Dynamical systems methods can be used to derive phase planes showing the dynamical behaviour of these solutions and the relations of families of such models to each other (Wainwright and Ellis, 1997). If a cosmological model is generic, it should include Bianchi anisotropic modes as well as inhomogeneous modes, and may well show mixmaster behaviour. These models are discussed in the Scholarpedia article "Bianchi universes" by Pontzen.

Lemaître-Tolman-Bondi spherically symmetric models

The growth of inhomogeneities may be studied by using exact spherically symmetric solutions, enabling study of non-linear dynamics. The zero-pressure models of this kind are the Lemaître-Tolman-Bondi exact solutions (Krasinski, 1997, Ellis, Maartens and MacCallum, 2012), where the time evolution of each spherically symmetric shell of matter is independently governed by a Friedmann equation.
The solutions generically have a matter density that is radially dependent, as well as spatially varying spatial curvature and bang time. These models can be used to study how a spherical mass with low enough kinetic energy breaks free from the overall cosmic expansion and recollapses to form a local bound system. They can have complex singularity behaviour, but this is unrealistic as the early universe will not be pressure free. Near the singularity they can however be velocity dominated. What is important is that these models show that any observed source magnitude $(m,z)$ and number count $(N,z)$ relations can be obtained in a suitable Lemaître-Tolman-Bondi model, where one runs the Einstein Field Equations backwards to determine the free functions in the metric from observations. This can be done for any value whatever of the cosmological constant $\Lambda$, including zero (Ellis, Maartens and MacCallum, 2012). This opens up the possibility of doing away with the need for dark energy if we live in an inhomogeneous universe model rather than a Friedmann-Lemaître universe, where the data usually taken to indicate a change of expansion rate in time due to dark energy are in fact due to a spatial variation. However although the supernova and number count observations can be explained in this way, detailed observational studies based on the kinematic Sunyaev-Zeldovich effect show this possibility is unlikely.

Other geometries

Szekeres models

These are non spherically symmetric generalisations of the Lemaître-Tolman-Bondi models that can be used to study more general exact dynamical and observational behaviour than the Lemaître-Tolman-Bondi models allow (Krasinski, 1997, Ellis, Maartens and MacCallum, 2012).

Wheeler-Lindquist type models

These models are quite different from perturbed FLRW models.
They are based on the realisation that most of the universe is in fact empty space, so instead of using perturbed FLRW models one attempts to patch empty Schwarzschild solutions together in such a way that the average distance between them changes with time. One can then show that this distance obeys a Friedmann-like equation, and examine the observational properties of such models.

Other dynamics

Dynamics other than general relativity might be applicable on the cosmological scale. Many such possibilities have been examined, in particular $f(R)$ models, where the Lagrangian is modified by replacing the scalar curvature $R$ by a function $f(R)$, but none are compelling at the present time. This class of models spans only a subclass of the more general scalar-tensor theories.

Multiverses

Some cosmologists propose a multiverse exists: it is suggested the observed expanding universe domain (bounded by the visual horizon) is just one of billions of such domains, in each of which different physics or different cosmological parameters obtain, i.e. the other domains do not just represent more of the same (an extension beyond the horizon of the same physics and geometry). The reasons for proposing this are primarily: first, it is claimed to be a result of known physics, which is alleged to lead to chaotic inflation, and it is then assumed that some mechanism related to string theory ensures that different vacua, and hence different local physics, are realised in these domains; second, it offers a plausible explanation of the anthropic fine tuning of various parameters so as to allow life to come into existence, and in particular an explanation of the small value of the cosmological constant $\Lambda$. Because of the existence of the visual and physical horizons discussed above, the supposed existence of these other universe domains is not observationally testable in the usual sense.
Nevertheless it is claimed by some that the observed small value of the cosmological constant (about 100 orders of magnitude less than the value suggested by quantum field theory) should be taken as adequate proof that they exist. However this is not the majority view of working cosmologists.

Claims of infinities

Some multiverse enthusiasts insist that there are not just a finite number of other such domains but an infinite number, each containing an infinite number of galaxies. This is certainly not an observationally testable scientific claim, nor does it inevitably follow from established local physics. It should be regarded as metaphysical speculation rather than established science.

Cosmological successes, puzzles, and limits

Standard cosmology is a major application of GR, showing how matter curves spacetime and spacetime determines the motion of matter and radiation. It is both a major success, showing how the dynamical nature of spacetime underlies the evolution of the universe itself, with this theory tested by a plethora of observations (Ade et al., 2015, Uzan J-P, 2016), and a puzzle, because the nature of three major elements of the standard model is unknown: dark matter, dark energy, and the inflaton. While much of cosmological theory (the epoch since decoupling) follows from Newtonian gravitational theory, this is not true of the dynamics of the early universe, where pressure plays a key role in gravitational attraction: thus for example Newtonian theory cannot give the correct results for nucleosynthesis. The theory provides a coherent view of structure formation (with a few puzzles, for example the core-cusp problem), and hence of how galaxies come into existence. A worry about differing values for the Hubble constant obtained by different methods (Addison et al., 2017) needs to be resolved.
It raises the issue that the universe not only evolves but (at least classically) had a beginning, whose dynamics lies outside the scope of standard physics because it lies outside of space and time.

Limits to physical cosmology

The domain of validity of the models has been discussed above. We can only be sure of their validity as geometric and dynamic models as regards the matter within the domain we can see, curtailed by the visual horizon. We cannot be certain of the physics that held sway in the very distant past.

Only physics issues

The models discussed here inform but cannot resolve philosophic issues; for example they have nothing directly to say about purpose or meaning. The only reason we mention this is because some scientists do indeed claim they can resolve such issues on the basis of our current understandings of physical cosmology. This is a regrettable attempt to extend the implications of these models way beyond their domain of validity.

Addison et al. (2017). "Elucidating ΛCDM: Impact of Baryon Acoustic Oscillation Measurements on the Hubble Constant Discrepancy". arXiv:1707.06547.
Ade et al. (2015). "Planck 2015 results. XIII. Cosmological parameters". Astronomy and Astrophysics 594: A13.
Dodelson S (2003). Modern Cosmology. Academic Press.
Durrer R (2008). The Cosmic Microwave Background. Cambridge: Cambridge University Press.
Ehlers J and Rindler W (1989). "A phase space representation of Friedmann-Lemaître universes containing both dust and radiation and the inevitability of a big bang". Mon Not Roy Ast Soc 238: 503-521.
Ellis G F R, Maartens R, and MacCallum M A H (2012). Relativistic Cosmology. Cambridge: Cambridge University Press.
Hawking S W and Ellis G F R (1973). The large scale structure of space-time. Cambridge: Cambridge University Press.
Krasinski A (1997). Inhomogeneous Cosmological Models. Cambridge: Cambridge University Press.
Kristian J and Sachs R K (1966). "Observations in Cosmology". Astrophysical Journal 143: 379-399.
Mukhanov V (2005).
Physical Foundations of Cosmology. Cambridge University Press.
Lahav O and Liddle A R (2015). "The Cosmological Parameters" (Particle Data Group review).
Peter P and Uzan J-P (2013). Primordial Cosmology. Oxford Graduate Texts.
Robertson H P (1933). "Relativistic Cosmology". Rev Mod Phys 5: 62-90.
Rindler W (1956). "Visual horizons in world models". Mon Not Roy Ast Soc 116: 662.
Tolman R C (1934). Relativity, Thermodynamics, and Cosmology. Oxford: Clarendon Press.
Uzan J-P (2016). "The big-bang theory: construction, evolution and status". Introductory lecture notes from the Poincaré seminar XX (2015) and Les Houches school "Cosmology after Planck: what is next?". arXiv:1606.06112.
Wainwright J and Ellis G F R (1997). Dynamical Systems in Cosmology. Cambridge: Cambridge University Press.
Further reading
Challinor A D and Lasenby A N (1998). "A covariant and gauge-invariant analysis of cosmic microwave background anisotropies from scalar perturbations". Phys Rev D58: 023001 [arXiv:astro-ph/9804150]. The covariant approach to cosmological perturbations applied to the cosmic microwave background radiation.
Clifton T, Ferreira P G, Padilla A, and Skordis C (2012). "Modified Gravity and Cosmology". Physics Reports 513: 1-189 [arXiv:1106.2476]. Comprehensive survey of alternative gravity models and cosmology.
Ehlers J (1961). "Contributions to the Relativistic Mechanics of Continuous Media". Akademie der Wissenschaften und Literatur (Mainz), Abhandlungen der Mathematisch-Naturwissenschaftlichen Klasse 11: 792-837. Reprinted as Golden Oldie: Gen Rel Grav 25: 1225-66 (1993). Classic paper on the covariant approach to relativistic fluids, with applications to cosmology. Derives the generic Raychaudhuri-Ehlers equation.
Ellis G F R (1971). "General relativity and cosmology". In General Relativity and Cosmology, Varenna Course No. XLVII, ed R K Sachs (Academic, New York). Reprinted as Golden Oldie: Gen Rel Grav 41: 581-660 (2009).
Derives dynamic and observational relations for generic cosmological models in a 1+3 covariant way.
Guth A H (1981). "Inflationary universe: A possible solution to the horizon and flatness problems". Phys Rev D 23: 347-356. The original inflationary universe paper.
Lidsey J E, Liddle A R, Kolb E W, Copeland E J, Barreiro T, and Abney (1997). "Reconstructing the inflaton potential-an overview". Rev Mod Phys 69: 373-410. The inverse method of finding a desired inflationary potential.
Lindquist R W and Wheeler J A (1957). "Dynamics of a Lattice Universe by the Schwarzschild-Cell Method". Rev Mod Phys 29: 432-443. The first general relativity lattice universe models.
Maartens R (2011). "Is the Universe homogeneous?". Phil Trans Roy Soc A 369: 5115-5137. Observational tests for an inhomogeneous cosmology, where the Copernican Principle does not hold.
Martin J, Ringeval C, and Vennin V (2013). "Encyclopaedia Inflationaris". arXiv:1303.3787. Comprehensive survey of inflationary models and their degree of observational confirmation.
Mukhanov V F, Feldman H A, and Brandenberger R H (1992). "Theory of cosmological perturbations". Physics Reports 215: 203-333. Standard reference on cosmological perturbation theory, based on the Bardeen gauge-invariant approach.
Penrose R (1989). "Difficulties with Inflationary Cosmology". Ann NY Acad Sci 571 (Texas Symposium on Relativistic Astrophysics): 249-264. An early statement of the gravitational entropy problems with inflationary models.
Sachs R K and Wolfe A M (1967). "Perturbations of a Cosmological Model and Angular Variations of the Microwave Background". Astrophys Jour 147: 73-89. [Reprinted as GRG Golden Oldie: Gen Rel Grav 39: 1944 (2007).] The pioneering paper showing how Cosmic Background Radiation anisotropies arise in perturbed Friedmann-Lemaître cosmologies.
Sandage A (1961). "The Ability of the 200-Inch Telescope to Discriminate Between Selected World Models". Astrophys Journ 133: 355-392.
Classic account of how observational cosmology was viewed in the 1960s.
Zhang P and Stebbins A (2011). "Confirmation of the Copernican principle through the anisotropic kinetic Sunyaev Zel'dovich effect". Phil Trans R Soc A 369: 5138-5145. Shows how the kinematic Sunyaev-Zel'dovich effect rules out most inhomogeneous cosmologies.
The Living Reviews in Relativity series at http://www.livingreviews.org/ has a subject field "Physical Cosmology"; see http://relativity.livingreviews.org/Articles/subject.html. Topics in this field are:
Luca Amendola / Euclid Theory Working Group, "Cosmology and Fundamental Physics with the Euclid Satellite"
Ofer Lahav / Yasushi Suto, "Measuring our Universe from Galaxy Redshift Surveys"
Timothy J. Sumner, "Experimental Searches for Dark Matter"
Sean M. Carroll, "The Cosmological Constant"
Aled Jones / Anthony N. Lasenby, "The Cosmic Microwave Background"
The series also has an article by Peter Anninos, "Computational Cosmology: From the Early Universe to the Large Scale Structure", in the field Numerical Relativity.
Footnotes
^ If a mode of unification of forces occurred in the early universe such that the particles of the relevant forces were then massless, the dynamics of the universe at those times would be different from that in the standard model, as gravity would then not be the dominant force.
^ Round brackets denote symmetrisation, square brackets denote antisymmetrisation.
^ Units are chosen so that the speed of light is unity, which is always possible.
^ At each point there is a 3-dimensional group of isotropies and a 3-dimensional group of translations.
^ The real universe is not a de Sitter spacetime, nor an anti-de Sitter spacetime.
^ There is also a static form, but this is irrelevant for cosmology.
^ Note that only (34) has a Hubble parameter $H(t)=H_0$, where $H_0>0$ is constant.
^ One should note here that one could in principle account for the observations without requiring any dark energy, if the universe were inhomogeneous about us (Ellis et al. 2012). However, this seems not to be the case.
^ We discount here the idea of "sudden singularities", where the universe expands infinitely at a finite time in the future and the density of matter then diverges. This proposal is based on supposing highly speculative exotic forms of matter; there is no good reason to assume such matter exists.
^ cf. (25) and (26).
^ The Scholarpedia article "Cosmological constraints from baryonic acoustic oscillation measurements" by Le Goff and Ruhlmann-Kleider discusses this.
^ Note that this domain near our world line is not accessible to astronomical observation (see Figure 1).
Sponsored by: Prof. Pedro Ferreira, University of Oxford, Oxford, United Kingdom
Sponsored by: Dr. Olivier Minazzoli, Centre Scientifique de Monaco and Observatoire de la Côte d'Azur (ARTEMIS), Nice, France
Reviewed by: Prof. Ruth Durrer, Université de Genève, Genève, Switzerland
Reviewed by: Anonymous (via Dr. Olivier Minazzoli, Centre Scientifique de Monaco and Observatoire de la Côte d'Azur (ARTEMIS), Nice, France)
Accepted on: 2017-08-02 13:36:05 GMT
This page was last modified on 2 August 2017, at 13:36.
"Modern cosmology" by George F.R. Ellis and Jean-Philippe Uzan is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. Permissions beyond the scope of this license are described in the Terms of Use.
Software article
PathCORE-T: identifying and visualizing globally co-occurring pathways in large transcriptomic compendia
Kathleen M. Chen, Jie Tan, Gregory P. Way, Georgia Doing, Deborah A. Hogan & Casey S. Greene
BioData Mining volume 11, Article number: 14 (2018)
Investigators often interpret genome-wide data by analyzing the expression levels of genes within pathways. While this within-pathway analysis is routine, the products of any one pathway can affect the activity of other pathways. Past efforts to identify relationships between biological processes have evaluated overlap in knowledge bases or evaluated changes that occur after specific treatments. Individual experiments can highlight condition-specific pathway-pathway relationships; however, constructing a complete network of such relationships across many conditions requires analyzing results from many studies. We developed the PathCORE-T framework by implementing existing methods to identify pathway-pathway transcriptional relationships evident across a broad data compendium. PathCORE-T is applied to the output of feature construction algorithms; it identifies pairs of pathways observed in features more often than expected by chance as functionally co-occurring. We demonstrate PathCORE-T by analyzing an existing eADAGE model of a microbial compendium and by building and analyzing NMF features from the TCGA dataset of 33 cancer types. The PathCORE-T framework includes a demonstration web interface, with source code, that users can launch to (1) visualize the network and (2) review the expression levels of associated genes in the original data.
PathCORE-T creates and displays the network of globally co-occurring pathways based on features observed in a machine learning analysis of gene expression data. The PathCORE-T framework identifies transcriptionally co-occurring pathways from the results of unsupervised analysis of gene expression data and visualizes the relationships between pathways as a network. PathCORE-T recapitulated previously described pathway-pathway relationships and suggested experimentally testable additional hypotheses that remain to be explored. The number of publicly available genome-wide datasets is growing rapidly [1]. High-throughput sequencing technologies that measure gene expression quickly with high accuracy and low cost continue to enable this growth [2]. Expanding public data repositories [3, 4] have laid the foundation for computational methods that consider entire compendia of gene expression data to extract biological patterns [5]. These patterns may be difficult to detect in measurements from a single experiment. Unsupervised approaches, which identify important signals in the data without being constrained to previously-described patterns, may discover new expression modules and thus will complement supervised methods, particularly for exploratory analyses [6, 7]. Feature extraction methods are a class of unsupervised algorithms that can reveal unannotated biological processes from genomic data [7]. Each feature can be defined by a subset of influential genes, and these genes suggest the biological or technical pattern captured by the feature. These features, like pathways, are often considered individually [7, 8]. When examined in the context of knowledgebases such as the Kyoto Encyclopedia of Genes and Genomes (KEGG) [9], most features are significantly enriched for more than one biological gene set [7]. In this work, we refer to such a gene set by the colloquial term, pathway. 
It follows then that such features can be described by sets of functionally related pathways. We introduce the PathCORE-T (identifying pathway co-occurrence relationships in transcriptomic data) software, which implements existing methods that jointly consider features and gene sets to map pathways with shared transcriptional responses. PathCORE-T offers a data-driven approach for identifying and visualizing transcriptional pathway-pathway relationships. In this case, relationships are drawn based on the sets of pathways, annotated in a resource of gene sets, occurring within constructed features. Because PathCORE-T starts from a feature extraction model, the number of samples in the compendium used for model generation and the fraction of samples needed to observe a specific biological or technical pattern is expected to vary by feature extraction method. Pathways must be perturbed in a sufficient fraction of experiments in the data compendium to be captured by any such method. To avoid discovering relationships between pathways that share many genes—which could more easily be discovered by directly comparing pathway membership—we implement an optional pre-processing step that corrects for genes shared between gene sets, which Donato et al. refer to as pathway crosstalk [10]. Donato et al.'s correction method, maximum impact estimation, has not previously been implemented in open source software. We have released our implementation of maximum impact estimation as its own Python package (PyPI package name: crosstalk-correction) so that it can be used independently of PathCORE-T. Applying this correction in PathCORE-T software allows a user to examine relationships between gene sets based on how genes are expressed as opposed to which genes are shared. We apply PathCORE-T to a microbial and a cancer expression dataset, each analyzed using different feature extraction methods, to demonstrate its broad applicability. 
For the microbial analysis, we created a network of KEGG pathways from recently described ensemble Analysis using Denoising Autoencoders for Gene Expression (eADAGE) models trained on a compendium of Pseudomonas aeruginosa (P. aeruginosa) gene expression data (doi:https://doi.org/10.5281/zenodo.583694) [7]. We provide a live demo of the PathCORE-T web application for this network: users can click on edges in the network to review the expression levels of associated genes in the original compendium (https://pathcore-demo.herokuapp.com/PAO1). To show its use outside of the microbial space, we also demonstrate PathCORE-T analysis of Pathway Interaction Database (PID)-annotated [11] non-negative matrix factorization (NMF) features [12, 13] extracted from The Cancer Genome Atlas's (TCGA) pan-cancer dataset of 33 different tumor types (doi:https://doi.org/10.5281/zenodo.56735) [14]. In addition to visualizing the results of these two applications, the PathCORE-T web interface (https://pathcore-demo.herokuapp.com/) links to the documentation and source code for our implementation and example usage of PathCORE-T. Methods implemented in PathCORE-T are written in Python and pip-installable (PyPI package name: PathCORE-T). Examples of how to use these methods are provided in the PathCORE-T analysis repository (https://github.com/greenelab/PathCORE-T-analysis). In addition to scripts that reproduce the eADAGE and NMF analyses described in this paper, the PathCORE-T-analysis repository includes a Jupyter notebook (https://goo.gl/VuzN12) with step-by-step descriptions for the complete PathCORE-T framework. Our approach diverges from other algorithms that we identified in the literature in its intent: PathCORE-T finds pathway pairs within a biological system that are overrepresented in features constructed from diverse transcriptomic data. This complements other work that developed models specific to a single condition or disease. 
Approaches designed to capture pathway-pathway interactions from gene expression experiments for disease-specific, case-control studies have been published [15, 16]. For example, Pham et al. developed Latent Pathway Identification Analysis to find pathways that exert latent influences on transcriptionally altered genes [17]. Under this approach, the transcriptional response profiles for a binary condition (disease/normal), in conjunction with pathways specified in the KEGG and functions in Gene Ontology (GO) [18], are used to construct a pathway-pathway network where key pathways are identified by their network centrality scores [17]. Similarly, Pan et al. measured the betweenness centrality of pathways in disease-specific genetic interaction and coexpression networks to identify those most likely to be associated with bladder cancer risk [19]. These methods captured pathway relationships associated with a particular disease state. Global networks identify relationships between pathways that are not disease- or condition-specific. One such network, detailed by Li et al., relied on publicly available protein interaction data to determine pathway-pathway interactions [20]. Two pathways were connected in the network if the number of protein interactions between the pair was significant with respect to the computed background distribution. Such approaches rely on databases of interactions, though the interactions identified can be subsequently used for pathway-centric analyses of transcriptomic data [20, 21]. Pita-Juárez et al. created the Pathway Coexpression Network (PCxN) as a tool to discover pathways correlated with a pathway of interest [22]. They estimated correlations between pathways based on the expression of their underlying genes (as annotated in MSigDB) across a curated compendium of microarray data [22]. 
Software like PathCORE-T that generates global networks of pathway relationships from unsupervised feature analysis models built using transcriptomics data has not yet been published. The intention of PathCORE-T is to work from transcriptomic data in ways that do not give undue preference to combinations of pathways that share genes. Other methods have sought to consider shared genes between gene sets, protein-protein interactions, or other curated knowledgebases to define pathway-pathway interactions [20, 21, 23,24,25]. For example, Glass and Girvan described another network structure that relates functional terms in GO based on shared gene annotations [26]. In contrast with this approach, PathCORE-T specifically removes gene overlap in pathway definitions before they are used to build a network. Our software reports pathway-pathway connections overrepresented in gene expression patterns extracted from a large transcriptomic compendium while controlling for the fact that some pathways share genes. PathCORE-T identifies functional links between known pathways from the output of feature construction methods applied to gene expression data (Fig. 1a, b). The result is a network of pathway co-occurrence relationships that represents the grouping of biological gene sets within those features. We correct for gene overlap in the pathway annotations to avoid identifying co-occurrence relationships driven by shared genes. Additionally, PathCORE-T implements a permutation test for evaluating and removing edges—pathway-pathway relationships—in the resulting network that cannot be distinguished from a null model of random associations. Though we refer to the relationships in the network as co-occurrences, it is important to note that the final network displays co-occurrences that have been filtered based on this permutation test (Fig. 1c). The PathCORE-T software analysis workflow. 
a A user applies a feature construction algorithm to a transcriptomic dataset of genes-by-samples. The features constructed must preserve the genes in the dataset and assign weights to each of these genes according to some distribution. b Inputs required to run the complete PathCORE-T analysis workflow. The constructed features are stored in a weight matrix. Based on how gene weights are distributed in the constructed features, a user defines thresholds to select the set of genes most indicative of each feature's function—we refer to these user-defined thresholds as gene signature rules. Finally, a list of pathway definitions will be used to interpret the features and build a pathway co-occurrence network. c Methods in the PathCORE-T analysis workflow (indicated using purple font) can be employed independently of each other so long as the necessary input(s) are provided. The 2 examples we describe to demonstrate PathCORE-T software use the following inputs: (1) the weight matrix and gene signature rules for eADAGE (applied to the P. aeruginosa gene compendium) and KEGG pathways and (2) the weight matrix and gene signature rules for NMF (applied to the TCGA pan-cancer dataset) and PID pathways. Our software is written in Python and pip-installable (PyPI package name: PathCORE-T), and examples of how to use the methods in PathCORE-T are provided in the PathCORE-T-analysis repository (https://github.com/greenelab/PathCORE-T-analysis). We recommend that those interested in using the PathCORE-T software consult the documentation and scripts in PathCORE-T-analysis. Each of the functions in PathCORE-T that we describe here can be used independently; however, we expect most users to employ the complete approach for interpreting pathways shared in extracted features (Fig. 1). PathCORE-T requires the following inputs: A weight matrix that connects each gene to each feature. 
We expect that this results from the application of a feature construction algorithm to a compendium of gene expression data. The primary requirements are that features must contain the full set of genes in the compendium and genes must have been assigned weights that quantify their contribution to a given feature. Accordingly, a weight matrix will have the dimensions n x k, where n is the number of genes in the compendium and k is the number of features constructed. In principal component analysis (PCA), this is the loadings matrix [27]; in independent component analysis (ICA), it is the unmixing matrix [28]; in ADAGE or eADAGE it is termed the weight matrix [5, 7]; in NMF it is the matrix W, where the NMF approximation of the input dataset A is A ~ WH [12]. In addition to the scripts we provide for the eADAGE and NMF examples in the PathCORE-T analysis repository, we include a Jupyter notebook (https://goo.gl/VuzN12) that demonstrates how a weight matrix can be constructed by applying ICA to the P. aeruginosa gene compendium. Gene signature rule(s). To construct a pathway co-occurrence network, the weight matrix must be processed into gene signatures by applying threshold(s) to the gene weights in each feature—we refer to these as gene signature rules. Subsequent pathway overrepresentation will be determined by the set of genes that makes up a feature's gene signature. These are often the weights at the extremes of the distribution. How gene weights are distributed will depend on the user's selected feature construction algorithm; because of this, a user must specify criteria for including a gene in a gene signature. PathCORE-T permits rules for a single gene signature or both a positive and a negative gene signature. The use of 2 signatures may be appropriate when the feature construction algorithm produces positive and negative weights, the extremes of which both characterize a feature (e.g. PCA, ICA, ADAGE or eADAGE).
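The path from a weight matrix to gene signatures can be sketched in a few lines. The snippet below is illustrative, not the PathCORE-T API: the weight matrix comes from scikit-learn's NMF on a random toy matrix standing in for an expression compendium, and both kinds of gene signature rule are written as simple thresholding functions.

```python
import numpy as np
from sklearn.decomposition import NMF

# --- Obtaining a toy weight matrix (scikit-learn NMF on random data) ------
rng = np.random.RandomState(0)
A = rng.rand(200, 40)                      # toy compendium: 200 genes x 40 samples
W = NMF(n_components=10, init="random",    # A ~ WH; W is genes x features
        random_state=0, max_iter=500).fit_transform(A)

# --- Gene signature rules -------------------------------------------------
def single_signature(weights, n_std=2.0):
    """One-sided rule (e.g. non-negative NMF weights): genes whose weight
    exceeds the feature mean by n_std standard deviations."""
    return set(np.flatnonzero(weights > weights.mean() + n_std * weights.std()))

def two_sided_signatures(weights, n_std=2.5):
    """Positive/negative rule (e.g. eADAGE weights centred on zero)."""
    cutoff = n_std * weights.std()
    return (set(np.flatnonzero(weights > cutoff)),
            set(np.flatnonzero(weights < -cutoff)))

nmf_signatures = [single_signature(W[:, j]) for j in range(W.shape[1])]
eadage_weights = rng.normal(size=5000)     # one eADAGE-like feature column
positive, negative = two_sided_signatures(eadage_weights)
```

The thresholds (2.0 and 2.5 standard deviations) mirror the rules used in the two case studies; in practice they should be tuned to the weight distribution the chosen algorithm actually produces.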
Because a feature can have more than one gene signature, we maintain a distinction between a feature and a feature's gene signature(s). A list of pathway definitions, where each pathway is defined by a set of genes (e.g. KEGG pathways, PID pathways, GO biological processes). We provide the files for the P. aeruginosa KEGG pathway definitions and the Nature-NCI PID pathway definitions in the PathCORE-T analysis repository (https://github.com/greenelab/PathCORE-T-analysis/tree/master/data). Weight matrix construction and signature definition In practice, users can obtain a weight matrix from many different methods. For the purposes of this paper, we demonstrate generality by constructing weight matrices via eADAGE and NMF. eADAGE eADAGE is an unsupervised feature construction algorithm developed by Tan et al. [7] that uses an ensemble of neural networks (an ensemble of ADAGE models) to capture biological patterns embedded in the expression compendium. We use models from Tan et al. [7]. In that work, Tan et al. evaluated multiple eADAGE model sizes to identify that k = 300 features was an appropriate size for the current P. aeruginosa compendium. The authors also compared eADAGE to two other commonly used feature construction approaches, PCA and ICA [7]. Tan et al. produced 10 eADAGE models that each extracted k = 300 features from the compendium of genome-scale P. aeruginosa data. Because PathCORE-T supports the aggregation of co-occurrence networks created from different models on the same input data, we use all 10 of these models in the PathCORE-T analysis of eADAGE models (doi:https://doi.org/10.5281/zenodo.583172). Tan et al. refers to the features constructed by eADAGE as nodes. They are represented as a weight matrix of size n x k, where n genes in the compendium are assigned positive or negative gene weights, according to a standard normal distribution, for each of the k features. Tan et al. 
determined that the gene sets contributing the highest positive or highest negative weights (+/− 2.5 standard deviations) to a feature described gene expression patterns across the compendium, and thus referred to the gene sets as signatures. Because a feature's positive and negative gene signatures did not necessarily correspond to the same biological processes or functions, Tan et al. analyzed each of these sets separately [7]. Tan et al.'s gene signature rules are specified as an input to the PathCORE-T analysis as well. We also constructed an NMF model for the TCGA pan-cancer dataset. Given an NMF approximation of A ~ WH [12], where A is the input expression dataset of size n x s (n genes by s samples), NMF aims to find the optimal reconstruction of A by WH such that W clusters on samples (size n x k) and H clusters on genes (size k x s). In order to match the number of features constructed in each eADAGE model by Tan et al., we set k, the desired number of features, to be 300 and used W as the input weight matrix for the PathCORE-T software. We found that the gene weight distribution of an NMF feature is right-skewed and (as the name suggests) non-negative (Additional file 1 Figure S1). In this case, we defined a single gene signature rule: an NMF feature's gene signature is the set of genes with weights 2.0 standard deviations above the mean weight of the feature. The selection of k = 300 for the NMF model allowed us to make the eADAGE-based and NMF-based case studies roughly parallel. We verified that 300 components was appropriate by evaluating the percentage of variance explained by PCA applied to the TCGA dataset. In general, the principal components explained very little variance—the first principal component only explained 11% of the variance. At 300 components, the proportion of variance explained was 81%. As an additional analysis, we determined the number of components (k = 24) where each additional component explained less than 0.5% of the variance. 
We found that using a very small number of constructed features resulted in a substantial loss of power: PathCORE-T analysis with a single k = 24 model yielded no significant edges after permutation test. However, PathCORE-T can be applied over multiple models as long as the feature construction method produces different solutions depending on random seed initialization. We performed 10 factorizations to generate an aggregate of 10 k = 24-feature NMF models and found that the resulting co-occurrence network was denser (364 edges) than our k = 300 factor network (119 edges). 65 edges were found in both networks. These shared edges had higher weights, on average, in both networks compared to edges unique to each network (https://goo.gl/vnDVNA). Construction of a pathway co-occurrence network We employ a Fisher's exact test [29] to determine the pathways significantly associated with each gene signature. When considering significance of a particular pathway, the two categories of gene classification are as follows: (1) presence or absence of the gene in the gene signature and (2) presence or absence of the gene in the pathway definition. For each pathway in the input list of pathway definitions, we specify a contingency table and calculate its p-value, which is corrected using the Benjamini—Hochberg [30] procedure to produce a feature-wise false discovery rate (FDR). A pathway with an adjusted p-value that is less than the user-settable FDR significance cutoff, alpha (default: 0.05), is considered significantly enriched in a given gene signature. This cutoff value should be selected to most aid user interpretation of the model. The next step of PathCORE-T is to convert pathway-node relationships into pathway-pathway relationships. For this, we apply a subsequent permutation test over pathway-pathway edge weights that accounts for the frequency at which pathways are observed as associated with features. This permutation produces a p-value for each edge. 
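The feature-wise enrichment test described above can be sketched as follows. This is a simplified stand-in for the PathCORE-T implementation, with invented pathway names and gene counts; it builds the 2x2 contingency table for each pathway, runs Fisher's exact test, and applies Benjamini-Hochberg correction within the feature.

```python
from scipy.stats import fisher_exact
from statsmodels.stats.multitest import multipletests

def enriched_pathways(signature, pathways, n_genes, alpha=0.05):
    """Fisher's exact test per pathway, BH-adjusted within one gene signature."""
    names, pvals = [], []
    for name, members in pathways.items():
        in_both = len(signature & members)
        table = [[in_both, len(signature) - in_both],
                 [len(members) - in_both,
                  n_genes - len(signature) - len(members) + in_both]]
        names.append(name)
        pvals.append(fisher_exact(table, alternative="greater")[1])
    reject, _, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
    return {name for name, r in zip(names, reject) if r}

# Toy example: 1000 genes total; the signature overlaps pathway A heavily
# and pathway B not at all.
signature = set(range(20))
pathways = {"A": set(range(15)), "B": set(range(500, 520))}
hits = enriched_pathways(signature, pathways, n_genes=1000)
```

Here only pathway A survives the FDR cutoff, since 15 of its 15 genes fall in the 20-gene signature.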
Two pathways co-occur, or share an edge in the pathway co-occurrence network, if they are both overrepresented in a gene signature. The weight of each edge in the pathway-pathway graph corresponds to the number of times such a pathway pair is present over all gene signatures in a model (Fig. 2a). The approach implemented in PathCORE-T to construct a pathway co-occurrence network from an expression compendium. a A user-selected feature extraction method is applied to expression data. Such methods assign each gene a weight, according to some distribution, that represents the gene's contribution to the feature. The set of genes that are considered highly representative of a feature's function is referred to as a feature's gene signature. The gene signature is user-defined and should be based on the weight distribution produced by the unsupervised method of choice. In the event that the weight distribution contains both positive and negative values, a user can specify criteria for both a positive and negative gene signature. A test of pathway enrichment is applied to identify corresponding sets of pathways from the gene signature(s) in a feature. We consider pathways significantly overrepresented in the same feature to co-occur. Pairwise co-occurrence relationships are used to build a network, where each edge is weighted by the number of features containing both pathways. b N permuted networks are generated to assess the statistical significance of a co-occurrence relation in the graph. Here, we show the construction of one such permuted network. Two invariants are maintained during a permutation: (1) pathway side-specificity (if applicable, e.g. positive and negative gene signatures) and (2) the number of distinct pathways in a feature's gene signature. c For each edge observed in the co-occurrence network, we compare its weight against the weight distribution generated from N (default: 10,000) permutations of the network to determine each edge's p-value.
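Tallying the edge weights defined above, the number of gene signatures in which a pathway pair is jointly overrepresented, amounts to counting pairs. A minimal sketch (pathway names invented for illustration):

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(signature_pathways):
    """Weight each pathway pair by the number of gene signatures in which
    both pathways are overrepresented. Input: one set of overrepresented
    pathways per gene signature."""
    edges = Counter()
    for pathways in signature_pathways:
        for pair in combinations(sorted(pathways), 2):
            edges[pair] += 1
    return edges

# Toy: overrepresented pathway sets for four gene signatures.
sigs = [{"sulfur metabolism", "ABC transporters"},
        {"sulfur metabolism", "ABC transporters", "flagellar assembly"},
        {"flagellar assembly"},
        {"ABC transporters"}]
edges = cooccurrence_counts(sigs)
```

Sorting each pathway set gives every pair a canonical key, so the same edge is never counted under two orderings.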
After correcting the p-value by the number of edges observed in the graph using the Benjamini—Hochberg procedure, only an edge with an adjusted p-value below alpha (default: 0.05) is kept in the final co-occurrence network. Permutation test The network that results from the preceding method is densely connected, and many edges may be spurious. To remove correlations that cannot be distinguished from random pathway associations, we define a statistical test that determines whether a pathway-pathway relationship appearing x times in a k-feature model is unexpected under the null hypothesis—the null hypothesis being that the relationship does not appear more often than it would in a random network. We create N weighted null networks, where each null network is constructed by permuting overrepresented pathways across the model's gene signatures while preserving the number of pathways for which each gene signature is enriched (Fig. 2b). N is a user-settable parameter: the example PathCORE-T analyses we provide specify an N of 10,000. Increasing the value of N leads to more precise p-values, particularly for low p-values, but comes at the expense of additional computation time. In the case where we have positive and negative gene signatures, overrepresentation can be positive or negative. Because certain pathways may display bias toward one side—for example, a pathway may be overrepresented more often in features' positive gene signatures—we perform the permutation separately for each side. The N random networks produce the background weight distribution for every observed edge; significance can then be assessed by comparing the true (observed) edge weight against the null. The p-value for each edge e is calculated by summing the number of times a random network contained e at a weight greater than or equal to its observed edge weight and dividing this value by N. 
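A simplified version of this permutation test can be sketched as follows. It preserves the number of pathways per gene signature when shuffling, as described above, but ignores the positive/negative side-specificity that PathCORE-T additionally maintains; function names and the toy model are ours.

```python
import random
from collections import Counter
from itertools import combinations

def count_edges(signature_pathways):
    edges = Counter()
    for pathways in signature_pathways:
        for pair in combinations(sorted(pathways), 2):
            edges[pair] += 1
    return edges

def edge_pvalues(signature_pathways, n_permutations=500, seed=0):
    """Shuffle overrepresented pathways across gene signatures (preserving
    each signature's pathway count) and compare observed edge weights
    against the resulting null distribution."""
    rng = random.Random(seed)
    observed = count_edges(signature_pathways)
    pool = [p for sig in signature_pathways for p in sig]
    sizes = [len(sig) for sig in signature_pathways]
    exceed = Counter()
    for _ in range(n_permutations):
        rng.shuffle(pool)
        start, null_sigs = 0, []
        for size in sizes:
            null_sigs.append(set(pool[start:start + size]))
            start += size
        null = count_edges(null_sigs)
        for edge, weight in observed.items():
            if null[edge] >= weight:
                exceed[edge] += 1
    return {edge: exceed[edge] / n_permutations for edge in observed}

# Toy model: "A" and "B" co-occur in 10 signatures; 10 singleton signatures.
sigs = [{"A", "B"}] * 10 + [{f"C{i}"} for i in range(10)]
pvals = edge_pvalues(sigs)
```

With "A" and "B" paired in every multi-pathway signature, random shuffles essentially never reproduce an edge weight of 10, so the edge's p-value comes out near zero.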
Following Benjamini-Hochberg FDR correction by the number of edges in the observed network, pathway-pathway relationships with adjusted p-values above alpha (user-settable default: 0.05) are removed from the network of co-occurring pathways (Fig. 2c). The threshold alpha value is a configurable parameter, and the user should select an FDR that best balances the costs and consequences of false positives. For highly exploratory analyses in which it may be helpful to have more speculative edges, this value can be raised. For analyses that require particularly stringent control, it can be lowered. Because the expected weight of every edge can be determined from the N random networks (by taking the sum of the background weight distribution for an edge and dividing it by N), we can divide each observed edge weight by its expected weight (dividing by 1 if the expected edge weight is 0 based on the N permutations) to get the edge's odds ratio. Edges in the final network are weighted by their odds ratios. Gene overlap correction Pathways can co-occur because of shared genes (Fig. 3a, b, d). Though some approaches use the overlap of genes to identify connected pathways, we sought to capture pairs of pathways that persisted even when this overlap was removed. The phenomenon of observing enrichment of multiple pathways due to gene overlap has been previously termed "crosstalk," and Donato et al. have developed a method to correct for it [10]. Due to confusion around the term, we refer to this as overlapping genes in this work, except where specifically referencing Donato et al. Their approach, called maximum impact estimation, begins with a membership matrix indicating the original assignment of multiple genes to multiple pathways. It uses expectation maximization to estimate the pathway in which a gene contributes its greatest predicted impact (its maximum impact) and assigns the gene only to this pathway [10].
This provides a set of new pathway definitions that no longer share genes (Fig. 3c, e). Correcting for gene overlap results in a sparser pathway co-occurrence network. a The KEGG pathway annotations for the sulfonate transport system are a subset of those for sulfur metabolism. 12 genes annotated to the sulfonate transport system are also annotated to sulfur metabolism. b Without applying the overlap correction procedure, 25 of the genes in the positive and negative gene signatures of the eADAGE feature "Node 11" are annotated to sulfur metabolism—of those, 8 genes are annotated to the sulfonate transport system as well. c All 8 of the overlapping genes are mapped to the sulfur metabolism pathway after overlap correction. d A co-occurrence network built without applying the overlap correction procedure will report co-occurrence between the sulfonate transport system and sulfur metabolism, whereas (e) no such relation is identified after overlap correction. There was no existing open source implementation of this algorithm, so we implemented Donato et al.'s maximum impact estimation as a Python package (PyPI package name: crosstalk-correction). This software is separate from PathCORE-T because we expect that it may be useful in its own right for other analytical workflows, such as differential expression analysis. The procedure is written using NumPy functions and data structures, which allows for efficient implementation of array and matrix operations in Python [31]. In PathCORE-T, we used this software to resolve overlapping genes before pathway overrepresentation analysis. Overlap correction is applied to each feature of the model independently. This most closely matches the setting evaluated by the original authors of the method. Work on methods that resolve overlap by using information shared across features may provide opportunities for future enhancements but was deemed to be out of the scope of a software contribution.
With this step, the pathway co-occurrence network identifies relationships that are not driven simply by the same genes being annotated to multiple pathways. Without this correction step, it is difficult to determine whether a co-occurrence relationship can be attributed to the features extracted from expression data or to gene overlap in the two pathway annotations. We incorporate this correction into the PathCORE-T workflow by default; however, users interested in using PathCORE-T to find connections between overlapping gene sets can choose to disable the correction step. PathCORE-T network visualization and support for experimental follow-up The PathCORE-T analysis workflow outputs a list of pathway-pathway relationships, or edges in a network visualization, as a text file. An example of the KEGG P. aeruginosa edges file is available for download on the demo application: http://pathcore-demo.herokuapp.com/quickview. While we chose to represent pathway-pathway relationships as a network, users can use this output file to visualize the identified relationships as an adjacency matrix or in any other format they choose. As an optional step, users can set up a Flask application for each PathCORE-T network. Metadata gathered from the analysis are saved to TSV files, and we use a script to populate collections in a MongoDB database with this information. The co-occurrence network is rendered using the D3.js force-directed graph layout [32]. Users can select a pathway-pathway relationship in the network to view a new page containing details about the genes annotated to one or both pathways (Fig. 4a). A web application used to analyze pathway-pathway relationships in the eADAGE-based, P. aeruginosa KEGG network. a A user clicks on an edge (a pathway-pathway relationship) in the network visualization. b The user is directed to a page that displays expression data from the original transcriptomic dataset specific to the selected edge (https://goo.gl/Hs5A3e).
The expression data is visualized as two heatmaps that indicate the fifteen most and fifteen least expressed samples corresponding to the edge. To select the "most" and "least" expressed samples, we assign each sample a summary "expression score." The expression score is based on the expression values of the genes (limited to the top twenty genes with an odds ratio above 1) annotated to one or both of the pathways. Here, we show the heatmap of least expressed samples specific to the [Phosphate transport - Type II general secretion] relationship. c Clicking on a square in the heatmap directs a user to an experiment page (https://goo.gl/KYNhwB) based on the sample corresponding to that square. A user can use the experiment page to identify whether the expression values of genes specific to an edge and a selected sample differ from those recorded in other samples of the experiment. In this experiment page, the first three samples (labeled in black) are P. aeruginosa "baseline" replicates grown for 72 h in drop-flow biofilm reactors. The following three samples (labeled in blue) are P. aeruginosa grown for an additional 12 h (84 h total). Labels in blue indicate that the three 84 h replicates are in the heatmap of least expressed samples displayed on the [Phosphate transport - Type II general secretion] edge page. We created a web interface for deeper examination of relationships present in the pathway co-occurrence network. The details we included in an edge-specific page (1) highlight up to twenty genes—annotated to either of the two pathways in the edge—contained in features that also contain this edge, after controlling for the total number of features that contain each gene, and (2) display the expression levels of these genes in each of the fifteen samples where they were most and least expressed.
The quantity of information (twenty genes, thirty samples total) we choose to include in an edge page is intentionally limited so that users can review it in a reasonable amount of time. To implement the functionality in (1), we computed an odds ratio for every gene annotated to one or both pathways in the edge. The odds ratio measures how often we observe a feature enriched for both the given gene and the edge of interest relative to how often we would expect to see this occurrence. We calculate the observed proportion and divide it by the expected proportion, which is equivalent to the frequency of the edge appearing in the model's features. Let k be the number of features from which the PathCORE-T network was built. kG is the number of features that contain gene G (i.e. G is in kG features' gene signatures), kE the number of features that contain edge E (i.e. the two pathways connected by E are overrepresented in kE features), and kG&E the number of features that contain both gene G and edge E. The odds ratio is computed as follows: $$ \text{observed} = \frac{k_{G\&E}}{k_G}, \qquad \text{expected} = \frac{k_E}{k}, \qquad \text{odds ratio} = \frac{\text{observed}}{\text{expected}} $$ An odds ratio above 1 suggests that the gene is more likely to appear in features enriched for this pair of pathways. In the web interface, we sort the genes by their odds ratio to highlight genes most observed with the co-occurrence relationship. The information specified in (2) requires an "expression score" for every sample. A sample expression score is calculated using the genes we selected in goal (1): it is the average of the normalized gene expression values weighted by the normalized gene odds ratio. Selection of the most and least expressed samples is based on these scores. We use two heatmaps to show the (maximum of twenty) genes' expression values in each of the fifteen most and least expressed samples (Fig. 4b).
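These two calculations are direct arithmetic; a minimal transcription of the odds ratio formulas and the expression-score weighting described above (gene names hypothetical) looks like:

```python
def gene_odds_ratio(k, k_g, k_e, k_ge):
    """Odds ratio for gene G on edge E: k features total, k_g contain
    G, k_e contain E, and k_ge contain both G and E."""
    observed = k_ge / k_g   # how often G's features also carry edge E
    expected = k_e / k      # frequency of edge E across all features
    return observed / expected

def expression_score(expression, odds_ratios):
    """Sample score: average of the genes' normalized expression
    values, weighted by each gene's normalized odds ratio."""
    total = sum(odds_ratios.values())
    return sum(expression[g] * odds_ratios[g] / total for g in odds_ratios)
```

For example, a gene found in 3 of the 30 features that contain it, for an edge present in 10 of 300 features, has an odds ratio of (3/30)/(10/300) = 3.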
For each sample in an edge page, a user can examine how the expression values of the edge's twenty genes in that sample compare to those recorded for all other samples in the dataset that are from the same experiment (Fig. 4c). Genes that display distinct expression patterns under a specific setting may be good candidates for follow-up studies. PathCORE-T software Unsupervised methods can identify previously undiscovered patterns in large collections of data. PathCORE-T overlays curated knowledge after feature construction to help researchers interpret constructed features in the context of existing knowledgebases. Specifically, PathCORE-T aims to clarify how expert-annotated gene sets work together from a gene expression perspective. PathCORE-T starts from an unsupervised feature construction model. Before applying the software, users should evaluate models to make sure that they capture biological features in their dataset. Model evaluation can be performed in numerous ways depending on the setting and potential assessments include consistency across biological replicates, reconstruction error given a fixed dimensionality, and independent validation experiments. Tan et al. described several ways that models could be evaluated [7]. Datasets will vary in terms of their amenability to analysis by different model-building strategies, and researchers may wish to consult a recent review for more discussion of feature construction methods [33]. We implemented the methods contained in the PathCORE-T software in Python. The implementations of the primary steps are pip-installable (PyPI package name: PathCORE-T), and examples of how to use the methods in PathCORE-T are provided in the PathCORE-T-analysis repository (https://github.com/greenelab/PathCORE-T-analysis). We also implemented an optional step, which corrects for overlapping genes between pathway definitions, described by Donato et al. [10]. 
Though the algorithm had been described, no publicly available implementation existed. We provide this overlap correction algorithm as a Python package (PyPI package name: crosstalk-correction) available under the BSD 3-clause license. Each component of PathCORE-T can be used independently of the others (Fig. 1c). Here, we present analyses that can be produced by applying the full PathCORE-T pipeline to models created from a transcriptomic compendium by an unsupervised feature construction algorithm. Input pathway definitions are "overlap-corrected" for each feature before enrichment analysis. An overlap-corrected, weighted pathway co-occurrence network is built by connecting the pairs of pathways that are overrepresented in features of the model. Finally, we remove edges that cannot be distinguished from a null model of random associations based on the results of a permutation test. Case study: P. aeruginosa eADAGE models annotated with KEGG pathways We used PathCORE-T to create a network of co-occurring pathways out of the expression signatures extracted by eADAGE from a P. aeruginosa compendium [7]. For every feature, overlap correction was applied to the P. aeruginosa KEGG pathway annotations, and overlap-corrected annotations were used in the overrepresentation analysis. PathCORE-T aggregates multiple networks by taking the union of the edges across all networks and summing the weights of common pathway-pathway connections. We do this to emphasize the co-occurrence relationships that are more stable [34]—that is, the relationships that appear across multiple models. Finally, we removed edges in the aggregate network that were not significant after FDR correction when compared to the background distributions generated from 10,000 permutations of the network. Used in this way, the PathCORE-T software allowed for exploratory analysis of an existing well-validated model.
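The aggregation step described above (union of edges across per-model networks, with weights of shared edges summed) reduces to counter addition; a minimal sketch with hypothetical edge weights:

```python
from collections import Counter

def aggregate_networks(per_model_edges):
    """Union of edges across models; weights of edges appearing in
    several models are summed, emphasizing stable relationships."""
    combined = Counter()
    for edges in per_model_edges:
        combined.update(edges)   # Counter.update adds counts
    return combined

# Two toy per-model networks sharing the ("A", "B") edge
agg = aggregate_networks([{("A", "B"): 2},
                          {("A", "B"): 3, ("B", "C"): 1}])
```

Edges seen in several models accumulate weight, so the subsequent permutation test favors relationships that recur across models.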
The eADAGE co-occurrence network that resulted from our exploratory analysis contained a number of pathway-pathway relationships that have been previously characterized by other means (Fig. 5). Three glucose catabolism processes co-occur in the network: glycolysis, pentose phosphate, and the Entner-Doudoroff pathway (Fig. 5a). We also found a cluster relating organophosphate and inorganic phosphate transport- and metabolism-related processes (Fig. 5b). Notably, phosphate uptake and acquisition genes were directly connected to the hxc genes that encode a type II secretion system. Studies in P. aeruginosa clinical isolates demonstrated that the Hxc secretion system was responsible for the secretion of alkaline phosphatases, which are phosphate scavenging enzymes [35, 36], and of the phosphate binding DING protein [37]. Furthermore, alkaline phosphatases, DING, and the hxc genes are regulated by the transcription factor PhoB, which is most active in response to phosphate limitation. The identification of this relationship by PathCORE-T as a global pattern suggested that the link between type II secretion and phosphate limitation seen in a limited number of isolates may be generalizable to broader P. aeruginosa biology. As shown in Fig. 5c, we also identified linkages between two pathways involved in the catabolism of sulfur-containing molecules, taurine and methionine, and the general sulfur metabolism process. Other connections between pathways involved in the transport of iron (ferrienterobactin binding) [38] and zinc (the znu uptake system [39]) were identified (Fig. 5d). Interestingly, genes identified in the edge between the zinc transport and MacAB-TolC pathways included the pvd genes involved in pyoverdine biosynthesis and regulation, a putative periplasmic metal binding protein, as well as other components of an ABC transporter (genes PA2407, PA2408, and PA2409 at https://goo.gl/bfqOk8) [40]. PathCORE-T suggested a relationship between zinc and iron pathways in P.
aeruginosa transcriptional data, though such a relationship has not yet been described. Structural analysis of the iron-responsive regulator Fur found that it also productively binds zinc in E. coli and Bacillus subtilis, providing a mechanism by which these pathways may be linked [41, 42]. eADAGE features constructed from publicly available P. aeruginosa expression data describe known KEGG pathway-pathway relationships. a The glycerolipid metabolism, Entner-Doudoroff, glycolysis/gluconeogenesis, and pentose phosphate pathways share common functions related to glucose catabolism. b Organophosphate and inorganic phosphate transport- and metabolism-related processes frequently co-occur with bacterial secretion systems. Here, we observe pairwise relationships between type II general secretion and phosphate-related processes. c Pathways involved in the catabolism of sulfur-containing molecules—taurine (NitT/TauT family transport) and methionine (D-Methionine transport), and the general sulfur metabolism process—are functionally linked. d The zinc transport, iron complex transport, and MacAB-TolC transporter systems are pairwise connected. The fully labeled network can be viewed at https://pathcore-demo.herokuapp.com/PAO1. The list of KEGG pathway-pathway relationships visualized in the network is available at the specified link (Ctrl + L for list view) and as Additional file 2 for this paper. The network constructed using the PathCORE-T framework had 203 edges between 89 pathways. For comparison, we constructed a KEGG pathway-pathway network where edges were drawn between pathways with significant gene overlap (FDR-corrected hypergeometric test < 0.05). The overlap-based network had 406 edges between 158 pathways. Only 35 of the edges in the PathCORE-T network were between pathways that shared genes, with an average Jaccard Index of 0.035.
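The Jaccard index used here to quantify gene sharing between a pathway pair is the size of the intersection of their gene sets over the size of their union. A minimal sketch with hypothetical gene sets, mirroring the subset relationship in Fig. 3a:

```python
def jaccard_index(genes_a, genes_b):
    """Jaccard index of two pathways' gene sets: |A & B| / |A | B|."""
    return len(genes_a & genes_b) / len(genes_a | genes_b)

# Hypothetical annotations: the transport system's genes are a strict
# subset of the broader metabolism pathway's genes.
transport = {"g1", "g2", "g3", "g4"}
metabolism = {"g1", "g2", "g3", "g4", "g5", "g6", "g7", "g8"}
```

Values near 0, as reported above for the PathCORE-T network's edges, indicate that co-occurring pathway pairs share few of their annotated genes.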
The network constructed using PathCORE-T (with overlap-correction applied by default) captured pathway co-occurrences not driven by shared genes between pathways. Case study: TCGA's pan-cancer compendium analyzed by NMF with PID pathways PathCORE-T is not specific to a certain dataset, organism, or feature construction method. We constructed a 300-feature NMF model of TCGA pan-cancer gene expression data, which comprises 33 different cancer types from various organ sites, and applied the PathCORE-T software to those features. We chose NMF because it has been used in previous studies to identify biologically relevant patterns in transcriptomic data [12] and by many studies to derive molecular subtypes [43,44,45]. The 300 NMF features were analyzed using overlap-corrected PID pathways, a collection of 196 human cell signaling pathways with a particular focus on processes relevant to cancer [11]. PathCORE-T detected modules of co-occurring pathways that were consistent with our current understanding of cancer-related interactions (Fig. 6). Because cancer-relevant pathways were used, it was not surprising that cancer-relevant pathways appeared. However, the edges between those pathways were also encouraging. For example, a module composed of a FoxM1 transcription factor network, an E2F transcription factor network, Aurora B kinase signaling, ATR signaling, PLK1 signaling, and members of the Fanconi anemia DNA damage response pathway was densely connected (Fig. 6a). The connections in this module recapitulated well-known cancer hallmarks, including cellular proliferation pathways and markers of genome instability, such as the activation of DNA damage response pathways [46]. We found that several pairwise pathway co-occurrences corresponded with previously reported pathway-pathway interactions [47,48,49].
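NMF factors the nonnegative expression matrix into feature weights over samples and over genes; as a hedged, minimal sketch (not the implementation used for the 300-feature model, where a library routine would be preferable), Lee-Seung multiplicative updates illustrate the decomposition:

```python
import numpy as np

def nmf(X, k, n_iter=500, seed=0):
    """Minimal NMF: X (samples x genes) ~ W @ H with nonnegative
    factors, via Lee-Seung multiplicative updates. Each of the k rows
    of H plays the role of one constructed feature; a gene signature
    would be drawn from a row's highest-weight genes."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    for _ in range(n_iter):
        # Multiplicative updates keep W and H elementwise nonnegative
        H *= (W.T @ X) / (W.T @ W @ H + 1e-9)
        W *= (X @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Toy check on a hypothetical nonnegative rank-2 matrix
rng = np.random.default_rng(1)
X = rng.random((6, 2)) @ rng.random((2, 8))
W, H = nmf(X, k=2)
```

In the PathCORE-T workflow, the per-gene weights in each row of H would then define that feature's gene signature for the overrepresentation test.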
We also observed a hub of pathways interacting with Wnt signaling, among them the regulation of nuclear Beta-catenin signaling, FGF signaling, and BMP signaling (Fig. 6b). The Wnt and BMP pathways are functionally integrated in biological processes that contribute to cancer progression [50]. Additionally, Wnt/Beta-catenin signaling is a well-studied regulatory system, and the effects of mutations in Wnt pathway components on this system have been linked to tumorigenesis [51]. Wnt/Beta-catenin and FGF together influence the directional migration of cancer cell clusters [52]. PID pathway-pathway relationships discovered in NMF features constructed from the TCGA pan-cancer gene expression dataset. a Pathways in this module are responsible for cell cycle progression. b Wnt signaling interactions with nuclear Beta-catenin signaling, FGF signaling, and BMP signaling have all been linked to cancer progression. c Here, we observe functional links between pathways responsible for angiogenesis and those responsible for cell proliferation. d The VEGF-VEGFR pathway interacts with the S1P3 pathway through Beta3 integrins. e This module contains many relationships related to immune system processes. The interaction cycle formed by T-Cell Receptor (TCR) signaling in naïve CD4+ T cells and IL-12/IL-4 mediated signaling events, outlined in yellow, is one well-known example. The cycle in blue is formed by the ATF2, NFAT, and AP1 pathways; pairwise co-occurrence of these three transcription factor networks may suggest that dysregulation of any one of these pathways can trigger variable oncogenic processes in the immune system. The list of PID pathway-pathway relationships visualized in the network is available as Additional file 3 for this paper. Two modules in the network relate to angiogenesis, or the formation of new blood vessels (Fig. 6c, d).
Tumors transmit signals that stimulate angiogenesis because a blood supply provides the necessary oxygen and nutrients for their growth and proliferation. One module relates angiogenesis factors to cell proliferation. This module connected the following pathways: PDGFR-beta signaling, FAK-mediated signaling events, VEGFR1 and VEGFR2-mediated signaling events, nuclear SMAD2/3 signaling regulation, and RB1 regulation (Fig. 6c). These functional connections are known to be involved in tumor proliferation [53,54,55]. The other module indicated a relationship by which the VEGF pathway interacts with the S1P3 pathway through Beta3 integrins (Fig. 6d). S1P3 is a known regulator of angiogenesis [56], and has been demonstrated to be associated with treatment-resistant breast cancer and poor survival [57]. Moreover, this interaction module has been observed to promote endothelial cell migration in human umbilical veins [58]. Taken together, this independent module may suggest a distinct angiogenesis process activated in more aggressive and metastatic tumors that is disrupted and regulated by alternative mechanisms [59]. Finally, PathCORE-T revealed a large, densely connected module of immune related pathways (Fig. 6e). We found that this module contains many co-occurrence relationships that align with immune system processes. One such example is the well characterized interaction cycle formed by T-Cell Receptor (TCR) signaling in naïve CD4+ T cells and IL-12/IL-4 mediated signaling events [60,61,62]. At the same time, PathCORE-T identifies additional immune-related relationships. We observed a cycle between the three transcription factor networks: ATF-2, AP-1, and CaN-regulated NFAT-dependent transcription. These pathways can take on different, often opposing, functions depending on the tissue and subcellular context. For example, ATF-2 can be an oncogene in one context (e.g. melanoma) and a tumor suppressor in another (e.g. breast cancer) [63]. 
AP-1, composed of Jun/Fos proteins, is associated with both tumorigenesis and tumor suppression due to its roles in cell survival, proliferation, and cell death [64]. Moreover, NFAT in complex with AP-1 regulates immune cell differentiation, but dysregulation of NFAT signaling can lead to malignant growth and tumor metastasis [65]. The functional association observed among ATF-2, AP-1, and NFAT within the immunity module might suggest that dysregulation within this cycle has profound consequences for immune cell processes and may trigger variable oncogenic processes. Just as we did for the eADAGE-based P. aeruginosa KEGG pathways case study, we constructed a network in which edges were drawn between PID pathways with significant gene overlap. The network constructed using PathCORE-T and NMF features had 119 edges between 57 pathways. The overlap-based network was much denser: it had 3826 edges between 196 pathways. This suggested a high degree of overlap between PID pathways. For the PathCORE-T NMF co-occurrence network, 96 of the edges were between pathways that shared genes. However, the average Jaccard Index for these pathway pairs remained low, at 0.058. Unsupervised analyses of genome-scale datasets that summarize key patterns in the data have the potential to improve our understanding of how a biological system operates via complex interactions between molecular processes. Feature construction algorithms capture coordinated changes in the expression of many genes as features. The genes that contribute most to each feature co-vary. However, interpreting the features generated by unsupervised approaches remains challenging. PathCORE-T creates a network of globally co-occurring pathways based on features created from the analysis of a compendium of gene expression data. Networks modeling the relationships between curated processes in a biological system offer a means for developing new hypotheses about which pathways influence each other and when.
Our framework provides a data-driven characterization of the biological system at the pathway level by identifying pairs of pathways that are overrepresented across many features. PathCORE-T connects the features extracted from data to curated resources. It is important to note that PathCORE-T will only be able to identify pathways that occur in features of the underlying model, which means these pathways must be transcriptionally perturbed in at least some subset of the compendium. Models should be evaluated before analysis with PathCORE-T. The network resulting from PathCORE-T can help to identify global pathway-pathway relationships—a baseline network—that complements existing work to identify interactions between pathways in the context of a specific disease. The specific niche that the PathCORE-T framework aims to fill is in revealing to researchers which gene sets are most closely related to each other in machine learning-based models of gene expression, which genes play a role in this co-occurrence, and which conditions drive this relationship. Availability and requirements Project name: PathCORE-T Project home page: https://pathcore-demo.herokuapp.com Archived version: https://github.com/greenelab/PathCORE-T-analysis/releases/tag/v1.2.0 (links to download .zip and .tar.gz files are provided here) Operating system: Platform independent Other requirements: Python 3 or higher License: BSD 3-clause Greene CS, Foster JA, Stanton BA, Hogan DA, Bromberg Y. Computational approaches to study microbes and microbiomes. Pac Symp Biocomput. 2016;21:557. Tatlow PJ, Piccolo SR. A cloud-based workflow to quantify transcript-expression levels in public cancer compendia. Sci Rep. 2016;6 Barrett T, Wilhite SE, Ledoux P, Evangelista C, Kim IF, Tomashevsky M, et al. NCBI GEO: archive for functional genomics data sets—update. Nucleic Acids Res. 2012;41:D991–5. Kolesnikov N, Hastings E, Keays M, Melnichuk O, Tang YA, Williams E, et al.
ArrayExpress update—simplifying data submissions. Nucleic Acids Res. 2014:gku1057. Tan J, Hammond JH, Hogan DA, Greene CS. Adage-based integration of publicly available Pseudomonas aeruginosa gene expression data with denoising autoencoders illuminates microbe-host interactions. mSystems. 2016;1:25. Stein-O'Brien G, Carey J, Lee W, Considine M, Favorov A, Flam E, et al. PatternMarkers and Genome-Wide CoGAPS Analysis in Parallel Sets (GWCoGAPS) for data-driven detection of novel biomarkers via whole transcriptome Non-negative matrix factorization (NMF). bioRxiv. 2016:083717. Tan J, Doing G, Lewis KA, Price CE, Chen KM, Cady KC, et al. Unsupervised extraction of stable expression signatures from public compendia with an Ensemble of Neural Networks. Cell Syst. 2017;5:63–71.e6. Engreitz JM, Daigle BJ, Marshall JJ, Altman RB. Independent component analysis: mining microarray data for fundamental human gene expression modules. J Biomed Inform. 2010;43:932–44. Kanehisa M, Goto S. KEGG: Kyoto encyclopedia of genes and genomes. Nucleic Acids Res. 2000;28:27–30. Donato M, Xu Z, Tomoiaga A, Granneman JG, MacKenzie RG, Bao R, et al. Analysis and correction of crosstalk effects in pathway analysis. Genome Res. 2013;23:1885–93. Schaefer CF, Anthony K, Krupa S, Buchoff J, Day M, Hannay T, et al. PID: the pathway interaction database. Nucleic Acids Res. 2009;37:D679. Brunet J-P, Tamayo P, Golub TR, Mesirov JP. Metagenes and molecular pattern discovery using matrix factorization. Proc Natl Acad Sci. 2004;101:4164–9. Kim PM, Tidor B. Subsystem identification through dimensionality reduction of large-scale gene expression data. Genome Res. 2003;13:1706–18. Weinstein JN, Collisson EA, Mills GB, Shaw KRM, Ozenberger BA, Ellrott K, et al. The Cancer genome atlas Pan-Cancer analysis project. Nat Genet. 2013;45:1113–20. Visakh R, Nazeer KA. Identifying epigenetically dysregulated pathways from pathway–pathway interaction networks. Comput Biol Med. 2016;76:160–7. 
Yang J-B, Luo R, Yan Y, Chen Y. Differential pathway network analysis used to identify key pathways associated with pediatric pneumonia. Microb Pathog. 2016;101:50–5. Pham L, Christadore L, Schaus S, Kolaczyk ED. Network-based prediction for sources of transcriptional dysregulation using latent pathway identification analysis. Proc Natl Acad Sci. 2011;108:13347–52. Ashburner M, Ball CA, Blake JA, Botstein D, Butler H, Cherry JM, et al. Gene ontology: tool for the unification of biology. Nat Genet. 2000;25:25–9. Pan Q, Hu T, Andrew AS, Karagas MR, Moore JH. Bladder cancer specific pathway interaction networks. ECAL Citeseer. 2013:94–101. Li Y, Agarwal P, Rajagopalan D. A global pathway crosstalk network. Bioinformatics. 2008;24:1442–7. de Anda-Jáuregui G, Mejía-Pedroza RA, Espinal-Enríquez J, Hernández-Lemus E. Crosstalk events in the estrogen signaling pathway may affect tamoxifen efficacy in breast cancer molecular subtypes. Comput Biol Chem. 2015;59:42–54. Pita-Juárez Y, Altschuler G, Kariotis S, Wei W, Koler K, Green C, et al. The pathway coexpression network: revealing pathway relationships. PLoS Comput Biol. 2018;14:e1006042. Bell L, Chowdhary R, Liu JS, Niu X, Zhang J. Integrated bio-entity network: a system for biological knowledge discovery. PLoS One. 2011;6:e21474. Chowbina SR, Wu X, Zhang F, Li PM, Pandey R, Kasamsetty HN, et al. HPD: an online integrated human pathway database enabling systems biology studies. BMC Bioinformatics. 2009;10:S5. Wu X, Chen JY. Molecular interaction networks: topological and functional characterizations. Autom Proteomics Genomics Eng Case-Based Approach. 2009;145 Glass K, Girvan M. Finding new order in biological functions from the network structure of gene annotations. PLoS Comput Biol. 2015;11:e1004565. Abdi H, Williams LJ. Principal component analysis. Wiley Interdiscip Rev Comput Stat. 2010;2:433–59. Stone JV. Independent component analysis. Wiley Online Library. 2004; Upton GJG. Fisher's exact test. J Royal Stat Soc.
1992;155:395–402. Benjamini Y, Hochberg Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J Royal Stat Soc. 1995;57:289–300. van der Walt S, Colbert SC, Varoquaux G. The NumPy array: a structure for efficient numerical computation. Comput Sci Eng. 2011;13:22–30. Bostock M. D3.js: Data-Driven Documents. 2012:492. Stein-O'Brien GL, Arora R, Culhane AC, Favorov A, Greene C, Goff LA, et al. Enter the matrix: Interpreting unsupervised feature learning with matrix decomposition to discover hidden knowledge in high-throughput omics data. bioRxiv. 2017:196915. Yu B. Stability. Bernoulli. 2013;19:1484–500. Liu X, Long D, You H, Yang D, Zhou S, Zhang S, et al. Phosphatidylcholine affects the secretion of the alkaline phosphatase PhoA in Pseudomonas strains. Microbiol Res. 2016;192:21–9. Ball G, Durand É, Lazdunski A, Filloux A. A novel type II secretion system in Pseudomonas aeruginosa. Mol Microbiol. 2002;43:475–85. Ball G, Viarre V, Garvis S, Voulhoux R, Filloux A. Type II-dependent secretion of a Pseudomonas aeruginosa DING protein. Res Microbiol. 2012;163:457–69. Stephens DL, Choe MD, Earhart CF. Escherichia coli periplasmic protein FepB binds ferrienterobactin. Microbiology. 1995;141:1647–54. Ellison ML, Farrow JM III, Parrish W, Danell AS, Pesci EC. The transcriptional regulator Np20 is the zinc uptake regulator in Pseudomonas aeruginosa. PLoS One. 2013;8:e75389. Winsor GL, Lam DK, Fleming L, Lo R, Whiteside MD, Nancy YY, et al. Pseudomonas genome database: improved comparative analysis and population genomics capability for Pseudomonas genomes. Nucleic Acids Res. 2010:gkq869. Althaus EW, Outten CE, Olson KE, Cao H, O'Halloran TV. The ferric uptake regulation (Fur) repressor is a zinc metalloprotein. Biochemistry. 1999;38:6559–69. Ma Z, Faulkner MJ, Helmann JD. Origins of specificity and cross-talk in metal ion sensing by Bacillus subtilis Fur. Mol Microbiol. 2012;86:1144–55. Bailey P, Chang DK, Nones K, Johns AL, Patch A-M, Gingras M-C, et al.
Genomic analyses identify molecular subtypes of pancreatic cancer. Nature. 2016;531:47–52. Cancer Genome Atlas Research Network. Integrated genomic analyses of ovarian carcinoma. Nature. 2011;474:609–15. Yuan Y, Van Allen EM, Omberg L, Wagle N, Amin-Mansour A, Sokolov A, et al. Assessing the clinical utility of cancer genomic and proteomic data across tumor types. Nat Biotechnol. 2014;32:644–52. Hanahan D, Weinberg RA. Hallmarks of cancer: the next generation. Cell. 2011;144:646–74. Moldovan G-L, D'Andrea AD. How the Fanconi Anemia pathway guards the genome. Annu Rev Genet. 2009;43:223–49. Sadasivam S, DeCaprio JA. The DREAM complex: master coordinator of cell cycle-dependent gene expression. Nat Rev Cancer. 2013;13:585–95. Tomida J, Itaya A, Shigechi T, Unno J, Uchida E, Ikura M, et al. A novel interplay between the Fanconi Anemia core complex and ATR-ATRIP kinase during DNA cross-link repair. Nucleic Acids Res. 2013;41:6930–41. Itasaki N, Hoppler S. Crosstalk between Wnt and bone morphogenic protein signaling: a turbulent relationship. Dev Dyn. 2010;239:16–33. MacDonald BT, Tamai K, He X. Wnt/β-catenin signaling: components, mechanisms, and diseases. Dev Cell. 2009;17:9–26. Aman A, Piotrowski T. Wnt/β-catenin and Fgf signaling control collective cell migration by restricting chemokine receptor expression. Dev Cell. 2008;15:749–61. Petersen M, Pardali E, van der Horst G, Cheung H, van den Hoogen C, van der Pluijm G, et al. Smad2 and Smad3 have opposing roles in breast cancer bone metastasis by differentially affecting tumor angiogenesis. Oncogene. 2010;29:1351–61. Yoon H, Dehart JP, Murphy JM, Lim S-T. Understanding the roles of FAK in cancer: inhibitors, genetic models, and new insights. J Histochem Cytochem. 2015;63:114–28. Chinnam M, Goodrich DW. RB1, development, and cancer. Curr Top Dev Biol. 2011;94:129. Takuwa Y, Du W, Qi X, Okamoto Y, Takuwa N, Yoshioka K. Roles of sphingosine-1-phosphate signaling in angiogenesis. World J Biol Chem. 2010;1:298–306.
Watson C, Long JS, Orange C, Tannahill CL, Mallon E, McGlynn LM, et al. High expression of sphingosine 1-phosphate receptors, S1P1 and S1P3, sphingosine kinase 1, and extracellular signal-regulated kinase-1/2 is associated with development of tamoxifen resistance in estrogen receptor-positive breast cancer patients. Am J Pathol. 2010;177:2205–15. Paik JH, Chae S, Lee M-J, Thangada S, Hla T. Sphingosine 1-phosphate-induced endothelial cell migration requires the expression of EDG-1 and EDG-3 receptors and Rho-dependent activation of αvβ3- and β1-containing integrins. J Biol Chem. 2001;276:11830–7. Serini G, Valdembri D, Bussolino F. Integrins and angiogenesis: a sticky business. Exp Cell Res. 2006;312:651–8. Brogdon JL, Leitenberg D, Bottomly K. The potency of TCR signaling differentially regulates NFATc/p activity and early IL-4 transcription in naive CD4 T cells. J Immunol. 2002;168:3825–32. Hsieh C-S, Macatonia SE, Tripp CS, Wolf SF, O'Garra A, Murphy KM. Development of TH1 CD4 T cells through IL-12. Science. 1993;260:547. Vacaflores A, Chapman NM, Harty JT, Richer MJ, Houtman JC. Exposure of human CD4 T cells to IL-12 results in enhanced TCR-induced cytokine production, altered TCR signaling, and increased oxidative metabolism. PLoS One. 2016;11:e0157175. Lau E, Ronai ZA. ATF2 – at the crossroad of nuclear and cytosolic functions. J Cell Sci. 2012;125:2815–24. Shaulian E, Karin M. AP-1 as a regulator of cell life and death. Nat Cell Biol. 2002;4:E136. Müller MR, Rao A. NFAT, immunity and cancer: a transcription factor comes of age. Nat Rev Immunol. 2010;10:645–56. McKinney W. Data structures for statistical computing in Python. In: van der Walt S, Millman J, editors. Proc 9th Python Sci Conf; 2010. p. 51–6. Jones E, Oliphant T, Peterson P. SciPy: open source scientific tools for Python. 2014. Seabold S, Perktold J. Statsmodels: econometric and statistical modeling with Python. Proc 9th Python Sci Conf. 2010:61.
The authors would also like to thank Daniel Himmelstein, Dongbo Hu, Kurt Wheeler, and René Zelaya for helping to review the source code. KMC was funded by an undergraduate research grant from the Penn Institute for Biomedical Informatics. CSG was funded in part by a grant from the Gordon and Betty Moore Foundation (GBMF 4552). JT, GD, DAH, and CSG were funded in part by a pilot grant from the Cystic Fibrosis Foundation (STANTO15R0). DAH was funded in part by R01-AI091702. Files for each of the PathCORE-T networks described in the results are provided in Additional files 2 and 3. P. aeruginosa eADAGE models: https://doi.org/10.5281/zenodo.583172. P. aeruginosa gene expression compendium: https://doi.org/10.5281/zenodo.583694. The normalized gene expression compendium provided in this Zenodo record contains datasets on the GPL84 platform from ArrayExpress as of 31 July 2015. It includes 1051 samples grown in 78 distinct medium conditions, 128 distinct strains and isolates, and dozens of different environmental parameters [7]. Tan et al. compiled this dataset and used it to construct the 10 eADAGE models from which we generate the eADAGE-based P. aeruginosa KEGG network [7]. We use this same data compendium to generate the NMF P. aeruginosa model. The script used to download and process those datasets into the compendium is available at https://goo.gl/YjOEQl. TCGA pan-cancer dataset: https://doi.org/10.5281/zenodo.56735. The pan-cancer dataset was compiled using data from all 33 TCGA cohorts. It was generated by the TCGA Research Network: http://cancergenome.nih.gov/ [14]. RNA-seq data was downloaded on 8 March 2016 from UCSC Xena (https://xenabrowser.net/datapages/). Gene expression was measured using the Illumina HiSeq technology. More information and the latest version of the dataset can be found at https://xenabrowser.net/datapages/?dataset=TCGA.PANCAN.sampleMap/HiSeqV2&host=https://tcga.xenahubs.net.
PathCORE-T analysis: (https://github.com/greenelab/PathCORE-T-analysis/tree/v1.2.0) This repository contains all the scripts to reproduce the analyses described in this paper. The Python scripts here should be used as a starting point for new PathCORE-T analyses. Instructions for setting up a web application for a user's specific PathCORE-T analysis are provided in this repository's README. Overlap correction: (https://github.com/kathyxchen/crosstalk-correction/tree/v1.0.4) Donato et al.'s procedure for overlap correction [10] is a pip-installable Python package 'crosstalk-correction' that is separate from, but listed as a dependency in, PathCORE-T. It is implemented using NumPy [31]. PathCORE-T methods: (https://github.com/greenelab/PathCORE-T/tree/v1.0.2) The methods included in the PathCORE-T analysis workflow (Fig. 4c) are provided as a pip-installable Python package 'pathcore'. It is implemented using Pandas [66], SciPy (specifically, scipy.stats) [67], StatsModels [68], and the crosstalk-correction package. PathCORE-T demo application: (https://github.com/kathyxchen/PathCORE-T-demo/tree/v2.1.0) The project home page, https://pathcore-demo.herokuapp.com, provides links to: the web application for the eADAGE-based KEGG P. aeruginosa network described in the first case study; a view of the NMF-based PID pathway co-occurrence network described in the second case study; and a quick view page where users can temporarily load and visualize their own network file (generated from the PathCORE-T analysis). Department of Systems Pharmacology and Translational Therapeutics, Perelman School of Medicine, University of Pennsylvania, 3400 Civic Center Blvd., Philadelphia, PA, 19104, USA: Kathleen M. Chen, Gregory P. Way & Casey S. Greene. Department of Molecular and Systems Biology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03755, USA: Jie Tan. Department of Microbiology and Immunology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03755, USA: Georgia Doing & Deborah A.
Hogan. KMC implemented the software, performed the analyses, and drafted the manuscript. JT and GPW contributed computational reagents. KMC, DAH, and CSG designed the project. KMC, JT, GPW, GD, DAH, and CSG interpreted the results. JT, GPW, GD, DAH, and CSG provided critical feedback and revisions on the manuscript. JT and GPW reviewed source code. All authors read and approved the final manuscript. Correspondence to Casey S. Greene. Figure S1. The weight distributions of genes to constructed features for NMF models of various sizes. (EPS 494 kb) PathCORE-T network constructed from a Pseudomonas aeruginosa compendium using 10 eADAGE models analyzed with KEGG pathway annotations. (TSV 16 kb) PathCORE-T network constructed from TCGA data using NMF models analyzed with PID pathway annotations. (TSV 5 kb) Chen, K.M., Tan, J., Way, G.P. et al. PathCORE-T: identifying and visualizing globally co-occurring pathways in large transcriptomic compendia. BioData Mining 11, 14 (2018). doi:10.1186/s13040-018-0175-7
Label-free third harmonic generation imaging and quantification of lipid droplets in live filamentous fungi

Tanja Pajić1, Nataša V. Todorović2, Miroslav Živić1, Stanko N. Nikolić3, Mihailo D. Rabasović3, Andrew H. A. Clayton4 & Aleksandar J. Krmpot3 (ORCID: orcid.org/0000-0003-2751-7395)

We report the utilization of Third-Harmonic Generation microscopy for label-free live cell imaging of lipid droplets in the hyphae of the filamentous fungus Phycomyces blakesleeanus.
THG microscopy images showed bright spherical features dispersed throughout the hypha cytoplasm in control conditions and a transient increase in the number of bright features after complete nitrogen starvation. Colocalization analysis of THG and lipid-counterstained images revealed that the cytoplasmic particles were lipid droplets. Particle Size Analysis (PSA) and Image Correlation Spectroscopy (ICS) were used to quantify the number density and size of lipid droplets. Both analysis methods revealed an increase from 16 × 10⁻³ to 23 × 10⁻³ lipid droplets/µm² after nitrogen starvation and a decrease in the average size of the droplets (range: 0.5–0.8 µm diameter). In conclusion, THG imaging, followed by PSA and ICS, can be reliably used for filamentous fungi for the in vivo quantification of lipid droplets without the need for labeling and/or fixation. In addition, it has been demonstrated that ICS is suitable for THG microscopy. Third harmonic generation (THG) microscopy, as a label-free nonlinear imaging technique, is a powerful tool for visualization of various cell and tissue structures1. THG has been mainly applied to imaging animal cell structures1,2,3,4,5,6,7 and tissues1,4,6,8,9,10,11,12,13,14, as well as the dynamics of cellular processes (functional imaging)1,6,12,15. Also, THG microscopy has been used to study human and fossil vertebrate teeth16, 3D engineered human adipose tissue17, and small organisms (Drosophila melanogaster, zebrafish, Xenopus laevis, early mouse embryos8,18,19,20 and C. elegans21,22). In addition to animal specimens, THG microscopy has also been applied to plants11,23,24,25,26,27, algae26,27 and yeast2,28. To the best of our knowledge, there is a paucity of THG studies on filamentous fungi. The THG phenomenon is a nonlinear coherent scattering process induced by structures with specific properties. In THG, the combined energy of three photons is converted into a single photon at one-third the excitation wavelength.
As THG is a third-order process, ultra-short laser pulses with high peak power densities at the optical focus are required to ensure a sufficient signal. Contrast in THG microscopy is generated at interfaces where there is a large change in refractive index or third-order nonlinear susceptibility29,30. Due to the higher index of refraction of lipids (R.I.(lipids) = 1.46–1.48 at 1100–480 nm)31 with respect to the cytoplasm (R.I. = 1.360–1.390 at 633 nm)32, the THG signal is efficiently produced at the interface between the aqueous phase and lipid-rich structures33,34,35. These include cellular membranes and lipid droplets (LDs). Lipid droplets are dynamic cellular organelles which play a key role in lipid and energy homeostasis in eukaryotic cells. Studies of lipid droplet physiology in fungi are still in their infancy, but their quantitation has relevance to issues in biomedicine, agriculture, industrial waste and the energy crisis. As mentioned above, THG microscopy is a particularly suitable technique for lipid droplet physiology studies11,35,36. The advantages of THG microscopy are that it is non-invasive, produces inherently confocal images, requires no fixation or external labelling (similar to Raman-based37,38,39,40,41,42, differential interference contrast (DIC)43 and light sheet microscopy37), and is minimally phototoxic, allowing for in vivo studies. A point of difference between Raman-based techniques and THG microscopy is the simpler excitation scheme and minimal risk of aberration artefacts in THG microscopy. Combining THG with fluorescence microscopy is useful to identify the molecular source of the THG-generated signals (i.e. lipophilic fluorescent dyes to target LDs)4,7,36. Once THG-associated structures are identified they can be followed using THG microscopy in situ. Quantitation of images containing LDs can be challenging. The desired parameters include LD number, density, size and morphology.
Readily available image analysis software and programming languages for this purpose include ImageJ, Cell Profiler, Imaris, AMIRA, Volocity, MATLAB and D programming, for both fluorescent images44,45,46,47 and lipid droplet images taken by label-free11,17,20 techniques. Automated quantitation of lipid droplets uses either thresholding of the images (threshold-based) or watershed methods (morphology-based)48, and is usually optimized for a specific cell line. It would be desirable to have a more general image analysis platform that does not require extensive cell-line-specific thresholding. In this regard, Image Correlation Spectroscopy (ICS) is a promising method because it is based on measuring spatially correlated fluctuations. ICS has been applied to confocal images, where it measures the spatial variation of fluorescence intensity fluctuations, which can be further related to particle density and aggregation state49. On the other hand, ICS has rarely been used for nonlinear techniques, only for two-photon excitation fluorescence (TPEF)50 or, recently, for second harmonic generation imaging (SHG)51. The filamentous fungi52 are ubiquitous organisms that contribute profoundly to a wide range of ecosystem processes, including decomposition of organic carbon, carbon storage and nutrient transfer. As an invisible and often overlooked part of the carbon cycle, filamentous fungi as saprophytes and plant symbionts (mycorrhizal fungi) create a sink for plant organic carbon and distribute it to below-ground hyphal biomass53. The oleaginous filamentous fungi have the ability to accumulate large amounts of carbon in the form of lipids, more than 20% of their biomass54,55, under appropriate conditions. These lipids are considered to be a valuable alternative resource for various biotechnological applications (biodiesel production, high-value chemicals, food/feed additives, and efficient bioremediation of wastewaters)56,57 in a bio-based economy.
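The ICS principle invoked above, that the amplitude of the normalized spatial intensity autocorrelation is inversely proportional to the mean number of particles per beam area, can be illustrated with a short simulation. This is a minimal sketch under simplifying assumptions (identical Gaussian particles, periodic boundaries, no detector noise), not the implementation used in any of the cited studies; all parameter values are hypothetical.

```python
import numpy as np

# ICS sketch on a synthetic image: for randomly placed identical particles
# with intensity profile exp(-2 r^2 / w^2), the zero-lag autocorrelation
# amplitude g(0,0) = var(I)/mean(I)^2 equals 1/(density * pi * w^2).
rng = np.random.default_rng(0)
L, n_particles, w = 256, 200, 4.0   # image size (px), particle count, beam radius (px)

# Delta map of particle positions (periodic boundaries keep the statistics clean)
delta = np.zeros((L, L))
np.add.at(delta, (rng.integers(0, L, n_particles), rng.integers(0, L, n_particles)), 1.0)

# Apply the Gaussian "PSF" by periodic FFT convolution
d = np.arange(L)
d = np.minimum(d, L - d)            # wrap-around distance along one axis
psf = np.exp(-2.0 * (d[:, None] ** 2 + d[None, :] ** 2) / w ** 2)
img = np.real(np.fft.ifft2(np.fft.fft2(delta) * np.fft.fft2(psf)))

# Zero-lag ICS amplitude and the recovered particle density
g00 = img.var() / img.mean() ** 2
density_est = 1.0 / (g00 * np.pi * w ** 2)
density_true = n_particles / L ** 2
print(density_true, round(density_est, 5))
```

On such a synthetic image the recovered density agrees with the ground truth to within sampling noise, which is the property that makes ICS attractive as a threshold-free counting method.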
Additionally, the lipid accumulations have been implicated in the resistance of fungi to toxins58 and the virulence of pathogenic fungi59. Moreover, yeast cells modified to lack lipid droplets entirely are extremely vulnerable to a variety of stresses60, altogether demonstrating that LD studies could potentially lead to novel antifungal treatments. For the THG imaging study of LDs we have chosen the well-known model species Phycomyces blakesleeanus, an oleaginous fungus from the order Mucorales with very rapid growth (from the spores, through the exponential growth phase, to the stationary phase in under 36 h). The challenge of utilizing THG imaging for filamentous fungi is that their LDs are of rather small dimensions (< 1.5 μm), unlike, e.g., white adipocyte cells, where LD dimensions can reach 100 μm61. Our aim is to show that THG microscopy is highly suited for imaging the density and size of LDs in live filamentous fungi. To this end, we will use filamentous fungi in the baseline control condition, with sporadic and small LDs, corresponding to low lipid content conditions62, and fungi with denser LDs brought about by the nitrogen starvation-induced autophagy response63, the conserved cellular mechanism of molecular recycling64. In addition to label-free THG imaging of LDs in fungi, we also present two methods for LD quantification and analysis. The first method is based on the particle analysis tool of the ImageJ/Fiji open source platform, which provides measurements of the size, shape and number of LDs. The second method, Image Correlation Spectroscopy (ICS)65, provides measurements of density and size of particles through spatial autocorrelation analysis. We also aim to show that ICS is a good method for quantification of LDs in THG images.

Filamentous fungus strain and growth conditions

A wild-type strain of the oleaginous Zygomycetous fungus Phycomyces blakesleeanus (Burgeff) [NRRL 1555(-)] was used as the model cell system in this study.
For optimal growth of the mycelium, spores at a concentration of the order of 10⁷ spores/ml were plated on 100 mm Petri dishes at 21–23 °C. Standard liquid minimal (SLM) medium for cultivation contained per liter: 20 g of D(+)-glucose (carbon source), 2 g of L-asparagine·H2O (nitrogen source), 5 g KH2PO4, 500 mg MgSO4·7H2O, and microelements/"trace stock" (28 mg CaCl2, 1 mg thiamine hydrochloride, 2 mg citric acid·H2O, 1.8 mg Fe(NO3)3·9H2O, 1 mg ZnSO4·7H2O, 300 µg MnSO4·H2O, 50 µg CuSO4·5H2O, and 50 µg Na2MoO4·2H2O). The glucose was autoclaved separately, and the final pH of the medium was 4.5. The osmolarity was about 200 mOsm. For the nitrogen starvation experiments, the fungi were first grown in the SLM medium and after 22 h were divided into two groups. Group 1 was the control group and group 2 was the nitrogen-starved group (N-starved). Fungi from control group 1 were collected by centrifugation (10 min) and resuspended in SLM medium. For group 2, fungal cells were centrifuged (10 min) and resuspended in nitrogen-free medium (SLM medium without L-asparagine) (Fig. 1). The age-matched fungal cultures were imaged at different time points after nitrogen starvation (3, 4.5, 6 h and > 6 h (up to 8.5 h)) at room temperature. All fungal cultures used for imaging were in the exponential growth phase (total time from seeding was in the range 24–30.5 h). Data collected from the 6–8.5 h time points were pooled and represented the group in prolonged nitrogen starvation (labelled 6 h on the graphs). The outline of the experimental design of nitrogen starvation. Hyphae cultures are grown in control conditions (Control culture) or in nitrogen-depleted medium (N-starved culture). Time points of sampling are marked as blue dots.

Lipid staining

Live fungal cells were stained without chemical fixation. To stain the fungal cells, hyphae in the exponential growth phase (26 h) were incubated with 40 ng/mL of Nile Red dye (Acros Organics) for 10 min at 20 °C.
Nonlinear laser scanning microscopy (NLSM) experimental setup and hyphae imaging

The images of live unstained fungal cells were obtained using a bespoke nonlinear laser-scanning microscope, previously described in references66,67 but modified for THG imaging (Fig. 2). For third harmonic generation (THG) imaging of hyphae the following experimental setup, based on a significantly modified Zeiss Jenaval upright microscope, was used. Infrared femtosecond pulses were provided by a SESAM mode-locked Yb:KGW laser (Time-Bandwidth Products AG, Time-Bandwidth Yb GLX; Zurich, Switzerland; wavelength 1040 nm, pulse duration 200 fs and repetition rate 83 MHz). The laser wavelength was chosen so that the THG signal, whose wavelength is three times shorter (347 nm), is still within the range of conventional UV optics in air. The laser light was first passed through a collimating 1:1 beam expander (L1 and L2) for divergence compensation, and then combined (at BC) with the Ti:Sa laser beam used for TPEF imaging. After that, both beams passed the motorized variable neutral density filter (VNDF) for power regulation and the mechanical shutter. The beams were raster scanned over the sample using two galvanometer mirrors (Cambridge Technologies, 6215H; Bedford, Massachusetts, USA), and a 1:3.75 beam expander (L3 and L4) was used to fill the back aperture of the objective lens and to achieve the 4f configuration. The beams were further directed onto the sample by a short-pass main dichroic mirror (MDM, cut-off at 700 nm) through a high numerical aperture (NA) oil immersion objective lens (Carl Zeiss, EC Plan-Neofluar 40X, NA 1.3). The THG signal was detected in the forward direction (transmission arm), parallel to the direction of laser propagation. First, the signal was collected by a high NA aspheric lens (condenser). Then, it was reflected by two dichroic mirrors (DM) that reflect 347 nm but transmit 1040 nm, to prevent the laser beam from reaching the detector.
Further on, the signal was separated from the remaining laser photons by a 275–375 nm bandpass filter (Thorlabs FGUV11M) and a Hoya glass UV filter (Newport FSR-U340) with a maximum transmission at 340 nm. The THG signal was detected using a photomultiplier tube (PMT) (Hamamatsu, H7422, Japan), after being focused by a 50 mm focal length lens (L6) onto the entrance window of the PMT. NLSM setup. Ti:Sa—laser for TPEF imaging, Yb:KGW—laser for TPEF and THG imaging, BC—beam combiner, L1 and L2—lenses of 1:1 beam expander for recollimation, VNDF—variable neutral density filter, GSM—galvanometer-scanning mirrors, L3 and L4—lenses of 1:3.75 beam expander for imaging, MDM—main dichroic mirror (cut-off 700 nm), Obj.—microscopic objective 40 × 1.3, Sam.—sample, Con.—aspheric condenser lens, DM—dichroic mirrors reflective for THG (347 nm) and transmissive for Yb laser (1040 nm), F1—Hoya glass UV filter, peak transmission 340 nm, F2—bandpass filter 275–375 nm, L6—focusing lens, THG PMT—photomultiplier tube for THG signal, TL—tube lens, BS/M—beam splitter or mirror toggle, Cam.—camera, F—VIS filter 400–700 nm for autofluorescence or VIS + 570 nm long pass for Nile Red fluorescence, L5—focusing lens, TPEF PMT—photomultiplier tube for TPEF signal, AD/DA—acquisition card. The scheme was created in Microsoft PowerPoint 2016 (https://www.microsoft.com/en-us/microsoft-365/powerpoint). For the (auto)TPEF imaging, a tunable (700–900 nm) Kerr-lens mode-locked Ti:Sa laser (Mira 900, Coherent Inc., CA, USA) was used, pumped by a CW (continuous-wave) frequency-doubled Nd:YVO4 laser at 532 nm (VERDI V10, Coherent Inc., CA, USA). The wavelength of the Ti:Sa laser was set to 730 nm for autoTPEF imaging since most of the endogenous fluorophores (NADH, flavins, etc.) can be excited at this wavelength68 on the one hand, and because of technical limitations (laser tunability range and dichroic mirror cut-off) on the other.
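The wavelength bookkeeping behind the detection chain above is simple arithmetic and can be made explicit: the third harmonic of the 1040 nm Yb:KGW fundamental lies at one-third of the wavelength, inside the 275–375 nm bandpass and close to the 340 nm transmission peak of the UV filter. This is only an illustrative check of the numbers quoted in the text.

```python
# Third-harmonic wavelength of the Yb:KGW fundamental, checked against the
# 275-375 nm detection bandpass described in the setup.
fundamental_nm = 1040.0
thg_nm = fundamental_nm / 3.0      # three photons combine into one photon
band_lo, band_hi = 275.0, 375.0    # Thorlabs FGUV11M passband

print(round(thg_nm, 1))            # -> 346.7 (quoted as ~347 nm in the text)
print(band_lo <= thg_nm <= band_hi)
```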
The fluorescent signal was collected in back reflection by the objective lens, passed through the MDM and the tube lens (TL), and was filtered by a VIS (400–700 nm) band-pass filter (Canon, taken from the EOS 50D camera) for the detection of the autofluorescence excited by the Ti:Sa laser. Additionally, a 570 nm long-pass filter (colored glass, unknown vendor) was used for Nile Red fluorescence, which is excited by the Yb:KGW laser and detected simultaneously with the THG signal. TPEF signals were detected after being focused by a 50 mm focal length lens (L5) onto the entrance window of the TPEF PMT. The acquisition was performed by a National Instruments USB-6351 card at the rate of 1.2 Msamples/s. This enabled a high enough frame rate at low resolution for live monitoring, for instance 3 frames per second at 256 × 256 pixels with 6 averages. For high-resolution images, it takes 30 s for a 1024 × 1024 image with 30 averages. The lateral and axial resolution of the microscope with the 40 × 1.3 objective lens were estimated to be 300 nm and 1000 nm, respectively. Bright-field images were taken with a Canon EOS 50D digital camera (Tokyo, Japan) whose CMOS sensor was placed at the image plane of the tube lens. The toggle switch BS/M enables use of either the camera for bright-field imaging or the TPEF PMT for fluorescence imaging. A specially designed sample holder was used that enables hyphae with the growing medium to be placed between two coverslips, in order to meet the criteria for the best NA of the objective lens, but also to avoid losses of the UV THG signal in a thick deck glass (Supplementary Figs. S1 and S2 show the different imaging conditions of hyphae that were tested in order to find the best one). The #1.5 coverslips (170 μm thickness) were used. 20 μl of hyphae suspension was used to keep the hyphae alive.
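The quoted frame times follow directly from the sampling rate, which can be verified with a short calculation. This sketch models only the ideal pixel-clock time, so the ~26 s it yields for the high-resolution mode is consistent with the quoted 30 s once scanner overhead is included.

```python
# Sanity check of the quoted frame times against the 1.2 Msample/s rate
# (ideal pixel-clock time only; galvo flyback and overhead are not modeled).
rate_sps = 1.2e6

def frame_time_s(pixels_per_side, averages):
    """Time to acquire one averaged frame at the given sampling rate."""
    return pixels_per_side ** 2 * averages / rate_sps

fps_live = 1.0 / frame_time_s(256, 6)   # live-monitoring mode: ~3 fps
t_hires = frame_time_s(1024, 30)        # high-resolution mode: ~26 s
print(round(fps_live, 2), round(t_hires, 1))
```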
The holder was placed between the objective lens and the aspheric condenser on a motorized table that can be translated in steps of 0.3 μm along the beam propagation direction (z axis) for optical slicing of the sample and 3D imaging. In control versus N-starved group imaging, time points were gathered sequentially. Using a label-free imaging technique such as THG enabled us to take images of samples with minimal delay after taking the fungi from the culture. The overall time a sample culture was kept under the microscope to acquire at least 3 THG images of live hyphae was between 25 and 37 min. Effectively, time points for control and treatment were offset by 30–40 min on one experimental day and, on the next day, offset in the opposite direction. The exact ranges of time of growth (mean and standard deviation) for all hyphae included in the experimental groups are collected in Supplementary Fig. S4. THG image analysis of lipid droplets in 2D was performed using ImageJ (W. Rasband, National Institute of Health, Maryland, USA, http://imagej.nih.gov/ij/). Algorithms written in MATLAB (in-house-created code) and VolView software were used for 3D and 4D image processing. Two methods of image analysis were used to quantify LD number and size: Particle Size Analysis (PSA) and Image Correlation Spectroscopy (ICS). Details of both procedures are in the Supplementary Information. For quantitative image analysis, images of individual hyphae under control conditions (n = 44) and after nitrogen starvation (n = 17) were obtained from 6 independently grown cultures. GraphPad Prism was used for graphing and statistical comparisons. The boxes of the box-and-whisker plots are enclosed by the 25th and 75th percentile range with the line representing the median; the whiskers extend to the minimal and maximal values, respectively.
Histograms of the number of LDs were generated from all LD diameters in each group with 0.3 µm binning, and each bin value was divided by the sum of hypha areas in the group. Errors in histograms of Number of LDs/hypha area were calculated as: Relative Error (binned N/area) = Relative Error (Number of LDs/hypha area) + Relative Error (Area), and Relative Error (binned N/area) was multiplied by the value of Number of LDs/hypha area for that bin. Two-way ANOVA with multiple comparisons and Holm-Sidak correction, and an unpaired two-tailed t test with Welch's correction for unequal variances, were used for the calculation of statistical significances. Where appropriate, an unpaired two-sided Mann–Whitney test was used instead. Confidence levels for statistical significance were: 0.05 (*), 0.01 (**), 0.005 (***), 0.0001 (****). THG images, one slice (2D) and a 3D reconstruction, of unstained live P. blakesleeanus hyphae in the exponential growth phase are shown in Fig. 3a,b, respectively. The THG signal at the cell circumference originates from the chitinous cell wall and the plasma membrane, which follows the cell wall shape. In the cytoplasm, various entities that produce THG signal are visible. The hyphae were placed in the liquid growth medium between two coverslips. The high resolution of the microscopic system (diffraction limited), the thickness of the hyphae (ca. 10 µm) and the transparency of the medium make it possible for the whole hypha to be optically sectioned and a 3D model to be reconstructed (Fig. 3b and Supplementary Video S1 in the Supplementary Information). Strong THG signal features are prominent among all the entities in the cytoplasm. According to the literature, these are most likely lipid droplets, since they have a large index of refraction in comparison with the rest of the cytoplasm. In addition, the power dependence of the THG signal originating from LDs is provided in the Supplementary Material (Fig. S3).
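The histogram normalization described in the Methods, diameters binned at 0.3 µm with each bin divided by the summed hypha area of the group, can be sketched in a few lines. The numbers below are hypothetical illustration values, not the paper's data.

```python
import numpy as np

# Sketch of the "Number of LDs / hypha area" histogram: LD diameters binned
# at 0.3 um, each bin divided by the total hypha area of the group.
diameters_um = np.array([0.4, 0.5, 0.55, 0.7, 0.75, 0.8, 1.1])  # hypothetical LD diameters
hypha_areas_um2 = np.array([120.0, 95.0, 150.0])                # hypothetical hypha areas

bin_edges = [0.3, 0.6, 0.9, 1.2, 1.5]                           # 0.3 um binning
counts, _ = np.histogram(diameters_um, bins=bin_edges)
lds_per_um2 = counts / hypha_areas_um2.sum()                    # density per size bin
print(counts.tolist())                                          # -> [3, 3, 1, 0]
```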
Label-free imaging of Phycomyces blakesleeanus hyphae from the exponential phase in SLM. (a) One THG slice; (b) 3D model built out of 23 THG slices 0.9 µm apart. The average laser power at the sample plane was 23–26 mW. (c) Multimodal imaging: bright field (BF) (left), autoTPEF (middle) and THG (right) images of the same live unlabeled hypha. The hypha was plasmolyzed and the retracted plasma membrane is solely visible in the THG image. The average laser power at the sample plane was 2.7 mW (TPEF) and 55 mW (THG). Color intensity bar for both TPEF and THG signals: deep blue, the lowest signal; red, the highest signal. All the images were taken with a Zeiss 40 × 1.3 oil objective lens. The cell wall and plasma membrane are separated by a very small distance which is not resolvable in images of native hyphae obtained by diffraction-limited techniques (resolution of approximately 250 nm). To visualize the cell wall and the plasma membrane separately, we plasmolyzed the hyphae so that the plasma membrane retracted from the cell wall to a resolvable distance (Fig. 3c). The retracted cytoplasm is clearly visible in the bright-field (Fig. 3c left) and autoTPEF images (Fig. 3c middle), but the plasma membrane can be distinguished only in the THG image (Fig. 3c right), since its refractive index is different from that of the cytoplasm.

There is no significant overlap of autoTPEF and THG signal in the hyphae

While THG imaging is not necessarily specific for LDs, because the THG signal is produced by any refractive index change, LDs still produce a significantly higher THG signal in comparison with other structures in the cytoplasm of a cell like P. blakesleeanus. This fact can be used to extract LDs in a cell, since the other cytoplasmic entities produce signals over a broad but still much lower range. As the very first step toward the confirmation that high THG signal features in unlabeled live P.
blakesleeanus are LDs, we performed imaging of the same hyphae by detecting the autofluorescence signal upon two-photon excitation at 730 nm (Fig. 4a left). To ensure that the high THG signal entities (Fig. 4a right) are not artifacts that might be caused by e.g. high laser intensity damage, we merged the two images, THG and autoTPEF (Fig. 4a middle), clearly showing that there is no significant increase of TPEF signal at the same locations. The hyphae were in the exponential growth phase, as in Fig. 3. TPEF and THG images of Phycomyces blakesleeanus exponential growth phase hyphae in standard liquid medium show that lipid droplets are the predominant source of punctate THG signal. (a) Merged autoTPEF and THG images of the same unlabeled live hypha, showing that there is no overlap of autoTPEF and THG signal. The average laser power at the sample plane was 28 mW at 1040 nm (for THG) and 3.4 mW at 730 nm (autoTPEF). (b) In vivo colocalization of stained LDs imaged by TPEF and LDs imaged by the THG modality. The average laser power at the sample plane was 32 mW for both THG and TPEF at 1040 nm. Pearson's correlation coefficient Rtotal = 0.844 (ImageJ, the Colocalization Threshold plugin). All images were taken with a Zeiss 40 × 1.3 oil objective lens. Colocalization of lipid droplet signal imaged by TPEF and THG Whilst many label-free imaging studies on various biological samples have shown that strong THG contrast in the cytoplasm arises mostly from LDs11,35,36, THG imaging has never before been applied to an organism of this type, such as Phycomyces blakesleeanus. To prove firmly that the cytoplasmic puncta in THG images of hyphae are LDs, we performed colocalization experiments (Fig. 4b). The hyphae were stained with Nile Red, a dye considered a standard for lipids69.
The TPEF of the Nile Red dye was excited by the same laser used for THG, and the TPEF signal was collected through 400–700 nm band-pass and 570 nm long-pass filters, which effectively isolate the fluorescence signal to the 570–700 nm spectral region. The laser beam was focused with the Zeiss Plan Neofluar 40 × 1.3 objective lens, and both TPEF and THG signals were detected simultaneously. Before the measurement, a very small volume of the sample (10 μl of fungal suspension) was added between two coverslips. This enables the hyphae to stay alive during the imaging, but also to be immobilized as close as possible to the coverslip, thus achieving the best possible resolution. The quantitative comparison of the TPEF and THG images (colocalization analysis) was performed based on Pearson's correlation coefficient and Image Cross-Correlation Spectroscopy (ICCS). Pearson's correlation coefficient was in the range 0.74 < Rtotal < 0.88 (ImageJ, the Colocalization Threshold plugin). According to the ICCS analysis, the fraction of THG-detected clusters interacting with the TPEF-detected clusters was 0.89, indicating a high degree of spatial correlation between fluctuations generated from the lipid probe and the THG signal. The degree of colocalization obtained in our work is in accordance with, or higher than, that obtained in label-free imaging of live and some fixed samples7,70. Based on the colocalization experiment (Fig. 4b) and the results shown in Fig. 4a, one can conclude that most round bright features in THG images of Phycomyces blakesleeanus are lipid droplets. THG image analysis and quantification of lipid droplets For the quantification of LDs, we analyzed a small set of THG images by Particle Size Analysis and Image Correlation Spectroscopy (both available in ImageJ). To test and compare the two methods, we used hyphal cultures grown in completely nitrogen-depleted media (N-starved) and their age-matched sister cultures from the same batch grown in standard media as a control.
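The Pearson coefficient reported above was computed with ImageJ's Colocalization Threshold plugin; over all pixels it reduces to the calculation sketched below (without the plugin's automatic thresholding). The synthetic channels are stand-ins for real TPEF and THG images.

```python
# Pixel-wise Pearson's R between two channel images; a minimal sketch, not
# the ImageJ plugin (which additionally applies automatic thresholding).
import numpy as np

def pearson_colocalization(ch1, ch2):
    """Pearson's correlation coefficient over all pixels of two images."""
    a = ch1.astype(float).ravel() - ch1.mean()
    b = ch2.astype(float).ravel() - ch2.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

# toy example: the second channel is a scaled, slightly noisy copy of the first
rng = np.random.default_rng(0)
tpef = rng.random((64, 64))
thg = 0.8 * tpef + 0.05 * rng.random((64, 64))
r = pearson_colocalization(tpef, thg)   # close to 1 for well-colocalized signals
```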
Nitrogen limitation is known to cause autophagy in filamentous fungi71, leading to alterations in lipid metabolism and an increase in the number of LDs72,73. We performed THG imaging on hyphae in the exponential growth phase, alternating between control (Fig. 5a) and N-starved (Fig. 5b) age-matched hypha batches. From Fig. 5 it is obvious, even to the naked eye, that there is a significant increase in LD number after nitrogen starvation for 4.5 h. Once we confirmed that we obtained the expected increase in LD number, we went ahead to test the two methods of quantification and the sensitivity of THG imaging for LD detection. THG images of N-starved hyphae. (a) Control hyphae; (b) N-starved (4.5 h of growth in nitrogen-depleted conditions). Both images were taken with a Zeiss 40 × 1.3 objective lens, while the average laser power at the sample plane was 24 mW (in a) and 20 mW (in b). Violet, lowest THG signal; yellow, highest THG signal. For the PSA method (available in ImageJ as "Analyze Particles"), the raw THG image (Fig. 6a left) was thresholded and converted to an 8-bit mask, upon which the program automatically counted the number of "particles" representing LDs in the analyzed images (Fig. 6a middle). In addition to the number of particles, the diameter and area were quantified as well. Image Correlation Spectroscopy (ICS) and Particle Size Analysis (PSA) on THG images. (a) Processing for PSA and ICS analysis of the same THG image. Left: the unprocessed THG image of Phycomyces blakesleeanus exponential growth phase hyphae in standard liquid medium; middle: the 8-bit mask obtained in Particle Size Analysis; right: the background-subtracted image for ICS analysis. The image on the left (unprocessed THG image) was processed by applying 20 background subtractions. Both images are displayed at full dynamic range (8 bits). The THG image was taken with a Zeiss 40 × 1.3 objective lens, while the average laser power at the sample plane was 27 mW.
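As an illustration of the PSA step, a scipy-based analogue of ImageJ's "Analyze Particles" can be sketched as below (threshold, connected-component labeling, per-particle area and equivalent diameter); the threshold value, pixel size, and toy image are our illustrative choices, not parameters from the original analysis.

```python
# Simplified analogue of the PSA step: threshold the image into a mask,
# label connected components, and report count, areas, and diameters.
import numpy as np
from scipy import ndimage

def particle_size_analysis(img, threshold, pixel_size_um=0.1):
    """Return particle count, areas (µm²), and equivalent-circle diameters (µm)."""
    mask = img > threshold                                  # analogous to the 8-bit mask
    labels, n = ndimage.label(mask)                         # connected components
    areas_px = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    areas_um2 = areas_px * pixel_size_um ** 2
    diameters_um = 2.0 * np.sqrt(areas_um2 / np.pi)
    return n, areas_um2, diameters_um

# toy image: two bright square "droplets" on a dark background
img = np.zeros((50, 50))
img[10:14, 10:14] = 1.0    # 16 pixels
img[30:33, 35:38] = 1.0    # 9 pixels
n, areas, diams = particle_size_analysis(img, threshold=0.5)
```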
(b) ICS analysis: the autocorrelation function (G curve), taken as the plot through the center of the intensity-correlated THG image of a live, unlabeled hypha. The autocorrelation curve was fit to a Lorentzian function to extract the FWHM value, as described in the Methods section. (c) ICS analysis, the effect of cell wall removal: the number of LDs obtained from the G curves after each background subtraction for the THG image where the cell wall was manually cropped (red circles) and for the same THG image where the cell wall was not cropped prior to the background subtractions (black squares). (d) Comparison of ICS- and PSA-derived data obtained from the same set of THG images of cultures N-starved for 3 h and 6 h and their age-matched controls (n = 3 for each group). The ratio of the number of LDs per unit hypha area in N-starved hyphae to that in age-matched controls. (e) The agreement of LD number quantification obtained by ICS and PSA. For each image, the ICS-obtained LD number is plotted against the PSA-obtained LD number. Data for both graphs were obtained from label-free THG images, whose analysis is presented in Table 1. Because of the thresholding and the limited resolution of the image (pixel size), PSA might be insensitive to very small or weak-signal entities. As a result, some emerging LDs might be omitted from the final results. To resolve this issue, we performed ICS, which extracts information on particle properties (number and size) from the spatial fluctuations of the signal intensity in the images. ICS is also applicable to diffuse images. Due to the morphology of the hyphae, it was necessary to pre-process the THG images before applying the ICS analysis.
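The pre-processing and ICS steps described in this section (repeated background subtraction, FFT-based spatial autocorrelation, a Lorentzian fit of the central G-curve profile, and conversion of G(0) and the FWHM to an LD count) can be sketched in Python as follows; the synthetic droplet image, the constant-background model, and the fit settings are our illustrative assumptions, not the original ImageJ workflow.

```python
# ICS sketch: background subtraction, normalized autocorrelation, Lorentzian
# fit of the G curve, and the N_LD = Npix^2 / (r^2 * pi * G(0)) estimate.
import numpy as np
from scipy.optimize import curve_fit

def subtract_background(img, bg_value, n_times=20):
    """Repeatedly subtract a constant background (e.g. the mean intensity of
    a ROI outside the hypha), clipping at zero."""
    out = img.astype(float).copy()
    for _ in range(n_times):
        out = np.clip(out - bg_value, 0.0, None)
    return out

def g_curve(img):
    """Central row of the normalized spatial autocorrelation (the G curve)."""
    d = img - img.mean()
    acf = np.fft.fftshift(np.fft.ifft2(np.abs(np.fft.fft2(d)) ** 2).real)
    return (acf / (img.mean() ** 2 * img.size))[img.shape[0] // 2]

def lorentzian(x, g0, x0, w):
    return g0 * (w / 2) ** 2 / ((x - x0) ** 2 + (w / 2) ** 2)

def n_ld_from_ics(img):
    prof = g_curve(img)
    x = np.arange(prof.size, dtype=float)
    p0 = (prof.max(), float(np.argmax(prof)), 2.0)
    (g0, _, w), _ = curve_fit(lorentzian, x, prof, p0=p0, maxfev=10000)
    r = abs(w) / 2.0                       # mean LD radius = FWHM / 2
    n_pix = img.shape[0]                   # side length of the 2^n x 2^n image
    return n_pix ** 2 / (r ** 2 * np.pi * g0)

# toy example: a 64 x 64 image with eight Gaussian "droplets" plus a constant
# background that is removed before the correlation step
rng = np.random.default_rng(1)
yy, xx = np.mgrid[0:64, 0:64]
img = np.full((64, 64), 0.02)
for cy, cx in rng.integers(8, 56, size=(8, 2)):
    img += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 2.0 ** 2))
clean = subtract_background(img, 0.02, n_times=1)
n_est = n_ld_from_ics(clean)               # rough droplet-count estimate
```

The estimate is only approximate (a Lorentzian is fit to a roughly Gaussian correlation peak, and overlapping droplets bias G(0)), which is consistent with the text's use of several convergence criteria rather than an exact count.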
The cell wall of the hyphae was removed from the image since it hinders the correlation analysis (producing a pedestal in the G curve) because of the sharp intensity discontinuity at the periphery of the hypha along the whole circumference. We applied multiple subtractions of the background (the average pixel intensity of a ROI outside the hypha) until the wall disappeared74. This procedure is depicted in Fig. 6a (right), and it is obvious that the THG signal from the majority of LDs is much more intense than the signal from the wall (approx. >10×). After removal of the cell wall, the image correlation procedure was performed in ImageJ. As a result, one obtains a spatial autocorrelation image from which the G curve is extracted by taking an intensity profile through the center of the image. An example of a G curve is shown in Fig. 6b. The number of LDs was calculated using the following formula: $$N_{LD} = \frac{N_{pix}^{2}}{r^{2} \pi \cdot G\left( 0 \right)}$$ where Npix is the side length, in pixels, of the 2^n × 2^n image (where n is an integer), r is the mean radius of LDs, taken as half of the FWHM of the G curve, and G(0) is the maximal value of the G curve. r and G(0) are extracted from the Lorentzian fit of the G curve (Fig. 6b inset). It should be noted that the morphology of the LDs differs substantially from the morphology of the clusters usually examined by ICS analysis. Thus, in our case, multiple subtractions of the background do not lead to flattening of the G curve versus the number of subtractions, as might be expected74. The flattening of the G curve shown in reference 74 is used as a criterion for how many times the background has to be subtracted before ICS is applied. Our criteria for the number of background subtractions were: (a) cessation of a significant reduction in the number of LDs after each subsequent subtraction (Fig.
6c, black squares), (b) approximate matching of the number of LDs per hypha with PSA, and (c) experience (the cell wall visibly disappears from the image). Upon examination of tens of images, of both control and treated hyphae, we concluded that, on average (depending on initial image quality), 20 consecutive subtractions were sufficient for reliable ICS analysis. To check whether additional removal of the cell wall would give a different number of LDs, we also removed the cell wall manually, by delineation and cropping, prior to the multiple background subtractions. After 20 consecutive background subtractions, this method does not give substantially different LD numbers compared to images where the cell wall was not manually cropped (illustrated by the graph in Fig. 6c). LD analysis by PSA and ICS A comparison of the LD number and size obtained by ICS and PSA is given in Table 1. The number of LDs per hypha area is approximately the same on average, but the mean diameter obtained by ICS is slightly lower. This discrepancy might be explained by the different definitions of object size used in the two methods. Table 1 A comparison of the number and size of lipid droplets obtained by the two quantification methods, ICS and PSA. To estimate the change in LD number in treated hyphae, we calculated the ratio of the LD number per area in treated hyphae with respect to controls (Fig. 6d). The total number of LDs after 3 h of starvation shows no significant change. With longer starvation time, the number of LDs increases by more than 50%. Using ICS analysis, the number of features counted was 80 ± 12% of the LD number found by visual inspection (n = 12), in close correlation with the data obtained by PSA. When the LD numbers obtained by both methods are plotted for each individual image as a separate point (Fig.
6e), the regression line has a slope close to 1, confirming that ICS is as reliable as the PSA method in detecting and counting LDs. The regression coefficient R2 was approximately 0.8. Nitrogen starvation induced changes in lipid droplet number and size, as quantified from THG images To make full use of THG imaging (acquisition of an image takes at most 30 s for a 1024 × 1024 pixel image with 30 averages) and the subsequent analysis as an LD assay, we performed a set of imaging measurements across time on filamentous fungal cultures grown in nitrogen-depleted and control conditions. Fungal cultures were imaged after growing for at least 2 h after the start of the treatment (nitrogen starvation or control), a precautionary step to avoid possible effects of the manipulation during preparation for the start of treatment (e.g. centrifugation). The number of LDs per unit cell area (Number of LDs/hypha area) in all imaged hyphae in cultures grown in nitrogen-depleted media (N-starved) was significantly larger than in the entire group of control culture hyphae (Control) (Fig. 7a). To elucidate the time course of the observed increase in LD number, the Control and N-starved groups were each broken down into duration-from-start-of-treatment subgroups, and the values of Number of LDs/hypha area were plotted across time (Fig. 7b). The Number of LDs/hypha area in Controls remained almost the same during the time of observation, with a slight, non-significant trend of increase towards the later growth time points (Fig. 7b). N-starved hyphae had a Number of LDs/hypha area similar to the corresponding Control only at the 3 h treatment time point. We detected a twofold increase of Number of LDs/hypha area after 4.5 h of treatment, compared to the corresponding Control. The significant increase of Number of LDs/hypha area in N-starved hyphae, compared to corresponding Control hyphae, persisted at longer treatment times (Fig. 7b). Quantification of LDs from THG images of Phycomyces blakesleeanus hyphae.
Hyphae were cultured without nitrogen or in standard liquid media for 2–6 h (or longer, up to 8 h) after the start of treatment. The obtained THG images of LDs were analyzed by PSA. n = 6 independent cultures. (a) N-starvation increases the number of LDs per unit area. The LD number obtained from each individual hypha is normalized to the hypha area (in 10³ µm²). Control (n = 44), N-starved group (n = 17). Box-and-whisker plots: boxes enclose the 25th to 75th percentile range, with the median line and whiskers extending from the minimal to the maximal value. Unpaired t test with Welch's correction, two-tailed, p = 0.0038. (b) Time course of LD number per unit area, showing that the increase of LD number by N-starvation is significant at 4.5 h (p = 0.0006) and later times (p = 0.0045), compared to the corresponding control. Two-way ANOVA, with Holm-Sidak correction. Mean ± SE, n(Control) = 8; 7; 11; 21 for time points (in h), respectively: 2; 3; 4.5; 6. n(N-starved) = 6; 3; 7 for time points (in h), respectively: 3; 4.5; 6. (c) N-starvation decreases the diameter of LDs. LD diameters from the Control (n = 1205) and N-starved (n = 431) groups. Box-and-whisker plots: boxes enclose the 25th to 75th percentile range, with the median line and whiskers extending from the minimal to the maximal value. Mann–Whitney (p = 0.0008), two-tailed. (d) Time course of LD diameter changes, showing that the decrease by N-starvation is significant only at long starvation times. Two-way ANOVA, Holm-Sidak correction (p < 0.0001), compared to the corresponding control. Mean ± SE, n(Control) = 176; 124; 302; 571 for time points (in h), respectively: 2; 3; 4.5; 6. n(N-starved) = 100; 118; 214 for time points (in h), respectively: 3; 4.5; 6. (e) Differential distribution of the increased number of LDs after 4.5 h and 6 h of N-starvation. The largest LDs are lost at the longest starvation times. Histograms of LD diameter distributions, 0.3 µm binning, for the Control and N-starved groups.
The number of LDs in each bin of the histogram is divided by the summed hypha area of the corresponding group. Errors are calculated as stated in the Methods section. Numbers on the x axes represent the upper bin limit. The average diameter of LDs was significantly reduced in N-starved cultures compared to controls when the entire groups were compared regardless of treatment time (Fig. 7c). As can be seen from the time course graph (Fig. 7d), average LD diameters in Controls were approximately the same from the 2 h treatment time to the longest treatment time. They were also the same in 3 h and 4.5 h N-starved hyphae and their corresponding Controls. The effect of N-starvation on average LD size becomes clear only after 6 h or more of treatment (Fig. 7d). The histograms of LD diameters, graphed as Number of LDs/hypha area (Fig. 7e) for the 4.5 h and 6 h time-of-treatment groups, reveal that LDs smaller than 1.6 µm are more numerous in N-starved groups than in corresponding Controls at the 4.5 h time point, while at 6 h, only the number of LDs smaller than 1 µm is increased. The average LD diameter change between the 4.5 h and 6 h N-starvation groups seems to result from a significant loss of the population of LDs larger than 0.6 µm during prolonged growth under N-starving conditions. To summarize, the overall change in LDs during growth without available nitrogen is an increase in the number of LDs between the 3 h and 4.5 h time points, followed by the loss of the population of larger-than-average LDs during prolonged starvation. Once considered to be passive lipid storage agglomerations, lipid droplets are now recognized as dynamic cellular organelles, serving as ubiquitous central hubs of energy and lipid homeostasis in eukaryotic cells75.
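The group comparisons used in this analysis (unpaired Welch's t test and the two-sided Mann–Whitney test) can be reproduced with scipy.stats; the per-hypha values below are made-up placeholders, not data from the study.

```python
# Minimal sketch of the two-group comparisons named in the text, via
# scipy.stats; the per-hypha values are hypothetical.
import numpy as np
from scipy import stats

control = np.array([1.1, 0.9, 1.3, 1.0, 1.2, 0.8])   # LDs per 10^3 µm², control
starved = np.array([2.0, 2.4, 1.9, 2.6, 2.2])        # LDs per 10^3 µm², N-starved

# Welch's t test (unpaired, two-tailed, unequal variances)
t, p_welch = stats.ttest_ind(starved, control, equal_var=False)

# Mann-Whitney U test (unpaired, two-sided), used where normality fails
u, p_mw = stats.mannwhitneyu(starved, control, alternative="two-sided")
```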
Studies of lipid droplet physiology in fungi, although still scarce76, hold promise of providing novel solutions for a number of important issues: mitigation and modulation of fungal resistance to fungicides and stress, securing food safety, and a better understanding of how to use fungi as a crucial component of sustainable organic waste reuse and conversion to an energy source, to name a few. Phycomyces blakesleeanus, the model fungus used in our study, belongs to Mucoromycota, the phylogenetic group of fungi capable of forming arbuscular mycorrhiza and other mutually beneficial symbioses77 with terrestrial plants78. During a mutually beneficial fungus-plant interaction, the fungus transports nitrogen to the plant and, in return, receives up to 30% of the organic C compounds synthesized by the plant78. It is known that the organic molecules sent from plant to fungus are lipids79,80, and that lipid droplets form in large amounts in hyphae adjacent to the area of contact with the plant81. Similar to Phycomyces, arbuscular mycorrhizal fungi can accumulate a significant amount of the acquired organic carbon in the form of lipid droplets82. THG imaging of LDs as described here is a method that could be directly applied to living mycorrhizal fungi related to Phycomyces, without any modification of the protocol or any staining. The fungal culturing conditions used in our study resulted in a fairly modest accumulation of lipid droplets, as expected62. THG imaging analysis enabled us to observe and quantify changes in lipid droplet number, brought about by the complete removal of nitrogen, from such a low density/diameter baseline. As expected, the complete omission of nitrogen induced only a transient increase in the number of lipid droplets, followed by lipid turnover83. THG imaging analysis detected the significant decline of lipid reserves at late stages of growth. Altogether, this shows the usefulness of the THG imaging approach for broader exploration of LDs in filamentous fungi under various living conditions.
Optical imaging techniques are commonly used to study lipid droplets in vivo, but the lipids usually have to be labeled with various dyes. However, prolonged imaging using fluorescent dyes can be phototoxic to cells and may perturb metabolic processes, including lipid metabolism. Hence, label-free imaging methods are advantageous for the study of living cells84,85,86. THG imaging, a label-free method we have applied to live hyphae of the oleaginous fungus Phycomyces blakesleeanus, generated images with the characteristic spots of high THG signal intensity attributed to lipid droplets as products of normal and stressed cellular physiology. Several lines of evidence support this attribution. First, according to the literature11, the steep change of refractive index between lipids at the interface of lipid droplets and the rest of the cytoplasm generates a high-intensity THG signal. Second, to exclude possible laser-damaged spots that would produce a high THG signal, we performed TPEF imaging of unstained hyphae, showing that autoTPEF images are devoid of the prominent spots present in a THG image of the same hypha. Third, we performed colocalization experiments in which the hyphae were stained with a lipid-specific dye and imaged by both the TPEF and THG methods. The spots in the two images mostly overlapped, which verifies that the spots contained lipids. In addition, following the same logic of steep refractive index changes, we have shown that the cell wall and the cell membrane in label-free hyphae can be imaged and distinguished by the THG method. There are a number of caveats to be discussed regarding the imaging of lipid stains. Because both signals (TPEF and THG) are detected simultaneously, only a restricted number of dyes suitable for live imaging could be used. In this study, fixation was not an option since it alters the structure of LDs87. The dye used in this study, Nile Red, may not be fully specific for LDs, as it can bind to other bodies and structures in the cell88.
The signal originating from structures other than LDs can bleed into the detection band, which might affect the degree of colocalization. In addition, the degree of colocalization is further degraded by the strong THG signal from the cell wall. THG imaging requires significantly higher laser powers than TPEF imaging. Because of that, one has to make a trade-off in the applied laser power when detecting both signals simultaneously. The price paid for this trade-off is the loss of some structures in THG images (e.g. small LDs, otherwise visible at higher laser powers) and the appearance of a weak, blurry TPEF signal from out-of-focus LDs (otherwise not visible at lower laser powers). To extract quantitative data from THG images, two methods of image analysis were applied, Particle Size Analysis (PSA) and Image Correlation Spectroscopy (ICS). Both methods can quantify the number of lipid droplets and their average size (diameter). Since ICS was primarily developed for fluorescence images and cluster analysis and, to the best of our knowledge, has not been used so far on THG images, we tested it by comparing its results to those of PSA. The test was performed on images of hyphae under normal and stressed (nitrogen starvation) conditions. Nitrogen starvation is known to increase the number of lipid droplets72,73; this was confirmed by both methods, and the agreement between the numbers obtained by the two methods was good. Overall, the proposed imaging method (THG) and the method of image analysis (ICS) were shown to be suitable for label-free in vivo studies of lipid droplets of oleaginous fungi. Application of the THG method to future studies of lipid droplet dynamics in fungi could help to advance basic understanding of fungal cellular physiology and, in turn, of processes involved in the cycling of carbon in nature. The data are available upon reasonable request to the corresponding author. Weigelin, B., Bakker, G. J. & Friedl, P.
Third harmonic generation microscopy of cells and tissue organization. J. Cell Sci. 129, 245–255 (2016). Yelin, D. & Silberberg, Y. Laser scanning third-harmonic-generation microscopy in biology. Opt. Express 5(8), 169–175 (1999). Barzda, V. et al. Visualization of mitochondria in cardiomyocytes. Opt. Express 13, 8263 (2005). Witte, S. et al. Label-free live brain imaging and targeted patching with third-harmonic generation microscopy. Proc. Natl. Acad. Sci. U. S. A. 108, 5970–5975 (2011). Tsai, C.-K. et al. Imaging granularity of leukocytes with third harmonic generation microscopy. Biomed. Opt. Express 3, 2234 (2012). Weigelin, B., Bakker, G.-J. & Friedl, P. Intravital third harmonic generation microscopy of collective melanoma cell invasion. IntraVital 1, 32–43 (2012). Gavgiotaki, E. et al. Third Harmonic Generation microscopy as a reliable diagnostic tool for evaluating lipid body modification during cell activation: The example of BV-2 microglia cells. J. Struct. Biol. 189, 105–113 (2015). Oron, D. et al. Depth-resolved structural imaging by third-harmonic generation microscopy. J. Struct. Biol. 147, 3–11 (2004). Sun, C.-K. et al. Multiharmonic-generation biopsy of skin. Opt. Lett. 28, 2488 (2003). Aptel, F. et al. Multimodal nonlinear imaging of the human cornea. Investig. Ophthalmol. Vis. Sci. 51, 2459–2465 (2010). Débarre, D. et al. Imaging lipid bodies in cells and tissues using third-harmonic generation microscopy. Nat. Methods 3, 47–53 (2006). Farrar, M. J., Wise, F. W., Fetcho, J. R. & Schaffer, C. B. In vivo imaging of myelin in the vertebrate central nervous system using third harmonic generation microscopy. Biophys. J. 100, 1362–1371 (2011). Genthial, R. et al. Label-free imaging of bone multiscale porosity and interfaces using third-harmonic generation microscopy. Sci. Rep. 7, 1–16 (2017). Gavgiotaki, E. et al. Third Harmonic Generation microscopy distinguishes malignant cell grade in human breast tissue biopsies. Sci. Rep. 10, 1–13 (2020). 
Canioni, L. et al. Imaging of Ca2+ intracellular dynamics with a third-harmonic generation microscope. Opt. Lett. 26, 515–517 (2001). Chen, Y.-C. et al. Third-harmonic generation microscopy reveals dental anatomy in ancient fossils. Opt. Lett. 40, 1354 (2015). Chang, T. et al. Non-invasive monitoring of cell metabolism and lipid production in 3D engineered human adipose tissues using label-free multiphoton microscopy. Biomaterials 34, 8607–8616 (2013). Débarre, D. et al. Velocimetric third-harmonic generation microscopy: micrometer-scale quantification of morphogenetic movements in unstained embryos. Opt. Lett. 29, 2881 (2004). Sun, C. K. et al. Higher harmonic generation microscopy for developmental biology. J. Struct. Biol. 147, 19–30 (2004). Watanabe, T. et al. Characterisation of the dynamic behaviour of lipid droplets in the early mouse embryo using adaptive harmonic generation microscopy. BMC Cell Biol. 11(1), 1–11 (2010). Tserevelakis, G. J. et al. Imaging Caenorhabditis elegans embryogenesis by third-harmonic generation microscopy. Micron 41, 444–447 (2010). Aviles-Espinosa, R. et al. Cell division stage in C. elegans imaged using third harmonic generation microscopy. In Biomedical Optics and 3-D Imaging (2010), Paper BTuD78 (The Optical Society, Washington, 2013). Yu, M. M. L. et al. In situ analysis by microspectroscopy reveals triterpenoid compositional patterns within leaf cuticles of Prunus laurocerasus. Planta 227, 823–834 (2008). Prent, N. et al. Applications of nonlinear microscopy for studying the structure and dynamics in biological systems. Photonic Appl. Nonlinear Opt. Nanophotonics Microw. Photonics 5971, 597106 (2005). Tokarz, D. et al. Molecular organization of crystalline β-carotene in carrots determined with polarization-dependent second and third harmonic generation microscopy. J. Phys. Chem. B 118, 3814–3822 (2014). Cisek, R. et al. Optical microscopy in photosynthesis. Photosynth. Res. 102, 111–141 (2009). Barzda, V.
Non-Linear Contrast Mechanisms for Optical Microscopy 35–54 (Springer, Dordrecht, 2008). Segawa, H. et al. Label-free tetra-modal molecular imaging of living cells with CARS, SHG, THG and TSFG (coherent anti-Stokes Raman scattering, second harmonic generation, third harmonic generation and third-order sum frequency generation). Opt. Express 20, 9551 (2012). Barad, Y., Eisenberg, H., Horowitz, M. & Silberberg, Y. Nonlinear scanning laser microscopy by third harmonic generation. Appl. Phys. Lett. 70, 922–924 (1997). Boyd, R. W. Nonlinear Optics (Academic Press, New York, 2008). Yanina, I. Y., Lazareva, E. N. & Tuchin, V. V. Refractive index of adipose tissue and lipid droplet measured in wide spectral and temperature ranges. Appl. Opt. 57, 4839 (2018). Liu, P. Y. et al. Cell refractive index for cell biology and disease diagnosis: past, present and future. Lab Chip 16, 634–644 (2016). Chen, Y.-C., Hsu, H.-C., Lee, C.-M. & Sun, C.-K. Third-harmonic generation susceptibility spectroscopy in free fatty acids. J. Biomed. Opt. 20, 095013 (2015). Small, D. M. et al. Label-free imaging of atherosclerotic plaques using third-harmonic generation microscopy. Biomed. Opt. Express 9, 214 (2018). Bautista, G. et al. Polarized THG microscopy identifies compositionally different lipid droplets in mammalian cells. Biophys. J. 107, 2230–2236 (2014). Tserevelakis, G. J. et al. Label-free imaging of lipid depositions in C. elegans using third-harmonic generation microscopy. PloS One 9(1), e84431 (2014). Siddhanta, S., Paidi, S. K., Bushley, K., Prasad, R. & Barman, I. Exploring morphological and biochemical linkages in fungal growth with label-free light sheet microscopy and Raman spectroscopy. ChemPhysChem 18, 72–78 (2017). Zhang, C., Li, J., Lan, L. & Cheng, J.-X. Quantification of lipid metabolism in living cells through the dynamics of lipid droplets measured by stimulated Raman scattering imaging. Anal. Chem. 89, 4502–4507 (2017). Brackmann, C. et al.
CARS microscopy of lipid stores in yeast: The impact of nutritional state and genetic background. J. Raman Spectrosc. 40, 748–756 (2009). Zhang, C. & Boppart, S. A. Dynamic signatures of lipid droplets as new markers to quantify cellular metabolic changes. Anal. Chem. 92, 15943–15952 (2020). Dong, P. T. et al. Polarization-sensitive stimulated Raman scattering imaging resolves amphotericin B orientation in Candida membrane. Sci. Adv. 7, 1–11 (2021). Yasuda, M., Takeshita, N. & Shigeto, S. Inhomogeneous molecular distributions and cytochrome types and redox states in fungal cells revealed by Raman hyperspectral imaging using multivariate curve resolution-alternating least squares. Anal. Chem. 91, 12501–12508 (2019). Kurian, S. M., Di Pietro, A. & Read, N. D. Live-cell imaging of conidial anastomosis tube fusion during colony initiation in Fusarium oxysporum. PLoS One 13, e0195634 (2018). Adomshick, V., Pu, Y. & Veiga-Lopez, A. Automated lipid droplet quantification system for phenotypic analysis of adipocytes using Cell Profiler. Toxicol. Mech. Methods 30, 378–387 (2020). Jüngst, C., Klein, M. & Zumbusch, A. Long-term live cell microscopy studies of lipid droplet fusion dynamics in adipocytes. J. Lipid Res. 54, 3419–3429 (2013). Exner, T. et al. Lipid droplet quantification based on iterative image processing. J. Lipid Res. 60, 1333–1344 (2019). Rambold, A. S., Cohen, S. & Lippincott-Schwartz, J. Fatty acid trafficking in starved cells: Regulation by lipid droplet lipolysis, autophagy, and mitochondrial fusion dynamics. Dev. Cell 32, 678–692 (2015). Dejgaard, S. Y. & Presley, J. F. New automated single-cell technique for segmentation and quantitation of lipid droplets. J. Histochem. Cytochem. 62, 889–901 (2014). Nohe, A. & Petersen, N. O. Image correlation spectroscopy. Sci. STKE 2007 (2007). Wiseman, P. W., Squier, J. A., Ellisman, M. H. & Wilson, K. R.
Two-photon image correlation spectroscopy and image cross-correlation spectroscopy. J. Microsc. 200, 14–25 (2000). Slenders, E. et al. Image correlation spectroscopy with second harmonic generating nanoparticles in suspension and in cells. J. Phys. Chem. Lett. 9, 6112–6118 (2018). Bahram, M. & Netherway, T. Fungi as mediators linking organisms and ecosystems. FEMS Microbiol. Rev. 46, 1–16 (2022). Parihar, M. et al. The potential of arbuscular mycorrhizal fungi in C cycling: A review. Arch. Microbiol. 202, 1581–1596 (2020). Ratledge, C. Regulation of lipid accumulation in oleaginous microorganisms. Biochem. Soc. Trans. 30, A101 (2002). Cerdá-Olmedo, E. & Avalos, J. Oleaginous fungi: Carotene-rich from Phycomyces. Prog. Lipid Res. 33, 185–192 (1994). Passoth, V. Lipids of yeasts and filamentous fungi and their importance for biotechnology. Biotechnol. Yeasts Filamentous Fungi https://doi.org/10.1007/978-3-319-58829-2_6 (2017). Mhlongo, S. I. et al. The potential of single-cell oils derived from filamentous fungi as alternative feedstock sources for biodiesel production. Front. Microbiol. 12, 57 (2021). Chang, W. et al. Trapping toxins within lipid droplets is a resistance mechanism in fungi. Sci. Rep. 5, 1–11 (2015). Liu, N. et al. Lipid droplet biogenesis regulated by the FgNem1/Spo7-FgPah1 phosphatase cascade plays critical roles in fungal development and virulence in Fusarium graminearum. New Phytol. 223, 412–429 (2019). Petschnigg, J. et al. Good fat, essential cellular requirements for triacylglycerol synthesis to maintain membrane homeostasis in yeast. J. Biol. Chem. 284, 30981–30993 (2009). Suzuki, M., Shinohara, Y., Ohsaki, Y. & Fujimoto, T. Lipid droplets: Size matters. J. Electron Microsc. 60, S101–S116 (2011). Nand, K. & Mohrotra, B. S. Mycological fat production in India. II. Effect of hydrogen-ion concentration on fat synthesis. Sydowia 24, 144–152 (1971). Pollack, J. K., Harris, S. D. & Marten, M. R. Autophagy in filamentous fungi. Fungal Genet.
Biol. 46, 1–8 (2009). Jaishy, B. & Abel, E. D. Lipids, lysosomes, and autophagy. J. Lipid Res. 57, 1619–1635 (2016). Petersen, N. O., Höddelius, P. L., Wiseman, P. W., Seger, O. & Magnusson, K. E. Quantitation of membrane receptor distributions by image correlation spectroscopy: Concept and application. Biophys. J. 65, 1135–1146 (1993). Bukara, K. et al. Mapping of hemoglobin in erythrocytes and erythrocyte ghosts using two photon excitation fluorescence microscopy. J. Biomed. Opt. 22, 026003 (2017). Despotović, S. Z. et al. Altered organization of collagen fibers in the uninvolved human colon mucosa 10 cm and 20 cm away from the malignant Tumor. Sci. Rep. 101(10), 1–11 (2020). Huang, S., Heikal, A. A. & Webb, W. W. Two-photon fluorescence spectroscopy and microscopy of NAD (P) H and flavoprotein. Biophys. J. 82(5), 2811–2825 (2002). Greenspan, P., Mayer, E. P. & Fowler, S. D. Nile red: A selective fluorescent stain for intracellular lipid droplets. J. Cell Biol. 100, 965 (1985). Yi, Y.-H. et al. Lipid droplet pattern and nondroplet-like structure in two fat mutants of Caenorhabditis elegans revealed by coherent anti-Stokes Raman scattering microscopy. J. Biomed. Opt. 19, 011011 (2013). Chen, Y. et al. Nitrogen-starvation triggers cellular accumulation of triacylglycerol in Metarhizium robertsii. Fungal Biol. 122, 410–419 (2018). Weng, L. C. et al. Nitrogen deprivation induces lipid droplet accumulation and alters fatty acid metabolism in symbiotic dinoflagellates isolated from Aiptasia pulchella. Sci. Rep. 4, 1–8 (2014). Aguilar, L. R. et al. Lipid droplets accumulation and other biochemical changes induced in the fungal pathogen Ustilago maydis under nitrogen-starvation. Arch. Microbiol. 199, 1195–1209 (2017). Rocheleau, J. V., Wiseman, P. W. & Petersen, N. O. Isolation of bright aggregate fluctuations in a multipopulation image correlation spectroscopy system using intensity subtraction. Biophys. J. 84, 4011–4022 (2003). Olzmann, J. A. & Carvalho, P. 
(2018) Dynamics and functions of lipid droplets. Nat. Rev. Mol. Cell Biol. 203(20), 137–155 (2018). Yu, Y. et al. The role of lipid droplets in Mortierella alpina aging revealed by integrative subcellular and whole-cell proteome analysis. Sci. Rep. 71(7), 1–12 (2017). Bonfante, P. & Venice, F. Mucoromycota: going to the roots of plant-interacting fungi. Fungal Biol. Rev. 34, 100–113 (2020). Smith, S. & Read, D. Mycorrhizal Symbiosis (Academic Press, New York, 2008). Luginbuehl, L. H. et al. Fatty acids in arbuscular mycorrhizal fungi are synthesized by the host plant. Science 356, 1175–1178 (2017). Jiang, Y. et al. Plants transfer lipids to sustain colonization by mutualistic mycorrhizal and parasitic fungi. Science 356, 1172–1173 (2017). Keymer, A. et al. Lipid transfer from plants to arbuscular mycorrhiza fungi. Elife 6, e29107 (2017). Deka, D., Sonowal, S., Chikkaputtaiah, C. & Velmurugan, N. Symbiotic associations: Key factors that determine physiology and lipid accumulation in oleaginous microorganisms. Front. Microbiol. 11, 555312 (2020). Athenaki, M. et al. Lipids from yeasts and fungi: Physiology, production and analytical considerations. J. Appl. Microbiol. 124, 336–367 (2018). Fujita, K. & Smith, N. I. Label-free molecular imaging of living cells. Mol. Cells OS 530–535 (2008). Knaus, H., Blab, G. A., van Jerre Veluw, G., Gerritsen, H. C. & Wösten, H. A. B. Label-free fluorescence microscopy in fungi. Fungal Biol. Rev. 27, 60–66 (2013). Borile, G., Sandrin, D., Filippi, A., Anderson, K. I. & Romanato, F. Label-free multiphoton microscopy: Much more than fancy images. Int. J. Mol. Sci. 22, 2657 (2021). Martins, A. S., Martins, I. C. & Santos, N. C. Methods for lipid droplet biophysical characterization in flaviviridae infections. Front. Microbiol. 9, 1951 (2018). Nile Red. Available at: https://www.thermofisher.com/order/catalog/product/N1142. 
We thank Dunja Stefanović for invaluable help with growing of some cultures, and Marina Stanić for help with Nile red staining and many useful discussions. Also, we would like to thank Milan Minić for technical help and support. This work was supported by the Ministry of Education, Science and Technological Development, Republic of Serbia [contract numbers: 451-03-68/2022-14/200178 and 451-03-68/2022-14/200007]; the Project HEMMAGINERO [Grant number 6066079] from Program PROMIS, Science Fund of the Republic of Serbia; and the Institute of Physics Belgrade, through the grant by the Ministry of Education, Science and Technological Development of the Republic of Serbia. Faculty of Biology, Institute of Physiology and Biochemistry, University of Belgrade, Studentski trg 16, Belgrade, 11158, Serbia Tanja Pajić & Miroslav Živić Institute for Biological Research "Siniša Stanković", University of Belgrade, National Institute of the Republic of Serbia, Bulevar Despota Stefana 142, Belgrade, 11000, Serbia Nataša V. Todorović Institute of Physics Belgrade, University of Belgrade, National Institute of the Republic of Serbia, Pregrevica 118, Belgrade, 11080, Serbia Stanko N. Nikolić, Mihailo D. Rabasović & Aleksandar J. Krmpot Department of Physics and Astronomy, Optical Sciences Centre, School of Science, Computing and Engineering Technologies, Swinburne University of Technology, Melbourne, VIC, 3122, Australia Andrew H. A. Clayton Tanja Pajić Miroslav Živić Stanko N. Nikolić Mihailo D. Rabasović Aleksandar J. Krmpot T.P. conducted all the experiments, processed the images, and prepared the samples. N.T. designed the protocols and supervised the sample preparation and treatments. M.Ž. supervised the biological part of research. S.N. designed the acquisition and improved the software for imaging. M.R. designed the optical setup, supervised the imaging experiments and image analysis. A.C. supervised image processing, particularly LD quantification. A.K. 
designed the study, designed the optical setup, and supervised the experiments. All authors participated in the manuscript preparation and revision. Correspondence to Aleksandar J. Krmpot. Supplementary Information: Supplementary Video S1. Pajić, T., Todorović, N.V., Živić, M. et al. Label-free third harmonic generation imaging and quantification of lipid droplets in live filamentous fungi. Sci Rep 12, 18760 (2022). https://doi.org/10.1038/s41598-022-23502-4
Nano Convergence
Nano-sized graphene oxide coated nanopillars on microgroove polymer arrays that enhance skeletal muscle cell differentiation
Hye Kyu Choi, Cheol-Hwi Kim, Sang Nam Lee, Tae-Hyung Kim & Byung-Keun Oh (ORCID: orcid.org/0000-0002-3268-4705)
Nano Convergence volume 8, Article number: 40 (2021)
The degeneration or loss of skeletal muscles, which can be caused by traumatic injury or disease, impacts most aspects of human activity. Among various techniques reported to regenerate skeletal muscle tissue, controlling the external cellular environment has been proven effective in guiding muscle differentiation. In this study, we report a nano-sized graphene oxide (sGO)-modified nanopillars on microgroove hybrid polymer array (NMPA) that effectively controls skeletal muscle cell differentiation. sGO-coated NMPA (sG-NMPA) were first fabricated by sequential laser interference lithography and microcontact printing methods. To compensate for the low adhesion property of polydimethylsiloxane (PDMS) used in this study, graphene oxide (GO), a proven cytophilic nanomaterial, was further modified. Among various sizes of GO, sGO (< 10 nm) was found to be the most effective not only for coating the surface of the NM structure but also for enhancing the cell adhesion and spreading on the fabricated substrates. Remarkably, owing to the micro-sized line patterns that guide cellular morphology to an elongated shape and because of the presence of sGO-modified nanostructures, mouse myoblast cells (C2C12) were efficiently differentiated into skeletal muscle cells on the hybrid patterns, based on the myosin heavy chain expression levels. Therefore, the developed sGO coated polymeric hybrid pattern arrays can serve as a potential platform for rapid and highly efficient in vitro muscle cell generation. Skeletal muscle is one of the major components of the human body and controls most body motions and movements [1, 2].
During these motions, the muscle bundles composed of multiple skeletal muscle cells contract or relax following the recognition of complex biochemical signals that are mostly generated or transferred from the peripheral and central nervous system through the neuromuscular junctions [3]. Once muscle tissues are locally damaged, they can self-heal via multiple sequential steps, including cell division, fusion with the existing muscle fibers, and final repair [4,5,6]. However, similar to the regeneration mechanism of other organs, the mass loss of muscle bundles or damage caused by genetic abnormalities cannot be restored spontaneously in the absence of external regeneration factors [7, 8]. To address this issue, in vitro muscle cell generation for transplantation or drug screening purposes has emerged as a promising approach, mostly via controlling the differentiation of multipotent stem cells or unipotent pre-myogenic cells (i.e., myoblasts) [9,10,11,12,13]. One of the most common methods to induce skeletal muscle cell differentiation is the use of a cultivation medium containing several myogenic differentiation factors (e.g., FGF, TGF-β, and IGF) [14, 15]. In addition to these soluble factors, it was recently reported that modulation of the cellular microenvironment, including the extracellular matrix (ECM), can synergistically steer cell fates under differentiation factor treatments [16,17,18,19,20,21]. One such strategy is microgrooving of the cell cultivation platform, which further induces deformation of the cellular shape. Considering that skeletal muscle tissue retains elongated structures to exert unidirectional forces, modulating cellular morphology into an elongated shape using microgroove structures is beneficial for efficient muscle cell generation [22].
Various micropatterned structures made of different materials, including metals, silicon, proteins, peptides, and biocompatible synthetic polymers, have been reported, all of which are proven to guide cell shape with a high aspect ratio [23,24,25]. Synthetic polymers (e.g., polydimethylsiloxane, polycaprolactone, polyvinyl alcohol) are particularly advantageous because they are cheap, non-toxic to cells, and, most importantly, can be easily fabricated into various structures via a simple molding technique. However, one of the major drawbacks of such polymers is their hydrophobicity, which prevents cellular adhesion on the platform surfaces [26]. Cell adhesion is extremely important because the majority of cell sources that can generate skeletal muscles in vitro must adhere to the platform surface for further growth and differentiation [27]. Therefore, adding functionality to the polymeric micropatterned substrate is essential for its use as a culture substrate for the generation of skeletal muscle cells with improved conversion efficiency [28,29,30,31,32]. In this study, we report a new platform consisting of nanopillar arrays, nano-sized graphene oxide (sGO), and microgrooves for highly efficient skeletal muscle cell differentiation (Fig. 1). The laser interference lithography technique is first applied to the photoresist (PR)-coated silicon micropattern mold to generate periodic homogeneous nanohole patterns. The polydimethylsiloxane (PDMS) polymer is applied to the nanohole-modified silicon mold to reversely replicate the structure, resulting in the generation of nanopillars on microgroove hybrid polymer array (NMPA). Graphene oxide (GO) sheets of different sizes are then coated onto the substrates to enhance cell adhesion on the micropatterned polymeric arrays along with the nanopatterns [33]. The structural and chemical characteristics of the GO-coated NMPA are confirmed by scanning electron microscopy (SEM) and Raman spectroscopy, respectively.
The ability of the fabricated hybrid arrays to generate elongated cell shape is assessed based on three different parameters, including the cell spreading, circularity, and aspect ratio of the myoblast cell line, C2C12. Finally, the efficiency of skeletal muscle cell differentiation of C2C12 on differently fabricated platforms is evaluated based on the immunofluorescence images using myosin heavy chain (MHC) as a myogenesis marker. Schematic diagram of necessity of research for muscle differentiation in the treatment of skeletal muscle and nano-sized graphene oxide-modified nanopillars on microgroove polymer array for muscle differentiation Methods/experimental Distilled water (DW) was purified through a Milli-Q system (Millipore, USA). Sylgard 184 base and curing agent kit were purchased from Dow Corning (Midland, MI, USA) for the fabrication of PDMS substrates. Indium tin oxide (ITO)-coated glass was purchased from Omniscience (Korea). The line-micropatterned silicon wafer of 20 μm in width size was custom-ordered from MicroFIT (Korea). Hexamethyldisilazane (HMDS) was purchased from Sigma-Aldrich (USA), and negative PR (AZ nLOF™ 2020), developer (AZ 300 MIF), and thinner (AZ 1500) were obtained from AZ Electronic Materials (USA). The single-layer GO was purchased from Graphene Supermarket (USA). The quartz plate was purchased from Jinzhou BEST Quartz Glass Co., Ltd. (China). The mouse myoblast cell line (C2C12) was acquired from the American Type Culture Collection (ATCC, USA). To prepare the media, Dulbecco's modified Eagle's medium (DMEM) was purchased from Welgene (Korea), and fetal bovine serum (FBS) was purchased from Young In Frontier (Korea). Penicillin, streptomycin, and horse serum were obtained from Gibco (Thermo Fisher Scientific, USA). Before immunostaining the cells, 4% paraformaldehyde from Biosaesang (Korea) was used for fixation. To immunostain the F-actin, Alexa Fluor™ 546 phalloidin was purchased from Thermo Fisher Scientific. 
Anti-sarcomeric α-actinin antibody (Abcam, UK) and MHC antibody (R&D Systems, USA) were used as primary antibodies. Alexa Fluor™ 488 goat anti-mouse IgG (H+L) (Thermo Fisher Scientific) and goat anti-rabbit IgG (H+L) (Abcam) were used as secondary antibodies. Fabrication of nano-, micro-pattern, and NMPA The micropatterned PDMS of 20 μm in width size was fabricated by casting PDMS in the micropatterned silicon wafer of 20 μm in width size. The nanopatterned PDMS was fabricated by casting PDMS in the nanohole patterns of 400, 600, and 800 nm in size. The nanohole patterns were fabricated using the laser interference lithography technique. In brief, clear ITO substrates were spin-coated with the HMDS and negative PR sequentially. After exposure to the laser, the substrates were baked at 125 °C for 1 min, then immersed in the developer for 1 min. The developed substrates were removed from the developer solution, washed with DW, and dried with N2 gas. The NMPA were fabricated by casting PDMS in the silicon wafer patterned with micro−nano holes. The nanohole patterns were fabricated on the micropatterned silicon wafer by the above method. To produce a rigid substrate for cell attachment, PDMS was cast with a 17:1 base-to-curing agent ratio, followed by an overnight cure at 70 °C. The surface of the fabricated patterns was observed by field-emission SEM (Auriga, Carl Zeiss, Germany) after Pt coating by ion-sputter. The topography of the fabricated nanopillars on microgroove was observed by atomic force microscopy (AFM). Fabrication of GO thin film and GO modification on NMPA The fabrication of GO thin film involved the following three processes: (1) GO suspension (50 µL, 500 mg/L in 70% ethanol) was dispensed on a 15 mm2 quartz plate. The quartz plate coated with GO suspension was heated to 65 °C for 55 s. (2) The quartz plate heated for 55 s was spin-coated for 65 s at rotation speeds of 500, 1000, and 1800 rpm. 
Each condition corresponds to the manufacturing conditions of extremely large GO (LGO), 10 nm size of sGO (10-sGO), and 5 nm size of sGO (5-sGO) in sequence. (3) Finally, the GO-coated quartz plate was treated with low-oxygen concentration and low-electrical plasma (LOLP). Limited to LGO manufacturing, LOLP treatment was omitted. During the LOLP treatment, the oxygen concentration was set at 30 sccm, and the electrical power was set at 30 W. The GO coated NMPA was prepared by contacting a moisturized GO thin film with the NMPA. The GO thin film was exposed for 10 s to steam derived from a 1:1 ratio of a DW−DMSO mixture heated at 100 °C for 5 min. The moisturized GO thin film was brought into contact with the NMPA at 25 °C for 180 s. Then, the GO thin film was removed from the NMPA. Cell culture and differentiation The substrates used to culture C2C12 cells were washed with PBS and then treated with UV for 30 min. The cells were incubated in a growth medium containing high-glucose DMEM supplemented with 10% FBS and 1% penicillin/streptomycin for 3 days at 37 °C in a 5% CO2 atmosphere. For differentiation, the cells were grown in a differentiation medium containing 2% of horse serum, 1% penicillin/streptomycin, and DMEM. The differentiation medium was replaced every 2 days for 10 days. Cell immunostaining To fix the cells, the substrates were immersed in 4% formaldehyde solution for 10 min. Triton-X solution (0.2%) was used to permeabilize the cells. The cells were blocked using a solution of 1% bovine serum albumin (BSA). Between the steps, the cells were washed three times with Dulbecco's phosphate-buffered saline (DPBS). After the washing steps, the cells were incubated with a primary antibody (1000:1) at 4 °C overnight. For primary antibodies, sarcomeric α-actinin and MHC were used. 
The washing steps were repeated three times, and then the cells were incubated with secondary antibodies, Alexa Fluor™ 488 goat anti-mouse IgG (H+L) (FITC) and goat anti-rabbit IgG (H+L), for 2 h at room temperature. Cell nuclei were stained with Hoechst (3 µg/mL) for 3 min, and F-actin was stained with Alexa Fluor™ 546 phalloidin. Immunostained cells were observed using a confocal laser scanning microscope (Carl Zeiss GmbH, Jena, Germany). Confocal images were analyzed using ZEN Black software (Carl Zeiss GmbH). Cell behaviors on nano- and micropatterns To confirm the ability of nanopillars to increase cell adhesion on PDMS substrates, C2C12, an immortalized mouse myoblast, was chosen as a model cell line [34]. C2C12 is an adherent cell line and should be cultivated on a surface that is functionalized to promote cell adhesion. Interestingly, periodic nanostructures, including nanogrooves and nanopillars, have proven effective in enhancing cell adhesion even without the use of ECM proteins and peptides [35,36,37,38]. Therefore, in this study, the uniform PDMS nanopillar patterns were fabricated by laser interference lithography, as specified in Additional file 1: Fig. S1 [23]. To investigate the effects of nanopattern sizes on cell adhesion and morphology, nanoholes of different sizes (e.g., 400, 600, and 800 nm) were generated on the silicon mold and were reversely replicated using PDMS, as shown in Fig. 2a. Based on the size distribution result (Fig. 2b) and high-resolution vertical SEM image (Fig. 2c), we found that all the PDMS nanopillars were uniformly fabricated. Such uniformity in size and shape is reported to be critical for facilitating cell adhesion to the desired substrates [23]. Confirmation of the effect of nano- and micropatterns for cell behaviors. a FE-SEM images of PR nanohole patterns and PDMS nanopillar patterns, and b size distribution of PDMS patterns. c 3D SEM image of PDMS nanopillar patterns.
d Confocal microscope images of the cells on bare PDMS, and 400, 600, and 800 nm PDMS patterns (left to right) immunostained with actin (Red) and Hoechst (Blue). e−g Cell spreading area (e), circularity (f), and cell aspect ratio (g) of the cells on bare PDMS, and 400, 600, and 800 nm PDMS pattern. h FE-SEM images of micropatterned silicon wafer, PDMS and side view image of micropatterned PDMS. i Confocal microscope images of the cells on bare PDMS and micropatterned PDMS immunostained with actin (Red) and Hoechst (Blue). j Cell orientation analysis of the cells on the bare PDMS and micropatterned PDMS. (*** p ≤ 0.001) As shown in the images of the cells double-stained for F-actin and nuclei (Fig. 2d), C2C12 cells on the nanopatterned PDMS showed a more elongated and stretched morphology than those on bare PDMS, regardless of the nanopattern sizes. For a more detailed analysis of cell behavior on each substrate, we calculated the cell spreading area, circularity, and cell aspect ratio from multiple actin/DAPI-stained cells (Fig. 2e−g). The cell spreading area on nanopatterned PDMS was increased compared with the bare PDMS substrate (Fig. 2e). Specifically, the 600 nm-sized nanopatterned PDMS showed a cell spreading area of 679.7 µm², which was significantly greater than those of the other substrates (428.2 µm² for bare PDMS, 616.3 µm² for 400 nm-sized nanopatterned PDMS, and 535.0 µm² for 800 nm-sized nanopatterned PDMS). The cell circularity was considered next. Cells with a circularity of 1 are perfectly circular, with no stretching along the surface or structural deformation. By contrast, a circularity approaching 0 means that the cell outline is highly irregular, with many protrusions, which is taken as a sign of cell spreading and growth. We found that the highest circularity corresponded to bare PDMS, showing a value of 0.12, followed by the 400-, 600-, and 800 nm-sized nanopatterned substrates, with values of 0.071, 0.062, and 0.064, respectively (Fig. 2f).
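The text does not spell out how these shape descriptors are defined; a minimal sketch, assuming the common ImageJ-style definitions (circularity = 4πA/P², aspect ratio from the bounding box rather than a fitted ellipse) and a hypothetical rectangular cell outline as input:

```python
import math

def polygon_area(pts):
    # Shoelace formula for the area of a simple polygon
    return 0.5 * abs(sum(x1 * y2 - x2 * y1
                         for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1])))

def polygon_perimeter(pts):
    # Sum of edge lengths around the closed outline
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:] + pts[:1]))

def circularity(pts):
    # ImageJ-style circularity: 1 for a circle, approaching 0 for
    # highly branched, irregular outlines
    a, p = polygon_area(pts), polygon_perimeter(pts)
    return 4 * math.pi * a / p ** 2

def aspect_ratio(pts):
    # Crude aspect ratio from the axis-aligned bounding box (assumption:
    # the actual analysis may use a fitted ellipse instead)
    xs, ys = [p[0] for p in pts], [p[1] for p in pts]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    return max(w, h) / min(w, h)

# Hypothetical elongated cell outline: a 60 x 12 um rectangle
cell = [(0, 0), (60, 0), (60, 12), (0, 12)]
print(polygon_area(cell))   # spreading area in um^2
print(circularity(cell))
print(aspect_ratio(cell))
```

In practice, the polygon vertices would come from a segmented cell mask; the numbers above only illustrate how the three metrics are derived from the same outline.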
Similar to the cell spreading results, the 600 nm-sized nanopatterned substrate showed the lowest circularity among all the groups, indicating that more branches stretch out from the cells; that is, the cells show enhanced adhesion. Unlike these two parameters, the cell aspect ratio was not affected by the nanopatterns, as shown in Fig. 2g. Taken together, we found that the 600 nm-sized PDMS nanopillar arrays are highly effective in enhancing cell adhesion and spreading. In addition to an effort to improve cell attachment to the platform, we further investigated the effects of PDMS microgroove patterns on cell morphology. The fabrication process of the micropattern is described in Additional file 1: Fig. S2. As shown in Fig. 2h, the micropatterns on the fabricated micropatterned PDMS were uniformly aligned in one direction. The size and height of the micropatterns on the PDMS were about 20 and 2 μm, respectively, which is consistent with those of the micropatterns on the silicon wafers. After the C2C12 cells were seeded on the micropatterned PDMS, the cells were immunostained to confirm the alignment of the cells (Fig. 2i). As shown in the confocal microscopy images, aligned microstructures of the C2C12 cells were obtained on the micropatterned PDMS compared with the cells on the bare PDMS. To acquire the quantitative data on the cell alignment from the images, spatial point pattern analysis was undertaken using the J-function in the ImageJ program. An initial color survey of the cells on the substrates was performed (Additional file 1: Fig. S3). It revealed that the cells on the bare PDMS showed randomly arranged structures, and the bar graph demonstrated the absence of any dominant angle of cell alignment (Fig. 2j). By contrast, the cells on the micropatterned PDMS were aligned horizontally, and 73.6% of the intensity was focused within 30° from a dominant line that represented 0° in the graph. 
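The "73.6% within 30° of the dominant direction" statistic can be reproduced from an orientation histogram; a hedged sketch on hypothetical angle measurements (the binning width and the exact procedure used in the paper are assumptions), treating orientations as circular over 0–180°:

```python
def aligned_fraction(angles_deg, window=30.0, bin_width=10.0):
    """Fraction of cells oriented within +/- `window` degrees of the
    dominant direction. Orientations lie in [0, 180); the wrap-around
    at 180 degrees is handled as a circular difference."""
    # Histogram the angles to find the dominant orientation bin
    bins = {}
    for a in angles_deg:
        bins.setdefault(int(a // bin_width), []).append(a)
    dominant = max(bins, key=lambda b: len(bins[b]))
    center = (dominant + 0.5) * bin_width

    def circ_diff(a, b):
        d = abs(a - b) % 180.0
        return min(d, 180.0 - d)

    hits = sum(1 for a in angles_deg if circ_diff(a, center) <= window)
    return hits / len(angles_deg)

# Hypothetical measurements: 8 cells near the horizontal, 2 outliers
angles = [2, 5, 175, 178, 1, 3, 6, 179, 90, 60]
print(aligned_fraction(angles))  # -> 0.8
```

Note that angles of 175–179° count as aligned with a dominant direction near 0°, which matches the physical fact that a cell oriented at 179° is nearly horizontal.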
Therefore, we could adapt the micropatterned PDMS to fabricate the NMPA for cell alignment. Cell behavior on various GO-modified substrates GO has been used as a component material in various bio-integrated platforms owing to its biocompatibility and highly adhesive nature [39, 40]. To attach the C2C12 cells to the substrates stably and further control the myogenesis, GO sheets were coated on the substrates through a transfer technique, as previously reported [41]. The GO could be simply modified on the desired substrate via the drop-coating method. However, the large stacked GO sheets would cover the entire polymer surface and could thus eliminate the nanopattern effects. Therefore, three different types of GO, including LGO, 10-sGO, and 5-sGO, were applied to the substrates. The transfer of the GO onto the PDMS substrates is described in Additional file 1: Fig. S4. As shown in Fig. 3a and Additional file 1: Fig. S5, all types of GO were successfully transferred to the PDMS substrates regardless of their size. Because GO is a proven cytophilic material, the spreading area of C2C12 cells increased on all GO-coated substrates (Fig. 3b and c). Specifically, the cell spreading area was calculated to be 633.1, 978.3, 1039.5, and 1154.6 µm² for bare PDMS, 5-sGO, LGO, and 10-sGO substrates, respectively. Moreover, the circularity values were 0.13, 0.13, and 0.17 for the LGO, 10-sGO, and 5-sGO groups, respectively (Fig. 3d), indicating that GO induces branch-like formations that stretch out from the outer cell membrane area due, in part, to enhanced filopodia and lamellipodia formation. No significant differences were observed in the cell aspect ratios among all tested groups, confirming that GO has an excellent cytophilic property similar to that of the periodic nanopatterns. Confirmation of the effect of graphene oxide on cell behaviors. a
Raman intensity map of the bare PDMS, LGO-, 10-sGO-, and 5-sGO-modified PDMS. b Confocal microscope images of the cells on bare PDMS, LGO-, 10-sGO-, and 5-sGO-modified PDMS immunostained with actin (Red) and Hoechst (Blue). c−e Cell spreading area (c), circularity (d), and cell aspect ratio (e) of the cells on bare PDMS, LGO-, 10-sGO-, and 5-sGO-modified PDMS. f Schematic diagram of trypsin and centrifugation treatment process. g, h Cell ratio remaining on the PDMS, LGO-, 10-sGO-, and 5-sGO-modified PDMS after trypsin (g) and centrifugation (h) treatment. (* p ≤ 0.05, ** p ≤ 0.01, *** p ≤ 0.001) To further study the cell adhesion strength on GO coated NMPA, the cell adhesion was intentionally weakened via trypsin treatment and centrifugation as chemical and mechanical methods, respectively (Fig. 3f). The trypsin treatment time and centrifugation time were optimized as 80 s and 8 min, respectively (Additional file 1: Fig. S6). The centrifugal force applied to the cells was calculated to be 0.269 mN based on the following equation: $$F = m\omega^{2}r$$ After 80 s of the trypsin treatment, 52.7% of the cells remained on the bare PDMS substrate, while 68.2%, 71.7%, and 69.6% of the cells remained attached to the LGO-, 10-sGO-, and 5-sGO-coated PDMS substrates, respectively (Fig. 3g). In the mechanical detachment experiments, 74.0%, 92.0%, 92.8%, and 95.5% of the cells were found to be attached to the bare PDMS, LGO-, 10-sGO-, and 5-sGO-coated PDMS substrates, respectively, when the centrifugal force was applied for 8 min (Fig. 3h). Taken together, the results indicate that GO is effective not only in enhancing cell spreading (i.e., the cell adhesion area) but also in strengthening cell adhesion on the artificial polymer substrates, both of which are incredibly important for in vitro skeletal muscle cell differentiation.
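The centrifugal force F = mω²r is easy to evaluate numerically; a minimal sketch, with the rpm-to-rad/s conversion made explicit. The mass, rotor speed, and radius below are placeholders only, since the paper reports the resulting force (0.269 mN) but not the values it was computed from:

```python
import math

def centrifugal_force(mass_kg, rpm, radius_m):
    # F = m * omega^2 * r, with omega (rad/s) converted from rpm
    omega = rpm * 2 * math.pi / 60.0
    return mass_kg * omega ** 2 * radius_m

# Hypothetical inputs: 1 ug effective mass, 1000 rpm, 8 cm rotor radius
force_newtons = centrifugal_force(1e-6, 1000, 0.08)
print(force_newtons)
```

Swapping in the actual cell mass, rotor speed, and radius would reproduce the reported 0.269 mN value.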
Cell behaviors on GO coated NMPA After confirming the excellent properties of each core component, including PDMS nanopillars and sGO for improving cell adhesion, and the microgrooves for cell morphology modulation, we attempted to fabricate the fully combined sG-NMPA (Additional file 1: Fig. S7). As shown in Fig. 4a, the 600 nm-sized nanohole patterns were successfully fabricated on the 20 μm-sized microgrooved Si wafer. Then, PDMS was used to reversely replicate the structure of the nanohole pattern-modified Si wafer. The height of the fabricated nanopillars on microgroove was 121.98 nm (Additional file 1: Fig. S8). Afterward, LGO, 10-sGO, and 5-sGO were transferred onto each nano-, micro-patterned PDMS, and NMPA substrate via the contact-printing method under high humidity. As shown in Fig. 4b, owing to its large sheet size (average: 0.5−10 μm), LGO was heavily stacked on the surface, resulting in a uniform G-band (i.e., in-plane vibration of sp2-bonded carbon atoms) distribution in the Raman spectrum for all the substrates regardless of their topographical differences. For 10-sGO and 5-sGO, the structural morphology of both microgrooves and nanopillars resulted in variations of G-band intensity (Fig. 4b). Among microgrooves and nanopatterns, the PDMS nanopillars showed superior GO adsorption efficiency over the microgrooves-only substrate due to their increased hydrophobicity, which lowered the surface energy level. After confirming successful GO transfer to NMPA, C2C12 cells were seeded on each platform and stained with phalloidin and Hoechst to visualize F-actin expression and the nucleus shape, respectively (Fig. 4c). We specifically focused on the alignment and direction of the C2C12 cells, which are important indicators to guide skeletal muscle cell differentiation through cell morphology manipulation. As hypothesized, cells on the bare PDMS substrate showed random orientation and growth with limited cell size.
However, the patterned PDMS substrates forced cells to exhibit a unidirectional morphology with a close cell-to-cell network that mimics the structure of in vivo muscle tissue (Fig. 4d). To better compare the cell alignment tendency on each substrate, the degree of alignment was converted to numerical values using the following equation [42], where σcell is the standard deviation of the cell angles: $$\text{Cell alignment index} = \frac{180/\sqrt{12}}{\sigma_{\text{cell}}}$$ Investigation of the cell behaviors on the GO-coated NMPA. a FE-SEM images of micro-nano pattern on silicon wafer and NMPA. b Raman intensity map of the LGO-, 10-sGO-, and 5-sGO-coated micropattern, nanopattern, and NMPA. c Confocal microscope images of the cells on bare PDMS, NMPA, LG-NMPA, 10-sG-NMPA, and 5-sGO-NMPA immunostained with actin (Red) and Hoechst (Blue). d The cell orientation graph of the cells on the bare PDMS, micropatterned PDMS, NMPA, LG-NMPA, 10-sG-NMPA, and 5-sGO-NMPA. e Cell alignment index of the cells on bare PDMS, micropatterned PDMS, NMPA, LG-NMPA, 10-sG-NMPA, and 5-sGO-NMPA. f−h Cell spreading area (f), circularity (g), and cell aspect ratio (h) of the cells on the bare PDMS, micropatterned PDMS, nanopatterned PDMS, NMPA, LG-NMPA, 10-sG-NMPA, and 5-sGO-NMPA. (* p ≤ 0.05, *** p ≤ 0.001) As shown in Fig. 4e, the calculated cell alignment index of cells on patterned substrates was 2.06, 2.41, 2.21, 2.07, and 2.50 for microgrooves, NMPA, LGO-coated NMPA (LG-NMPA), 10-sGO-coated NMPA (10-sG-NMPA), and 5-sGO-coated NMPA (5-sG-NMPA), respectively. The results indicate that microgrooves are critical in guiding cellular morphology into an elongated shape, while sGO modification, especially 5-sGO, does not harm the guiding ability of the microgrooves.
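The normalization constant 180/√12 ≈ 51.96 in the alignment index is the standard deviation of angles distributed uniformly over a 180° range, so a randomly oriented culture scores close to 1 while aligned cultures score higher. A minimal sketch with hypothetical angle lists:

```python
import math
from statistics import pstdev

def cell_alignment_index(angles_deg):
    # Index = (180 / sqrt(12)) / sigma_cell, where sigma_cell is the
    # standard deviation of the measured cell angles. Since 180/sqrt(12)
    # equals the standard deviation of a uniform distribution over
    # 180 degrees, random orientation gives ~1 and alignment gives >> 1.
    return (180.0 / math.sqrt(12)) / pstdev(angles_deg)

random_angles = list(range(180))           # perfectly uniform orientations
aligned_angles = [88, 90, 92, 89, 91, 90]  # tightly clustered around 90 deg

print(cell_alignment_index(random_angles))   # ~1
print(cell_alignment_index(aligned_angles))  # >> 1
```

Using the population standard deviation (`pstdev`) matches the uniform-distribution normalization exactly; with the sample standard deviation the random-orientation baseline would sit slightly below 1.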
Regarding the effects of GO modification, cell spreading increased on the bare PDMS nanopattern and the GO-coated substrates, while the circularity decreased for cells from all the substrates compared with bare PDMS, which is consistent with the results we obtained without microgrooves (Figs. 2e−g and 3c−e). Taken together, we can conclude that the developed sG-NMPA are highly effective in both manipulating cell morphology into an elongated shape and enhancing cell adhesion, which is advantageous for skeletal muscle cell differentiation. Enhanced skeletal muscle cell differentiation on GO-coated NMPA To confirm the ability of the fabricated hybrid platform to guide myogenic differentiation, C2C12 cells were finally cultured and differentiated on each platform. As hypothesized, cells on bare PDMS detached immediately due to the low cell adhesion strength (Additional file 1: Fig. S9, Additional file 2: Video S1). The detachment of C2C12 cells on flat PDMS surfaces usually occurs owing to the high contractility of the cells compared to their adhesion strength to the substrates [43, 44]. Remarkably, by contrast, cells on the GO-coated nanopillar-modified microgrooves were observed to be stable throughout the differentiation. After 10 days of differentiation, the cells remaining on bare PDMS surfaces showed a circular shape, with a low morphological aspect ratio and high circularity. However, the cells on the GO-coated NMPA substrates showed a higher cell spreading area and more elongated morphology than the cells on the bare PDMS. After that, the differentiated cells were further stained with two different myogenesis markers, α-actinin and MHC (Fig. 5b). α-Actinin is an actin-binding cytoskeletal protein critical for focal adhesion [45]. MHC is one of the motor proteins in muscle filaments [46]. As shown in Fig. 5b, both α-actinin and MHC were highly expressed in the GO-modified nanopillar groups, especially on the sGO-coated substrates.
To better compare the skeletal differentiation based on the immunofluorescence images, the mean intensities of MHC expressed as red color were divided by the number of nuclei and plotted. The mean values of the differentiated cells were calculated to be 0.118, 0.178, 0.197, and 0.248 for bare PDMS, LG-NMPA, 10-sG-NMPA, and 5-sG-NMPA, respectively (Fig. 5c). Although the variations were found to be high for all the groups, the sG-NMPA showed 10.7% (10-sG-NMPA) and 39.3% (5-sG-NMPA) higher MHC expression levels than the LGO-coated nanopillar-modified microgrooves. Based on this observation, we concluded that sGO conserves the distinct nanopillar and microgroove co-existing structure of the fabricated PDMS substrate and also enhances cell adhesion, which results in enhanced skeletal muscle cell differentiation (Fig. 5d). Confirmation of myogenic differentiation of the cells. a Brightfield images of the cells on the bare PDMS, LG-NMPA, 10-sG-NMPA, and 5-sGO-NMPA. b Confocal microscope images of the cells on bare PDMS, LG-NMPA, 10-sG-NMPA, and 5-sGO-NMPA immunostained with α-actinin (Green), MHC (Red), and Hoechst (Blue). c The mean MHC intensity divided by the number of nuclei. d Schematic diagram of the correlation between GO size and cell differentiation based on the mean intensity data. (* p ≤ 0.05) In this study, a GO-coated NMPA that enhances skeletal muscle cell differentiation was fabricated. The micropattern, nanopattern, and GO were employed in the hybrid pattern array for guiding cell differentiation and function. First, the micropattern was used to align cells with the pattern. Next, the nanopattern and GO were used to enhance the adhesion of cells on the polymer substrate. Altogether, the hybrid pattern array showed an enhanced cell spreading area as well as cell alignment. Consistent with the cell behaviors on the hybrid pattern array, the myogenic differentiation of the cells on the hybrid pattern array was enhanced compared with the bare PDMS substrate.
Furthermore, we investigated the optimized condition of the GO, especially the size of the GO on the patterned substrate. From the results, 5 nm-sized GO on the hybrid pattern array enhanced the differentiation of cells and stable culture of the cells on the polymer substrate simultaneously. The results indicate that the proposed GO-coated NMPA is an appropriate platform for skeletal muscle cell differentiation on the rigid polymer substrate. Therefore, the sGO-coated NMPA can be utilized in regenerative medicine, which requires control of cell differentiation and is a fundamental technology of biorobots composed of muscle cells and the polymer substrate. W.R. Frontera, J. Ochala, Calcif. Tissue Int. 96, 183 (2015) S. Levenberg, J. Rouwkema, M. Macdonald, E.S. Garfein, D.S. Kohane, D.C. Darland, R. Marini, C.A. van Blitterswijk, R.C. Mulligan, P. A. D'Amore, R. Langer. Nat. Biotechnol. 23, 879 (2005) Y. Morimoto, M. Kato-Negishi, H. Onoe, S. Takeuchi, Biomaterials 34, 9413 (2013) H. Yin, F. Price, M.A. Rudnicki, Physiol. Rev. 93, 23 (2013) M. Park, Y.J. Heo, Biochip J. 15, 1 (2021) T. Laumonier, J. Menetrey, J. Exp. Orthop. 3, 15 (2016) Z. Puthucheary, S. Harridge, N. Hart, Crit. Care Med. 38, S676 (2010) M.H. Moussa, G.G. Hamam, A.E. Abd Elaziz, M.A. Rahoma, A.A. Abd, D.A.A. El Samad, El-Waseef, M.A. Hegazy, Tissue Eng. Regen. Med. 17, 887 (2020) P. Bi, A. Ramirez-Martinez, H. Li, J. Cannavino, J.R. McAnally, J.M. Shelton, E. Sanchez-Ortiz, R. Bassel-Duby, E.N. Olson, Science 356, 323 (2017) T. Hu, M. Shi, X. Zhao, Y. Liang, L. Bi, Z. Zhang, S. Liu, B. Chen, X. Duan, B. Guo, Chem. Eng. J. 428, 131017 (2022) T. Braun, M. Gautel, Nat. Rev. Mol. Cell Biol. 12, 349 (2011) J. Lim, H. Ching, J.K. Yoon, N.L. Jeon, Y. Kim, Nano Converg. 8, 12 (2021) S.A. Salem, Z. Rashidbenam, M.H. Jasman, C.C.K. Ho, I. Sagap, R. Singh, M.R. Yusof, Z. Md. Zainuddin, R. B. Haji Idrus, and M.H. Ng, Tissue Eng. Regen. Med. 17, 553 (2020) S. Weis, T.T. Lee, A. del Campo, A.J. 
García, Acta Biomater. 9, 8059 (2013) J. Son, H.-H. Kim, J.-H. Lee, W.-I. Jeong, J.-K. Park, Biochip J. 15, 77 (2021) P. Chen, Y. Miao, F. Zhang, J. Huang, Y. Chen, Z. Fan, L. Yang, J. Wang, Z. Hu, Theranostics 10, 11673 (2020) J.H. Lee, J. Luo, H.K. Choi, S.D. Chueng, K.B. Lee, J.W. Choi, Nanoscale 12, 9306 (2020) J. Lee, S. Lee, S.M. Kim, H. Shin, Biomater. Res. 25, 14 (2021) M.S. Kang, S.J. Jeong, S.H. Lee, B. Kim, S.W. Hong, J.H. Lee, D.-W. Han, Biomater. Res. 25, 4 (2021) C. Park, S. Park, D. Lee, K.S. Choi, H.-P. Lim, J. Kim, Tissue Eng. Regen. Med. 14, 481 (2017) J.K. Virdi, P. Pethe, Tissue Eng. Regen. Med. 18, 199 (2021) M.T. Lam, S. Sim, X. Zhu, S. Takayama, Biomaterials 27, 4340 (2006) J.H. Lee, H.K. Choi, L. Yang, S.D. Chueng, J.W. Choi, K.B. Lee, Adv. Mater. 30, e1802762 (2018) M.P. Sousa, E. Arab-Tehrany, F. Cleymand, J.F. Mano, Small 15, e1901228 (2019) C. Zhao, X. Wang, L. Gao, L. Jing, Q. Zhou, J. Chang, Acta Biomater. 73, 509 (2018) M. Ferrari, F. Cirisano, M.C. Morán, Colloids Interfaces 3, 48 (2019) A. Jiao, C.T. Moerk, N. Penland, M. Perla, J. Kim, A.S.T. Smith, C.E. Murry, D.H. Kim, J. Biomed. Mater. Res. A 106, 1543 (2018) S. Chaterji, P. Kim, S.H. Choe, J.H. Tsui, C.H. Lam, D.S. Ho, A.B. Baker, D.H. Kim, Tissue Eng. Part A 20, 2115 (2014) K. Cheng, W.S. Kisaalita, Biotechnol. Prog. 26, 838 (2010) Y. Yang, X. Wang, T.C. Huang, X. Hu, N. Kawazoe, W.B. Tsai, Y. Yang, G. Chen, J. Mater. Chem. B 6, 5424 (2018) M. Yeo, G. Kim, Acta Biomater. 107, 102 (2020) L. Zhao, S. Mei, P.K. Chu, Y. Zhang, Z. Wu, Biomaterials 31, 5072 (2010) T.H. Kim, S. Shah, L. Yang, P.T. Yin, M.K. Hossain, B. Conley, J.W. Choi, K.B. Lee, ACS Nano 9, 3780 (2015) S. Burattini, P. Ferri, M. Battistelli, R. Curci, F. Luchetti, E. Falcieri, Eur. J. Histochem., 223 (2004) R.V. Goreham, A. Mierczynska, L.E. Smith, R. Sedev, K. Vasilev, Rsc Advances 3, 10309 (2013) Y.B. Lee, E.M. Kim, H. Byun, H.K. Chang, K. Jeong, Z.M. Aman, Y.S. Choi, J. Park, H. 
Shin, Biomaterials 165, 105 (2018) A.A. Ketebo, C. Park, J. Kim, M. Jun, S. Park, Nano Converg. 8, 19 (2021) T.H. Kim, C.H. Yea, S.T. Chueng, P.T. Yin, B. Conley, K. Dardir, Y. Pak, G.Y. Jung, J.W. Choi, K.B. Lee, Adv. Mater. 27, 6356 (2015) C.-H. Kim, I.R. Suhito, N. Angeline, Y. Han, H. Son, Z. Luo, T.-H. Kim, Adv. Healthc. Mater. 9, 1901751 (2020) H.J. Yoo, Y.G. Li, W.Y. Cui, W. Chung, Y.-B. Shin, Y.-S. Kim, C. Baek, J. Min, Nano Converg. 8, 31 (2021) C.H. Kim, Y. Han, Y. Choi, M. Kwon, H. Son, Z. Luo, T.H. Kim, Small 17, e2103596 (2021) A. Ray, Z.M. Slama, R.K. Morford, S.A. Madden, P.P. Provenzano, Biophys. J. 112, 1023 (2017) V. Gribova, C.Y. Liu, A. Nishiguchi, M. Matsusaki, T. Boudou, C. Picart, M. Akashi, Biochem. Biophys. Res. Commun. 474, 515 (2016) A.M. Almonacid Suarez, Q. Zhou, P. van Rijn, M.C. Harmsen, J. Tissue Eng. Regen. Med. 13, 2234 (2019) C.K. Choi, M. Vicente-Manzanares, J. Zareno, L.A. Whitmore, A. Mogilner, A.R. Horwitz, Nat. Cell Biol. 10, 1039 (2008) L. Wells, K.A. Edwards, S.I. Bernstein, EMBO 15, 4454 (1996) This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Ministry of Science, ICT & Future Planning (Grant No. 2019R1F1A1061120), Korea Ministry of Environment (MOE) as Program (Grant No. 2020003030001), and Uniance Gene Inc. Hye Kyu Choi and Cheol-Hwi Kim contributed equally to this work Department of Chemical and Biomolecular Engineering, Sogang University, Seoul, 04170, South Korea Hye Kyu Choi & Byung-Keun Oh School of Integrative Engineering, Chung-Ang University, Seoul, 06974, Korea Cheol-Hwi Kim & Tae-Hyung Kim Uniance Gene Inc., Seoul, 04107, South Korea Sang Nam Lee Hye Kyu Choi Cheol-Hwi Kim Tae-Hyung Kim Byung-Keun Oh HKC and C-HK contributed equally to this work. HKC, C-HK and B-KO designed and wrote the manuscript. HKC, C-HK, SNL and T-HK in performing the experiment. All authors read and approved the final manuscript. Correspondence to Sang Nam Lee, Tae-Hyung Kim or Byung-Keun Oh. 
S. N. Lee was employed by the company Uniance Gene Inc. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. Additional file 1: Fig. S1. Schematic diagram of the fabrication process of the PDMS nanopattern. Fig. S2. Schematic diagram of the fabrication process of the PDMS micropattern. Fig. S3. Color survey data of aligned cells on the bare PDMS (a) and micropatterned PDMS (b). Fig. S4. Schematic diagram of the fabrication process of GO-coated PDMS substrates. Fig. S5. Raman spectra of the bare PDMS and the LGO-, 10-sGO-, and 5-sGO-coated PDMS substrates. Fig. S6. Optimization of trypsin (a) and centrifugation (b) treatment time for the cells on the bare PDMS substrate. (* p ≤ 0.05, ** p ≤ 0.01, *** p ≤ 0.001). Fig. S7. Schematic diagram of the fabrication process of GO-coated NMPA. Fig. S8. Topology analysis of nanopillars on the microgroove. AFM image (a) and height and pitch (b) of nanopillars on the microgroove. Fig. S9. Morphology of the cells on the bare PDMS (a), NMPA (b), and 5-sG-NMPA (c) after 5 days of differentiation. Additional file 2: Video S1. Detachment of the skeletal muscle cells from the bare PDMS substrate. Video plays at 8x speed. Choi, H.K., Kim, C.-H., Lee, S.N. et al. Nano-sized graphene oxide coated nanopillars on microgroove polymer arrays that enhance skeletal muscle cell differentiation. Nano Convergence 8, 40 (2021). https://doi.org/10.1186/s40580-021-00291-6 Received: 09 November 2021 Keywords: Nano-sized graphene oxide; Myogenesis; Micro−nano hybrid pattern; Cell behavior
Express Letter The spectral scalings of magnetic fluctuations upstream and downstream of the Venusian bow shock S. D. Xiao1, M. Y. Wu1, G. Q. Wang1, Y. Q. Chen2 & T. L. Zhang1,3 We statistically investigate the spectral scalings of magnetic fluctuations in the upstream and downstream regions near the Venusian bow shock and differentiate them by shock geometry. Based on the Venus Express data, 115 quasi-parallel (\(Q_{\parallel }\)) bow shock crossings and 303 quasi-perpendicular (\(Q_{ \bot }\)) bow shock crossings are selected. The statistical results suggest that, after magnetic fluctuations cross the bow shock, the bow shock tends to flatten the upstream spectra toward 1/f noise in the magnetohydrodynamics (MHD) regime and steepen them toward turbulence in the kinetic regime, and this modification is basically consistent between the \(Q_{\parallel }\) and \(Q_{ \bot }\) bow shocks. However, the upstream spectral scalings are associated with the shock geometry. The changes of the spectral scalings of magnetic fluctuations near the \(Q_{\parallel }\) bow shocks are not as significant as those near the \(Q_{ \bot }\) bow shock crossings. That might result from the fluctuations generated by the backstreaming ions which can escape across the \(Q_{\parallel }\) bow shock into the foreshock. Our results suggest that the energy cascade and dissipation near Venus can be modified by the Venusian bow shock, and the \(Q_{\parallel }\) bow shock plays an important role in the energy injection and dissipation in the solar wind interaction with Venus. The large dispersion of spectral scalings indicates that this fluctuation environment is complex, and the shock geometry is not the only key factor in the fluctuations across the Venusian bow shock. Other possible factors in the shock modification to the upstream fluctuations will be explored in future work. As a typical unmagnetized planet, Venus has no global intrinsic magnetic field.
An induced magnetosphere is created by the solar wind (SW) interaction with Venus (e.g., Zhang et al. 2008a), consisting of the magnetic barrier (e.g., Zhang et al. 1991, 2008b; Xiao and Zhang 2018) and the magnetotail (e.g., Rong et al. 2014; Xiao et al. 2016). The Venusian-induced magnetosphere can deflect the upstream SW, and a bow shock and a magnetosheath are formed above it (e.g., Zhang et al. 2008c; Phillips and McComas 1991). The near-Venusian space environment is a natural laboratory for investigating the SW interaction with unmagnetized planetary bodies (e.g., Luhmann 1986). In the near-Venusian space, magnetic field fluctuations play an important role in the transformation of momentum and energy, and their properties are widely reported (e.g., Luhmann et al. 1983; Guicking et al. 2010; Du et al. 2010; Xiao et al. 2017). The power of these fluctuations generally exhibits a frequency dependence of the form \(P \propto 1/f^{\alpha }\) (where \(P\) is the power spectral density (PSD), \(f\) is the frequency, and \(\alpha\) is the spectral scaling index). The index α is considered an indicator of the nature of the fluctuations, and it generally has different values for the frequency ranges above and below the local proton gyrofrequency (\(f_{p} = eB/2\pi m_{p}\)). In the magnetohydrodynamics (MHD) frequency range, the Kolmogorov scaling value of \(\alpha\) ~ 5/3 is expected for turbulence. Energy cascade occurs in this regime, where the energy is transferred from lower to higher frequencies. In the kinetic frequency range, the turbulence has a larger value of \(\alpha\) ~ 2.8; in this regime the magnetic energy dissipates into the plasma or accumulates by exciting dispersive waves, such as whistler waves. The spectral scaling indices have been extensively used to analyze the magnetic field fluctuations and turbulence in the solar wind (e.g., Alexandrova et al. 2008, 2009; Kiyani et al. 2009; Bruno and Carbone 2013), in the planetary space environments near Earth (e.g., Vörös et al.
2004, 2007; Vörös 2011), Mars (e.g., Ruhunusiri et al. 2017), and Venus (e.g., Vörös et al. 2008a, b; Dwivedi et al. 2015; Xiao et al. 2018, 2020a, b). Although Venus is similar in size to Earth, the Venusian bow shock and magnetosheath are much smaller, with a scale ratio of ~ 1/10 (e.g., Slavin et al. 1979). This might result in some differences between their space plasma environments. Vörös et al. (2008a, b) reported a survey of the spectral scalings of magnetic fluctuations in the Venusian magnetosheath and wake. They observed 1/f noise in the dayside magnetosheath, wavy structures near the terminator, and MHD turbulence at the magnetosheath post-terminator boundary layer and near the nightside bow shock. The observed 1/f noise in the Venusian magnetosheath may indicate that the energy cascade between different scales is absent and the fluctuations are controlled by multiple uncorrelated driving sources (e.g., Vörös et al. 2007). Xiao et al. (2018) further examined the magnetic fluctuations in the dayside Venusian magnetosheath and found a clear difference in the turbulence distributions between the regions downstream of the quasi-parallel (\(Q_{\parallel }\)) and the quasi-perpendicular (\(Q_{ \bot }\)) shocks. It is speculated that turbulence can develop rapidly along the streamlines or penetrate into the Venusian magnetosheath downstream of the \(Q_{\parallel }\) bow shock. This suggests that the shock geometry has an effect on the spectral scalings of downstream magnetic fluctuations, and the bow shock plays a role in the turbulence distribution in the near-Venusian space. Xiao et al. (2020a) presented a description of the spectral scalings of magnetic fluctuations at the Venusian bow shock crossings. In terms of the spectral scalings, the dayside–nightside shock crossings exhibit a clear asymmetry. Noisy fluctuations dominate at the dayside shock crossings, and more MHD turbulence is present at the nightside shocks.
Moreover, this distribution at the bow shock seems independent of the shock geometry. However, little is known about how the bow shock modifies the upstream magnetic fluctuations and turbulence from the SW. In addition, previous investigations of turbulence near Venus have mainly focused on the MHD regime. It is believed that the spectral scalings of magnetic field fluctuations in the kinetic regime are quite different from those in the MHD regime. A recent study presented the global spatial distribution of the spectral scaling indices of magnetic fluctuations in both the MHD and kinetic regimes, and the global distribution suggests that kinetic effects on magnetic energy dissipation are common in the near-Venusian space (Xiao et al. 2020b). The SW turbulence can be modified by the Venusian bow shock in both the MHD and kinetic regimes, and kinetic turbulence extensively occurs in the Venusian magnetosheath and the induced magnetosphere. It is believed that kinetic turbulence plays a prominent role in magnetic energy dissipation and particle heating in such an environment. In this paper, we aim to differentiate the spectral scalings of the magnetic field fluctuations upstream and downstream of the Venusian bow shock and the effects of the shock geometry. The spectral scaling indices will be examined in both the MHD and kinetic regimes. The investigation of the Venusian bow shock modifications to the upstream SW turbulence can help us better understand the energy injection, cascade, and dissipation in the SW interaction with Venus. Data and methods Venus Express (Svedhem et al. 2007; Titov et al. 2006), as ESA's first Venus exploration mission, was launched in November 2005 and arrived at Venus in April 2006. Venus Express provides a great opportunity to examine the spectral scalings of the magnetic fluctuations near the Venusian bow shock. In this study, the magnetic field data measured by the Venus Express magnetometer (Zhang et al.
2006) at a sampling rate of 32 Hz are used, and the bow shock crossings can be identified by the sudden change in the magnitude of the magnetic field between the SW and the Venusian magnetosheath, as shown in Fig. 1a. A Venusian bow shock crossing event observed by Venus Express on 30 Dec. 2006. a The time series of the total magnetic field; the shock crossing interval is shaded. b The magnetic field observations near the bow shock in the aberrated VSO coordinate system, with the upstream and downstream intervals indicated. c, d The PSDs of the magnetic fluctuations for the upstream and downstream intervals Figure 1a shows the total magnetic field time series containing a bow shock crossing on Dec. 30, 2006. To examine the spectral scalings of the upstream and downstream magnetic fluctuations near the Venusian bow shock, the upstream and downstream intervals need to be selected during this shock crossing event. For this inbound case, a 256-s upstream interval is selected before the shock, and a 128-s downstream interval is selected after the shock crossing, as indicated in Fig. 1b. Figure 1b shows the magnetic field observations near the bow shock crossing in the aberrated Venus solar orbital (VSO) coordinate system, in which the X axis points antiparallel to the average SW flow (with an aberration angle of 5°) from Venus, the Z axis is perpendicular to the ecliptic plane and points toward north, and the Y axis completes the right-handed Cartesian coordinate system.
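The aberrated VSO frame described above is just the VSO frame rotated about the Z axis by the aberration angle so that X opposes the mean SW flow. A minimal sketch of that rotation (the rotation sign convention and the function name are illustrative assumptions, not from the paper):

```python
import numpy as np

def to_aberrated_vso(vec_vso, aberration_deg=5.0):
    """Rotate a vector from VSO into the aberrated VSO frame.

    The frame is rotated about the Z (north) axis by the aberration angle
    (~5 degrees, accounting for Venus' orbital motion) so that the new X
    axis points antiparallel to the average solar wind flow. The sign of
    the rotation depends on the adopted orbital-motion convention.
    """
    a = np.radians(aberration_deg)
    rot_z = np.array([[np.cos(a), -np.sin(a), 0.0],
                      [np.sin(a),  np.cos(a), 0.0],
                      [0.0,        0.0,       1.0]])
    return rot_z @ np.asarray(vec_vso, dtype=float)
```

Being a pure rotation about Z, the transform preserves vector magnitudes and leaves the Z component untouched.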
Based on these two intervals, the PSDs are calculated via wavelet transforms (Torrence and Compo 1998): $$\mathrm{PSD}\left( f \right) = \frac{2\Delta t}{N}\sum_{j = 1}^{N} \left[ W_{x}^{2} \left( t_{j} ,f \right) + W_{y}^{2} \left( t_{j} ,f \right) + W_{z}^{2} \left( t_{j} ,f \right) \right],$$ where \(\Delta t\) is the sampling time, \(N\) is the length of the time series, and \(W_{x}\), \(W_{y}\), and \(W_{z}\) are the wavelet transforms of the x, y, and z components of the magnetic field, respectively. The upstream and downstream PSDs are shown in Fig. 1c, d, respectively, with \(f_{p}\) marked by the dashed line. An obvious spectral break near \(f_{p}\) is present in the downstream PSD, while it is not so clear for the upstream one. The spectral break between the MHD and kinetic regimes is generally around \(f_{p}\). However, the spectral break is not always precisely at \(f_{p}\). For example, the spectral break may correlate better with the ion plasma frequency under some conditions (Chen et al. 2014). Due to the lack of ion data, here we distinguish these two regimes using \(f_{p}\) and ignore the transition range around \(f_{p}\) to eliminate its interference. In this study, we refer to the frequency range below \(f_{p} /2\) as the MHD regime and the frequency range above \(2f_{p}\) as the kinetic regime. The value of \(f_{p}\) is typically ~ 0.1 Hz in the upstream SW near Venus and ~ 0.3 Hz in the downstream Venusian magnetosheath. The upstream interval is selected to be longer so as to cover a lower frequency range. The spectral indices in the MHD regime (\(\alpha_{{\text{m}}}\)) and the kinetic regime (\(\alpha_{{\text{k}}}\)) can then be estimated as the slopes of the power–frequency log–log plot of the corresponding PSD in the frequency ranges of concern. Based on the methods described above, a statistical study of the spectral indices is performed at the upstream and downstream regions near the Venusian bow shock in the next section.
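The two steps above (a wavelet-based trace PSD, then a log–log slope fit) can be sketched as follows. This is a hand-rolled Morlet implementation for illustration, not the mission pipeline; the wavelet parameter ω0 = 6 and the normalization are common choices, and the squared coefficients are taken as |W|² since the Morlet transform is complex:

```python
import numpy as np

def wavelet_psd(bx, by, bz, dt, freqs, omega0=6.0):
    """Trace PSD(f) = (2*dt/N) * sum_j [|Wx|^2 + |Wy|^2 + |Wz|^2](t_j, f),
    with W the Morlet wavelet transform of each magnetic field component."""
    n = len(bx)
    t = (np.arange(n) - n / 2.0) * dt
    psd = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        s = omega0 / (2.0 * np.pi * f)        # Morlet scale matching frequency f
        wav = np.pi ** -0.25 * np.exp(1j * omega0 * t / s - 0.5 * (t / s) ** 2)
        wav *= np.sqrt(dt / s)                # keep daughter wavelets L2-normalized
        total = 0.0
        for comp in (bx, by, bz):
            w = np.convolve(comp - np.mean(comp), np.conj(wav[::-1]), mode="same")
            total += np.sum(np.abs(w) ** 2)
        psd[i] = 2.0 * dt / n * total
    return psd

def spectral_index(freqs, psd):
    """Spectral scaling index alpha, the negated slope of the log-log
    power-frequency fit: PSD ~ 1/f**alpha."""
    slope = np.polyfit(np.log10(freqs), np.log10(psd), 1)[0]
    return -slope
```

Applied separately to the frequency ranges below f_p/2 and above 2 f_p, the slope fit yields the α_m and α_k used in the statistics.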
Statistical observations To statistically differentiate the spectral scaling indices for the upstream and downstream regions near the Venusian bow shock and emphasize the shock geometry effects, we examine the Venus Express magnetic field data for ~ 7 years (2006.05–2012.08) and identify the bow shock crossings. First, we select the orbits during which the interplanetary magnetic field (IMF) is relatively steady; that is, the directional changes of the IMF between the inbound and outbound crossings of the bow shock (~ 15-min time interval) are less than 30°. Second, the shape of the Venusian bow shock can be estimated based on the positions of these two bow shock crossings and the conic section equation \(R = L/\left( {1 + e\cos \theta } \right)\) with a focus at (x0, 0, 0) and a fixed \(e\) of 1.03 during solar minimum (2006–2010) or 1.095 during solar maximum (2011–2012), as reported by Shan et al. (2015), where \(R\) is the bow shock distance from the conic focus, \(L\) is the conic section semi-latus rectum, and \(e\) is the eccentricity. In total, we find 209 orbits (418 shock crossings) with a steady IMF and a well-determined bow shock model. We calculate the PSDs of the upstream and downstream intervals of these shock crossings, and the values of \(\alpha_{{\text{m}}}\) and \(\alpha_{{\text{k}}}\) can then be estimated in the corresponding frequency ranges with the method described above. Figure 2 shows the histograms of \(\alpha\) for the upstream and the downstream intervals of the 418 shock crossings in the MHD and kinetic regimes. We find that, as shown in Fig. 2a, b, the median values of \(\alpha_{{\text{m}}}\) are 1.26 and 1.03 and the mean values are 1.25 and 1.09 with standard deviations of 1.07 and 1.10 for the upstream and the downstream intervals, respectively. The statistical distributions indicate that these magnetic fluctuations are in a developing and mixed state.
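The conic model used above can be written down directly. A minimal sketch, where the focus offset x0 and the semi-latus rectum L stand in for the per-orbit fitted values (the numbers in any checks are illustrative; the paper constrains the shape using both crossings of an orbit, whereas each crossing here yields one L estimate to compare or average):

```python
import numpy as np

def shock_distance(theta, L, e):
    """Model bow shock distance from the conic focus,
    R = L / (1 + e * cos(theta)), with theta the angle from the +X axis
    measured at the focus (x0, 0, 0) and e the eccentricity
    (1.03 at solar minimum, 1.095 at solar maximum; Shan et al. 2015)."""
    return L / (1.0 + e * np.cos(theta))

def semilatus_from_crossing(pos, x0, e):
    """Infer the semi-latus rectum L from one observed crossing position
    (aberrated VSO coordinates, same units as x0), so the conic shape can
    be pinned down from the inbound/outbound crossing pair."""
    x, y, z = pos
    theta = np.arctan2(np.hypot(y, z), x - x0)
    R = np.sqrt((x - x0) ** 2 + y ** 2 + z ** 2)
    return R * (1.0 + e * np.cos(theta))
```

A round trip through the two functions recovers the same L, which is the consistency check one would apply to the inbound and outbound crossings of a single orbit.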
From upstream to downstream, the values of \(\alpha_{{\text{m}}}\) show a decreasing trend, and the downstream fluctuations tend toward 1/f noise in the MHD regime. This can be ascribed to the fact that collisionless shock physics is mediated by particles and that the injection scale tends to be close to the proton kinetic scale. Since the analysis is performed close to the bow shock, the turbulence has not fully developed yet. Similar features can also be observed downstream of the Earth's bow shock (e.g., Yordanova et al. 2008). Figure 2c, d indicates that the median values of \(\alpha_{{\text{k}}}\) are 2.27 and 2.88 and the mean values are 2.26 and 2.80 with standard deviations of 0.72 and 0.54 for the upstream and the downstream intervals, respectively. The values of \(\alpha_{{\text{k}}}\) for the downstream intervals tend to be larger and more concentrated, and the downstream distribution indicates that kinetic turbulence dominates behind the bow shock. As reported by a previous study (Xiao et al. 2020b), well-developed turbulence is a common phenomenon in the pristine SW near Venus. However, Fig. 2 shows that although some spectral indices indicating turbulence can still be found, these upstream intervals are rarely turbulence dominant. We infer that the SW fluctuations have been modified before they reach the bow shock. Histograms of the spectral indices for a, c the upstream and b, d the downstream intervals of the bow shock crossings. a, b The histograms for the MHD regime. c, d The histograms for the kinetic regime Notably, the histograms in Fig. 2 exhibit large dispersions. This indicates that the magnetic fluctuations near the Venusian bow shock are complex and diverse, and that their spectral scalings could be affected by multiple factors. Some previous studies suggested that the bow shock geometry could affect the fluctuations and turbulence in the near-Venusian space (e.g., Luhmann et al. 1983; Xiao et al. 2018).
To further investigate the bow shock modifications to the SW fluctuations and turbulence, we split the bow shock crossings into two categories of \(Q_{\parallel }\) (\(\theta_{{{\text{BN}}}}\) < 45°) and \(Q_{ \bot }\) (\(\theta_{{{\text{BN}}}}\) > 45°) geometries. The shock normal angle \(\theta_{{{\text{BN}}}}\) can be calculated from the average upstream magnetic field and the estimated local bow shock normal. The average upstream magnetic field is obtained in the 256-s upstream interval of each shock crossing event. The shock normal is determined by the estimated bow shock model with the method described above. Consequently, 115 \(Q_{\parallel }\) events and 303 \(Q_{ \bot }\) events are obtained, and we can then examine the shock geometry effects on the spectral scalings of fluctuations near the bow shock. Here we show two examples of the magnetic field observations in Fig. 3 for the \(Q_{\parallel }\) and the \(Q_{ \bot }\) bow shock crossings. Figure 3a, b presents the total magnetic field, and Fig. 3c, d presents the magnetic fields in the aberrated VSO coordinate system. Figure 3a, c presents a \(Q_{\parallel }\) bow shock (\(\theta_{{{\text{BN}}}}\) ~ 28.4°) crossing event on May 16, 2006. Figure 3b, d presents a \(Q_{ \bot }\) bow shock (\(\theta_{{{\text{BN}}}}\) ~ 84.0°) crossing event on Nov. 18, 2006. The comparison indicates the high level of fluctuations excited upstream of the \(Q_{\parallel }\) shock by backstreaming ions, relative to the \(Q_{ \bot }\) configuration. The time series of the magnetic field magnitude and components for the two types of shock geometry encountered during the Venusian bow shock crossings. a, c The magnetic field observations near a \(Q_{\parallel }\) bow shock on 2006 May 16.
b, d The magnetic field observations near a \(Q_{ \bot }\) bow shock on 2006 November 18 Figure 4 shows the histograms of \(\alpha\) for the upstream and the downstream intervals of the \(Q_{\parallel }\) (red) and \(Q_{ \bot }\) (blue) bow shock crossings. Figure 4a, b shows the histograms of \(\alpha_{{\text{m}}}\). The median values of \(\alpha_{{\text{m}}}\) for the downstream intervals of the \(Q_{\parallel }\) and \(Q_{ \bot }\) bow shocks are 0.96 and 1.05, respectively. The similar distributions suggest that, in the MHD regime, the effects of the bow shock on the SW fluctuations and turbulence are independent of the shock geometry. However, the values of \(\alpha_{{\text{m}}}\) for the upstream intervals are obviously related to the shock geometry. For the upstream intervals of the \(Q_{\parallel }\) and \(Q_{ \bot }\) bow shocks, the median values of \(\alpha_{{\text{m}}}\) are 0.82 and 1.43, respectively. Both distributions upstream and downstream of the \(Q_{\parallel }\) bow shocks present two peaks, at ~ 1 and ~ 1.5. The spectral scalings do not show significant differences between the fluctuations upstream and downstream of the \(Q_{\parallel }\) shocks. The MHD fluctuations upstream of the \(Q_{ \bot }\) bow shocks are also in a mixed state, in which some fluctuations might be modified or still developing, but the distribution is more like that of the SW with pre-existing turbulence. Obvious differences exist between the upstream and downstream intervals of the \(Q_{ \bot }\) shocks, which could result from the waves excited behind the \(Q_{ \bot }\) shocks. We infer that the pre-existing MHD turbulence from the SW could be more likely to reach the \(Q_{ \bot }\) bow shocks but starts to be modified before reaching the \(Q_{\parallel }\) bow shocks. The modification might be due to the waves generated near the bow shock.
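The geometry split used above reduces to the angle between the averaged upstream field and the local model shock normal. A minimal sketch (the vectors used in any checks are illustrative, not observed values):

```python
import numpy as np

def shock_normal_angle(b_upstream, n_shock):
    """theta_BN: the acute angle (degrees) between the average upstream
    magnetic field and the local bow shock normal."""
    b = np.asarray(b_upstream, dtype=float)
    n = np.asarray(n_shock, dtype=float)
    # |cos| makes the angle acute, since B and n sign conventions vary
    cosang = abs(np.dot(b, n)) / (np.linalg.norm(b) * np.linalg.norm(n))
    return np.degrees(np.arccos(np.clip(cosang, 0.0, 1.0)))

def classify_shock(theta_bn_deg):
    """Quasi-parallel if theta_BN < 45 degrees, quasi-perpendicular otherwise."""
    return "Q_parallel" if theta_bn_deg < 45.0 else "Q_perpendicular"
```

Taking the absolute value of the dot product keeps θ_BN in [0°, 90°], so the classification is insensitive to the arbitrary signs of the field and normal vectors.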
Histograms of the spectral indices for a, c the upstream and b, d the downstream intervals of the \(Q_{\parallel }\) (red) and \(Q_{ \bot }\) (blue) bow shock crossings. a, b The histograms for the MHD regime. c, d The histograms for the kinetic regime Figure 4c, d shows the histograms of \(\alpha_{{\text{k}}}\). The median values of \(\alpha_{{\text{k}}}\) are 2.88 and 2.87 for the downstream intervals of the \(Q_{\parallel }\) and \(Q_{ \bot }\) bow shocks, respectively. The distributions indicate that the kinetic turbulence is significantly dominant for the downstream intervals, and it is not shock geometry dependent. The upstream kinetic fluctuations also show a difference between the \(Q_{\parallel }\) and \(Q_{ \bot }\) bow shock crossings. The median values of \(\alpha_{{\text{k}}}\) for the upstream intervals of the \(Q_{\parallel }\) and \(Q_{ \bot }\) bow shocks are 2.70 and 2.11, respectively. The statistical kinetic spectral scalings of the fluctuations near the \(Q_{\parallel }\) bow shocks do not change as significantly as those near the \(Q_{ \bot }\) bow shocks, but the values of \(\alpha_{{\text{k}}}\) for the downstream intervals have a much more concentrated distribution. This indicates that the downstream kinetic turbulence is mainly developed behind the bow shock rather than penetrating from upstream. Kinetic turbulence can also be developed upstream of the \(Q_{\parallel }\) bow shocks. We infer that the SW kinetic fluctuations could be affected by the Venusian bow shock in the upstream region, and that the upstream difference between the \(Q_{\parallel }\) and \(Q_{ \bot }\) bow shocks might result from the larger number of backstreaming ions upstream of the \(Q_{\parallel }\) bow shock. Based on the results from Fig. 4, we find that the shock geometry does influence the spectral scalings of magnetic fluctuations upstream and downstream of the Venusian bow shock. However, a large dispersion is still shown in some histograms.
Therefore, the shock geometry is one key factor, but not the only factor, in the propagation of the fluctuations across the Venusian bow shock. In this paper, we use the magnetic field data of Venus Express from 2006.05 to 2012.08 to investigate the variations in the spectral scalings of the magnetic field fluctuations near the Venusian bow shock. The spectral indices are calculated in the MHD and kinetic regimes for the upstream and downstream intervals close to the Venusian bow shock. There are 115 \(Q_{\parallel }\) and 303 \(Q_{ \bot }\) shock crossings selected in this study. Based on the statistical results, the shock effects on the spectral scalings near the Venusian bow shock are examined. We find that the Venusian bow shock tends to flatten the spectra of upstream MHD fluctuations and steepen the kinetic spectra. In the downstream regions, the MHD magnetic fluctuations and turbulence tend to be modified to 1/f noise and the kinetic turbulence can be fully developed behind the shock, which is consistent with previous studies (e.g., Vörös et al. 2008a; Xiao et al. 2020b). This suggests that the energy cascade and dissipation near Venus can be modified by the Venusian bow shock. The spectral indices for the downstream intervals show that this shock modification is independent of the shock geometry. However, we find that the upstream spectral scalings are associated with the shock geometry. In the MHD regime, the spectra upstream of the \(Q_{\parallel }\) bow shocks are flatter than those of the \(Q_{ \bot }\) bow shocks. In the kinetic regime, the spectra upstream of the \(Q_{\parallel }\) bow shocks are steeper than those of the \(Q_{ \bot }\) bow shocks. As reported by a previous study, the \(\alpha_{{\text{m}}}\) in the pristine solar wind far upstream of the Venusian bow shock is near the Kolmogorov scaling value and the values of \(\alpha_{{\text{k}}}\) are typically ~ 2.5–3 (Xiao et al. 2020b).
Our results in this study indicate that the bow shock effects on the SW fluctuations could begin in the upstream region, which might result from reflected ions and newborn pickup ions. For example, the ULF waves excited by backstreaming ions occur in the frequency range 0.3–0.5 \(f_{p}\) (e.g., Shan et al. 2016), which could lead to a decrease of \(\alpha_{{\text{m}}}\), and the whistler-mode waves generated upstream of the bow shock occur in the frequency range from several \(f_{p}\) to 2 Hz (e.g., Russell 2007), which could lead to an increase of \(\alpha_{{\text{k}}}\). For the PSDs in the spacecraft frame to be valid, the possibility of a violation of the Taylor hypothesis needs to be considered, especially at high frequencies and in an environment close to a shock (e.g., Klein et al. 2014). Unfortunately, because of the limited observational data, it is hard to detect a violation of the Taylor hypothesis in this study. Multiple wave modes can be generated near the bow shock, and sometimes the spectral shape might not be clear for the fluctuations in the shock upstream and downstream regions. The values of \(\alpha\) in this region exhibit a large variation. To interpret these fluctuations, we further examine their compression and rotation senses (Arthur et al. 1976; Means 1972), as in some prior studies of magnetic field fluctuations near Venus (e.g., Guicking et al. 2010; Du et al. 2010; Xiao et al. 2017). The histograms of these fluctuation properties are shown in Fig. 5 for the \(Q_{\parallel }\) (red) and \(Q_{ \bot }\) (blue) events.

Histograms of (a–d) the transverse and compressional ratio and (e–h) the ellipticity of fluctuations for the upstream and the downstream intervals of the \(Q_{\parallel }\) (red) and \(Q_{ \bot }\) (blue) bow shock crossings.
The upper panels show the histograms for the MHD regime, and the lower panels show the histograms for the kinetic regime.

The transverse and compressional ratio (\(\zeta\)) of the fluctuations is defined as \(\left( {P_{ \bot } - P_{\parallel } } \right)/P_{{\text{T}}}\), where \(P_{ \bot }\) is the transverse power of the fluctuations with respect to the ambient magnetic field, \(P_{\parallel }\) is the compressional power, and \(P_{{\text{T}}}\) is the total power. The range of \(\zeta\) is from −1 to 1. A positive (negative) ratio means the transverse power is higher (lower) than the compressional power, and a ratio of 1 indicates that the fluctuation is purely transverse. Figure 5a–d shows the histograms of \(\zeta\) in the MHD and kinetic regimes. In the MHD regime, the upstream fluctuations are mainly transverse, while more compressional fluctuations are present downstream, especially behind the \(Q_{ \bot }\) bow shock. That might be due to compressional waves generated by the mirror mode instability (e.g., Volwerk et al. 2008). In the kinetic regime, transverse fluctuations dominate in the region near the Venusian bow shock, and this property is independent of the shock geometry. The ellipticity (\(\varepsilon\)) of the fluctuations is defined as the ratio of the minor to the major axis of the polarization ellipse traced by the field variations of the components transverse to the ambient field (Samson and Olson 1980). The sign indicates the direction of rotation of the polarization ellipse, i.e., the rotation sense about the ambient field (Means 1972). Negative signs refer to left-handed polarized waves and positive signs to right-handed polarized waves. Linearly and circularly polarized waves correspond to \(\varepsilon\) of 0 and ±1, respectively. Figure 5e–h shows the histograms of \(\varepsilon\) in the MHD and kinetic regimes.
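The transverse and compressional ratio reduces to simple arithmetic once the two power contributions are known. The sketch below is my own illustration and assumes the total power is simply the sum of the transverse and compressional powers (an assumption of this sketch, not a statement from the paper):

```python
def transverse_compressional_ratio(p_perp, p_par):
    """zeta = (P_perp - P_par) / P_T, in [-1, 1]; here P_T is taken as P_perp + P_par."""
    return (p_perp - p_par) / (p_perp + p_par)

print(transverse_compressional_ratio(3.0, 1.0))  # 0.5  (mostly transverse)
print(transverse_compressional_ratio(0.0, 2.0))  # -1.0 (purely compressional)
print(transverse_compressional_ratio(2.0, 0.0))  # 1.0  (purely transverse)
```

The sign and magnitude of the result match the interpretation above: positive values mean transverse power dominates, and the extremes ±1 correspond to purely transverse or purely compressional fluctuations.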
The histograms indicate that many right-handed polarized fluctuations can be generated behind the bow shock, and they could be excited by pickup ions. In addition, we consider that the left-handed polarized waves in the kinetic regime, which are hardly observed downstream, are related to the upstream whistler-mode waves, or the so-called 1-Hz waves, generated at the bow shock and propagating upstream (e.g., Russell 2007; Xiao et al. 2020c). These waves are elliptically polarized and intrinsically right-handed; however, they are observed as left-handed elliptically polarized in the spacecraft frame due to the Doppler shift effect. At the \(Q_{\parallel }\) shocks, some ions can escape into the foreshock region, and a variety of large-amplitude waves can be excited by these backstreaming ions (e.g., Delva et al. 2011; Collinson et al. 2012; Shan et al. 2014). These waves generated in the foreshock can be convected to the downstream side across the \(Q_{\parallel }\) bow shock (e.g., Luhmann et al. 1983; Shan et al. 2014). That might be one reason for the statistical finding that there are no significant variations of the spectral scalings of magnetic fluctuations near the \(Q_{\parallel }\) bow shocks. This suggests that the \(Q_{\parallel }\) bow shock is an important channel for energy injection and dissipation in the interaction of the SW with Venus. More fluctuations and higher energies are injected at \(Q_{\parallel }\) than at \(Q_{ \bot }\) shocks. At the \(Q_{ \bot }\) shocks, these ions will generally gyrate back and then excite waves behind the shocks (e.g., Volwerk et al. 2008). These waves could result in changes of the downstream spectral scalings. However, in some cases, the waves upstream of \(Q_{ \bot }\) shocks can be transmitted downstream, and the downstream spectra can then be similar to the upstream spectra except for an enhanced amplitude (e.g., Lu et al. 2009).
Thus, the variations of the spectral scalings across the bow shock could be controlled by multiple factors. In this study, we find the shock geometry to be an important one. Other possible factors affecting the modification of the SW fluctuations by the Venusian bow shock will be topics of our future work.

Venus Express magnetic field data are available in the ESA's Planetary Science Archive (ftp://psa.esac.esa.int/pub/mirror/VENUS-EXPRESS/).

Alexandrova O, Carbone V, Veltri P, Sorriso-Valvo L (2008) Small-scale energy cascade of the solar wind turbulence. Astrophys J 674(2):1153–1157
Alexandrova O, Saur J, Lacombe C, Mangeney A, Mitchell J, Schwartz SJ et al (2009) Universality of solar-wind turbulent spectrum from MHD to electron scales. Phys Rev Lett 103(16):165003
Arthur CW, McPherron RL, Means JD (1976) A comparative study of three techniques for using the spectral matrix in wave analysis. Radio Sci 11(10):833–845. https://doi.org/10.1029/RS011i010p00833
Bruno R, Carbone V (2013) The solar wind as a turbulence laboratory. Living Rev Sol Phys 10(1):1–208
Chen CHK, Leung L, Boldyrev S, Maruca BA, Bale SD (2014) Ion-scale spectral break of solar wind turbulence at high and low beta. Geophys Res Lett 41:8081–8088. https://doi.org/10.1002/2014GL062009
Collinson GA, Wilson LB, Sibeck DG, Shane N, Zhang TL, Moore TE, Coates AJ, Barabash S (2012) Short large-amplitude magnetic structures (SLAMS) at Venus. J Geophys Res 117:A10221. https://doi.org/10.1029/2012JA017838
Delva M, Mazelle C, Bertucci C, Volwerk M, Vörös Z, Zhang TL (2011) Proton cyclotron wave generation mechanisms upstream of Venus. J Geophys Res 116:A02318. https://doi.org/10.1029/2010JA015826
Du J, Zhang TL, Baumjohann W, Wang C, Volwerk M, Vörös Z, Guicking L (2010) Statistical study of low-frequency magnetic field fluctuations near Venus under the different interplanetary magnetic field orientations. J Geophys Res 115:A12251. https://doi.org/10.1029/2010JA015549
Dwivedi NK, Schmid D, Narita Y, Kovács P, Vörös Z, Delva M, Zhang T (2015) Statistical investigation on the power-law behavior of magnetic fluctuations in the Venusian magnetosheath. Earth Planets Space 67(1):137
Guicking L, Glassmeier KH, Auster HU, Delva M, Motschmann U, Narita Y, Zhang TL (2010) Low-frequency magnetic field fluctuations in Venus' SW interaction region: Venus Express observations. Ann Geophys 28:951–967
Kiyani KH, Chapman SC, Khotyaintsev YV, Dunlop MW, Sahraoui F (2009) Global scale-invariant dissipation in collisionless plasma turbulence. Phys Rev Lett 103(7):075006
Klein K, Howes G, Tenbarge J (2014) The violation of the Taylor hypothesis in measurements of solar wind turbulence. Astrophys J Lett 790(2):L20
Lu Q, Hu Q, Zank GP (2009) The interaction of Alfvén waves with perpendicular shocks. Astrophys J 706(1):687
Luhmann JG (1986) The SW interaction with Venus. Space Sci Rev 44:241–306
Luhmann JG, Tatrallyay M, Russell CT, Winterhalter D (1983) Magnetic field fluctuations in the Venus magnetosheath. Geophys Res Lett 10(8):655–658. https://doi.org/10.1029/GL010i008p00655
Means JD (1972) Use of the three-dimensional covariance matrix in analyzing the polarization properties of plane waves. J Geophys Res 77(28):5551–5559. https://doi.org/10.1029/JA077i028p05551
Phillips JL, McComas DJ (1991) The magnetosheath and magnetotail of Venus. Space Sci Rev 55:1–80. https://doi.org/10.1007/BF00177135
Rong ZJ, Barabash S, Futaana Y, Stenberg G, Zhang TL, Wan WX, Wei Y, Wang X-D, Chai LH, Zhong J (2014) Morphology of magnetic field in near-Venus magnetotail: Venus Express observations. J Geophys Res Space Phys 119:8838–8847. https://doi.org/10.1002/2014JA020461
Ruhunusiri S, Halekas JS, Espley JR, Mazelle C, Brain D, Harada Y, DiBraccio GA, Livi R, Larson DE et al (2017) Characterization of turbulence in the Mars plasma environment with MAVEN observations. J Geophys Res Space Phys 122(1):656–674. https://doi.org/10.1002/2016JA023456
Russell CT (2007) Upstream whistler-mode waves at planetary bow shocks: a brief review. J Atmos Sol Terr Phys 69(14):1739–1746
Samson JC, Olson JV (1980) Some comments on the descriptions of the polarization states of waves. Geophys J Roy Astron Soc 61:115–129. https://doi.org/10.1111/j.1365-246X.1980.tb04308.x
Shan L, Lu Q, Wu M, Gao X, Huang C, Zhang T, Wang S (2014) Transmission of large-amplitude ULF waves through a quasi-parallel shock at Venus. J Geophys Res Space Phys 119:237–245. https://doi.org/10.1002/2013JA019396
Shan L, Lu Q, Mazelle C, Huang C, Zhang T, Wu M et al (2015) The shape of the Venusian bow shock at solar minimum and maximum: revisit based on VEX observations. Planet Space Sci 109–110:32–37. https://doi.org/10.1016/j.pss.2015.01.004
Shan L, Mazelle C, Meziane K, Delva M, Lu Q, Ge YS, Du A, Zhang T (2016) Characteristics of quasi-monochromatic ULF waves in the Venusian foreshock. J Geophys Res Space Phys 121:7385–7397. https://doi.org/10.1002/2016JA022876
Slavin JA, Elphic RC, Russell CT, Intriligator DS, Wolfe JH (1979) Position and shape of the Venus bow shock: Pioneer Venus Orbiter observations. Geophys Res Lett 6(11):901–904. https://doi.org/10.1029/GL006i011p00901
Svedhem H, Titov DV, McCoy D, Lebreton JP, Barabash S, Bertaux JL et al (2007) The first European mission to Venus. Planet Space Sci 55(12):1636–1652. https://doi.org/10.1016/j.pss.2007.01.013
Titov DV, Svedhem H, Koschny D, Hoofs R, Barabash S, Bertaux JL et al (2006) Venus Express science planning. Planet Space Sci 54(13–14):1279–1297. https://doi.org/10.1016/j.pss.2006.04.017
Torrence C, Compo GP (1998) A practical guide to wavelet analysis. Bull Amer Meteor Soc 79:61–78
Volwerk M, Zhang TL, Delva M, Vörös Z, Baumjohann W, Glassmeier K-H (2008) First identification of mirror mode waves in Venus' magnetosheath? Geophys Res Lett 35:L12204. https://doi.org/10.1029/2008GL033621
Vörös Z (2011) Magnetic reconnection associated fluctuations in the deep magnetotail: ARTEMIS results. Nonlinear Process Geophys 18:861–869. https://doi.org/10.5194/npg-18-861-2011
Vörös Z, Baumjohann W, Nakamura R, Volwerk M, Runov A, Zhang TL et al (2004) Magnetic turbulence in the plasma sheet. J Geophys Res 109:A11215. https://doi.org/10.1029/2004JA010404
Vörös Z, Baumjohann W, Nakamura R, Runov A, Volwerk M, Asano Y, Jankovičová D, Lucek EA, Rème H (2007) Spectral scaling in the turbulent Earth's plasma sheet revisited. Nonlinear Process Geophys 14:535–541. https://doi.org/10.5194/npg-14-535-2007
Vörös Z, Zhang TL, Leubner MP, Volwerk M, Delva M, Baumjohann W, Kudela K (2008a) Magnetic fluctuations and turbulence in the Venus magnetosheath and wake. Geophys Res Lett 35:L11102. https://doi.org/10.1029/2008GL033879
Vörös Z, Zhang TL, Leubner MP, Volwerk M, Delva M, Baumjohann W (2008b) Intermittent turbulence, noisy fluctuations, and wavy structures in the Venusian magnetosheath and wake. J Geophys Res 113:E00B21. https://doi.org/10.1029/2008JE003159
Xiao SD, Zhang TL (2018) Solar cycle variation of the Venus magnetic barrier. Planet Space Sci 158:53–62. https://doi.org/10.1016/j.pss.2018.05.006
Xiao SD, Zhang TL, Baumjohann W (2016) Hemispheric asymmetry in the near-Venusian magnetotail during solar maximum. J Geophys Res Space Phys 121(5):4542–4547. https://doi.org/10.1002/2015JA022093
Xiao SD, Zhang TL, Wang GQ (2017) Statistical study of low-frequency magnetic field fluctuations near Venus during the solar cycle. J Geophys Res Space Phys 122:8409–8418. https://doi.org/10.1002/2017JA023878
Xiao SD, Zhang TL, Vörös Z (2018) Magnetic fluctuations and turbulence in the Venusian magnetosheath downstream of different types of bow shock. J Geophys Res Space Phys 123:8219–8226. https://doi.org/10.1029/2018JA025250
Xiao SD, Zhang TL, Vörös Z, Wu MY, Wang GQ, Chen YQ (2020a) Turbulence near the Venusian bow shock: Venus Express observations. J Geophys Res Space Phys. https://doi.org/10.1029/2019JA027190
Xiao SD, Wu MY, Wang GQ, Wang G, Chen YQ, Zhang TL (2020b) Turbulence in the near-Venusian space: Venus Express observations. Earth Planet Phys 4(1):82–87. https://doi.org/10.26464/epp2020012
Xiao SD, Wu MY, Wang GQ, Chen YQ, Zhang TL (2020c) Survey of 1-Hz waves in the near-Venusian space: Venus Express observations. Planet Space Sci 187:104933. https://doi.org/10.1016/j.pss.2020.104933
Yordanova E, Vaivads A, Andre M, Buchert SC, Voeroes Z (2008) Magnetosheath plasma turbulence and its spatiotemporal evolution as observed by the Cluster spacecraft. Phys Rev Lett 100(20):205003
Zhang TL, Luhmann JG, Russell CT (1991) The magnetic barrier at Venus. J Geophys Res 96:11145–11153
Zhang TL, Baumjohann W, Delva M, Auster H-U, Balogh A, Russell CT et al (2006) Magnetic field investigation of the Venus plasma environment: expected new results from Venus Express. Planet Space Sci 54(13–14):1336–1343. https://doi.org/10.1016/j.pss.2006.04.018
Zhang TL et al (2008a) Induced magnetosphere and its outer boundary at Venus. J Geophys Res 113:E00B20. https://doi.org/10.1029/2008JE003215
Zhang TL, Delva M, Baumjohann W et al (2008b) Initial Venus Express magnetic field observations of the magnetic barrier at solar minimum. Planet Space Sci 56(6):790–795
Zhang TL et al (2008c) Initial Venus Express magnetic field observations of the Venus bow shock location at solar minimum. Planet Space Sci 56:785–789. https://doi.org/10.1016/j.pss.2007.09.012

The authors are grateful for the support of NSFC (41904156, 41974205, 41774171, 41774167, 41804157), the China Postdoctoral Science Foundation (2019M651271), and the pre-research Project on Civil Aerospace Technologies (No. D020103) funded by CNSA.
The authors also acknowledge the financial support of the Shenzhen Science and Technology Research Program (JCYJ20180306171918617) and the Shenzhen Science and Technology Program (Group No. KQTD20180410161218820), and the support of the CAS Center for Excellence in Comparative Planetology. This work was supported by NSFC grants (41904156, 41974205, 41774171, 41774167 and 41804157), a project funded by the China Postdoctoral Science Foundation (2019M651271), the pre-research Project on Civil Aerospace Technologies (No. D020103) funded by CNSA, the Shenzhen Science and Technology Research Program (JCYJ20180306171918617), and the Shenzhen Science and Technology Program (Group No. KQTD20180410161218820). Tielong Zhang was supported by the CAS Center for Excellence in Comparative Planetology.

Harbin Institute of Technology, Shenzhen, China: S. D. Xiao, M. Y. Wu, G. Q. Wang & T. L. Zhang
CAS Key Laboratory of Geospace Environment, University of Science and Technology of China, Hefei, China: Y. Q. Chen
Space Research Institute, Austrian Academy of Sciences, Graz, Austria: T. L. Zhang

SDX initiated the investigation and prepared the original manuscript. TLZ supervised the investigation. MYW and GQW participated in the discussions and reviewed the manuscript. YQC and TLZ gave suggestions on the manuscript. All authors read and approved the final manuscript. Correspondence to T. L. Zhang.

Xiao, S.D., Wu, M.Y., Wang, G.Q. et al. The spectral scalings of magnetic fluctuations upstream and downstream of the Venusian bow shock. Earth Planets Space 73, 13 (2021). https://doi.org/10.1186/s40623-020-01343-7

Keywords: magnetic fluctuations, Venusian bow shock, spectral scalings, shock geometry
Journal of Electromagnetic Engineering and Science
The Korean Institute of Electromagnetic Engineering and Science (한국전자파학회)
JEES is the official English journal of the Korean Institute of Electromagnetic Engineering and Science. The objective of JEES is to publish academic as well as industrial research results and findings on electromagnetic engineering and science. In particular, electromagnetic field theory and its applications; high-frequency components, circuits, antennas, and systems; the electromagnetic wave environment; and relevant industrial developments are within the scope of the Journal. Through the Journal, research on electromagnetic wave-related engineering and science will be nourished, ultimately contributing to the welfare of the human race and national development. Indexed in KSCI and SCOPUS.

Analysis of Microwave-Induced Thermoacoustic Signal Generation Using Computer Simulation
Dewantari, Aulia; Jeon, Se-Yeon; Kim, Seok; Nikitin, Konstantin; Ka, Min-Ho
https://doi.org/10.5515/JKIEES.2016.16.1.1
Computer simulations were conducted to demonstrate the generation of a microwave-induced thermoacoustic signal. The simulations began with modelling an object with biological tissue characteristics and irradiating it with a microwave pulse. The time-varying heating function data at every particular point on the illuminated object were obtained from the absorbed electric field data in the simulation result. The thermoacoustic signal received at a point transducer at a particular distance from the object was generated by applying the heating function data to the thermoacoustic equation. These simulations can be used as a foundation for understanding how a thermoacoustic signal is generated and can be applied as a basis for thermoacoustic imaging simulations and experiments in future research.
Microwave Negative Group Delay Circuit: Filter Synthesis Approach
Park, Junsik; Chaudhary, Girdhari; Jeong, Junhyung; Jeong, Yongchae
This paper presents the design of a negative group delay circuit (NGDC) using the filter synthesis approach. The proposed design method is based on a frequency transformation from a low-pass filter (LPF) to a bandstop filter (BSF). A predefined negative group delay (NGD) can be obtained by inserting resistors into the resonators. To implement the circuit with distributed transmission lines, a circuit conversion technique is employed. Both theoretical and experimental results are provided for validation of the proposed approach. For NGD bandwidth and magnitude flatness enhancements, two second-order NGDCs with slightly different center frequencies are cascaded. In the experiment, a group delay of 5.9 ± 0.5 ns and an insertion loss of 39.95 ± 0.5 dB are obtained in the frequency range of 1.935–2.001 GHz.

High Selectivity Coupled Line Impedance Transformer with Second Harmonic Suppression
Kim, Phirun; Park, Junsik; Jeong, Junhyung; Jeong, Seungho; Chaudhary, Girdhari; Jeong, Yongchae
https://doi.org/10.5515/JKIEES.2016.16.1.13
This paper presents the design of an impedance transformer (IT) with high frequency selectivity characteristics. The frequency selectivity can be controlled by the even- and odd-mode impedances of a shunt coupled transmission line (TL). For experimental validation, a 50- to 20-Ω IT was implemented at a center frequency ($f_0$) of 2.6 GHz for the long-term evolution signal. The measured results were in good agreement with the simulations, showing a return loss higher than 19 dB over a passband bandwidth of 0.63 GHz (2.28–2.91 GHz) and a sharp frequency selectivity characteristic near the passband. The series coupled TL provides a transmission zero at 5.75 GHz, whereas the shunt coupled TL provides three transmission zeros located at 2 GHz, 3.1 GHz, and 7.14 GHz.
Design of Dual-Band Bandpass Filters for Cognitive Radio Application of TVWS Band
Kwon, Kun-An; Kim, Hyun-Keun; Yun, Sang-Won
This paper presents a novel design for dual-band bandpass filters. The proposed filters are applicable to the carrier aggregation of the TV white space (TVWS) band and the long-term evolution (LTE) band for cognitive radio applications. The lower passband is the TVWS band (470–698 MHz), whose fractional bandwidth is 40%, while the higher passband is the LTE band (824–894 MHz) with 8% fractional bandwidth. Since the two passbands are located very close to each other, a transmission zero is inserted to enhance the rejection level between the two passbands. The TVWS band filter is designed using magnetic coupling to obtain a wide bandwidth, and the LTE band filter is designed using dielectric resonators to achieve good insertion loss characteristics. In addition, in the proposed design, a transmission zero is placed with cross-coupling. The proposed dual-band bandpass filter is designed as a two-port filter (one input/one output) as well as a three-port filter (one common input/two outputs). The measured performances show good agreement with the simulated performances.

RF Conductivity Measurement of Conductive Zell Fabric
Nguyen, Tien Manh; Chung, Jae-Young
This study presents a conductivity measurement technique that is applicable at radio frequencies (RF). Of particular interest is the measurement of the RF conductivity of a flexible Zell fabric, which is often used to implement wearable antennas on clothes. First, the transmission coefficient is measured using a planar microstrip ring resonator, where the ring is made of the Zell fabric. Then, the fabric's conductivity is determined by comparing the measured transmission coefficient to a set of simulation data. Specifically, a MATLAB-based root-searching algorithm is used to find the minimum of an error function composed of the measured and simulation data.
Several error functions were tested, and the results showed that an error function employing only the magnitude of the transmission coefficient was the best for determining the conductivity. The effectiveness of this technique is verified by the measurement of a known copper foil before characterizing the Zell fabric. The conductivity of the Zell fabric at 2 GHz appears to be on the order of $10^4$ S/m, which is lower than its DC conductivity of $5\times 10^5$ S/m.

Design of a Switchplexer Based on a Microstrip Ring Resonator with Single-Switch Operation
Park, Wook-Ki; Ahn, Chi-Hyung; Kim, In-Ryeol; Oh, Soon-Soo
This paper proposes a reconfigurable microstrip diplexer, also known as a switchplexer, for single-switch selection of the frequency band and output path. The proposed switchplexer is composed of a rectangular ring resonator and a switch on a shorting pin, which is inserted between the ring resonator and the ground plane. The rotated main current distributions occurring at the different switch statuses provide the diplexer's different operating bands and different output ports. The performance of the simply designed switchplexer is successfully demonstrated via simulation and measurement.

Quadruple Band-Notched Trapezoid UWB Antenna with Reduced Gains in Notch Bands
Jin, Yunnan; Tak, Jinpil; Choi, Jaehoon
A compact ultra-wideband antenna with a quadruple band-notched characteristic is proposed. The proposed antenna consists of a slotted trapezoid patch radiator, an inverted U-shaped band stop filter, a pair of C-shaped band stop filters, and a rectangular ground plane. To realize the quadruple notch-band characteristic, a U-shaped slot, a complementary split ring resonator, an inverted U-shaped band stop filter, and two C-shaped band stop filters are utilized in this antenna.
The antenna satisfies the −10 dB reflection coefficient bandwidth requirement in the frequency band of 2.88–12.67 GHz, with a band-rejection characteristic in the WiMAX (3.43–3.85 GHz), WLAN (5.26–6.01 GHz), X-band satellite communication (7.05–7.68 GHz), and ITU 8 GHz (8.08–8.87 GHz) signal bands. In addition, the proposed antenna has a compact volume of 30 mm × 33.5 mm × 0.8 mm while maintaining omnidirectional patterns in the H-plane. The experimental and simulated results of the proposed antenna are shown to be in good agreement.

6-18 GHz Reactive Matched GaN MMIC Power Amplifiers with Distributed L-C Load Matching
Kim, Jihoon; Choi, Kwangseok; Lee, Sangho; Park, Hongjong; Kwon, Youngwoo
A commercial 0.25-μm GaN process is used to implement 6–18 GHz wideband power amplifier (PA) monolithic microwave integrated circuits (MMICs). GaN HEMTs are advantageous for enhancing RF power due to their high breakdown voltages. However, the large-signal models provided by the foundry service cannot guarantee model accuracy up to frequencies close to the maximum oscillation frequency ($F_{max}$). Generally, the optimum output load point of a PA varies severely with frequency, which creates difficulties in generating watt-level output power over an octave bandwidth. This study overcomes these issues through the development of in-house large-signal models that include a thermal model, and by applying distributed L-C output load matching to reactively matched amplifiers. The proposed GaN PAs have successfully achieved output power over 5 W across the octave bandwidth.

Split Slant-End Stubs for the Design of Broadband Efficient Power Amplifiers
Park, Youngcheol; Kang, Taeggu
This paper suggests a class-F power amplifier with split open-end stubs to provide broadband high-efficiency operation.
These stubs are designed to have a wide bandwidth by splitting wide open-end stubs into narrower stubs connected in shunt in an output matching network for class-F operation. In contrast to conventional wideband class-F designs, which theoretically need a large number of matching lines, this method requires fewer transmission lines, resulting in a compact circuit implementation. In addition, the open-end stubs are designed with slant ends to achieve additional bandwidth. To verify the suggested design, a 10-W class-F power amplifier operating at 1.7 GHz was implemented using a commercial GaN transistor. The measurement results showed a peak drain efficiency of 82.1% and a 750 MHz bandwidth with efficiency higher than 63%. Additionally, the maximum output power was 14.45 W at 1.7 GHz.

Mode Analysis of Cascaded Four-Conductor Lines Using Extended Mixed-Mode S-Parameters
Zhang, Nan; Nah, Wansoo
In this paper, based on the mode analysis of four-conductor lines, the extended mixed-mode chain-parameters and S-parameters of four-conductor lines are estimated using current division factors. The extended mixed-mode chain-parameters of cascaded four-conductor lines are then obtained with mode conversion, and the extended mixed-mode S-parameters of cascaded four-conductor lines can be predicted from the transformation of the extended chain-parameters. Compared to the extended mixed-mode S-parameters of four-conductor lines, cross-mode S-parameters are induced in the extended mixed-mode S-parameters of cascaded four-conductor lines, due to the imbalanced current division factors of the two cascaded sections. The generated cross-mode S-parameters mean that the equivalent differential- and common-mode conductors are no longer independent of each other. In addition, a new mode conversion between the extended mixed-mode S-parameters and the standard S-parameters, which applies the imbalanced current division factors, is also proposed in this paper.
Finally, the validity of the proposed extended mixed-mode S-parameters and mode conversion is confirmed by a comparison of the simulated and estimated results for a shielded cable.
High Energy Physics - Theory
Title: Derivation of the Verlinde Formula from Chern-Simons Theory and the G/G model
Authors: M. Blau, G. Thompson
(Submitted on 4 May 1993 (this version), latest version 5 May 1993 (v2))
Abstract: We give a derivation of the Verlinde formula for the $G_{k}$ WZW model from Chern-Simons theory, without taking recourse to CFT, by calculating explicitly the partition function $Z_{\Sigma\times S^{1}}$ of $\Sigma\times S^{1}$ with an arbitrary number of labelled punctures. By a suitable gauge choice, $Z_{\Sigma\times S^{1}}$ is reduced to the partition function of an Abelian topological field theory on $\Sigma$ (a deformation of non-Abelian BF and Yang-Mills theory) whose evaluation is straightforward. This relates the Verlinde formula to the Ray-Singer torsion of $\Sigma\times S^{1}$. We derive the $G_{k}/G_{k}$ model from Chern-Simons theory, proving their equivalence, and give an alternative derivation of the Verlinde formula by calculating the $G_{k}/G_{k}$ path integral via a functional version of the Weyl integral formula. From this point of view the Verlinde formula arises from the corresponding Jacobian, the Weyl determinant. Also, a novel derivation of the shift $k \to k+h$ is given, based on the index of the twisted Dolbeault complex.
Comments: This version (hep-th/9305010v1) was not stored by arXiv. A subsequent replacement was made before versioning was introduced.
Subjects: High Energy Physics - Theory (hep-th)
Cite as: arXiv:hep-th/9305010 (or arXiv:hep-th/9305010v1 for this version)
From: Matthias Blau
[v1] Tue, 4 May 1993 13:39:34 UTC (0 KB)
[v2] Wed, 5 May 1993 08:52:21 UTC (39 KB)
Shayla Redlin Hume
Caution: May Contain Stable Sets
Posted on June 4, 2022

In this blog post, we'll talk about the container method and two seemingly unrelated applications of it. Section 1 explains what the container method is, Section 2 describes two applications of the container method, and Section 3 shows how the applications are actually related. A lot of the content in Section 1 and Section 2 comes from the course on the container method taught in Spring 2020 by Jorn van der Pol at the University of Waterloo. Part of the content in Section 3 is related to my current research with Jorn van der Pol and Peter Nelson.

1. What is the container method?

Many problems (real world and math) can be expressed with graphs and can be solved by counting various objects in these graphs. In this instance, we are interested in counting objects that can be represented by stable sets in a graph. It turns out there are many problems which can be represented this way, so we would like a method to count the number of stable sets in certain graphs. The container method, first used by Kleitman and Winston, is such a method. We will use a version of the container method for graphs, but note that there are other versions for hypergraphs. Before we get started, I'll mention that by "stable" sets, I mean independent sets in a graph, but since matroids will come up later, I'm calling these sets stable instead of independent. That is, a stable set is a set of vertices where no two are adjacent. I'll also mention some notation that will be used. The graph $G$ induced on vertex set $X$, denoted $G[X]$, is the graph with vertex set $X$ where two vertices are adjacent if and only if they are adjacent in $G$. We denote the number of edges and vertices of a graph $G$ by $e(G)$ and $v(G)$, respectively. Recall that the number of elements in a set $A$ is denoted $|A|$. Lastly, the set $\{1,\dots,n\}$ is denoted $[n]$.
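To make the definitions concrete, here is a small brute-force Python sketch (my own illustration, not from the course notes): it checks whether a vertex set is stable and counts all the stable sets of a graph by examining every subset.

```python
from itertools import combinations

def is_stable(vertices, edges):
    """A vertex set is stable when no edge has both endpoints inside it."""
    vs = set(vertices)
    return not any(u in vs and w in vs for (u, w) in edges)

def count_stable_sets(n, edges):
    """Count the stable sets of a graph on vertices 0..n-1 by brute force."""
    return sum(1 for k in range(n + 1)
                 for sub in combinations(range(n), k)
                 if is_stable(sub, edges))

# The 4-cycle: its stable sets are the empty set, 4 singletons, and 2 opposite pairs.
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(count_stable_sets(4, c4))  # 7
```

This enumeration is exponential in the number of vertices, which is exactly why a counting tool like the container method is needed for large graphs.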
Every subset of a stable set is itself stable, so $2^{\text{ size of a largest stable set}}$ is a lower bound for the number of stable sets in a graph. It turns out this is often "almost" correct, up to a $(1+\text{o}(1))$ factor in the exponent, where o$(1)$ is a function of $v(G)$ that goes to 0 as $v(G)$ goes to infinity. The container method is a way to upper bound the number of stable sets by creating a bounded number of containers where each contains a bounded number of stable sets. The figure above gives a visualization of the container method, although note that the stable sets are not all disjoint. I'll give a high level description of the theorem and then we'll get into the proof a little bit in the next subsection. Container Theorem Sketch: Let $G$ be a graph where the number of edges induced by a "large" set of vertices is "large". Then there exists a collection $\mathcal{C}$ of containers (i.e. subsets of $V(G)$) such that: every stable set of $G$ is a subset of one of the containers in $\mathcal{C}$; the number of containers is "small"; and the size of each container is "small". The idea is that the number of stable sets is bounded above by $\sum_{C\in \mathcal{C}}2^{|C|}$, which is at most $|\mathcal{C}|\cdot 2^{\max_{C\in \mathcal{C}} |C|}$. The theorem gives us an upper bound for this (based on the definition of "small"). Usually when we use the container method in a proof, we start with a corresponding lower bound for the number of stable sets in the applicable graph. We also usually prove a supersaturation lemma before using the container method. The goal of the supersaturation lemma is to show that the condition of the theorem holds. That is, it shows that the number of edges induced by a "large" set of vertices is "large". Proof and Scythe Algorithm To prove our container theorem, we construct the containers using a version of the Scythe Algorithm and then show that the constructed containers satisfy the desired definitions of "small". 
There are different versions of the Scythe algorithm for different applications, but they all have a similar flavour. Essentially, the algorithm takes in a stable set $I$ and outputs a subset (known as the fingerprint of $I$) and a superset (the container of $I$). I'll outline the algorithm more specifically below. Note that a maximum degree ordering of a set of vertices $A$ in a graph $G$ is a sequence $(v_1,\dots,v_{|A|})$ of the elements of $A$ where $v_i$ is the maximum degree vertex in $G[A-\{v_1,\dots,v_{i-1}\}]$ with the smallest index.

Scythe Algorithm

Input: graph $G$, natural number $q$, stable set $I$ with $|I|\ge q$.

Start with $S = \emptyset$ and $A = V(G)$. Repeat $q$ times:
Let $v$ be the first element of $I$ in a maximum degree ordering of $A$.
Move $v$ from $A$ to $S$.
Remove the neighbours of $v$ from $A$.

Output: $(S,A)$.

We can make a few claims about the sets $S$ and $A$; some follow easily and some require more proof. First, we claim that $S$ is the fingerprint of $I$ and $S\cup A$ (not $A$) is the container. In the algorithm, we only move vertices of $I$ into $S$, so clearly $S \subseteq I$. No neighbours of vertices in $I$ can be in $I$ themselves and the only vertices removed from $S\cup A$ are neighbours of those in $I$, so it follows that $I\subseteq S\cup A$. We want the size of each container to be "small" and to do this we can try to bound the sizes of $S$ and $A$. Since the algorithm repeats $q$ times, we know that $|S| = q$. In order to bound $|A|$, we can figure out how many vertices are removed in each step. A lower bound for the number removed is determined using the placement of $v$ in the maximum degree ordering of $A$ and the fact that the number of edges induced by a "large" set of vertices is "large". I'll skip the details, but suppose $R$ is found to be an upper bound for the size of each $A$. Then $|S\cup A|\le R + q$.
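Before moving on, here is the algorithm in code — a minimal Python sketch of my own, not any published implementation. The graph is a dict mapping each vertex to its set of neighbours, and `max_degree_order` and `scythe` are names I made up:

```python
def max_degree_order(G, A):
    """Maximum degree ordering of A: repeatedly take the smallest-index
    vertex of maximum degree in the graph induced on the unchosen vertices.
    G maps each vertex to its set of neighbours."""
    rest, order = set(A), []
    while rest:
        v = min(rest, key=lambda u: (-len(G[u] & rest), u))
        order.append(v)
        rest.remove(v)
    return order

def scythe(G, q, I):
    """Return (fingerprint S, container S ∪ A) for a stable set I, |I| >= q."""
    S, A = set(), set(G)
    for _ in range(q):
        # first element of I in a maximum degree ordering of A
        v = next(u for u in max_degree_order(G, A) if u in I)
        S.add(v)        # move v from A to S
        A.discard(v)
        A -= G[v]       # remove the neighbours of v from A
    return S, S | A
```

Running it on a small example (the path 1–2–3–4–5 with stable set $\{1,3,5\}$ and $q=2$) also illustrates the key claim from the next paragraph: feeding the fingerprint $S$ back in reproduces the same output.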
That is: $$\text{size of each container } = |S| + |A| \le q+R.$$ We also want the number of containers to be "small", so we claim that for a graph $G$ with $N$ vertices, the number of outputs that the algorithm produces is relatively small. The first step is to show that if input $I$ produces output $(S,A)$, then input $S$ also produces output $(S,A)$. Have a look back at the algorithm and consider how it would run with input $I$ versus input $S$ to convince yourself that this is true. Now we know that the fingerprints determine the containers, so the number of containers is bounded above by the number of possible fingerprints, which is ${N\choose q}$ since $|S|=q$, plus the number of stable sets with size less than $q$, which is at most $\sum_{i=1}^{q-1} {N\choose i}$. That is: $$\# \text{ containers } \le \sum_{i=1}^q {N \choose i}.$$ Now we have shown that all parts of the Container Theorem hold for some definition of "small". The precise theorem is given in the next section. Formal theorem and upper bound Container Theorem (Kohayakawa, Lee, Rödl, Samotij): Let $q,N \in \mathbb{N}$, $R \in \mathbb{R}^+$, and $\beta \in [0,1]$ such that $R \ge e^{-\beta q}N$. Let $G$ be an $N$-vertex graph where $$ e(G[U]) \ge \beta {|U| \choose 2} \text{ for every } U\subseteq V(G) \text{ with at least } R \text{ vertices.} $$ There exists a collection $\mathcal{C}$ of containers such that: every stable set of $G$ is a subset of one of the containers in $\mathcal{C}$; the number of containers is at most $\sum_{i\le q} {N\choose i}$; and the size of each container is at most $R+q$. Since each stable set is contained in one of the containers, we can bound the number of stable sets by $\sum_{C\in \mathcal{C}} (\#\text{ stable sets contained in } C)$. The number of stable sets contained in a container $C$ is at most the number of subsets of $C$, which is $2^{|C|}$. Now it follows that: $$ \# \text{ stable sets of } G \le \sum_{C\in \mathcal{C}} 2^{|C|} \le |\mathcal{C}|\cdot 2^{\max_{C\in \mathcal{C}} |C|}. $$ 2.
Seemingly unrelated applications The following two problems demonstrate how the container method can be used. They both use a version of the method described above, but otherwise seem to be unrelated. It turns out these applications come up when looking at extensions and co-extensions of matroids that arise from cliques. In fact, both applications have come up in the research I'm doing now with Peter and Jorn. I'll get into this connection in Section 3 after I describe each problem and how the container method is applied. Application 1: Antichains in the Boolean lattice The Boolean lattice $\mathcal{P}([n])$ is the power set of $[n]$ partially ordered by $\subseteq$. For example, consider $\mathcal{P}([2])$, which is shown in the figure below. The elements of the Boolean lattice are $\emptyset,\{1\},\{2\},\{1,2\}$ and two sets are related if one contains the other as a subset. A set $\mathcal{A}\subseteq \mathcal{P}([n])$ is an **antichain** if its elements are pairwise unrelated. For example, $\{\{1\},\{2\}\}$ is an antichain in the example above. Let's think about $\mathcal{P}([n])$ for general $n$ again. Question: How many antichains are there? By Sperner's theorem, we know the "middle" row is the largest antichain. That is, the largest antichain has size ${n \choose \lfloor n/2 \rfloor}$. Also, every subset of an antichain is itself an antichain, so the number of antichains is at least $2^{n\choose \lfloor n/2 \rfloor}$. It turns out the actual number is "almost" the same and we can use the container method to find a similar upper bound. First, define a graph $G$ whose vertices are the elements of $\mathcal{P}([n])$ (i.e. subsets of $[n]$) and two sets are adjacent in $G$ if and only if one set contains the other. Notice that $G$ has $N=2^n$ vertices and its stable sets correspond to antichains in $\mathcal{P}([n])$.
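As a sanity check on this correspondence (a brute-force sketch of my own, not part of the original argument), we can build the containment graph for small $n$ and count its stable sets directly; the totals match the known counts of antichains in $\mathcal{P}([n])$, the Dedekind numbers $6$ and $20$ for $n=2,3$:

```python
from itertools import combinations

def antichain_count(n):
    """Count antichains in P([n]) (the empty antichain included) by
    brute force: families of subsets that are pairwise incomparable,
    i.e. stable sets of the containment graph G."""
    subsets = [frozenset(s) for r in range(n + 1)
               for s in combinations(range(n), r)]

    def comparable(a, b):
        return a <= b or b <= a  # adjacent in G iff one contains the other

    count = 0
    for r in range(len(subsets) + 1):
        for family in combinations(subsets, r):
            if all(not comparable(a, b) for a, b in combinations(family, 2)):
                count += 1
    return count
```

This is exponential in $2^n$, so it is only feasible for tiny $n$ — which is exactly why the container method is needed for the asymptotics.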
Antichains of $\mathcal{P}([n])$ $\longleftrightarrow$ Stable sets of $G$ In order to apply the Container Theorem, we need to show that the number of edges induced by "large" sets of vertices is "large", so we prove a supersaturation lemma. Supersaturation Lemma (Kleitman, 1968): If $U\subseteq V(G)$ and $|U|={n\choose n/2}+x$ for $x\ge 0$, then $e(G[U]) \ge x(\lfloor n/2 \rfloor + 1)$. Then we choose appropriate values for $R$, $q$, and $\beta$ so that the conditions of the theorem are satisfied. We choose $R = (1+\frac{1}{\sqrt{n}}){n\choose n/2}$, $q = \frac{\log n}{n}2^n$, and $\beta = \frac{n}{2^n}$ because reasons. Feel free to check that $R\ge e^{-\beta q}N$. Or you can take my word for it. Similarly, you can check that $e(G[U]) \ge \beta {|U|\choose 2}$ for every $U$ with at least $R$ vertices using the Supersaturation Lemma. Using the values of $R$, $q$, and $\beta$, and the result of the Container Theorem we get: $$ |\mathcal{C}|\le \sum_{i\le q} {N\choose i} \le 2^{\text{o}(1){n \choose n/2}}, $$ and, for all $C\in \mathcal{C}$, $$ |C| \le R + q = (1+\text{o}(1)){n\choose n/2}. $$ As discussed earlier, the number of stable sets is at most $|\mathcal{C}| \cdot 2^{\max |C|}$. Therefore: $$ \# \text{ antichains } \le 2^{\text{o}(1){n \choose n/2}} \cdot 2^{(1+\text{o}(1)){n\choose n/2}}\\ = 2^{(1+\text{o}(1)){n\choose n/2}}. $$ Since $2^{{n\choose n/2}} \le \# \text{ antichains } \le 2^{(1+\text{o}(1)){n\choose n/2}}$, we find that the number of antichains in the Boolean lattice is $2^{(1+\text{o}(1)){n\choose n/2}}$. This was originally proved by Kleitman, although not by using this nice application of the container method. The details of this application are originally from Balogh, Treglown, and Wagner. Application 2: Biased graphs A biased graph is a pair $(G,\mathcal{B})$ where $G$ is a graph and $\mathcal{B}$ is a collection of cycles such that if two cycles $C,C'$ in $\mathcal{B}$ intersect in a non-empty path, then the third cycle in $C\cup C'$ is also in $\mathcal{B}$.
Such a configuration of cycles is called a $\Theta$-subgraph and an example is shown in the figure below. In other words, $\mathcal{B}$ satisfies the $\Theta$-property: for each $\Theta$-subgraph of $G$, either 0, 1, or 3 of its cycles are in $\mathcal{B}$. We call a biased graph $(G,\mathcal{B})$ scarce if $\mathcal{B}$ contains at most 1 cycle from each $\Theta$-subgraph of $G$. If we're interested in the number of biased graphs, then we could start by considering the number of biased cliques. The number of graphs on $n$ vertices is $2^{{n\choose 2}}$ and each one is a subgraph of $K_n$, so the number of biased graphs is at most $2^{{n\choose 2}}$ times the number of biased cliques. Question: How many biased cliques are there? In the rest of this section, we'll answer this question by following Peter and Jorn's work in their paper "On the number of biased graphs". Consider the set of Hamilton cycles of a clique $K_n$ and denote it by $\mathcal{B}$. At most one Hamilton cycle can be in a $\Theta$-subgraph, so $(K_n,\mathcal{B})$ is a biased clique. In fact, it is a scarce biased clique. Similarly, $(K_n,\mathcal{B}')$ is a (scarce) biased clique for each $\mathcal{B}'\subseteq \mathcal{B}$. The number of Hamilton cycles is $\frac{1}{2}(n-1)!$, so now we have a lower bound for the number of biased cliques. $\#$ biased cliques $\ge 2^{\frac{1}{2}(n-1)!}$ As with antichains in the Boolean lattice, it turns out that this is "almost" correct, although it takes a bit more work to prove it in this case. However, by using containers and a few other tricks, we can prove that the number of biased cliques is at most $2^{(1+\text{o}(1))\frac{1}{2}(n-1)!}$. The first step, however, is to prove the result for scarce biased cliques. Define the Overlap Graph, denoted $\Omega_n$, to be the graph whose vertices are the cycles of $K_n$ where two cycles $C,C'$ are adjacent in $\Omega_n$ if and only if $C\cup C'$ is a $\Theta$-subgraph.
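Backing up a step: the Hamilton-cycle count $\frac{1}{2}(n-1)!$ used in the lower bound above is easy to verify by brute force for small $n$ (a sketch of mine, not from the paper — each cycle is represented as its edge set, so rotations and reflections of a vertex ordering collapse to one cycle):

```python
from itertools import permutations
from math import factorial

def hamilton_cycles(n):
    """All Hamilton cycles of K_n, each encoded as a frozenset of edges.
    Every cycle arises from 2n of the n! vertex orderings (n rotations
    times 2 directions), giving n!/(2n) = (n-1)!/2 distinct cycles."""
    cycles = set()
    for p in permutations(range(n)):
        edges = frozenset(frozenset((p[i], p[(i + 1) % n])) for i in range(n))
        cycles.add(edges)
    return cycles

# for n = 5 this yields 4!/2 = 12 cycles
assert len(hamilton_cycles(5)) == factorial(4) // 2
```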
Notice that the stable sets of $\Omega_n$ contain at most one cycle from each $\Theta$-subgraph of $K_n$, so they correspond to scarce biased cliques. Scarce biased cliques $\longleftrightarrow$ Stable sets of $\Omega_n$ We can show that the unique maximum stable set in $\Omega_n$ is the set of Hamilton cycles of $K_n$ (which is done in Lemma 2.2 of Peter and Jorn's paper), but let's move on now to finding the upper bound for the number of stable sets. First, we need to show that the number of edges induced by a "large" set of vertices is "large". That is, we need a supersaturation result. Supersaturation Lemma (Peter and Jorn): Let $\alpha \ge c/n$ where $c>0$ is some constant and let $\mathcal{B}\subseteq V(\Omega_n)$. If $|\mathcal{B}| \ge (1+\alpha)\frac{1}{2}(n-1)!$, then $e(\Omega_n[\mathcal{B}])\ge \frac{\alpha}{8}n!$. The proof of this Supersaturation Lemma has some tricky details, but the underlying idea is quite nice. The proof is by contradiction. We start by defining a mapping $\Phi$ from each cycle $C$ to the set of permutations $\sigma$ of $[n]$ where either the first or the last $|C|$ elements of $\sigma$ describe a cyclic ordering of $C$. (This is shown in the figure below.) Notice that for two cycles $C,C'$, we have $|\Phi(C)\cap \Phi(C')|\le 4$. Let $\mathcal{B}_k$ be the cycles in $\mathcal{B}$ on $k$ vertices (i.e. $k$-cycles). Then the idea is to find an upper bound for each $|\mathcal{B}_k|$ by bounding $|\Phi(\mathcal{B}_k)|$ and the number of permutations in the image of two $k$-cycles, using the knowledge that the intersection of the $\Phi$-images of two cycles is small. This is where the details get complicated (see the proof of Lemma 2.3 of Peter and Jorn's paper), but basically, using these upper bounds, we find that $|\mathcal{B}| = \sum_{0\le k\le n-3} |\mathcal{B}_{n-k}|$ is too small, which gives the contradiction. The next step is to apply the container method using this supersaturation result.
Peter and Jorn prove a slightly different container theorem using a slightly different Scythe algorithm. One difference is that the loop in their algorithm repeats until $A$ is a certain size rather than exactly $q$ times. However, it works in a similar way and once all the details are worked out, we find that: $$ \# \text{ scarce biased cliques } \le 2^{(1+\text{o}(1))\frac{1}{2}(n-1)!} $$ We now need to show that the number of biased cliques is "almost" the same as the number of scarce biased cliques. The proof of this is quite clever, so I'll sketch it now. Of the three cycles in a $\Theta$-subgraph of $K_n$, at least one has length at most $\frac{2}{3}(n+1)$. We call cycles of length at most $\frac{2}{3}(n+1)$ small cycles. The goal is to show that each biased clique can be determined by a scarce biased clique and a set of small cycles. We define a mapping $\psi: \{$biased cliques$\} \rightarrow \{$scarce biased cliques$\} \times \{$sets of small cycles$\}$ as follows. Let $(K_n,\mathcal{B})$ be a biased clique where the elements of $\mathcal{B}$ are ordered from smallest to largest. For each triple of cycles $\{C_1,C_2,C_3\}\subseteq \mathcal{B}$ that are in a $\Theta$-subgraph where $|C_1|\le |C_2|\le |C_3|$, remove $C_1$ and $C_3$ from $\mathcal{B}$, to obtain $\mathcal{B}'$. Notice that $(K_n,\mathcal{B}')$ is a scarce biased clique. Let $\mathcal{X}$ denote the set of small cycles in $\mathcal{B}$. Now, let $\psi((K_n,\mathcal{B})) = ((K_n,\mathcal{B}'),\mathcal{X})$. Now we need to show that $\psi$ is an injection. Suppose it's not. That is, suppose two biased cliques with biases $\mathcal{B}_1$ and $\mathcal{B}_2$ map to the scarce biased clique with bias $\mathcal{B}$. Let $C$ be the smallest cycle in one bias, say $\mathcal{B}_1$, but not in the other, and not in $\mathcal{X}$.
Since $\mathcal{B}_1$ and $\mathcal{B}_2$ both map to $\mathcal{B}$, we know $C$ gets deleted from $\mathcal{B}_1$; thus, $C$ is either $C_1$ or $C_3$ in a triple of cycles $\{C_1,C_2,C_3\}\subseteq \mathcal{B}_1$ that make up a $\Theta$-subgraph. Since $C$ is not in $\mathcal{X}$, it is not a small cycle, and hence $C=C_3$. But then $\mathcal{B}_2$ contains $C_1$ and $C_2$, that is, two cycles from a $\Theta$-subgraph, which is a contradiction. Thus, we have proved that $\psi$ is an injection. Since $\psi$ is an injection, it follows that the number of biased cliques is at most the number of scarce biased cliques times the number of sets of small cycles. That is: $$ \# \text{ biased cliques } \le \# \text{ scarce biased cliques }\cdot 2^{\text{o}((n-1)!)} = 2^{(1+\text{o}(1))\frac{1}{2}(n-1)!}\cdot 2^{\text{o}((n-1)!)} = 2^{(1+\text{o}(1))\frac{1}{2}(n-1)!} $$ This completes the proof that the number of scarce biased cliques is "almost" the same as the number of biased cliques. Combining this with the lower bound from before we find: $$ \# \text{ biased cliques } = 2^{(1+\text{o}(1))\frac{1}{2}(n-1)!}. $$ Finally, recall that the number of biased graphs is at most $2^{{n\choose 2}}$ times the number of biased cliques. After a little bit of analysis depending on $n$ and the o$(1)$ term, we get: $$ \# \text{ biased graphs } = 2^{{n\choose 2}}\cdot 2^{(1+\text{o}(1))\frac{1}{2}(n-1)!} = 2^{(1+\text{o}(1))\frac{1}{2}(n-1)!}. $$ 3. Connection to matroids I'm going to go ahead and assume that you have some basic knowledge of matroids. If this is not the case, I recommend reading James Oxley's survey paper What is a matroid? Or, you could look at the first few slides of the talk I gave in the C&O Undergraduate Research Seminar on May 26, 2022. A matroid $N$ is a single-element extension of a matroid $M$ if $N\setminus e = M$ for some element $e\in E(N)$. That is, $N$ is the result of adding a new element $e$ to $M$; deleting $e$ from $E(N)$ (and removing all independent sets that contain $e$ from $\mathcal{I}$) recovers $M$.
For convenience, we'll refer to single-element extensions as simply extensions. To me, matroids seem like mysterious objects. They generalize matrices and graphs, but there are many matroids beyond those related to matrices and graphs (i.e. non-representable matroids). In fact, almost all matroids are not representable, which Peter proved a few years ago. It is interesting to wonder how many matroids there are. The number of matroids on $n$ elements is only known for $n\le 9$ although there are upper and lower bounds for larger $n$. These bounds indicate that the number of matroids on $n$ elements is doubly exponential in $n$. One way to approach enumerating all matroids on $n$ elements is to consider the number of ways to "add" an element to a matroid. I think this is also an interesting question on its own. So now we're interested in attempting to find the number of extensions (or coextensions) of a matroid. This is a difficult question in general, so one idea is to start by considering extensions (and coextensions) of matroids that are dense, highly symmetric, and well understood. We'll consider extensions (and coextensions) of graphic matroids $M(K_n)$ that arise from cliques, which we'll call clique matroids. Question: How many extensions of $M(K_n)$ are there? In his 1965 paper, Crapo proves a collection of theorems which imply that there is a bijection between the linear subclasses of a matroid $M$ and the extensions of $M$. So now we are interested in determining the number of linear subclasses. For clique matroids, a linear subclass is equivalent to a bias $\mathcal{B}\subseteq \{$unordered nontrivial bipartitions of $V(K_n)\}$ that satisfies the tripartition property: for each nontrivial tripartition $(A,B,C)$ of $V(K_n)$, $$ |\mathcal{B} \cap \{\{A\cup B,C\},\{A\cup C, B\},\{A, B\cup C\}\}| \in \{0,1,3\}. $$ We say the bias is scarce if this intersection is always at most 1. 
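To get a feel for the ground set here, the unordered nontrivial bipartitions of $V(K_n)$ are easy to enumerate (a small sketch of my own, not from Crapo or the paper): fixing one vertex to the first side lists each unordered pair exactly once, which shows there are $2^{n-1}-1$ of them — the same count as the nonempty subsets of $[n-1]$.

```python
from itertools import combinations

def bipartitions(n):
    """Unordered nontrivial bipartitions {A, B} of V = {0, ..., n-1}.
    Putting vertex 0 on the A side lists each unordered pair exactly once;
    |A| ranges over 1..n-1, so both sides are nonempty."""
    V = frozenset(range(n))
    return [(A, V - A)
            for r in range(1, n)
            for rest in combinations(range(1, n), r - 1)
            for A in [frozenset({0, *rest})]]

# (2^n - 2)/2 = 2^{n-1} - 1 bipartitions, e.g. 7 of them for n = 4
assert len(bipartitions(4)) == 7
```

A bias is then just a subset of this ground set satisfying the tripartition property above.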
At this point you might be wondering how this relates to the two applications discussed earlier. This definition looks similar to that of biased graphs, but where does the Boolean lattice come in? While this setup is very similar to biased graphs, biased graphs are actually related to coextensions, as we'll see in the coextensions subsection. It turns out that these scarce biases that represent extensions of $M(K_n)$ correspond to intersecting antichains in the Boolean lattice. An intersecting antichain is an antichain where no two elements are disjoint. I'll give a bit of intuition for how these scarce biases are related to intersecting antichains. Consider a graph $\Pi_n$ whose vertices are the unordered nontrivial bipartitions of $V(K_n)$ where two vertices $\{A,B\},\{A',B'\}$ are adjacent if and only if $A\subseteq A'$ or $B \subseteq B'$ or $A\cap A' = \emptyset$ or $B\cap B' = \emptyset$. Notice that the stable sets in $\Pi_n$ correspond to scarce biases (which in turn correspond to extensions). Since the bipartitions are unordered, we could instead consider ordered bipartitions where one element is fixed in, say, the first set. So choosing these bipartitions is equivalent to choosing a subset of $[n-1]$. From here, it doesn't take too much effort to see that stable sets in $\Pi_n$ correspond to intersecting antichains in $\mathcal{P}([n-1])$. The largest intersecting antichain in $\mathcal{P}([n-1])$ has size ${n-1\choose \lceil n/2 \rceil}= \frac{1}{2} {n \choose n/2}$, which gives a lower bound for the number of extensions of $M(K_n)$: $$ \# \text{ extensions of } M(K_n) \ge 2^{\frac{1}{2} {n \choose n/2}}. $$ The number of intersecting antichains is clearly less than the number of antichains, so by the final result in the antichains subsection: $$ \# \text{ scarce biases } \le 2^{(1+\text{o}(1)){n-1\choose (n-1)/2}} = 2^{(1+\text{o}(1))\frac{1}{2}{n\choose n/2}}.
$$ Finally, we can use a lemma very similar to the one used to show that the number of scarce biased cliques is "almost" the same as the number of biased cliques to show that, in this setting, the number of scarce biases is "almost" the same as the number of biases. This gives us our final result: $$ \# \text{ extensions of } M(K_n) = 2^{(1+\text{o}(1))\frac{1}{2}{n\choose n/2}}. $$ Coextensions A matroid $N$ is a single-element coextension of a matroid $M$ if $N/e = M$ for some element $e\in E(N)$. For convenience, we'll refer to single-element coextensions as simply coextensions. Question: How many coextensions of $M(K_n)$ are there? We can use Crapo's equivalence between extensions and linear subclasses here as well. For the duals of clique matroids, a linear subclass is equivalent to a bias $\mathcal{B}\subseteq \{$cycles of $K_n\}$ that satisfies the $\Theta$-property: for each $\Theta$-subgraph of $K_n$ with cycles $C_1,C_2,C_3$, $$ |\mathcal{B} \cap \{C_1,C_2,C_3\}| \in \{0,1,3\}. $$ Yup, you read that right, this is the definition of a biased clique! So now we know coextensions of $M(K_n)$ can be encoded by biased cliques. In this case, all of the hard work was done when we analyzed biased graphs in the biased graphs subsection, so we immediately get: $$ \# \text{ coextensions of } M(K_n) = 2^{(1+\text{o}(1))\frac{1}{2}(n-1)!}. $$ Jim Geelen while teaching Graph Minors in Fall 2016: "Without much loss of generality..." Student: "Can I use [that phrase] on my assignments?" Jim Geelen: "Without much loss of marks, yeah."
Do note that this isn't an extensive list by any means, there are plenty more 'smart drugs' out there purported to help focus and concentration. Most (if not all) are restricted under the Psychoactive Substances Act, meaning they're largely illegal to sell. We strongly recommend against using these products off-label, as they can be dangerous both due to side effects and their lack of regulation on the grey/black market. After trying out 2 6lb packs between 12 September & 25 November 2012, and 20 March & 20 August 2013, I have given up on flaxseed meal. They did not seem to go bad in the refrigerator or freezer, and tasted OK, but I had difficulty working them into my usual recipes: it doesn't combine well with hot or cold oatmeal, and when I tried using flaxseed meal in soups I learned flaxseed is a thickener which can give soup the consistency of snot. It's easier to use fish oil on a daily basis. That first night, I had severe trouble sleeping, falling asleep in 30 minutes rather than my usual 19.6±11.9, waking up 12 times (5.9±3.4), and spending ~90 minutes awake (18.1±16.2), and naturally I felt unrested the next day; I initially assumed it was because I had left a fan on (moving air keeps me awake) but the new potassium is also a possible culprit. When I asked, Kevin said: Never heard of OptiMind before? This supplement promotes itself as an all-natural nootropic supplement that increases focus, improves memory, and enhances overall mental drive. The product first captured our attention when we noticed that their supplement blend contains a few of the same ingredients currently present in our editor's #1 choice. So, of course, we grew curious to see whether their formula was as (un)successful as their initial branding techniques. Keep reading to find out what we discovered…
Another moral concern is that these drugs — especially when used by Ivy League students or anyone in an already privileged position — may widen the gap between those who are advantaged and those who are not. But others have inverted the argument, saying these drugs can help those who are disadvantaged to reduce the gap. In an interview with the New York Times, Dr. Michael Anderson explains that he uses ADHD (a diagnosis he calls "made up") as an excuse to prescribe Adderall to the children who really need it — children from impoverished backgrounds suffering from poor academic performance. A number of different laboratory studies have assessed the acute effect of prescription stimulants on the cognition of normal adults. In the next four sections, we review this literature, with the goal of answering the following questions: First, do MPH (e.g., Ritalin) and d-AMP (by itself or as the main ingredient in Adderall) improve cognitive performance relative to placebo in normal healthy adults? Second, which cognitive systems are affected by these drugs? Third, how do the effects of the drugs depend on the individual using them? At small effects like d=0.07, a nontrivial chance of negative effects, and an unknown level of placebo effects (this was non-blinded, which could account for any residual effects), this strongly implies that LLLT is not doing anything for me worth bothering with. I was pretty skeptical of LLLT in the first place, and if 167 days can't turn up anything noticeable, I don't think I'll be continuing with LLLT usage and will be giving away my LED set. (Should any experimental studies of LLLT for cognitive enhancement in healthy people surface with large quantitative effects - as opposed to a handful of qualitative case studies about brain-damaged people - and I decide to give LLLT another try, I can always just buy another set of LEDs: it's only ~$15, after all.) Four of the studies focused on middle and high school students, with varied results. 
Boyd, McCabe, Cranford, and Young (2006) found a 2.3% lifetime prevalence of nonmedical stimulant use in their sample, and McCabe, Teter, and Boyd (2004) found a 4.1% lifetime prevalence in public school students from a single American public school district. Poulin (2001) found an 8.5% past-year prevalence in public school students from four provinces in the Atlantic region of Canada. A more recent study of the same provinces found a 6.6% and 8.7% past-year prevalence for MPH and AMP use, respectively (Poulin, 2007). As I am not any of the latter, I didn't really expect a mental benefit. As it happens, I observed nothing. What surprised me was something I had forgotten about: its physical benefits. My performance in Taekwondo classes suddenly improved - specifically, my endurance increased substantially. Before, classes had left me nearly prostrate at the end, but after, I was weary yet fairly alert and happy. (I have done Taekwondo since I was 7, and I have a pretty good sense of what is and is not normal performance for my body. This was not anything as simple as failing to notice increasing fitness or something.) This was driven home to me one day when in a flurry before class, I prepared my customary tea with piracetam, choline & creatine; by the middle of the class, I was feeling faint & tired, had to take a break, and suddenly, thunderstruck, realized that I had absentmindedly forgot to actually drink it! This made me a believer. My predictions were substantially better than random chance, so my default belief - that Adderall does affect me and (mostly) for the better - is borne out. I usually sleep very well and 3 separate incidents of horrible sleep in a few weeks seems rather unlikely (though I didn't keep track of dates carefully enough to link the Zeo data with the Adderall data). Between the price and the sleep disturbances, I don't think Adderall is personally worthwhile.
Finally, it's not clear that caffeine results in performance gains after long-term use; homeostasis/tolerance is a concern for all stimulants, but especially for caffeine. It is plausible that all caffeine consumption does for the long-term chronic user is restore performance to baseline. (Imagine someone waking up and drinking coffee, and their performance improves - well, so would the performance of a non-addict who is also slowly waking up!) See for example, James & Rogers 2005, Sigmon et al 2009, and Rogers et al 2010. A cross-section of thousands of participants in the Cambridge brain-training study found caffeine intake showed negligible effect sizes for mean and component scores (participants were not told to use caffeine, but the training was recreational & difficult, so one expects some difference). Fatty acids are well-studied natural smart drugs that support many cognitive abilities. They play an essential role in providing structural support to cell membranes. Fatty acids also contribute to the growth and repair of neurons. Both functions are crucial for maintaining peak mental acuity as you age. Among the most prestigious fatty acids known to support cognitive health are: As mentioned earlier, cognitive control is needed not only for inhibiting actions, but also for shifting from one kind of action or mental set to another. The WCST taxes cognitive control by requiring the subject to shift from sorting cards by one dimension (e.g., shape) to another (e.g., color); failures of cognitive control in this task are manifest as perseverative errors in which subjects continue sorting by the previously successful dimension. Three studies included the WCST in their investigations of the effects of d-AMP on cognition (Fleming et al., 1995; Mattay et al., 1996, 2003), and none revealed overall effects of facilitation. However, Mattay et al. 
(2003) subdivided their subjects according to COMT genotype and found differences in both placebo performance and effects of the drug. Subjects who were homozygous for the val allele (associated with lower prefrontal dopamine activity) made more perseverative errors on placebo than other subjects and improved significantly with d-AMP. Subjects who were homozygous for the met allele performed best on placebo and made more errors on d-AMP. Power times prior times benefit minus cost of experimentation: (0.20 \times 0.30 \times 540) - 41 = -9. So the VoI is negative: because my default is that fish oil works and I am taking it, weak information that it doesn't work isn't enough. If the power calculation were giving us 40% reliable information, then the chance of learning I should drop fish oil is improved enough to make the experiment worthwhile (going from 20% to 40% switches the value from -$9 to +$23.8). Caffeine (Examine.com; FDA adverse events) is of course the most famous stimulant around. But consuming 200mg or more a day, I have discovered the downside: it is addictive and has a nasty withdrawal - headaches, decreased motivation, apathy, and general unhappiness. (It's a little amusing to read academic descriptions of caffeine addiction; if caffeine were a new drug, I wonder what Schedule it would be in and if people might be even more leery of it than modafinil.) Further, in some ways, aside from the ubiquitous placebo effect, caffeine combines a mix of weak performance benefits (Lorist & Snel 2008, Nehlig 2010) with some possible decrements, anecdotally and scientifically: As professionals and aging baby boomers alike become more interested in enhancing their own brain power (either to achieve more in a workday or to stave off cognitive decline), a huge market has sprung up for nonprescription nootropic supplements. These products don't convince Sahakian: "As a clinician scientist, I am interested in evidence-based cognitive enhancement," she says.
"Many companies produce supplements, but few, if any, have double-blind, placebo-controlled studies to show that these supplements are cognitive enhancers." Plus, supplements aren't regulated by the U.S. Food and Drug Administration (FDA), so consumers don't have that assurance as to exactly what they are getting. Check out these 15 memory exercises proven to keep your brain sharp. The Nootroo arrives in a shiny gold envelope with the words "proprietary blend" and "intended for use only in neuroscience research" written on the tin. It has been designed, says Matzner, for "hours of enhanced learning and memory". The capsules contain either Phenylpiracetam or Noopept (a peptide with similar effects and similarly uncategorised) and are distinguished by real flakes of either edible silver or gold. They are to be alternated between daily, allowing about two weeks for the full effect to be felt. Also in the capsules are L-Theanine, a form of choline, and a type of caffeine that, it is claimed, has longer-lasting effects. Most diehard nootropic users have considered using racetams for enhancing brain function. Racetams are synthetic nootropic substances first developed in Russia. These smart drugs vary in potency, but they are not stimulants. They are unlike traditional ADHD medications (Adderall, Ritalin, Vyvanse, etc.). Instead, racetams boost cognition by enhancing the cholinergic system. How much of the nonmedical use of prescription stimulants documented by these studies was for cognitive enhancement? Prescription stimulants could be used for purposes other than cognitive enhancement, including for feelings of euphoria or energy, to stay awake, or to curb appetite. Were they being used by students as smart pills or as "fun pills," "awake pills," or "diet pills"? Of course, some of these categories are not entirely distinct.
For example, by increasing the wakefulness of a sleep-deprived person or by lifting the mood or boosting the motivation of an apathetic person, stimulants are likely to have the secondary effect of improving cognitive performance. Whether and when such effects should be classified as cognitive enhancement is a question to which different answers are possible, and none of the studies reviewed here presupposed an answer. Instead, they show how the respondents themselves classified their reasons for nonmedical stimulant use. Bacopa monnieri, known widely as 'Brahmi' or water hyssop, is a small herb native to India that finds mention in various Ayurvedic texts as the best natural cognitive enhancer. It has been used traditionally for memory enhancement, asthma, epilepsy, and improving mood and attention in people over 65. It is known to be one of the best brain supplements in the world. The Defense Department reports rely on data collected by the private real estate firms that operate base housing in partnership with military branches. The companies' compensation is partly determined by the results of resident satisfaction surveys. I had to re-read this sentence like 5 times to make sure I understood it correctly. I just can't even. Seriously, in what universe did anyone think that this would be a good idea? All of the coefficients are positive, as one would hope, and one specific factor (MR7) squeaks in at d=0.34 (p=0.05). The graph is much less impressive than the graph for just MP, suggesting that the correlation may be spread out over a lot of factors, the current dataset isn't doing a good job of capturing the effect compared to the MP self-rating, or it really was a placebo effect: The question of whether stimulants are smart pills in a pragmatic sense cannot be answered solely by consideration of the statistical significance of the difference between stimulant and placebo.
A drug with tiny effects, even if statistically significant, would not be a useful cognitive enhancer for most purposes. We therefore report Cohen's d effect size measure for published studies that provide either means and standard deviations or relevant F or t statistics (Thalheimer & Cook, 2002). More generally, with most sample sizes in the range of a dozen to a few dozen, small effects would not reliably be found. Table 3 lists the results of 24 tasks from 22 articles on the effects of d-AMP or MPH on learning, assessed by a variety of declarative and nondeclarative memory tasks. Results for the 24 tasks are evenly split between enhanced learning and null results, but they yield a clearer pattern when the nature of the learning task and the retention interval are taken into account. In general, with single exposures of verbal material, no benefits are seen immediately following learning, but later recall and recognition are enhanced. Of the six articles reporting on memory performance (Camp-Bruno & Herting, 1994; Fleming, Bigelow, Weinberger, & Goldberg, 1995; Rapoport, Buchsbaum, & Weingartner, 1980; Soetens, D'Hooge, & Hueting, 1993; Unrug, Coenen, & van Luijtelaar, 1997; Zeeuws & Soetens, 2007), encompassing eight separate experiments, only one of the experiments yielded significant memory enhancement at short delays (Rapoport et al., 1980). In contrast, retention was reliably enhanced by d-AMP when subjects were tested after longer delays, with recall improved after 1 hr through 1 week (Soetens, Casaer, D'Hooge, & Hueting, 1995; Soetens et al., 1993; Zeeuws & Soetens, 2007). Recognition improved after 1 week in one study (Soetens et al., 1995), while another found recognition improved after 2 hr (Mintzer & Griffiths, 2007).
The one long-term memory study to examine the effects of MPH found a borderline-significant reduction in errors when subjects answered questions about a story (accompanied by slides) presented 1 week before (Brignell, Rosenthal, & Curran, 2007). (On a side note, I think I understand now why modafinil doesn't lead to a Beggars in Spain scenario; BiS includes massive IQ and motivation boosts as part of the Sleepless modification. Just adding 8 hours a day doesn't do the world-changing trick, no more than some researchers living to 90 and others to 60 has led to the former taking over. If everyone were suddenly granted the ability to never need sleep, many of them would have no idea what to do with the extra 8 or 9 hours and might well be destroyed by the gift; it takes a lot of motivation to make good use of the time, and if one cannot, then it is a curse akin to the stories of immortals who yearn for death - they yearn because life is not a blessing to them, though that is a fact more about them than life.) Endoscopy surgeries, being minimally invasive, have become more popular in recent times. Recent studies show an increasing demand for single-incision or small-incision surgery as an alternative to traditional surgery. As aging patients are susceptible to complications, the usage of minimally invasive procedures is of utmost importance. In unexplained cases of bleeding, iron deficiency, and abdominal pain, and in the search for polyps, ulcers, tumors of the small intestine, and inflammatory bowel disease such as Crohn's disease, capsule endoscopy fares better for diagnosis than traditional endoscopy. Also, as capsule endoscopy is less invasive or non-invasive compared to traditional endoscopy and does not require any recovery time, patients increasingly prefer it, which is driving the smart pill market.
When I worked on the Bulletproof Diet book, I wanted to verify that the effects I was getting from Bulletproof Coffee were not coming from modafinil, so I stopped using it and measured my cognitive performance while I was off of it. What I found was that on Bulletproof Coffee and the Bulletproof Diet, my mental performance was almost identical to my performance on modafinil. I still travel with modafinil, and I'll take it on occasion, but while living a Bulletproof lifestyle I rarely feel the need.
Fast wide-field upconversion luminescence lifetime thermometry enabled by single-shot compressed ultrahigh-speed imaging

Xianglei Liu, Artiom Skripka, Yingming Lai, Cheng Jiang, Jingdan Liu, Fiorenzo Vetrone & Jinyang Liang

Photoluminescence lifetime imaging of upconverting nanoparticles is increasingly featured in recent progress in optical thermometry.
Despite remarkable advances in photoluminescent temperature indicators, existing optical instruments lack the ability of wide-field photoluminescence lifetime imaging in real time, thus falling short in dynamic temperature mapping. Here, we report video-rate upconversion temperature sensing in wide field using single-shot photoluminescence lifetime imaging thermometry (SPLIT). Developed from a compressed-sensing ultrahigh-speed imaging paradigm, SPLIT first records wide-field luminescence intensity decay compressively in two views in a single exposure. Then, an algorithm, built upon the plug-and-play alternating direction method of multipliers, is used to reconstruct the video, from which the extracted lifetime distribution is converted to a temperature map. Using the core/shell NaGdF4:Er3+,Yb3+/NaGdF4 upconverting nanoparticles as the lifetime-based temperature indicators, we apply SPLIT in longitudinal wide-field temperature monitoring beneath a thin scattering medium. SPLIT also enables video-rate temperature mapping of a moving biological sample at single-cell resolution. Temperature is an important parameter associated with many physical, chemical, and biological processes1. Accurate and real-time (i.e., the actual time during which the event occurs) temperature sensing at microscopic scales is essential to both industrial applications and scientific research, including the examination of internal strains in turbine blades2, control of the synthesis of ionic liquids3, and theranostics of cancer4. In the past decade, photoluminescence lifetime imaging (PLI) has emerged as a promising approach to temperature sensing5. Because photoluminescence can be both excited and detected optically, the resulting non-contact PLI possesses a high spatial resolution6,7,8. 
This advantage not only overcomes the intrinsic limitation in spatial resolution of imaging thermography due to the long wavelengths of thermal radiation but also avoids heat-transfer-induced inaccuracy in conventional contact methods9. Moreover, independent of prior knowledge of samples' physical properties (e.g., emissivity and Grüneisen coefficient10,11), PLI brings in higher flexibility in sample selection. Furthermore, PLI is less susceptible than the intensity-based measurements to inhomogeneous signal attenuation, stray light, photobleaching, light's path length, and excitation intensity variations12,13,14,15,16. Finally, PLI does not rely on the concentration of labeling agents8, which eliminates the need for special ratiometric probes17. Overcoming many challenges in previous methods, PLI is becoming a popular choice for optical thermometry18,19,20,21,22. The success of PLI in temperature mapping depends on two essential constituents: temperature indicators and optical imaging instruments. Recent advances in biochemistry, materials science, and molecular biology have discovered numerous labeling agents23,24,25,26 for PLI-based temperature sensing. Among them, lanthanide-doped upconverting nanoparticles (UCNPs) are ideal candidates. Leveraging the long-lived excited states provided by the lanthanide ions, UCNPs can sequentially absorb two (or more) low-energy near-infrared photons and convert them to one higher-energy photon. This upconversion process allows using excitation power densities several orders of magnitude lower than those needed for simultaneous multi-photon absorption27,28. The near-infrared excitation, with smaller tissue extinction coefficients, also gains deeper penetration29. Besides, the upconverted luminescence, particularly the Boltzmann-coupled emission bands in co-doped erbium/ytterbium (Er3+/Yb3+) systems, is highly sensitive to temperature changes30,31. 
Moreover, long-lived (i.e., microseconds to milliseconds) photoluminescence of UCNPs circumvents interferences from autofluorescence and scattering during image acquisition, which translates into improved imaging contrast and detection sensitivity. Finally, because of advances in their synthesis and surface functionalization coupled with the innovation of core/shell engineering, over the years, UCNPs have become much brighter, photostable, biocompatible, and non-toxic32. As a result of these salient merits, UCNPs are one of the frontrunners in temperature indicators for PLI. Advanced optical imaging is the other indispensable constituent in PLI-based temperature mapping33. To detect photoluminescence on the time scale of microseconds to milliseconds, like that produced by UCNPs, most PLI techniques use point-scanning time-correlated single-photon counting (TCSPC)34. Although they possess high signal-to-noise ratios, the scanning operation leads to an excessively long imaging time to form a two-dimensional (2D) lifetime map because extended pixel dwell time is required to record the long-lived emission35. To accelerate data acquisition, wide-field PLI modalities based on parallel collection in time-domain and frequency-domain have been developed36. In the time domain, these techniques extend the TCSPC technique to wide-field imaging (e.g., TimepixCam37 and Tpx3Cam38). Photoluminescence decay over a 2D field of view (FOV) is synthesized from >100,000 frames, which requires the emission to be precisely repeatable. Alternatively, the frequency-domain wide-field PLI techniques39,40 use phase difference between the intensity-modulated excitation and the received photoluminescence signal to determine the 2D lifetime distribution. Nevertheless, limited by the range of frequency synthesizers, the measurable lifetimes are mostly restricted to ≤100 µs, which is shorter than the lifetimes of most UCNPs. 
Akin to the time-domain techniques, these systems rely on the integration over many periods of modulation intensity, during which the sample must remain stationary. Thus far, existing PLI techniques fall short in 2D temperature sensing of moving samples with a micrometer-level spatial resolution. To surmount these limitations, we report an optical temperature mapping modality, termed single-shot photoluminescence lifetime imaging thermometry (SPLIT). Synergistically combining dual-view optical streak imaging with compressed sensing41, SPLIT records wide-field luminescence decay of Er3+, Yb3+ co-doped NaGdF4 UCNPs in real time, from which a lifetime-based 2D temperature map is obtained in a single exposure. Largely advancing existing optical thermometry techniques in detection capabilities, SPLIT enables longitudinal 2D temperature monitoring beneath a thin scattering medium and dynamic temperature tracking of a moving biological sample at single-cell resolution.

Operating principle of SPLIT

The schematic of the SPLIT system is shown in Fig. 1. A 980-nm continuous-wave laser (BWT, DS3-11312-113-LD) is used as the light source. The laser beam passes through a 4f system consisting of two 50-mm focal length lenses (L1 and L2, Thorlabs, LA1255). An optical chopper (Scitec Instruments, 300CD) is placed at the back focal plane of lens L1 to generate 50-µs optical pulses. Then, the pulse passes through a 100-mm focal length lens (L3, Thorlabs, AC254-100-B) and is reflected by a short-pass dichroic mirror (Edmund Optics, 69-219) to generate a focus on the back focal plane of an objective lens (Nikon, CF Achro 4×, 0.1 numerical aperture, 11-mm field number). This illumination scheme produces wide-field illumination (1.5 × 1.5 mm2 FOV) to UCNPs at the object plane.

Fig. 1: Schematic of the SPLIT system. The illustration shows data acquisition and image reconstruction of luminescence intensity decay in a letter "C". L1–L5, lens.
The near-infrared excited UCNPs emit light in the visible spectral range. The decay of light intensity over the 2D FOV is a dynamic scene, denoted by \(I(x,y,t)\). The emitted light is collected by the same objective lens, transmits through the dichroic mirror, and is filtered by a band-pass filter (Thorlabs, MF542-20 or Semrock, FF01-660/30-25). Then, a beam splitter (Thorlabs, BS013) equally divides the light into two components. The reflected component is imaged by a CMOS camera (FLIR, GS3-U3-23S6M-C) with a camera lens (Fujinon, HF75SA1) via spatiotemporal integration (denoted as the operator \(\mathbf{T}\)) as View 1, whose optical energy distribution is denoted by \(E_1(x_1,y_1)\). The transmitted component forms an image of the dynamic scene on a transmissive encoding mask with a pseudo-random binary pattern (Fineline Imaging, 50% transmission ratio; 60-µm encoding pixel size). This process of spatial encoding is denoted by the operator \(\mathbf{C}\). Then, the spatially encoded scene is imaged by a mechanical streak camera. In particular, the scene is relayed to the sensor plane of an electron-multiplying (EM) CCD camera (Nüvü Camēras, HNü 1024) by a 4f imaging system consisting of two 100-mm focal length lenses (L4 and L5, Thorlabs, AC254-100-A). A galvanometer scanner (Cambridge Technology, 6220H), placed at the Fourier plane of the 4f imaging system, temporally shears the spatially encoded frames linearly to different spatial locations along the \(x_2\) axis of the EMCCD camera according to their time of arrival. This process of temporal shearing is denoted by the operator \(\mathbf{S}\). Finally, the spatially encoded and temporally sheared dynamic scene is recorded by the EMCCD camera via spatiotemporal integration to form View 2, whose optical energy distribution is denoted by \(E_2(x_2,y_2)\).
By combining the image formation of \(E_1(x_1,y_1)\) and \(E_2(x_2,y_2)\), the data acquisition of SPLIT is expressed by $$E=\mathbf{TM}I, \tag{1}$$ where \(E\) denotes the concatenation of measurements \(\left[E_1,\alpha E_2\right]^{T}\) (the superscript \(T\) denotes the transpose), \(\mathbf{M}\) denotes the linear operator \(\left[\mathbf{1},\alpha\mathbf{SC}\right]^{T}\), and \(\alpha\) is a scalar factor introduced to balance the energy ratio between the two views during measurement42. The hardware of the SPLIT system is synchronized for capturing both views (detailed in "Methods") that are calibrated before data acquisition (detailed in Supplementary Note 1 and Supplementary Fig. 1). After data acquisition, \(E\) is processed by an algorithm that retrieves the datacube of the dynamic scene by leveraging the spatiotemporal sparsity of the dynamic scene and the prior knowledge of each operator43,44. Developed from the plug-and-play alternating direction method of multipliers (PnP-ADMM) framework45,46, the reconstruction algorithm of SPLIT solves the minimization problem of $$\hat{I}=\operatorname*{argmin}_{I}\left\{\frac{1}{2}\left\|\mathbf{TM}I-E\right\|_{2}^{2}+R(I)+\mathbf{I}_{+}(I)\right\}. \tag{2}$$ Here, \(\|\cdot\|_{2}\) represents the \(l_2\) norm. The fidelity term, \(\frac{1}{2}\|\mathbf{TM}I-E\|_{2}^{2}\), represents the similarity between the measurement and the estimated result. \(R(\cdot)\) is the implicit regularizer that promotes sparsity in the dynamic scene. \(\mathbf{I}_{+}(\cdot)\) represents a non-negative intensity constraint.
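To make the dual-view data-acquisition model above concrete, the following sketch applies the three operators to a small synthetic datacube: \(\mathbf{C}\) as a pseudo-random binary mask, \(\mathbf{S}\) as a one-pixel-per-frame shear along the streaking axis, and \(\mathbf{T}\) as temporal integration. The array sizes, the mask, and the shear step are illustrative assumptions, not the instrument's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic dynamic scene I(x, y, t): nt frames of ny x nx pixels.
nt, ny, nx = 8, 32, 32
scene = rng.random((nt, ny, nx))

# Spatial encoding operator C: one pseudo-random binary mask (~50% transmission).
mask = rng.integers(0, 2, size=(ny, nx)).astype(float)

# View 1: spatiotemporal integration T of the unencoded scene.
E1 = scene.sum(axis=0)

# View 2: encode each frame (C), shear frame k by k pixels along x (S),
# then integrate over time (T) on a sensor widened to accommodate the shear.
E2 = np.zeros((ny, nx + nt - 1))
for k in range(nt):
    E2[:, k:k + nx] += mask * scene[k]

# The concatenated measurement is E = [E1, alpha * E2]^T, with alpha
# balancing the energy ratio between the two views.
alpha = 1.0
print(E1.shape, E2.shape)  # (32, 32) (32, 39)
```

Reconstruction then inverts this forward model under sparsity priors; here only the measurement side is simulated.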
Compared to existing reconstruction schemes47,48,49, PnP-ADMM implements a variable splitting strategy with a state-of-the-art denoiser to obtain fast and closed-form solutions to each sub-optimization problem, which produces a high image quality in reconstruction (see Supplementary Notes 2 and 3 and Supplementary Fig. 2). The retrieved datacube of the dynamic scene has a sequence depth (i.e., the number of frames in a reconstructed movie) of 12–100 frames, each containing 460 × 460 \((x,y)\) pixels. The imaging speed is tunable from 4 to 33 thousand frames per second (kfps) (detailed in "Methods"). The reconstructed datacube is then converted to a photoluminescence lifetime map. In particular, for each \((x,y)\) point, the area under the normalized intensity decay curve is integrated to report the value of the photoluminescence lifetime50. Finally, using the approximately linear relationship between the UCNPs' lifetime and the physiologically relevant temperature range (20–46 °C in this work)51,52, the 2D temperature distribution, \(T(x,y)\), is calculated by $$T(x,y)=c_{\mathrm{t}}+\frac{1}{S_{\mathrm{a}}}\int\frac{\hat{I}(x,y,t)}{\hat{I}(x,y,0)}\,\mathrm{d}t, \tag{3}$$ where \(c_{\mathrm{t}}\) is a constant, and \(S_{\mathrm{a}}\) is the absolute temperature sensitivity33. The derivation of Eq. (3) is detailed in Supplementary Note 4. Leveraging the intrinsic frame rate of the EMCCD camera, the SPLIT system can generate lifetime-determined temperature maps at a video rate of 20 Hz.

Quantification of the system's performance of SPLIT

We prepared a series of core/shell UCNP samples to showcase SPLIT's capabilities. These UCNPs shared the same NaGdF4: 2 mol% Er3+, 20 mol% Yb3+ active core of 14.6 nm in size, while differing by the thickness of their undoped NaGdF4 passive shell of 1.9, 3.5, and 5.6 nm (Fig. 2a and detailed in Supplementary Note 5).
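The lifetime extraction and the lifetime-to-temperature conversion of Eq. (3) can be sketched for a single pixel as follows. The sampling interval and the calibration constants `c_t` and `S_a` are hypothetical placeholders chosen in the spirit of the reported values, not the instrument's calibrated constants.

```python
import numpy as np

# One pixel's decay, assumed single-exponential with tau = 450 µs,
# sampled at an illustrative 30-µs frame interval.
dt = 30.0                          # µs per frame (assumed)
t = np.arange(0.0, 3000.0, dt)     # µs
decay = np.exp(-t / 450.0)

# Area under the normalized decay curve (trapezoidal rule); for a
# single exponential, the integral of I(t)/I(0) dt equals the lifetime.
norm = decay / decay[0]
tau_est = dt * (norm.sum() - 0.5 * (norm[0] + norm[-1]))   # µs, ~450

# Eq. (3): linear lifetime-to-temperature conversion T = c_t + tau / S_a.
S_a = -1.90    # µs per °C (hypothetical sensitivity)
c_t = 280.0    # °C (hypothetical calibration offset)
T = c_t + tau_est / S_a
```

In practice this is applied per pixel of the reconstructed datacube, yielding the 2D temperature map.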
All of the UCNP samples were of pure hexagonal crystal phase (Supplementary Fig. 3). Under the 980-nm excitation, upconversion emission bands of all samples were measured at around 525/545 nm and 660 nm, which correspond to the 2H11/2/4S3/2 → 4I15/2 and 4F9/2 → 4I15/2 radiative transitions, respectively (Fig. 2b–c).

Fig. 2: Quantification of the performance of the SPLIT system. a Images of core/shell UCNPs acquired with a transmission electron microscope. Scale bar: 25 nm. b Normalized upconversion spectra of UCNPs shown in (a). c Simplified energy level diagram of Yb3+-Er3+ energy transfer upconversion excitation and emission. d Temporally projected image of photoluminescence intensity decay of the 5.6 nm-thick-shell UCNPs covered by a negative resolution target. e Comparison of averaged light fluence distribution along the horizontal bars (blue) and vertical bars (orange) of Element 5 in Group 4 on the resolution target. Error bar: standard deviation. f Lifetime images of UCNPs with the shell thicknesses of 1.9, 3.5, and 5.6 nm covered by transparencies of letters "C", "A", and "N" in green emission. g Time-lapse averaged emission intensities of the samples. h Histograms of photoluminescence lifetimes in the letters shown in (f).

To characterize SPLIT's spatial resolution, we covered the 5.6 nm-thick-shell UCNP sample with a negative USAF resolution target (Edmund Optics, 55-622). Operating at 33 kfps, SPLIT recorded the photoluminescence decay (Supplementary Movie 1). The temporally projected datacube reveals that the intensity and contrast in the reconstructed image degrade with the decreased spatial feature sizes, eventually leading to the loss of structure whose size approaches that of the encoding pixel (Fig. 2d). The effective spatial resolution was thus determined to be 20 µm (Fig. 2e).
Under these experimental conditions, the minimum power density for the SPLIT system was quantified to be 0.06 W mm−2 (detailed in Supplementary Note 6 and Supplementary Fig. 4). To demonstrate SPLIT's ability to distinguish different lifetimes, we imaged the UCNPs with shell thicknesses of 1.9, 3.5, and 5.6 nm, covered by transparencies of letters "C", "A", and "N", respectively, using a single laser pulse (Supplementary Movie 2). The lifetime maps of these samples are shown in Fig. 2f, which reveals the averaged lifetimes for the 4S3/2 excited state of samples "C", "A", and "N" to be 142, 335, and 478 µs, respectively (Fig. 2g–h). These results were verified by using the standard TCSPC method (detailed in Supplementary Note 7 and Supplementary Fig. 5). SPLIT's reconstruction algorithm shows superior performance compared to the mainstream algorithms widely used in single-shot compressed ultrafast imaging41,42,47,48. Using the experimental data, the comparison demonstrates that the dual-view PnP-ADMM algorithm used by SPLIT is more powerful in preserving spatial features while maintaining a low background, which enables a more accurate lifetime quantification and the ensuing temperature mapping (detailed in Supplementary Note 8 and Supplementary Fig. 6).

Single-shot temperature mapping using SPLIT

We used the 5.6 nm-thick-shell UCNPs as the temperature indicator for SPLIT. The UCNPs' temperature was controlled by a heating plate placed behind the sample. To image the green (4S3/2) and red (4F9/2) upconversion emissions, the sample was covered by transparencies of a lily flower and a maple leaf, respectively. The temperature of the entire sample was measured with both a Type K thermocouple (Omega, HH306A) and a thermal camera (FLIR, E4) as references. The reconstructed lifetime images in the 20–46 °C temperature range are shown in Fig. 3a–b (see the full evolution in Supplementary Movie 3). Plotted in Fig.
3c–d, the time-lapse averaged intensity over the entire FOV shows that the averaged lifetimes of green and red emissions decrease from 489 to 440 µs and from 458 to 398 µs, respectively, which is due to their enhanced multi-phonon deactivation at higher temperatures. We further plotted the relationship between the temperatures and lifetimes for both emission channels (Fig. 3e). Finally, the temperature sensitivities in the preset temperature range were calculated to be \(S_{\mathrm{a}}=-1.90\,\upmu\mathrm{s}\,^{\circ}\mathrm{C}^{-1}\) for the green emission and \(S_{\mathrm{a}}=-2.40\,\upmu\mathrm{s}\,^{\circ}\mathrm{C}^{-1}\) for the red emission (see detailed calculation and further analysis in Supplementary Note 9 and Supplementary Fig. 7). Compared to the green emission, the higher temperature sensitivity of the red emission results from the greater energy separation between its emitting state and the adjacent lower-lying excited state (Fig. 2c). Since the multi-phonon relaxation rate depends exponentially on the number of phonons necessary to deactivate an excited state to the one below it, the increase in phonon energies at higher temperatures has greater influence over the states with a larger energy gap between them53. These results establish lifetime-temperature calibration curves [i.e., Eq. (3)] for ensuing thermometry experiments.

Fig. 3: Single-shot temperature mapping using SPLIT. a, b Lifetime images of green (a) and red (b) upconversion emission bands under different temperatures. c, d Normalized photoluminescence decay curves of green (c) and red (d) emission bands at different temperatures, averaged over the entire field of view. e Relationship between temperature and mean lifetimes of green and red emissions with linear fitting. Error bar: standard deviation from three independent measurements.
f Normalized contrast versus chicken tissue thickness for green and red emission bands with single-component exponential fitting. g Longitudinal temperature monitoring through 0.5 mm-thick fresh chicken tissue. To demonstrate SPLIT's feasibility in a biological environment, we conducted longitudinal temperature monitoring under a phantom, made by using the 5.6 nm-thick-shell UCNPs covered by lift-out grids (Ted Pella, 460-2031-S), overlaid by fresh chicken breast tissue. We investigated SPLIT's imaging depth with varied tissue thicknesses of up to 1 mm (Fig. 3f, Supplementary Note 10, Supplementary Fig. 8, and Supplementary Movie 4). The chicken tissue of 0.5 mm thickness, where both the green and red emissions produced images with full spatial features of the lift-out grid, was used in the following imaging experiments. Subsequently, we cycled the temperature of the sample between 20 and 46 °C. The lifetime distributions of both green and red emissions and their corresponding temperature maps were monitored every 20 and 23 min, respectively, for ~4 h (see the full evolution in Supplementary Fig. 9 and Supplementary Movie 5). As shown in Fig. 3g, the results are in good agreement with the temperature change preset by the heating plate, and decisively showcase how SPLIT can map 2D temperatures over time with high accuracy beneath biological tissue. We also demonstrated SPLIT using a fresh beef phantom as a scattering medium, where both light scattering and absorption are present (detailed in Supplementary Note 10 and Supplementary Fig. 10). The results reveal better penetration of the red emission over the green counterpart due to its weaker scattering and absorption. More importantly, the results confirm the independence of the measured photoluminescence lifetime of UCNPs to tissue thickness and hence the excitation light power density used in our work (≤0.4 W mm−2). 
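The lifetime-temperature calibration underlying these measurements reduces to a linear fit of lifetime against temperature. Below is a minimal sketch using the reported green-emission endpoints (489 µs at 20 °C and 440 µs at 46 °C); the intermediate points are linearly interpolated here purely for illustration, not measured data.

```python
import numpy as np

# Green-band endpoint lifetimes reported in the text; intermediate points
# are linear interpolations added only to illustrate the fitting step.
temps = np.array([20.0, 26.5, 33.0, 39.5, 46.0])           # °C
taus = np.interp(temps, [20.0, 46.0], [489.0, 440.0])      # µs

# Fitting tau = S_a * T + b recovers the absolute sensitivity S_a.
S_a, b = np.polyfit(temps, taus, 1)
print(round(S_a, 2))  # -1.88, consistent with the reported -1.90 µs/°C
```

Inverting this fit (T = (tau - b) / S_a) gives the temperature readout used once the calibration curve is established.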
Single-cell dynamic temperature tracking using SPLIT

To apply SPLIT to dynamic single-cell temperature mapping, we tested a single-layer onion epidermis sample labeled by the 5.6 nm-thick-shell UCNPs (detailed in Supplementary Note 11 and Supplementary Fig. 11). Furthermore, to generate non-repeatable photoluminescent dynamics, the sample was moved across the FOV at a speed of 1.18 mm s−1 by a translation stage. In the 3-second measurement window, the SPLIT system continuously recorded 60 lifetime/temperature maps. Four representative time-integrated images and their corresponding lifetime maps are shown in Fig. 4a–b (see dynamic lifetime mapping in Supplementary Movie 6). Figure 4c shows intensity decay curves from four selected regions with varied intensities in the onion cell sample at 0.05 s. The photoluminescence lifetimes and hence the temperatures remain stable, showing SPLIT's resilience to spatial intensity variation. We also tracked the time histories of the average emitted fluence and lifetime-indicated temperatures of these four regions during the sample's translational motion (Fig. 4d). In this measurement time window, the emitted photoluminescence fluence varied in each selected region. In contrast, the measured temperatures show a small fluctuation of ±0.35 °C, which validates the advantage of PLI thermometry in handling temporal intensity variation.

Fig. 4: Dynamic single-cell temperature mapping using SPLIT. a Representative time-integrated images of a moving onion epidermis cell sample labeled by UCNPs. b Lifetime images corresponding to (a). c Photoluminescence decay profiles at four selected areas [marked by the solid boxes in the first panel of (a)] with varied intensities. d Time histories of averaged fluence and corresponding temperature in the four selected regions during the sample's translational motion.

In summary, we have developed SPLIT for wide-field dynamic temperature sensing in real time.
In data acquisition, SPLIT compressively records the photoluminescence emission over a 2D FOV in two views. The dual-view PnP-ADMM algorithm then reconstructs spatially resolved intensity decay traces, from which a photoluminescence lifetime distribution and the corresponding temperature map are extracted. Used with core/shell NaGdF4:Er3+,Yb3+/NaGdF4 UCNPs, SPLIT has enabled temperature mapping with high sensitivity for both the green and red upconversion emission bands, with a 20-µm spatial resolution in a 1.5 × 1.5 mm2 FOV, at a video rate of 20 Hz. SPLIT has been demonstrated in longitudinal temperature monitoring of a phantom beneath fresh chicken tissue, and applied to dynamic single-cell temperature mapping of a moving single-layer onion epidermis sample. SPLIT advances the technical frontier of optical instrumentation in PLI. The high parallelism in SPLIT's data acquisition drastically improves the overall light throughput. The resulting system, featuring single-shot temperature sensing over a 2D FOV, solves the long-standing throughput limitation of scanning-based techniques (see Supplementary Note 12 and Supplementary Figs. 12–13). In particular, SPLIT improves measurement accuracy by avoiding the artifacts generated by scanning-induced motion blur and excitation intensity fluctuation. More importantly, as shown in Fig. 4, SPLIT extends the application scope of PLI to observing non-repeatable 2D temperature dynamics. Its highly tunable imaging speed also accommodates a variety of UCNPs with a wide span of lifetimes (from hundreds of nanoseconds to milliseconds). Compared with existing single-shot 2D ultrafast imaging modalities based on streak cameras, SPLIT is well suited for dynamic PLI of UCNPs in terms of imaging speed, detection sensitivity, spatial resolution, and cost efficiency (detailed in Supplementary Note 12 and Supplementary Table 1).
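The paper's reconstruction code is proprietary (see the code availability statement below), but the plug-and-play ADMM idea it builds on is generic: alternate a data-fidelity step with an off-the-shelf denoiser standing in for the prior's proximal operator. A single-view toy sketch on a linear model y = Ax (the matrix, signal, and moving-average "denoiser" below are illustrative stand-ins, not the paper's dual-view forward model or its advanced denoiser):

```python
import numpy as np

def pnp_admm(y, A, denoise, rho=1.0, iters=50):
    """Minimal plug-and-play ADMM for y = A @ x: the denoiser replaces
    the proximal operator of an explicit image prior."""
    n = A.shape[1]
    x, v, u = np.zeros(n), np.zeros(n), np.zeros(n)
    lhs = A.T @ A + rho * np.eye(n)     # normal equations of the x-update
    Aty = A.T @ y
    for _ in range(iters):
        x = np.linalg.solve(lhs, Aty + rho * (v - u))  # data-fidelity step
        v = denoise(x + u)                             # plug-in denoising step
        u = u + x - v                                  # dual (multiplier) update
    return v

# Toy problem: recover a piecewise-constant 60-sample trace from
# noisy random projections.
rng = np.random.default_rng(0)
x_true = np.repeat([0.0, 1.0, 0.0], 20)
A = rng.standard_normal((120, 60)) / np.sqrt(120.0)
y = A @ x_true + 0.01 * rng.standard_normal(120)

# A crude moving-average filter stands in for an advanced denoiser.
smooth = lambda z: np.convolve(z, np.ones(5) / 5.0, mode="same")
x_hat = pnp_admm(y, A, smooth)
```

The modularity highlighted in the text is visible here: swapping `smooth` for a stronger denoiser changes the prior without touching the data-fidelity step.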
Finally, the SPLIT system by itself records only the lifetime images; yet, when UCNPs are used as contrast agents, those images carry temperature information in situ, where the UCNPs reside. Thus, compared with thermal imaging cameras, SPLIT supplies superior temperature mapping with higher image contrast and better resilience to background interference (detailed in Supplementary Note 13 and Supplementary Fig. 14). From the perspective of system design, both the dual-view data acquisition and the PnP-ADMM algorithm support SPLIT's high imaging quality. In particular, View 1 preserves the spatial information in the dynamic scene [54], while View 2 retains temporal information by optical streaking via time-to-space conversion. Together, the two views maximally preserve the rich spatiotemporal information. In software, the dual-view PnP-ADMM algorithm provides a powerful modular structure, which allows each sub-problem to be optimized separately with an advanced denoising algorithm to generate high-quality image restoration results. SPLIT offers a versatile PLI temperature-sensing platform. In materials characterization, it could be used in the stress analysis of metal fatigue in turbine blades [55]. In biomedicine, it could be implemented for accurate subcutaneous temperature monitoring for the theranostics of skin diseases (e.g., micro-melanoma) [56,57]. SPLIT's microscopic temperature mapping ability could also be exploited in studies of temperature-regulated cellular signaling [58]. Moreover, the operation of SPLIT could be extended to Stokes emission in lanthanide-doped nanoparticles and to spectrally resolved temperature mapping. All of these topics are promising directions for future research.
Synchronization of the SPLIT system
The optical chopper outputs a transistor-transistor logic (TTL) signal that is synchronized with the generated optical pulses.
This TTL signal is input to a delay generator (Stanford Research Systems, DG 645), which then generates three synchronized TTL signals at 20 Hz. The first two trigger the 3-ms exposures of the EMCCD and CMOS cameras. The third triggers a function generator (Rigol, DG1022Z) that outputs a 20-Hz sinusoidal waveform under the external burst mode to control the rotation of the galvanometer scanner (GS).
Calculation of SPLIT's key parameters
The GS, placed at the Fourier plane of the 4f imaging system formed by lenses L4 and L5 (Fig. 1), deflects temporal information to different spatial positions. Rotating during the data acquisition, the GS changes the reflection angles of the spatial frequency spectra of individual frames with different times of arrival. After the Fourier transformation by lens L5, this angular difference is converted to a lateral shift in space on the EMCCD camera, which results in temporal shearing. An illustration with a simple example is provided in Supplementary Fig. 15. The imaging speed is determined by the data acquisition for View 2. In particular, the reconstructed movie has a frame rate of [59]
$$r=\frac{\gamma_{\mathrm{a}} V_{\mathrm{g}} f_{5}}{t_{\mathrm{s}} d}.$$
Here \(V_{\mathrm{g}}\) is the amplitude of the voltage applied to the GS, and \(\gamma_{\mathrm{a}}\) is a constant that links \(V_{\mathrm{g}}\) to the GS's deflection angle, taking the input waveform into account. \(f_{5} = 100\) mm is the focal length of lens L5, \(t_{\mathrm{s}} = 50\) ms is the period of the sinusoidal voltage waveform applied to the GS, and \(d = 13\) µm is the EMCCD sensor's pixel size. In this work, we varied the voltage over \(V_{\mathrm{g}} = 0.2\)–1.7 V, so the imaging speed of SPLIT ranged from 4 to 33 kfps. In addition, we used \(t_{\mathrm{e}} = 3\) ms as the exposure time of the EMCCD and CMOS cameras.
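As a numerical sanity check, the frame-rate formula, together with the sequence depth \(N_t = r t_{\mathrm{e}}\) given in the text, reproduces the reported operating ranges. The calibration constant \(\gamma_{\mathrm{a}}\) is not stated numerically, so the value below is an assumed one chosen to match the reported 4-kfps lower end:

```python
# Illustrative check of the SPLIT frame-rate and sequence-depth formulas.
gamma_a = 0.13       # rad/V, hypothetical calibration constant (assumed)
f5 = 100e-3          # focal length of lens L5 [m]
t_s = 50e-3          # period of the sinusoidal GS waveform [s]
d = 13e-6            # EMCCD pixel size [m]
t_e = 3e-3           # camera exposure time [s]

def frame_rate(v_g):
    """r = gamma_a * V_g * f5 / (t_s * d), in frames per second."""
    return gamma_a * v_g * f5 / (t_s * d)

def sequence_depth(v_g):
    """N_t = r * t_e: number of reconstructed frames per exposure."""
    return frame_rate(v_g) * t_e

r_low, r_high = frame_rate(0.2), frame_rate(1.7)
n_low, n_high = sequence_depth(0.2), sequence_depth(1.7)
# With this assumed gamma_a, r_low = 4000 fps and n_low = 12 frames,
# matching the reported lower ends; the upper end lands near the
# reported ~33 kfps and ~100 frames.
```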
The sequence depth, \(N_{t}\), is determined by
$$N_{t} = r t_{\mathrm{e}}.$$
In the experiments presented in this work, \(N_{t}\) ranged from 12 to 100 frames. All data needed to evaluate the findings of this study are present in the paper and Supplementary Information. The raw data for Fig. 2 can be downloaded via the following link: https://figshare.com/articles/figure/SPLIT_Fig2/16703413. All other raw data are available from the corresponding authors upon reasonable request. The image reconstruction algorithm is described in detail in Supplementary Information. The custom computer code is not publicly available because it is proprietary and included in a patent application.
Inada, N., Fukuda, N., Hayashi, T. & Uchiyama, S. Temperature imaging using a cationic linear fluorescent polymeric thermometer and fluorescence lifetime imaging microscopy. Nat. Protoc. 14, 1293–1321 (2019).
Wood, M. & Ozanyan, K. Simultaneous temperature, concentration, and pressure imaging of water vapor in a turbine engine. IEEE Sens. J. 15, 545–551 (2015).
Obermayer, D. & Kappe, C. On the importance of simultaneous infrared/fiber-optic temperature monitoring in the microwave-assisted synthesis of ionic liquids. Org. Biomol. Chem. 8, 114–121 (2010).
Zhang, Z., Wang, J. & Chen, C. Near-infrared light-mediated nanoplatforms for cancer thermo-chemotherapy and optical imaging. Adv. Mater. 25, 3869–3880 (2013).
Chen, Z. et al. Phosphorescent polymeric thermometers for in vitro and in vivo temperature sensing with minimized background interference. Adv. Funct. Mater. 26, 4386–4396 (2016).
Jaque, D. & Vetrone, F. Luminescence nanothermometry. Nanoscale 4, 4301–4326 (2012).
Datta, R. et al. Fluorescence lifetime imaging microscopy: fundamentals and advances in instrumentation, analysis, and applications. J. Biomed. Opt. 25, 071203 (2020).
Kurokawa, H. et al. High resolution imaging of intracellular oxygen concentration by phosphorescence lifetime. Sci. Rep. 5, 1–13 (2015).
Childs, P., Greenwood, J. & Long, C. Review of temperature measurement. Rev. Sci. Instrum. 71, 2959–2978 (2000).
Gao, L. et al. Single-cell photoacoustic thermometry. J. Biomed. Opt. 18, 026003 (2013).
Snyder, W., Wan, Z., Zhang, Y. & Feng, Y. Classification-based emissivity for land surface temperature measurement from space. Int. J. Remote Sens. 19, 2753–2774 (1998).
Suhling, K. et al. Fluorescence lifetime imaging (FLIM): basic concepts and some recent developments. Med. Photonics 27, 3–40 (2015).
Chelushkin, P. & Tunik, S. Phosphorescence lifetime imaging (PLIM): state of the art and perspectives. In Progress in Photon Science (eds Yamanouchi, K., Tunik, S. & Makarov, V.) (Springer Nature, 2019).
Labrador-Páez, L. et al. Reliability of rare-earth-doped infrared luminescent nanothermometers. Nanoscale 10, 22319–22328 (2018).
Pickel, A. et al. Apparent self-heating of individual upconverting nanoparticle thermometers. Nat. Commun. 9, 1–12 (2018).
Shen, Y. et al. In vivo spectral distortions of infrared luminescent nanothermometers compromise their reliability. ACS Nano 14, 4122–4133 (2020).
Becker, W. Fluorescence lifetime imaging – techniques and applications. J. Microsc. 247, 119–136 (2012).
Bolek, P. et al. Ga-modified YAG:Pr3+ dual-mode tunable luminescence thermometers. Chem. Eng. J. 421, 129764 (2021).
Gao, H. et al. A simple yet effective AIE-based fluorescent nano-thermometer for temperature mapping in living cells using fluorescence lifetime imaging microscopy. Nanoscale Horiz. 5, 488–494 (2020).
Maciejewska, K., Bednarkiewicz, A. & Marciniak, L. NIR luminescence lifetime nanothermometry based on phonon-assisted Yb3+–Nd3+ energy transfer. Nanoscale Adv. 3, 4918–4925 (2021).
Zhang, H. et al. Dual-emissive phosphorescent polymer probe for accurate temperature sensing in living cells and zebrafish using ratiometric and phosphorescence lifetime imaging microscopy. ACS Appl. Mater. Interfaces 10, 17542–17550 (2018).
Tan, M. et al. Accurate in vivo nanothermometry through NIR-II lanthanide luminescence lifetime. Small 16, 2004118 (2020).
Allison, S. W. et al. Nanoscale thermometry via the fluorescence of YAG:Ce phosphor particles: measurements from 7 to 77 °C. Nanotechnology 14, 859 (2003).
Benninger, R. et al. Quantitative 3D mapping of fluidic temperatures within microchannel networks using fluorescence lifetime imaging. Anal. Chem. 78, 2272–2278 (2006).
Graham, E. M. et al. Quantitative mapping of aqueous microfluidic temperature with sub-degree resolution using fluorescence lifetime imaging microscopy. Lab Chip 10, 1267–1273 (2010).
Schlegel, G. et al. Fluorescence decay time of single semiconductor nanocrystals. Phys. Rev. Lett. 88, 137401 (2002).
Auzel, F. Upconversion and anti-Stokes processes with f and d ions in solids. Chem. Rev. 104, 139–174 (2004).
Skripka, A. et al. Spectral characterization of LiYbF4 upconverting nanoparticles. Nanoscale 12, 17545–17554 (2020).
Jacques, S. Optical properties of biological tissues: a review. Phys. Med. Biol. 58, R37 (2013).
Vetrone, F. et al. Temperature sensing using fluorescent nanothermometers. ACS Nano 4, 3254–3258 (2010).
Brites, C. D. S. et al. Thermometry at the nanoscale. Nanoscale 4, 4799–4829 (2012).
Rostami, I., Alanagh, H., Hu, Z. & Shahmoradian, S. H. Breakthroughs in medicine and bioimaging with up-conversion nanoparticles. Int. J. Nanomed. 14, 7759 (2019).
Zhou, J., Del Rosal, B., Jaque, D., Uchiyama, S. & Jin, D. Advances and challenges for fluorescence nanothermometry. Nat. Methods 17, 967–980 (2020).
Qin, H. et al. Tuning the upconversion photoluminescence lifetimes of NaYF4:Yb3+,Er3+ through lanthanide Gd3+ doping. Sci. Rep. 8, 12683 (2018).
Howard, S., Straub, A., Horton, N., Kobat, D. & Xu, C. Frequency-multiplexed in vivo multiphoton phosphorescence lifetime microscopy. Nat. Photonics 7, 33–37 (2013).
Suhling, K. et al. Wide-field TCSPC-based fluorescence lifetime imaging (FLIM) microscopy. Proc. SPIE 9858, 98580J (2016).
Hirvonen, L., Fisher, M., Suhling, K. & Nomerotski, A. Photon counting phosphorescence lifetime imaging with TimepixCam. Rev. Sci. Instrum. 88, 013104 (2017).
Sen, R. et al. New luminescence lifetime macro-imager based on a Tpx3Cam optical camera. Biomed. Opt. Express 11, 77–88 (2020).
Franke, R. & Holst, G. Frequency-domain fluorescence lifetime imaging system (pco.flim) based on an in-pixel dual tap control CMOS image sensor. Proc. SPIE 9328, 93281K (2015).
Xiong, B. & Fang, Q. Luminescence lifetime imaging using a cellphone camera with an electronic rolling shutter. Opt. Lett. 45, 81–84 (2020).
Liang, J. Punching holes in light: recent progress in single-shot coded-aperture optical imaging. Rep. Prog. Phys. 83, 116101 (2020).
Liang, J. et al. Single-shot real-time video recording of a photonic Mach cone induced by a scattered light pulse. Sci. Adv. 3, e1601814 (2017).
Liu, Y., Yuan, X., Suo, J., Brady, D. J. & Dai, Q. Rank minimization for snapshot compressive imaging. IEEE Trans. Pattern Anal. Mach. Intell. 41, 2990–3006 (2018).
Yuan, X., Liu, Y., Suo, J. & Dai, Q. Plug-and-play algorithms for large-scale snapshot compressive imaging. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 1447–1457 (2020).
Chan, S., Wang, X. & Elgendy, O. Plug-and-play ADMM for image restoration: fixed-point convergence and applications. IEEE Trans. Comput. Imag. 3, 84–98 (2016).
Yuan, X. et al. Compressive hyperspectral imaging with side information. IEEE J. Sel. Top. Signal Process. 9, 964–976 (2015).
Liang, J., Zhu, L. & Wang, L. Single-shot real-time femtosecond imaging of temporal focusing. Light Sci. Appl. 7, 42 (2018).
Wang, P., Liang, J. & Wang, L. Single-shot ultrafast imaging attaining 70 trillion frames per second. Nat. Commun. 11, 2091 (2020).
Liang, J. et al. Single-shot stereo-polarimetric compressed ultrafast photography for light-speed observation of high-dimensional optical transients with picosecond resolution. Nat. Commun. 11, 5252 (2020).
May, P. & Berry, M. Tutorial on the acquisition, analysis, and interpretation of upconversion luminescence data. Methods Appl. Fluoresc. 7, 023001 (2019).
Dos Santos, P., De Araujo, M., Gouveia-Neto, A., Medeiros Neto, J. & Sombra, A. Optical temperature sensing using upconversion fluorescence emission in Er3+/Yb3+-codoped chalcogenide glass. Appl. Phys. Lett. 73, 578–580 (1998).
Miller, M. & Wright, J. Multiphonon and energy transfer relaxation in charge compensated crystals. J. Chem. Phys. 71, 324–338 (1979).
Liu, X., Zhang, S., Yurtsever, A. & Liang, J. Single-shot real-time sub-nanosecond electron imaging aided by compressed sensing: analytical modeling and simulation. Micron 117, 47–54 (2019).
Wang, R. et al. Thermomechanical fatigue experiment and failure analysis on a nickel-based superalloy turbine blade. Eng. Fail. Anal. 102, 35–45 (2019).
Jung, H. et al. Organic molecule-based photothermal agents: an expanding photothermal therapy universe. Chem. Soc. Rev. 47, 2280–2297 (2018).
Shen, Y. et al. Ag2S nanoheaters with multiparameter sensing for reliable thermal feedback during in vivo tumor therapy. Adv. Funct. Mater. 30, 2002730 (2020).
Wang, C. et al. Determining intracellular temperature at single-cell level by a novel thermocouple method. Cell Res. 21, 1517–1519 (2011).
Liu, X., Liu, J., Jiang, C., Vetrone, F. & Liang, J. Single-shot compressed optical-streaking ultra-high-speed photography. Opt. Lett. 44, 1387–1390 (2019).
The authors thank Professor Aycan Yurtsever and Wanting He for experimental assistance and fruitful discussion. Natural Sciences and Engineering Research Council of Canada (RGPIN-2017-05959, RGPAS-2017-507845, I2IPJ-555593-20, RGPIN-2018-06217, RGPAS-2018-522650); Canada Foundation for Innovation and Ministère de l'Économie et de l'Innovation du Québec (37146); Canadian Cancer Society (707056); New Frontier in Research Fund (NFRFE-2020-00267); Fonds de Recherche du Québec–Nature et Technologies (2019-NC-252960); Fonds de Recherche du Québec–Santé (267406, 280229).
Artiom Skripka Present address: Nanomaterials for Bioimaging Group, Departamento de Física de Materiales, Facultad de Ciencias, Universidad Autónoma de Madrid, Madrid, 28049, Spain and The Molecular Foundry, Lawrence Berkeley National Laboratory, Berkeley, CA, 94720, USA
These authors contributed equally: Xianglei Liu, Artiom Skripka.
Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, 1650 boulevard Lionel-Boulet, Varennes, Québec, J3X1S2, Canada: Xianglei Liu, Artiom Skripka, Yingming Lai, Cheng Jiang, Jingdan Liu, Fiorenzo Vetrone & Jinyang Liang
X.L. designed and built the system, conducted the experiments, developed the reconstruction algorithm, and analyzed the data. A.S. prepared the UCNPs, conducted some experiments, and analyzed the data. Y.L. contributed to the algorithm development. C.J. and J. Liu conducted some experiments. F.V. and J. Liang initiated the project. J. Liang proposed the concept, contributed to experimental design, and supervised the project. All authors wrote and revised the manuscript.
Correspondence to Fiorenzo Vetrone or Jinyang Liang.
The authors disclose the following patent applications: WO 2020/154806 A1 (J. Liang, F.V., and X.L.) and US Provisional 63/260,511 (J. Liang, F.V., X.L., and A.S.).
Peer review information Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work. Peer reviewer reports are available.
Liu, X., Skripka, A., Lai, Y. et al. Fast wide-field upconversion luminescence lifetime thermometry enabled by single-shot compressed ultrahigh-speed imaging. Nat. Commun. 12, 6401 (2021). https://doi.org/10.1038/s41467-021-26701-1
2012, 19: 86-96. doi: 10.3934/era.2012.19.86
The pentagram map in higher dimensions and KdV flows
Boris Khesin 1 and Fedor Soloviev 2
1 School of Mathematics, Institute for Advanced Study, Princeton, NJ 08540, USA, and Department of Mathematics, University of Toronto, Toronto, ON M5S 2E4, Canada
2 Department of Mathematics, University of Toronto, Toronto, ON M5S 2E4, Canada
Published September 2012
We extend the definition of the pentagram map from 2D to higher dimensions and describe its integrability properties for both closed and twisted polygons by presenting its Lax form. The corresponding continuous limit of the pentagram map in dimension $d$ is shown to be the $(2,d+1)$-equation of the KdV hierarchy, generalizing the Boussinesq equation in 2D.
Keywords: Lax equation, pentagram map, KdV hierarchy, completely integrable system, spectral curve.
Mathematics Subject Classification: Primary: 37J35; Secondary: 37K10, 37K25, 53A2.
Citation: Boris Khesin, Fedor Soloviev. The pentagram map in higher dimensions and KdV flows. Electronic Research Announcements, 2012, 19: 86-96. doi: 10.3934/era.2012.19.86
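For readers new to the construction, the 2D case that the paper generalizes is easy to state computationally: each vertex of the image polygon is the intersection of two consecutive "short" diagonals $P_iP_{i+2}$. A minimal sketch of one iteration of the 2D map (only the classical planar map, not the higher-dimensional map or the Lax form studied in the paper):

```python
import numpy as np

def line_intersection(p1, p2, p3, p4):
    """Intersection of the lines through (p1, p2) and (p3, p4), via a 2x2 solve."""
    d1, d2 = p2 - p1, p4 - p3
    t = np.linalg.solve(np.column_stack([d1, -d2]), p3 - p1)[0]
    return p1 + t * d1

def pentagram_map(vertices):
    """One step of the 2D pentagram map: intersect consecutive
    short diagonals P_i P_{i+2}."""
    n = len(vertices)
    diagonals = [(vertices[i], vertices[(i + 2) % n]) for i in range(n)]
    return np.array([
        line_intersection(*diagonals[i], *diagonals[(i + 1) % n])
        for i in range(n)
    ])

# A regular pentagon maps to a smaller regular pentagon: the map is
# projectively natural, so a symmetric input stays symmetric.
theta = 2.0 * np.pi * np.arange(5) / 5.0
pentagon = np.column_stack([np.cos(theta), np.sin(theta)])
image = pentagram_map(pentagon)
```

Iterating `pentagram_map` produces the shrinking sequence of polygons whose integrable structure (for closed and twisted polygons) is the subject of the announcement.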
Reason for the discreteness arising in quantum mechanics?
What is the most essential reason that actually leads to the quantization? I am reading the book on quantum mechanics by Griffiths. The quanta in the infinite potential well, for example, arise due to the boundary conditions, and the quanta in the harmonic oscillator arise due to the commutation relations of the ladder operators, which give energy eigenvalues differing by a multiple of $\hbar\omega$. But what actually is the reason for the discreteness in quantum theory? Which postulate is responsible for that? I tried going backwards, but for me it somehow seems to come magically out of the mathematics.
quantum-mechanics mathematical-physics quantization foundations discrete
The other postulate is that it arises in the dynamics of little tiny strings, which is perfectly consistent with all observations, but such things are still viewed as speculative. – user11547 Oct 6 '12 at 15:38
In reviewing some of the responses and your replies, I am curious as to what you are actually after in the question "But what actually is the reason for discreteness in quantum theory?" The current high-scoring answers are tautological, since they rely largely on the enforcement of boundary conditions of some sort. So it's not entirely clear what the goal of the question is. I was wondering if you could clarify. – user11547 Oct 7 '12 at 15:06
@HalSwyers: I want an intuitive (if possible, not mathematical) explanation for how discreteness in the theory of quantum mechanics actually arises. So, what feature of QM makes it discrete, rather than the continuous solutions in classical mechanics? – user7757 Oct 7 '12 at 16:04
@ramanujan_dirac: well, the answer there is simply that we define a fundamental unit of action $\hbar$. This partitions the respective Hilbert/phase space (depending on use).
This is, at some level, an arbitrary feature; however, experiment proves that this is how nature operates. As with most things in physics, the proof is usually not mathematical but empirical. At some level, the universe is the ultimate black box. We can ask it well-formulated questions, and it will give an answer, but its inner workings are still too complex for us to determine. – user11547 Oct 7 '12 at 17:21
If I'm only allowed to use one single word to give an oversimplified intuitive reason for the discreteness in quantum mechanics, I would choose the word 'compactness'. Examples: The finite number of states in a compact region of phase space. See e.g. this & this Phys.SE posts. The discrete spectrum for Lie algebra generators of a compact Lie group, e.g. angular momentum operators. See also this Phys.SE post. On the other hand, the position space $\mathbb{R}^3$ in elementary non-relativistic quantum mechanics is not compact, in agreement with the fact that we can in principle find the point particle at any continuous position $\vec{r}\in\mathbb{R}^3$. See also this Phys.SE post. – Qmechanic♦
I absolutely agree that the most fundamental reason is the symplectic noncommutative structure, which quantizes phase space areas and limits information (in the sense of the dimensionality of a complex vector space) to phase space area. But I'm not sure if this is the whole story if you talk about observed discreteness like in a double-slit experiment or atomic spectroscopy. – A.O.Tell Oct 6 '12 at 14:11
Awesome answer. I am absolutely delighted! – user7757 Oct 6 '12 at 17:16
There are several forms of discreteness in quantum theory. The simplest one is the discreteness of eigenvalues and the associated countable eigenstates. Those arise similarly to the discrete standing waves on a guitar string. The boundary conditions only allow certain standing waves that nicely fit into the enforced region in space.
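This standing-wave quantization can be made quantitative: for an infinite square well of width $L$, demanding $\psi(0)=\psi(L)=0$ leaves only $\psi_n \propto \sin(n\pi x/L)$, with energies $E_n = n^2\pi^2\hbar^2/(2mL^2)$. A quick numerical sketch for an electron in a 1-nm well (the well width is just an illustrative choice):

```python
import numpy as np

hbar = 1.054571817e-34   # reduced Planck constant [J s]
m_e = 9.1093837015e-31   # electron mass [kg]
eV = 1.602176634e-19     # electron volt [J]
L = 1e-9                 # illustrative 1-nm well width [m]

def well_energy(n):
    """Allowed energies of the 1D infinite square well: E_n grows as n^2."""
    return n**2 * np.pi**2 * hbar**2 / (2 * m_e * L**2)

levels_eV = [well_energy(n) / eV for n in (1, 2, 3)]
# The boundary conditions leave a discrete ladder: E_1 is about 0.38 eV
# here, and the level spacing grows as n^2.
```

Here the discreteness comes entirely from the boundary conditions, the first mechanism described in this answer; the guitar string is quantized by exactly the same counting.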
Even though the string is a continuous object, its spectrum becomes discrete and is naturally labeled by natural numbers. Exactly the same thing happens in quantum potentials that are unbounded from above, like the infinite well or the harmonic oscillator, where you also get discrete standing quantum waves. (Other potentials can generate both discrete and continuous eigenvalues at the same time.) Another reason for discreteness comes in with multi-particle systems. Quantum theory requires that a system realized in space-time contains a unitary representation of the symmetry group of space-time, the Lorentz group. In fact, you can define a particle in quantum theory as a subsystem that contains such a group representation. And because you can't have any non-integer fraction of a unitary group representation, you need to have an integer number of them in your total system. So the number of particles is also an (expected) discrete feature, and it plays a role when you talk about single photons, for example, which are either absorbed completely or not at all. And finally there is a form of discreteness that comes with quantum measurement. The measurement postulate says that the result of a measurement is an eigenvalue of a hermitian operator called an observable. Now, the existence of discrete spectra for these operators is related to my first point (boundary conditions), but this one goes deeper. While the existence of a discrete spectrum of the energies of a system still allows all continuous energy values by superposition, the measurement outcome results in exactly one (often discrete) result. This is responsible for the discreteness of the beams in the Stern-Gerlach experiment, for example. Why quantum measurement works this way is essentially an open question even today. There are some approaches to answer it, but there is no generally accepted answer that would explain all aspects convincingly. – A.O.Tell
Thanks for the answer.
Firstly, I would like to know which postulate/property of quantum mechanics entails a unitary representation of the symmetry group (is it because of unitarity, i.e., the fact that the sum of the probabilities should be 1?). Secondly, I didn't understand your statement "the measurement outcome results in exactly one (often discrete) result." What do you mean by a discrete result? When we say that QM is discrete, we mean that the possible outcomes can take only specific values, but how can you talk about the discreteness of a specific result? – user7757 Oct 6 '12 at 13:21
(contd) Do you mean to say that there is no accepted explanation for the discrete spectrum of the eigenvalues? And that, therefore, the foundations of QM are basically empirical? Help will be much appreciated on these matters. – user7757 Oct 6 '12 at 13:22
What I meant is that the outcome is picked from a discrete set. Some measurements allow outcomes from a continuous set, like a position measurement for example. – A.O.Tell Oct 6 '12 at 13:22
No, the discrete spectrum of eigenvalues is mathematically well understood. What is not understood is why a measurement forces a system to be in one of the eigenstates associated with the measured eigenvalue after the measurement. This is essentially known as the "quantum measurement problem". You can surely find a lot of information about it if you look for it. – A.O.Tell Oct 6 '12 at 13:25
Regarding the unitary representations of symmetry, that issue has mostly been raised by Weyl and Wigner, who studied the representation of groups in quantum theory. I'm afraid I cannot explain the details in a comment, but the basic idea is that physically observable symmetries can be described as groups, and the representations of these groups must be contained in the describing mathematics. And it turns out that for quantum theory the only option is a unitary representation.
$\endgroup$ – A.O.Tell Oct 6 '12 at 13:28 If you want, you can go back to Planck's derivation of the black-body energy spectrum, otherwise known as Planck's law, as well as Einstein's use of Planck's work in his explanation of the photoelectric effect (which garnered him the Nobel prize), in order to first understand some of the experimental motivation. However, to understand the roots of quantum mechanics in atomic physics, one must go back to the Bohr-Rutherford model of hydrogen. An Introduction to Quantum Physics by French and Taylor discusses the Bohr-Rutherford model of the hydrogen atom on page 24. This model was introduced around 1913, and Bohr provided two key postulates: An atom has a number of possible "stationary states." In any one of these states the electrons perform orbital motions according to Newtonian mechanics, but (contrary to the predictions of classical electromagnetism) do not radiate so long as they remain in fixed orbits. When the atom passes from one stationary state to another, corresponding to a change in orbit (a "quantum jump") by one of the electrons in the atom, radiation is emitted in the form of a photon. The photon energy is just the energy difference between the initial and final states of the atom. The classical frequency $\nu$ is related to this energy through the Planck-Einstein relation: $$E_{photon} = E_i - E_f = h\nu$$ This relation was described in Bohr's paper On the Constitution of Atoms and Molecules. These postulates are slightly dated in modern conceptions of electron motion, since we now understand things better in terms of the Schrodinger equation, which allows for an extremely accurate model of the hydrogen atom. However, one of the key concepts Bohr introduced is the Correspondence Principle, which according to French and Taylor:
...requires classical and quantum predictions to agree in the limit of large quantum numbers...
This is a key ingredient in modern physics, and is best understood in terms of asymptotic analysis. Most modern theories connect to real observed phenomena in the large N limit of the theory. Admittedly these are the practical origins of why we have quantum mechanics; as for the reason nature chose these things, the answer might be very anthropic. We simply wouldn't exist without them. Dirac frequently pondered the question why, and here was his answer in 1963: It seems to be one of the fundamental features of nature that fundamental physical laws are described in terms of a mathematical theory of great beauty and power, needing quite a high standard of mathematics for one to understand it. You may wonder: Why is nature constructed along these lines? One can only answer that our present knowledge seems to show that nature is so constructed. We simply have to accept it. One could perhaps describe the situation by saying that God is a mathematician of a very high order, and He used very advanced mathematics in constructing the universe. Despite several modern attempts to attack the more metaphysical aspects of this, and give them rigor, there is still no really good answer... as Feynman or Mermin said: Shut up and calculate! $\begingroup$ I hope you realize that when Mermin said "Shut up and calculate!" he did not intend this as advice. In his own words, he was "dismissing an interpretive position of others by lampooning it as a 'shut up and calculate interpretation'." $\endgroup$ – Peter Shor Oct 7 '12 at 0:58 $\begingroup$ @PeterShor Absolutely agree! The point that I failed to convey but was trying to make is that this question is about why nature uses stationary states. We have ample experimental proof, and some proofs of stationary states arising in certain contexts, but a reason why this should occur in nature is, I suspect, unanswerable. I find the other responses to this question to be tautological.
String theory is on the right track since at least it keeps this very fundamental. $\endgroup$ – user11547 Oct 7 '12 at 15:01 $\begingroup$ You are saying that this question is not answerable by reason. You have to depend on experimental truth. Right? @HalSwyers $\endgroup$ – Self-Made Man Jun 18 '14 at 11:24 In a more mathematical sense, the discreteness just arises out of the mathematics. For example: the Schrodinger equation is a classic Sturm-Liouville problem in ODE. https://en.wikipedia.org/wiki/Sturm–Liouville_theory That means we get eigenfunctions (our eigenstates in QM) and eigenvalues corresponding to those eigenfunctions (our energy levels). The Hamiltonian operator in the Schrodinger equation would be our self-adjoint Sturm-Liouville operator. – Noon36 A very interesting question, indeed! In the late 19th century physics faced a serious crisis: classical physics at that time predicted that the radiation intensity emitted by a black body must increase monotonically with increasing wave frequency. This can be seen from the graph (black curve, 5000 K). By summing the energies which a black body radiates away over all frequencies, one can show that the total must approach infinity. Thus a black body would almost instantly radiate all its energy away and cool down to absolute zero. This is known as the "ultraviolet catastrophe". But in practice, this was not the case. A black body actually radiated according to a law unknown at that time (blue curve, 5000 K). In 1900 Max Planck, using assumptions that seemed strange at the time, namely that energy is absorbed or emitted discretely, in energy quanta ($E=h\nu$), was able to derive the correct spectral intensity distribution law and resolve the ultraviolet catastrophe: $$ B_{\lambda }(\lambda ,T)={\frac {2hc^{2}}{\lambda ^{5}}}{\frac {1}{e^{hc/(\lambda k_{\mathrm {B} }T)}-1}} $$ Albert Einstein in 1905 once again patched physics and showed that Planck's quanta are not just an empty theoretical construct, but real physical particles, which we now call photons.
answered Jan 3 at 12:58 – Agnius Vasiliauskas The discreteness of quantum mechanics is evident from the experimental evidence. Any experiment, take for example the Stern-Gerlach experiment, will yield probabilistic answers under identical experimental conditions. The matrix structure of quantum mechanics allows us to calculate only probability amplitudes of processes to happen. – Prathyush $\begingroup$ I am asking for a theoretical reason. $\endgroup$ – user7757 Oct 6 '12 at 12:24 $\begingroup$ Operator algebra is responsible. The state is a vector in the Hilbert space. States can be decomposed into the eigenstates of Hermitian operators. And measurement happens with respect to the eigenmodes of suitable Hermitian operators. $\endgroup$ – Prathyush Oct 6 '12 at 12:40
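The Sturm-Liouville remark above can be made concrete with a small numerical sketch (the grid size and parameter names are illustrative choices, not from the answers): discretizing the infinite square well Hamiltonian on a grid gives a finite Hermitian matrix whose eigenvalues come out discrete, approximating $E_n = n^2\pi^2/2$ in units where $\hbar = m = L = 1$.

```python
import numpy as np

# Finite-difference sketch of the infinite square well (hbar = m = 1, width L = 1).
# The Hamiltonian H = -(1/2) d^2/dx^2 with hard-wall boundary conditions becomes
# a tridiagonal matrix; its eigenvalues approximate E_n = n^2 * pi^2 / 2.
N = 500                                  # number of interior grid points (arbitrary choice)
L = 1.0
dx = L / (N + 1)
main = np.full(N, 1.0 / dx**2)           # diagonal of -(1/2) * discrete Laplacian
off = np.full(N - 1, -0.5 / dx**2)       # off-diagonals
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)[:3]            # three lowest, automatically discrete
exact = np.array([n**2 * np.pi**2 / 2 for n in (1, 2, 3)])
print(E)        # close to [4.93, 19.74, 44.41]
print(exact)
```

The spectrum's 1 : 4 : 9 spacing emerges purely from the boundary conditions, which is exactly the Sturm-Liouville mechanism described in the answer.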
by David Thomson The following is from Secrets of the Aether: Distributed frequency is equal to resonance. Viewing resonance in just one dimension of frequency is like viewing area in just one dimension of length. The true meaning of resonance is lost when we change its dimensions. The unit of resonance indicates there are two distinct dimensions of frequency involved. \[rson = fre{q^2} \tag{6.39}\] Modern physics does not measure capacitance and inductance as square roots, yet the resonance equation is usually expressed as: \[F = \frac{1}{{2\pi \sqrt {LC} }} \tag{6.40}\] where $F$ is the "resonant frequency," $L$ is the inductance and $C$ is the capacitance. ("Resonant frequency" is redundant and incorrect. It is like saying "surface length.") Equation (6.40) loses much of its meaning by making it appear that the inductance and capacitance measurements are square roots and by expressing the resonance in terms of frequency. It is as though modern physics has not yet discovered the unit of resonance. To make the math of resonance compatible with the rest of physics, the correct expression would keep the natural measurements of inductance and capacitance and notate the result as frequency squared. In the Aether Physics Model, resonance arises as equation (6.41), a different equation from the Standard Model resonance equation (6.40). \[rson = \frac{1}{{4\pi \cdot indc \cdot capc}} \tag{6.41}\] Equation (6.41) differs from the Standard Model resonance equation by a factor of $\sqrt \pi $ and yet it produces true resonance in physical experiments. This is not to say the Standard Model resonance equation is wrong. It is merely incomplete. There are actually three resonance equations, which are related through the Pythagorean Theorem.
We express the three resonance equations in terms of a common denominator of ${4{\pi ^2}}$ and in quantum measurement units: \[rson1 = \frac{1}{{4{\pi ^2} \cdot indc \cdot capc}} \tag{6.42}\] \[rson2 = \frac{{\pi - 1}}{{4{\pi ^2} \cdot indc \cdot capc}} \tag{6.43}\] \[rson3 = \frac{\pi }{{4{\pi ^2} \cdot indc \cdot capc}} \tag{6.44}\] Equations (6.42) to (6.44) are related such that: \[rson1 + rson2 = rson3 \tag{6.45}\] The rson1 equation is identical to the Standard Model equation for resonance (6.40), and is associated with the highest potential. The rson3 equation is the true resonance of an inductive-capacitive circuit and is identical to equation (6.41). Both the rson2 and rson3 equations resonate with potential near zero. The resonance unit indicates that resonance must be measured as a distributed quantity in order for us to arrive at the correct value. Present measurement equipment is designed to measure resonance in only one dimension of frequency. Because familiarity with the time domain exists at the macro level of existence, modern physics also measures the quantum realm in the time domain. The reciprocal of time is frequency, not resonance. It is a significant error that modern physics does not recognize resonance as a distributed unit. The quantum realm exists in a five-dimensional space-resonance as opposed to a four-dimensional space-time. If physicists wish to understand quantum existence properly, then we must design measurement equipment to measure directly in the resonance domain. Presently, Fourier analysis attempts to account for this shortcoming by mathematically converting time-domain measurements into frequency-domain data. I made the following post to the Pupman Tesla Coil mailing list on March 8, 2018: If you are looking for a truly fascinating coil design with groundbreaking potential, try building a magnifier with three tertiary coils on a common grounded frame.
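Returning to equations (6.42)-(6.45) above, a short numerical sketch can check how the quantities relate (the inductance and capacitance values are arbitrary illustrative choices, not from the text):

```python
import math

# L and C are arbitrary illustrative values; the rson quantities follow the
# text's equations (6.42)-(6.44), and f_standard is the usual formula (6.40).
L = 2.5e-3   # inductance in henries
C = 1.0e-9   # capacitance in farads

f_standard = 1.0 / (2 * math.pi * math.sqrt(L * C))   # Hz, equation (6.40)

rson1 = 1.0 / (4 * math.pi**2 * L * C)                # equation (6.42)
rson2 = (math.pi - 1) / (4 * math.pi**2 * L * C)      # equation (6.43)
rson3 = math.pi / (4 * math.pi**2 * L * C)            # equation (6.44)

# rson1 is exactly the square of the standard resonant frequency, and the
# sum rule (6.45) holds by construction of the shared denominator.
assert abs(rson1 - f_standard**2) < 1e-6 * rson1
assert abs(rson1 + rson2 - rson3) < 1e-6 * rson3
```

Numerically, rson1 is $F^2$, i.e. the "frequency squared" unit the text argues for, while the identity rson1 + rson2 = rson3 is an algebraic consequence of $1 + (\pi - 1) = \pi$ in the numerators.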
From my decades of studying Tesla literature, this is how Tesla built the flying triangle, which has since been perfected by US black projects. Back in the 1990s, there was more Tesla information on the Internet and BBS forums, which has since disappeared. One such report was of Tesla having built a model flying triangle configuration in his lab. There is one primary and secondary coil, which drives three tertiary coils. All are grounded to a single aluminum frame, which is not in contact with Earth ground. The tertiary coils are each 1/3 out of phase with each other, and have variable inductance or variable capacitance to finely adjust the resonance. The three tertiary coils develop a rotating magnetic field around the entire coil setup, which creates what I call a space bubble. The space in this macro rotating magnetic field becomes gravitationally separated from the space around it. By changing the characteristics of the rotating magnetic field space bubble, you can build a free moving vehicle that can navigate through space with reduced gravitational influence from the Earth. The orientation of the tertiary coils with respect to the Earth ground plane is what keeps the system from tumbling. With the tertiary coils perpendicular to the Earth ground plane, the rotating magnetic field orientation will also have an axis perpendicular to the Earth ground plane. I believe Tesla's first model was an open frame. The rotating magnetic field space bubble will exist whether the frame is open, or whether it is completely enclosed around the Tesla coil system. However, a closed frame system should be more stable, as the conductive frame will give the rotating magnetic field space bubble a more reliable structure. This is something I have been planning on for years, but have not yet put together.
The key component of this system will be the design of the tertiary coils to be in 1/3 phase with each other and also to have a controllable variable inductance or controllable variable capacitance with just the right range of adjustment. I followed up with the following comment: Thanks, Jim, for the link to Antonio's work. This just clarified for me how the three tertiary coils should be tuned. A tertiary coil would be way out of tune with the resonating primary and secondary power supply if it was tuned to 1/3 wavelength. Instead, there should be three quarter-wave steps and a missing step, such that the tertiary coils would fire X-X-X-O, or 1/4, 1/2, 3/4, 0. This makes sense since a magnetic field, whether rotating or not, is asymmetrical. And it is more than just the magnetic field that is at play here. Tertiary coils are acting strongly on the electrostatic field, in addition to the magnetic field. They are pushing electrons and ions. When three tertiary coils are mounted to a common ground, and tuned to produce maximum potential in succession, they will push the ions circularly, and perpendicular to the axis of the frame. This is what creates a rotating magnetic field plasma. The plasma will likely be double layered, with negative ions moving in one direction, and positive ions moving in the opposite direction. The resulting plasma is the space bubble that surrounds the frame and coils. Naturally, the aim for tuning is to minimize streamer production and keep as much of the energy as possible within the system. Angular frequency is a cycle of repetition and refers to the angles covered by a frequency. Thus angular frequency is equal to: \[\omega = 2\pi \cdot freq\] A bicycle wheel turning at the rate of five cycles per second will scan an angle of $10\pi$ radians per second. Angular frequency is in units of radians per second. When a cycle completes, it does so in a given period of time.
The period of time to complete a cycle is the reciprocal of the frequency: \[T = \frac{1}{freq}\] A bicycle wheel turning at five cycles per second will have a period of 1/5 second. Or in other words, the time it takes to turn the wheel once at five cycles per second will be 1/5 second. Not all cycles are stationary and go in circles. Quite often a cycle is associated with a velocity. As a bicycle wheel turns and completes one cycle, the rider of the bike will travel a distance. This distance is called the "wavelength" when applied to photons. The distance the bicycle rider will travel depends on the velocity per frequency: \[\lambda = \frac{v}{freq}\] So if the rider travels five feet per second, and the wheel turns at five cycles per second, then the rider has traveled one foot (the bicycle would be very small!). The above pictures are of a double-coned coil I acquired on eBay. The seller purchased it from the estate of a deceased FBI agent. I was informed by one of the bidders, who owns a similar coil without the wires, that it was designed by Nikola Tesla himself. The wooden frame he owns was given to his father by Nikola Tesla as payment (Tesla had no money at the time). Each cone is 6" in diameter and 6" in height. Each cone has approx. 127 ft of wire. The end-to-end inductance of the secondary is 2.36 mH. The quarter-wavelength frequency calculates to 1.937 MHz for a straight solenoid, but this coil resonates at about 1.12 MHz as determined by a frequency counter and oscilloscope. The two cones are connected in the middle such that the entire length is one wire. There is no tapping point between the two coils. I had to replace the copper primary due to age and wear, but I am keeping the old copper tubing as verification for the coil's age. This coil was likely built in the early 1900s and is identical in layout to Tesla's patent schematics from the late 1890s. Rumor has it that this coil exhibits antigravity properties.
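The quoted 1.937 MHz figure is consistent with the usual quarter-wave estimate for a straight wire, $f = c/(4\ell)$, applied to the stated 127 ft of wire per cone. A minimal sketch, assuming this is the calculation behind the quoted number:

```python
# Quarter-wave frequency estimate for a straight wire of length ell:
#   f = c / (4 * ell)
# The 127 ft figure comes from the coil description above.
c = 299_792_458.0          # speed of light in m/s
ell = 127 * 0.3048         # 127 ft converted to metres
f_quarter = c / (4 * ell)  # in hertz
print(f_quarter / 1e6)     # about 1.94 MHz, near the quoted 1.937 MHz
```

The measured 1.12 MHz resonance is well below this straight-wire estimate, which is the discrepancy the author is pointing out.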
I have not seen antigravity effects when applying a standard spark gap discharge to the primary. David Thomson's Coil Formula As is explained in the Aether Physics Model, Coulomb's constant is equal to: \[{k_C} = \frac{{c \cdot Cd \cdot {\mu _0}}}{{{\varepsilon _0}}} \tag{1.1}\] From this, inductance can be defined as: In equation (1.2), 1 meter equals 1 henry, 0.003 meter equals 3 mH, and so on. The relationship of length to inductance is clearly shown in reference to Coulomb's constant. Wheeler's formula (for the inductance of an air-core solenoid coil) outputs inductance in terms of thousand inches, where N is the number of turns, R is the radius in inches, and H is the length of the windings in inches. Equation (1.3) outputs in length, just what the Coulomb's constant formula requires. So if the units of Wheeler's formula are converted to meters and the two formulas are combined, then: and this can be simplified to: The values and units generated in equation (1.5) are accurate to the same degree as Wheeler's formula for inductance, since it incorporates Wheeler's formula unchanged. The exact value of Cd is equal to: where the values are Coulomb's constant, light speed, permeability, and permittivity. A practical simplification of formula (1.5) is: where the value N is a number, R and H are in inches, and the result is in henries. If you use MathCAD or another program that automatically converts inches to meters, then use equation (1.8): If you're looking for an inductance formula where the input is in meters instead of inches, you can use this formula: Formula (1.9) can be used for either solenoid or flat spiral coils. For flat spiral coils use the average radius for R and the width of the coil windings for H. Both R and H are in meters. N is the number of turns. Ideal Tesla Magnifier System
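As a point of reference for the Wheeler formula mentioned above, here is its standard textbook form (the conventional approximation, not Thomson's modified equations (1.5)-(1.9); the example input values are illustrative, not from the text):

```python
# Standard Wheeler approximation for a single-layer air-core solenoid:
#   L (microhenries) = N^2 * R^2 / (9*R + 10*H)
# with N turns, coil radius R and winding length H both in inches.
# It is accurate to roughly 1% when H > 0.8 * R.
def wheeler_inductance_uH(N, R_in, H_in):
    return (N ** 2) * (R_in ** 2) / (9 * R_in + 10 * H_in)

# Illustrative values: 100 turns, 2-inch radius, 6-inch winding length.
print(wheeler_inductance_uH(100, 2.0, 6.0))   # about 512.8 microhenries
```
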
MathJax in WordPress and LaTeX for Math Blogs
The mathematical equations and symbols on this website are created using the MathJax service. MathJax enables us to use LaTeX code in WordPress for mathematical equations/symbols.
How to Use MathJax on WordPress
My setting with macros
Useful LaTeX code for MathJax:
- System of equations
- Augmented matrix
- Block matrix
- Matrix with fractions
- Elementary row operations (Gauss-Jordan elimination)
- Use array for tabular
- Giving reasons for each line in align
- Second derivative
- The length (magnitude) of vectors
- Explanations under equations
- Crossing things out
To use MathJax on WordPress, write the following code in header.php. (I put the code just before </head>.) That's it!!
<script type="text/x-mathjax-config">
  MathJax.Hub.Config({
    tex2jax: { inlineMath: [['$','$'], ["\\(","\\)"]] }
  });
</script>
<script type="text/javascript"
  src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS_HTML"></script>
<meta http-equiv="X-UA-Compatible" CONTENT="IE=EmulateIE7" />
Now if you type $\sin x$ in an editor, it gives $\sin x$, and \cos^2 x +\sin^2 x=1 creates $\cos^2 x + \sin^2 x = 1$.
Here is my current setting. The following code includes macros as well.
<script type="text/x-mathjax-config">// <![CDATA[
MathJax.Hub.Config({
  tex2jax: { inlineMath: [['$','$'], ["\\(","\\)"]] },
  TeX: {
    Macros: {
      R: "{\\mathbb R}",
      N: "{\\mathbb N}",
      C: "{\\mathbb C}",
      Z: "{\\mathbb Z}",
      Q: "{\\mathbb Q}",
      SL: "{\\operatorname {SL}}",
      SO: "{\\operatorname {SO}}",
      GL: "{\\operatorname {GL}}",
      id: "{\\mathbb {id}}",
      tr: "{\\operatorname {tr}}",
      trans: "{\\mathrm T}",
      Span: "{\\operatorname {Span}}",
      Hom: "{\\operatorname {Hom}}",
      Rep: "{\\operatorname {Rep}}",
      Aut: "{\\operatorname {Aut}}",
      End: "{\\operatorname {End}}",
      Repart: "{\\operatorname {Re}}",
      Impart: "{\\operatorname {Im}}",
      im: "{\\operatorname {im}}",
      rk: "{\\operatorname {rank}}",
      nullity: "{\\operatorname {null}}",
      Stab: "{\\operatorname {Stab}}",
      Zmod: ["{\\mathbb Z / #1 \\mathbb Z}", 1],
      bold: ["{\\bf #1}", 1],
      Abs: ['\\left\\lvert #2 \\right\\rvert_{\\text{#1}}', 2, ""]
    }
  }
});
// ]]></script>
Here is the list of LaTeX codes that I used on this website and found useful.
Here is the LaTeX code for a system of equations:
\left\{
\begin{array}{c}
a_1x+b_1y+c_1z=d_1 \\
a_2x+b_2y+c_2z=d_2 \\
a_3x+b_3y+c_3z=d_3
\end{array}
\right.
If you don't need the big bracket, then omit \left\{ and \right. from the above code. The \begin{array}{c} ... \end{array} environment generates the aligned rows. To write an augmented matrix, put a vertical bar in the column specification:
\left[\begin{array}{rrr|rrr}
1 & 0 & 0 & 1 & 1 & 1 \\
0 & 1 & 0 & 0 & 1 & 1
\end{array}\right]
To write a block matrix using MathJax, write the following LaTeX code:
\left[\begin{array}{c|c}
A & B \\
C & D
\end{array}\right]
When you write a matrix whose entries are fractions, the lines might look cramped. To widen the gap between lines, use \\[6pt] instead of \\, as in the following code.
A=\begin{bmatrix}
\frac{1}{3} & \frac{1}{3} & \frac{1}{3} \\[6pt]
\frac{2}{3} & \frac{-1}{3} & \frac{-1}{3} \\[6pt]
\frac{1}{3} & \frac{1}{3} & \frac{-2}{3}
\end{bmatrix}
To typeset a chain of elementary row operations such as the reduction below, write the LaTeX code:
\[ \left[\begin{array}{rrrr|r}
1 & 1 & 1 & 1 & 1 \\
0 & 1 & 2 & 3 & 5 \\
0 & -2 & 0 & -2 & 2 \\
0 & 1 & -2 & 3 & 1
\end{array}\right]
\xrightarrow{\substack{R_1-R_2 \\ R_3+2R_2 \\ R_4-R_2}}
\left[\begin{array}{rrrr|r}
1 & 0 & -1 & -2 & -4 \\
0 & 1 & 2 & 3 & 5 \\
0 & 0 & 4 & 4 & 12 \\
0 & 0 & -4 & 0 & -4
\end{array}\right]
\xrightarrow[\frac{-1}{4}R_4]{\frac{1}{4}R_3}
\left[\begin{array}{rrrr|r}
1 & 0 & -1 & -2 & -4 \\
0 & 1 & 2 & 3 & 5 \\
0 & 0 & 1 & 1 & 3 \\
0 & 0 & 1 & 0 & 1
\end{array}\right] \]
The LaTeX code for the following table uses | in the column specification for the vertical rules:
\begin{array}{|c|c|c|}
a & a^2 \pmod{5} & 2a^2 \pmod{5} \\
0 & 0 & 0 \\
1 & 1 & 2 \\
3 & 4 & 3 \\
\end{array}
To give a reason for each equality, put the reason after && on each line:
\begin{align*}
f(ab)&=(ab)^2 && (\text{by definition of $f$})\\
&=(ab)(ab)\\
&=a^2 b^2 && (\text{since $G$ is abelian})\\
&=f(a)f(b) && (\text{by definition of $f$}).
\end{align*}
If the values of a function depend on cases (like parity), you might want to write:
\det(A)&=1+(-1)^{n+1} \\
&= \begin{cases}
2 & \text{ if } n \text{ is odd}\\
0 & \text{ if } n \text{ is even}.
\end{cases}
When you want to say $A$ implies $B$, and want to write $A\implies B$, use \implies for the arrow $\implies$. The opposite-direction arrow $\impliedby$ is given by \impliedby. When you want to say $A$ if and only if $B$, and want to write $A\iff B$, use \iff for the arrow $\iff$.
Symbols and their LaTeX codes:
$\implies$          \implies
$\impliedby$        \impliedby
$\iff$              \iff
$\mapsto$           \mapsto
$\to$               \to
$\gets$             \gets
$\rightarrow$       \rightarrow
$\leftarrow$        \leftarrow
$\hookrightarrow$   \hookrightarrow
$\hookleftarrow$    \hookleftarrow
If you want to write the second derivative $f^{\prime\prime}$, then write the LaTeX code f^{\prime\prime}. Note that for the first derivative $f'$, the LaTeX code f' works, but f'' produces $f"$ (a double quotation mark instead of two primes). To write the length of a vector such as \[\|\mathbf{v}\| \text{ or } \left\|\frac{a}{b}\right \|,\] use the LaTeX codes \|\mathbf{v}\| and \left\|\frac{a}{b}\right \|. If you want to add some explanation under an equation, like \[n=\underbrace{1+1+\cdots+1}_{\text{$n$ times}},\] then use the LaTeX code \[n=\underbrace{1+1+\cdots+1}_{\text{$n$ times}}.\] An integral of a function \[\int_{a}^{b} \! f(x)\,\mathrm{d}x\] is generated by the LaTeX code \int_{a}^{b} \! f(x)\,\mathrm{d}x. Note that \! narrows the space between the integral sign and the function, and \, increases the space between the function and $\mathrm{d}x$. To strike through an expression obliquely, like \(\require{cancel}\)$\cancel{A}$ to cancel the expression $A$, we first need to put the code \(\require{cancel}\) before the formula where you want to put $\cancel{A}$. You need only one \(\require{cancel}\) per page. Then write the LaTeX code \cancel{A}. For more useful LaTeX symbols, check the website LaTeX:Symbols (Art of Problem Solving). This website's goal is to encourage people to enjoy Mathematics! This website is no longer maintained by Yu. ST is the new administrator. Mathematical equations are created by MathJax. See How to use MathJax in WordPress if you want to write a mathematical blog.
Problems in Mathematics © 2020. All Rights Reserved.
Preparation and Properties of ... Holmes signals can be given by using: (A) ${\text{Ca}}{{\text{C}}_{\text{2}}}{\text{ + CaC}}{{\text{N}}_{\text{2}}}$ (B) ${\text{Ca}}{{\text{C}}_{\text{2}}}{\text{ + C}}{{\text{a}}_3}{{\text{P}}_{\text{2}}}$ (C) ${\text{Ca}}{{\text{C}}_{\text{2}}}{\text{ + CaC}}{{\text{O}}_3}$ (D) ${\text{C}}{{\text{a}}_3}{{\text{P}}_{\text{2}}} + {\text{CaC}}{{\text{N}}_{\text{2}}}$ Hint: The mixture that gives Holmes signals produces phosphine (${\text{P}}{{\text{H}}_{\text{3}}}$) gas and acetylene (${{\text{C}}_{\text{2}}}{{\text{H}}_2}$) gas, and is used on ships for giving signals in the form of dazzling flames. Complete step by step answer: A Holmes signal is a container holding a mixture of chemicals that produces phosphine and acetylene gas on reacting with water. -In option (A) a mixture of calcium carbide and calcium cyanamide ${\text{Ca}}{{\text{C}}_{\text{2}}}{\text{ + CaC}}{{\text{N}}_{\text{2}}}$ is given, but this mixture is not used to give Holmes signals, because no phosphorus is present in it from which phosphine (${\text{P}}{{\text{H}}_{\text{3}}}$) gas could be produced. -In option (B) a mixture of calcium carbide and calcium phosphide ${\text{Ca}}{{\text{C}}_{\text{2}}}{\text{ + C}}{{\text{a}}_3}{{\text{P}}_{\text{2}}}$ is given, and this mixture is used to give Holmes signals, because phosphorus is present in it, so phosphine (${\text{P}}{{\text{H}}_{\text{3}}}$) gas and acetylene (${{\text{C}}_{\text{2}}}{{\text{H}}_2}$) gas are produced on reacting with water.
\[\mathop {{\text{C}}{{\text{a}}_{\text{3}}}{{\text{P}}_{\text{2}}}}\limits_{{\text{(CalciumPhosphide)}}} {\text{ + }}\mathop {{\text{6}}{{\text{H}}_{\text{2}}}{\text{O}}}\limits_{{\text{(Water)}}} \to \mathop {{\text{3Ca}}{{\left( {{\text{OH}}} \right)}_{\text{2}}}}\limits_{{\text{(CalciumHydroxide)}}} {\text{ + }}\mathop {{\text{2P}}{{\text{H}}_{\text{3}}}}\limits_{{\text{(Phosphine)}}} \] \[\mathop {{\text{Ca}}{{\text{C}}_{\text{2}}}}\limits_{{\text{(CalciumCarbide)}}} {\text{ + }}\mathop {{\text{2}}{{\text{H}}_{\text{2}}}{\text{O}}}\limits_{{\text{(Water)}}} \to \mathop {{\text{Ca}}{{\left( {{\text{OH}}} \right)}_{\text{2}}}}\limits_{{\text{(CalciumHydroxide)}}} {\text{ + }}\mathop {{{\text{C}}_{\text{2}}}{{\text{H}}_{\text{2}}}}\limits_{{\text{(Acetylene)}}} \] The phosphine (${\text{P}}{{\text{H}}_{\text{3}}}$) gas produced is flammable and catches fire very rapidly, and due to this fire the acetylene (${{\text{C}}_{\text{2}}}{{\text{H}}_2}$) gas burns with a dazzling flame, because it is also flammable. -In option (C) a mixture of calcium carbide and calcium carbonate ${\text{Ca}}{{\text{C}}_{\text{2}}}{\text{ + CaC}}{{\text{O}}_3}$ is given, but this mixture is not used to give Holmes signals, because no phosphorus is present in it from which phosphine (${\text{P}}{{\text{H}}_{\text{3}}}$) gas could be produced. -In option (D) a mixture of calcium phosphide and calcium cyanamide ${\text{C}}{{\text{a}}_3}{{\text{P}}_{\text{2}}} + {\text{CaC}}{{\text{N}}_{\text{2}}}$ is given, but this mixture is not used to give Holmes signals, because calcium cyanamide on reacting with water produces ammonia, not acetylene. Hence, Holmes signals can be given by using ${\text{Ca}}{{\text{C}}_{\text{2}}}{\text{ + C}}{{\text{a}}_3}{{\text{P}}_{\text{2}}}$. So, the correct answer is "Option B".
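As a quick sanity check on the two hydrolysis reactions quoted above, a small sketch can verify that each equation is atom-balanced (the helper function and element dictionaries are illustrative, not from the source):

```python
from collections import Counter

# Sum element counts over (coefficient, formula-as-dict) pairs.
def total(*species):
    out = Counter()
    for coeff, atoms in species:
        for el, n in atoms.items():
            out[el] += coeff * n
    return out

# Ca3P2 + 6 H2O -> 3 Ca(OH)2 + 2 PH3
lhs = total((1, {'Ca': 3, 'P': 2}), (6, {'H': 2, 'O': 1}))
rhs = total((3, {'Ca': 1, 'O': 2, 'H': 2}), (2, {'P': 1, 'H': 3}))
assert lhs == rhs   # both sides: Ca3 P2 H12 O6

# CaC2 + 2 H2O -> Ca(OH)2 + C2H2
lhs = total((1, {'Ca': 1, 'C': 2}), (2, {'H': 2, 'O': 1}))
rhs = total((1, {'Ca': 1, 'O': 2, 'H': 2}), (1, {'C': 2, 'H': 2}))
assert lhs == rhs   # both sides: Ca1 C2 H4 O2
```

Both assertions pass, confirming the coefficients given in the displayed equations.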
Note: Here some of you may consider option (D) to be correct by noticing only the phosphorus atoms, but that would be wrong, because calcium cyanamide does not produce the acetylene gas that is also needed for giving the signals.
You are here: Home ∼ Weekly Papers on Quantum Foundations (18) Weekly Papers on Quantum Foundations (18) Published by editor on May 6, 2017 Test of a hypothesis of realism in quantum theory using a Bayesian approach PRA: Fundamental concepts on 2017-5-05 2:00pm GMT Author(s): N. Nikitin and K. Toms In this paper we propose a time-independent equality and time-dependent inequality, suitable for an experimental test of the hypothesis of realism. The derivation of these relations is based on the concept of conditional probability and on Bayes' theorem in the framework of Kolmogorov's axiomatics o… [Phys. Rev. A 95, 052103] Published Fri May 05, 2017 New test of weak equivalence principle using polarized light from astrophysical events. (arXiv:1703.09935v3 [astro-ph.HE] UPDATED) gr-qc updates on arXiv.org on 2017-5-05 8:32am GMT Authors: Xue-Feng Wu, Jun-Jie Wei, Mi-Xiang Lan, He Gao, Zi-Gao Dai, Peter Mészáros Einstein's weak equivalence principle (WEP) states that any freely falling, uncharged test particle follows the same identical trajectory independent of its internal structure and composition. Since the polarization of a photon is considered to be part of its internal structure, we propose that polarized photons from astrophysical transients, such as gamma-ray bursts (GRBs) and fast radio bursts (FRBs), can be used to constrain the accuracy of the WEP through the Shapiro time delay effect. Assuming that the arrival time delays of photons with different polarizations are mainly attributed to the gravitational potential of the Laniakea supercluster of galaxies, we show that a strict upper limit on the differences of the parametrized post-Newtonian parameter $\gamma$ value for the polarized optical emission of GRB 120308A is $\Delta\gamma<1.2\times10^{-10}$, for the polarized gamma-ray emission of GRB 100826A is $\Delta\gamma<1.2\times10^{-10}$, and for the polarized radio emission of FRB 150807 is $\Delta\gamma<2.2\times10^{-16}$. 
These are the first direct verifications of the WEP for multiband photons with different polarizations. In particular, the result from FRB 150807 provides the most stringent limit to date on a deviation from the WEP, improving by one order of magnitude the previous best result based on Crab pulsar photons with different energies. Complex dimensions and their observability. (arXiv:1705.01619v1 [gr-qc]) Authors: Gianluca Calcagni We show that the dimension of spacetime becomes complex-valued when its short-scale geometry is invariant under a discrete scaling symmetry. This characteristic can arise either in quantum gravities based on combinatorial or multifractal structures or as the partial breaking of continuous dilation symmetry in any conformal-invariant theory. With its infinite scale hierarchy, discrete scale invariance overlaps with the traditional separation between ultraviolet and infrared physics and it can leave an observable imprint in the cosmic microwave background (CMB). We discuss such imprint in the form of log oscillations and sharp features in the CMB primordial power spectrum. Testing different approaches to quantum gravity with cosmology: An overview. (arXiv:1705.01597v1 [gr-qc]) Authors: Aurélien Barrau Among the available quantum gravity proposals, string theory, loop quantum gravity, non-commutative geometry, group field theory, causal sets, asymptotic safety, causal dynamical triangulation, emergent gravity are among the best motivated models. As an introductory summary to this special issue of Comptes Rendus Physique, I explain how those different theories can be tested or constrained by cosmological observations. Emerging Dynamics Arising From Quantum Mechanics. (arXiv:1705.01604v1 [quant-ph]) quant-ph updates on arXiv.org Authors: Cristhiano Duarte, Gabriel Dias Carvalho, Nadja K. 
Bernardes, Fernando de Melo Physics dares to describe Nature from elementary particles all the way up to cosmological objects like clusters of galaxies and black holes. Although a unified description for this whole spectrum of events is desirable, a one-theory-fits-all approach would be highly impractical. So as not to get lost in unnecessary details, effective descriptions are mandatory. Here we analyze which dynamics may emerge from a fully quantum description when one does not have access to all the degrees of freedom of a system. More concretely, we describe the properties of the dynamics that arise from Quantum Mechanics if one has access only to a coarse-grained description of the system. We obtain that the effective channels are not necessarily of Kraus form, due to correlations between accessible and non-accessible degrees of freedom, and that the distance between two effective states may increase under the action of the effective channel. We expect our framework to be useful for addressing questions such as the thermalization of closed quantum systems, and the description of measurements in quantum mechanics. Reality from maximizing overlap in the future-included real action theory. (arXiv:1705.01585v1 [quant-ph]) Authors: Keiichi Nagao, Holger Bech Nielsen In the future-included real action theory whose path runs over not only past but also future, we demonstrate the theorem, which states that the normalized matrix element of a Hermitian operator $\hat{\cal O}$ defined in terms of the future state at the final time $T_B$ and the fixed past state at the initial time $T_A$ becomes real for the future state selected such that the absolute value of the transition amplitude from the past state to the future state is maximized. This is a special version of our previously proposed theorem for the future-included complex action theory. 
We find that though the maximization principle leads to the reality of the normalized matrix element in the future-included real action theory, it does not specify the future and past states as tightly as in the case of the future-included complex action theory. In addition, we argue that the normalized matrix element seems to be more natural than the usual expectation value. Thus we speculate that the functional integral formalism of quantum theory could be most elegant in the future-included complex action theory. ALTERNATIVE EXPLANATIONS OF THE COSMIC MICROWAVE BACKGROUND: A HISTORICAL AND AN EPISTEMOLOGICAL PERSPECTIVE Philsci-Archive Cirkovic, Milan M. and Perovic, Slobodan (2017) ALTERNATIVE EXPLANATIONS OF THE COSMIC MICROWAVE BACKGROUND: A HISTORICAL AND AN EPISTEMOLOGICAL PERSPECTIVE. [Preprint] Jordan's Derivation of Blackbody Fluctuations Bacciagaluppi, Guido and Crull, Elise and Maroney, O J E (2017) Jordan's Derivation of Blackbody Fluctuations. [Preprint] Additivity Requirements in Classical and Quantum Probability Earman, John (2017) Additivity Requirements in Classical and Quantum Probability. [Preprint] Quantum Correlations are Weaved by the Spinors of the Euclidean Primitives Christian, Joy (2017) Quantum Correlations are Weaved by the Spinors of the Euclidean Primitives. [Preprint] Matter-Wave Tractor Beams Author(s): Alexey A. Gorlach, Maxim A. Gorlach, Andrei V. Lavrinenko, and Andrey Novitsky Theory shows that the quantum-mechanical wave of a beam of particles can exert a pulling force on a small particle, just as other waves do. [Phys. Rev. Lett. 118, 180401] Published Thu May 04, 2017 Experimental Demonstration of Uncertainty Relations for the Triple Components of Angular Momentum PRL: General Physics: Statistical and Quantum Mechanics, Quantum Information, etc. 
Author(s): Wenchao Ma, Bin Chen, Ying Liu, Mengqi Wang, Xiangyu Ye, Fei Kong, Fazhan Shi, Shao-Ming Fei, and Jiangfeng Du The uncertainty principle is considered to be one of the most striking features in quantum mechanics. In the textbook literature, uncertainty relations usually refer to the preparation uncertainty which imposes a limitation on the spread of measurement outcomes for a pair of noncommuting observables… The Present Situation in Quantum Theory and its Merging with General Relativity Latest Results for Foundations of Physics on 2017-5-03 12:00am GMT We discuss the problems of quantum theory (QT) complicating its merging with general relativity (GR). QT is treated as a general theory of micro-phenomena—a bunch of models. Quantum mechanics (QM) and quantum field theory (QFT) are the most widely known (but, e.g., Bohmian mechanics is also a part of QT). The basic problems of QM and QFT are considered in interrelation. For QM, we stress its nonrelativistic character and the presence of spooky action at a distance. For QFT, we highlight the old problem of infinities. And this is the main point of the paper: it is meaningless to try to unify QFT, which suffers so heavily from infinities, with GR. We also highlight difficulties of the QFT treatment of entanglement. We compare the QFT- and QM-based measurement theories by presenting both theoretical and experimental viewpoints. Then we discuss two basic mathematical constraints of both QM and QFT, namely, the use of real (and, hence, complex) numbers and the Hilbert state space. We briefly present non-Archimedean and non-Hilbertian approaches to QT and their consequences. Finally, we claim that, in spite of the Bell theorem, it is still possible to treat quantum phenomena on the basis of a classical-like causal theory. We present a random field model generating the QM and QFT formalisms. 
This emergence viewpoint can serve as the basis for unification of novel QT (maybe totally different from the presently powerful QM and QFT) and GR. (It may happen that the latter would also be radically modified.) Deviations from 2 Nature Physics 13, 518 (2017). doi:10.1038/nphys4126 Author: Alberto Moscatelli Alberto Moscatelli surveys a series of experiments on the electron g-factor that marked the departure from the Dirac equation and contributed to the development of quantum electrodynamics. Quantum Mechanics: Indefinite causality Author: Yun Li "Above the Slough of Despond": Weylean invariantism and quantum physics Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, available online 29 April 2017 Author(s): Iulian D. Toader No smooth beginning for spacetime. (arXiv:1705.00192v1 [hep-th]) hep-th updates on arXiv.org Authors: Job Feldbrugge, Jean-Luc Lehners, Neil Turok We identify a fundamental obstruction to any theory of the beginning of the universe as a semi-classical quantum process, describable using complex, smooth solutions of the classical Einstein equations. The no boundary and tunneling proposals are examples of such theories. We argue that the Lorentzian path integral for quantum cosmology is meaningful and, with the use of Picard-Lefschetz theory, provides a consistent definition of the semi-classical expansion. Framed in this way, the no boundary and tunneling proposals become identified, and the resulting framework is unique. Unfortunately, the Picard-Lefschetz approach shows that the primordial fluctuations are out of control: larger fluctuations receive a higher quantum mechanical weighting. We prove a general theorem to this effect, in a wide class of theories. 
A semi-classical description of the beginning of the universe as a regular tunneling event thus appears to be untenable. Space-time Constructivism vs. Modal Provincialism: Or, How Special Relativistic Theories Needn't Show Minkowski Chronogeometry Pitts, J. Brian (2017) Space-time Constructivism vs. Modal Provincialism: Or, How Special Relativistic Theories Needn't Show Minkowski Chronogeometry. [Preprint] What Quantum Measurements Measure. (arXiv:1704.08725v1 [quant-ph]) on 2017-5-01 12:43pm GMT Authors: Robert B. Griffiths A solution to the second measurement problem, determining what prior microscopic properties can be inferred from measurement outcomes ("pointer positions"), is worked out for projective and generalized (POVM) measurements, using consistent histories. The result supports the idea that equipment properly designed and calibrated reveals the properties it was designed to measure. Applications include Einstein's hemisphere and Wheeler's delayed choice paradoxes, and a method for analyzing weak measurements without recourse to weak values. Quantum measurements are noncontextual in the original sense employed by Bell and Mermin: if $[A,B]=[A,C]=0,\, [B,C]\neq 0$, the outcome of an $A$ measurement does not depend on whether it is measured with $B$ or with $C$. An application to Bohm's model of the Einstein-Podolsky-Rosen situation suggests that a faulty understanding of quantum measurements is at the root of this paradox. Entropy-Growth in the Universe: Some Plausible Scenarios Latest Results for International Journal of Theoretical Physics Diverse measurements indicate that entropy grows as the universe evolves; we analyze, from a quantum point of view, plausible scenarios that allow such an increase. A proposal for a minimalist ontology Esfeld, Michael (2017) A proposal for a minimalist ontology. [Preprint] Stop making sense of Bell's theorem and nonlocality? 
A reply to Stephen Boughn Laudisa, Federico (2017) Stop making sense of Bell's theorem and nonlocality? A reply to Stephen Boughn. [Preprint]
Relevance of operators in 4d N=1 theories I have a conceptual question about how to judge whether a given operator, \(\mathcal{O}\), in a 4d \(\mathcal{N}=1\) theory is relevant or not. In the literature, the criterion is given by \(R(\mathcal{O}) \leqslant 2\), i.e., the operator is relevant or marginal if \(R(\mathcal{O}) \leqslant 2\). However, as I understand it, an acceptable operator deformation of the theory must have \(R(\mathcal{O}) = 2\) so that the \(R\)-symmetry is not broken. Why is it the \(R\)-charge, not the dimension of the operator, that tells us whether the operator is relevant, marginal or irrelevant? Does any such relation hold when the theory is in the UV? And how does one resolve the apparent contradiction that the \(R\)-charge of the superpotential should always be such that \(R=2\)? asked Apr 1, 2015 in Theoretical Physics by Ke Ye (50 points) [ revision history ] For a general quantum field theory, it's true that the notions of relevant, marginal, and irrelevant operators are defined in terms of the dimensions of operators. In the special case of a supersymmetric theory, it is natural to study supersymmetric deformations, and the familiar criterion in terms of dimensions can often be reformulated. 
More precisely, in a unitary 4d N=1 superconformal field theory, there is a general ("BPS-like") bound, bounding the dimension of an operator from below by a multiple of its R-charge. Operators inducing supersymmetric deformations are precisely the operators saturating this bound (chiral or antichiral). For such operators, dimension and R-charge determine each other, and so the definition of relevant, marginal, and irrelevant can be reformulated purely in terms of the R-charge. The condition R=2 is equivalent to the condition of marginality, itself equivalent to the preservation of R-symmetry. Indeed, R-symmetry is only guaranteed to be preserved in the superconformal case. In the non-superconformal case, R-symmetry is in general anomalous. answered Aug 14, 2018 by 40227 (5,120 points) [ no revision ]
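To make the counting in this answer concrete, here is the bound in standard 4d \(\mathcal{N}=1\) superconformal conventions (the explicit formulas are ours; they are not spelled out in the thread above):

```latex
% Unitarity bound in a 4d N=1 SCFT, saturated by (anti)chiral operators:
\Delta(\mathcal{O}) \;\geq\; \tfrac{3}{2}\,\bigl|R(\mathcal{O})\bigr|,
\qquad
\Delta(\mathcal{O}) \;=\; \tfrac{3}{2}\,R(\mathcal{O}) \quad \text{for chiral } \mathcal{O}.
% A superpotential deformation \delta W = \lambda\,\mathcal{O} then has
% coupling dimension
[\lambda] \;=\; 3 - \Delta(\mathcal{O}) \;=\; 3 - \tfrac{3}{2}\,R(\mathcal{O}),
% so the deformation is relevant or marginal precisely when R(\mathcal{O}) \leq 2.
% The condition R(W) = 2 for the full superpotential is consistent with this:
% the measure \int d^2\theta carries R-charge -2, so only R = 2 deformations
% preserve the R-symmetry, i.e. are exactly marginal candidates.
```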
Germinal center dynamics during acute and chronic infection MBE Home Flow optimization in vascular networks June 2017, 14(3): 625-653. doi: 10.3934/mbe.2017036 Mathematical analysis of a quorum sensing induced biofilm dispersal model and numerical simulation of hollowing effects Blessing O. Emerenini 1,, , Stefanie Sonner 2, and Hermann J. Eberl 3, Biomedical Physics, Dept. Physics, Ryerson University, 350 Victoria Street Toronto, ON, M5B 2K3, Canada Institute for Mathematics and Scientific Computing, University of Graz, Heinrichstr. 36,8010 Graz, Austria Dept. Mathematics and Statistics, University of Guelph, 50 Stone Road East, ON, N1G 2W1, Canada * Corresponding author: Blessing O. Emerenini Received August 20, 2015 Accepted October 26, 2016 Published December 2016 Figure(8) / Table(1) We analyze a mathematical model of quorum sensing induced biofilm dispersal. It is formulated as a system of non-linear, density-dependent, diffusion-reaction equations. The governing equation for the sessile biomass comprises two non-linear diffusion effects, a degeneracy as in the porous medium equation and fast diffusion. This equation is coupled with three semi-linear diffusion-reaction equations for the concentrations of growth limiting nutrients, autoinducers, and dispersed cells. We prove the existence and uniqueness of bounded non-negative solutions of this system and study the behavior of the model in numerical simulations, where we focus on hollowing effects in established biofilms. Keywords: Quorum sensing, biofilm, cell dispersal, density dependent diffusion, existence, uniqueness, numerical simulation. Mathematics Subject Classification: Primary: 35Q92, 35K65; Secondary: 92D25. Citation: Blessing O. Emerenini, Stefanie Sonner, Hermann J. Eberl. Mathematical analysis of a quorum sensing induced biofilm dispersal model and numerical simulation of hollowing effects. Mathematical Biosciences & Engineering, 2017, 14 (3) : 625-653. doi: 10.3934/mbe.2017036 F. Abbas, R. 
Sudarsan and H. J. Eberl, Longtime behaviour of one-dimensional biofilm models with shear dependent detachment rates, Math. Biosc. Eng., 9 (2012), 215-239. doi: 10.3934/mbe.2012.9.215. Google Scholar H. Amann, Nonhomogeneous linear and quasilinear elliptic and parabolic boundary value problems, Function Spaces, Differential Operators and Nonlinear Analysis, Teubner-Texte Math., 133 (1993), 9-126. doi: 10.1007/978-3-663-11336-2_1. Google Scholar D. Aronson, M. G. Crandall and L. A. Peletier, Stabilization of solutions of a degenerate nonlinear diffusion problem, Nonlinear Anal., 6 (1982), 1001-1022. doi: 10.1016/0362-546X(82)90072-4. Google Scholar N. Barraud, D. J. Hassett, S. H. Hwang, S. A. Rice, S. Kjelleberg and J. S. Webb, Involvement of nitric oxide in biofilm dispersal of Pseudomonas Aeruginosa, J. Bacteriol, 188 (2006), 7344-7353. doi: 10.1128/JB.00779-06. Google Scholar G. Boyadjiev and N. Kutev, Comparison principle for quasilinear elliptic and parabolic systems, Comptes rendus de l'Académie bulgare des Sciences, 55 (2002), 9-12. Google Scholar A. Boyd and A. M. Chakrabarty, Role of alginate lyase in cell detachment of Pseudomonas Aeruginosa, Appl. Environ. Microbiol., 60 (1994), 2355-2359. Google Scholar V. V. Chepyzhov and M. I. Vishik, Attractors for Equations of Mathematical Physics, American Mathematical Society, Providence, RI, 2002. Google Scholar M. E. Davey, N. C. Caiazza and G. A. O'Toole, Rhamnolipid surfactant production affects biofilm architecture in Pseudomonas Aeruginosa PAO1, J. Bacteriol, 185 (2003), 1027-1036. doi: 10.1128/JB.185.3.1027-1036.2003. Google Scholar D. A. D'Argenio, M. W. Calfee, P. B. Rainey and E. C. Pesci, Autolysis and autoaggregation in Pseudomonas Aeruginosa colony morphology mutants, J. Bacteriol., 184 (2002), 6481-6489. doi: 10.1128/JB.184.23.6481-6489.2002. Google Scholar L. Demaret, H. J. Eberl, M. A. Efendiev and R. 
Lasser, Analysis and simulation of a meso-scale model of diffusive resistance of bacterial biofilms to penetration of antibiotics, Adv. Math. Sci. Appl., 18 (2008), 269-304. Google Scholar R. M. Donlan, Biofilms and device-associated infections, Emerging Infec. Dis., 7 (2001). Google Scholar R. Duddu, D. L. Chopp and B. Moran, A two-dimensional continuum model of biofilm growth incorporating fluid flow and shear stress based detachment, Biotechnol. Bioeng., 103 (2009), 92-104. doi: 10.1002/bit.22233. Google Scholar H. J. Eberl, D. F. Parker and M. C. M. van Loosdrecht, A new deterministic spatio-temporal continuum model for biofilm development, J. Theor. Med., 3 (2001), 161-175. doi: 10.1080/10273660108833072. Google Scholar H. J. Eberl and L. Demaret, A finite difference scheme for a degenerated diffusion equation arising in microbial ecology, Electron. J. Differential Equations, 15 (2007), 77-96. Google Scholar H. J. Eberl and R. Sudarsan, Exposure of biofilms to slow flow fields: The convective contribution to growth and disinfections, J. Theor. Biol., 253 (2008), 788-807. doi: 10.1016/j.jtbi.2008.04.013. Google Scholar M. A. Efendiev, H. J. Eberl and S. V. Zelik, Existence and longtime behaviour of solutions of a nonlinear reaction-diffusion system arising in the modeling of biofilms, Nonlin. Diff. Sys. Rel. Topics, RIMS Kyoto, 1258 (2002), 49-71. Google Scholar M. A. Efendiev, H. J. Eberl and S. V. Zelik, Existence and longtime behavior of a biofilm model, Commun. Pure Appl. Anal., 8 (2009), 509-531. doi: 10.3934/cpaa.2009.8.509. Google Scholar B. O. Emerenini, B. A. Hense, C. Kuttler and H. J. Eberl, A mathematical model of quorum sensing induced biofilm detachment, PLoS ONE, 10 (2015). doi: 10.1371/journal.pone.0132385. Google Scholar A. Fekete, C. Kuttler, M. Rothballer, B. A. Hense, D. Fischer, K. Buddrus-Schiemann, M. Lucio, J. Müller, P. Schmitt-Kopplin and A. 
Hartmann, Dynamic regulation of N-acyl-homoserine lactone production and degradation in Pseudomonas putida IsoF., FEMS Microbiology Ecology, 72 (2010), 22-34. Google Scholar M. R. Frederick, C. Kuttler, B. A. Hense and H. J. Eberl, A mathematical model of quorum sensing regulated EPS production in biofilms, Theor. Biol. Med. Mod., 8 (2011). doi: 10.1186/1742-4682-8-8. Google Scholar M. R. Frederick, C. Kuttler, B. A. Hense, J. Müller and H. J. Eberl, A mathematical model of quorum sensing in patchy biofilm communities with slow background flow, Can. Appl. Math. Quarterly, 18 (2011), 267-298. Google Scholar S. M. Hunt, M. A. Hamilton, J. T. Sears, G. Harkin and J. Reno, A computer investigation of chemically mediated detachment in bacterial biofilms, J. Microbiol., 149 (2003), 1155-1163. doi: 10.1099/mic.0.26134-0. Google Scholar S. M. Hunt, E. M. Werner, B. Huang, M. A. Hamilton and P. S. Stewart, Hypothesis for the role of nutrient starvation in biofilm detachment, J. Appl. Environ. Microb., 70 (2004), 7418-7425. doi: 10.1128/AEM.70.12.7418-7425.2004. Google Scholar H. Khassehkhan, M. A. Efendiev and H. J. Eberl, A degenerate diffusion-reaction model of an amensalistic biofilm control system: existence and simulation of solutions, Disc. Cont. Dyn. Sys. Series B, 12 (2009), 371-388. doi: 10.3934/dcdsb.2009.12.371. Google Scholar O. A. Ladyzenskaja, V. A. Solonnikov and N. N. Ural'ceva, Linear and Quasi-linear Equations of parabolic Type, American Mathematical Society, Providence RI, 1968. Google Scholar J. B. Langebrake, G. E. Dilanji, S. J. Hagen and P. de Leenheer, Traveling waves in response to a diffusing quorum sensing signal in spatially-extended bacterial colonies, J. Theor. Biol., 363 (2014), 53-61. doi: 10.1016/j.jtbi.2014.07.033. Google Scholar P. D. Marsh, Dental plaque as a biofilm and a microbial community implications for health and disease, BMC Oral Health, 6 (2006), S14. doi: 10.1186/1472-6831-6-S1-S14. Google Scholar N. Muhammad and H. J. 
Eberl, OpenMP parallelization of a Mickens time-integration scheme for a mixed-culture biofilm model and its performance on multi-core and multi-processor computers, LNCS, 5976 (2010), 180-195. doi: 10.1007/978-3-642-12659-8_14. Google Scholar G. A. O'Toole and P. S. Stewart, Biofilms strike back, Nature Biotechnology, 23 (2005), 1378-1379. doi: 10.1038/nbt1105-1378. Google Scholar M. R. Parsek and P. K. Singh, Bacterial biofilms: An emerging link to disease pathogenesis, Annu. Rev. Microbiol., 57 (2003), 677-701. doi: 10.1146/annurev.micro.57.030502.090720. Google Scholar C. Picioreanu, M. C. M. van Loosdrecht and J. J. Heijnen, Two-dimensional model of biofilm detachment caused by internal stress from liquid flow, Biotechnol. Bioeng., 72 (2001), 205-218. doi: 10.1002/1097-0290(20000120)72:2<205::AID-BIT9>3.0.CO;2-L. Google Scholar A. Radu, J. Vrouwenvelder, M. C. M. van Loosdrecht and C. Picioreanu, Effect of flow velocity, substrate concentration and hydraulic cleaning on biofouling of reverse osmosis feed channels, Chem. Eng. J., 188 (2012), 30-39. doi: 10.1016/j.cej.2012.01.133. Google Scholar M. Renardy and R. C. Rogers, An Introduction to Partial Differential Equations, 2nd edition, Springer Verlag, New York, 2004. Google Scholar S. A. Rice, K. S. Koh, S. Y. Queck, M. Labbate, K. W. Lam and S. Kjelleberg, Biofilm formation and sloughing in Serratia marcescens are controlled by quorum sensing and nutrient cues, J. Bacteriol, 187 (2005), 3477-3485. doi: 10.1128/JB.187.10.3477-3485.2005. Google Scholar Y. Saad, Iterative Methods for Sparse Linear Systems, 2nd edition, SIAM, Philadelphia, 2003. doi: 10.1137/1.9780898718003. Google Scholar S. Širca and M. Horvat, Computational Methods for Physicists, Springer, Heidelberg, 2012. doi: 10.1007/978-3-642-32478-9. Google Scholar C. Solano, M. Echeverz and I. Lasa, Biofilm dispersion and quorum sensing, Curr. Opin. Microbiol., 18 (2014), 96-104. doi: 10.1016/j.mib.2014.02.008. Google Scholar S. Sonner, M. A. Efendiev and H. J. 
Eberl, On the well-posedness of a mathematical model of quorum-sensing in patchy biofilm communities, Math. Methods Appl. Sci., 34 (2011), 1667-1684. doi: 10.1002/mma.1475. Google Scholar S. Sonner, M. A. Efendiev and H. J. Eberl, On the well-posedness of mathematical models for multicomponent biofilms, Math. Methods Appl. Sci., 38 (2015), 3753-3775. doi: 10.1002/mma.3315. Google Scholar P. S. Stewart, A model of biofilm detachment, Biotechnol. Bioeng., 41 (1993), 111-117. doi: 10.1002/bit.260410115. Google Scholar M. G. Trulear and W. G. Characklis, Dynamics of biofilm processes, J. Water Pollut. Control Fed., 54 (1982), 1288-1301. Google Scholar B. L. Vaughan Jr, B. G. Smith and D. L. Chopp, The Influence of Fluid Flow on Modeling Quorum Sensing in Bacterial Biofilms, Bull. Math. Biol., 72 (2010), 1143-1165. Google Scholar O. Wanner and P. Reichert, Mathematical modelling of mixed-culture biofilm, Biotech. Bioeng., 49 (1996), 172-184. Google Scholar O. Wanner, H. J. Eberl, E. Morgenroth, D. R. Noguera, C. Picioreanu, B. E. Rittmann and M. C. M. van Loosdrecht, Mathematical Modelling of Biofilms, IWA Publishing, London, 2006. Google Scholar J. S. Webb, Differentiation and dispersal in biofilms, Book chapter in The Biofilm Mode of Life: Mechanisms and Adaptations, Horizon Biosci., Oxford (2007), 167–178. Google Scholar J. B. Xavier, C. Picioreanu and M. C. M. van Loosdrecht, A general description of detachment for multidimensional modelling of biofilms, Biotechnol. Bioeng., 91 (2005), 651-669. doi: 10.1002/bit.20544. Google Scholar J. B. Xavier, C. Picioreanu, S. A. Rani, M. C. M. van Loosdrecht and P. S. Stewart, Biofilm-control strategies based on enzymic disruption of the extracellular polymeric substance matrix: a modelling study, Microbiol., 151 (2005), 3817-3832. doi: 10.1099/mic.0.28165-0. Google Scholar
Figure 1. Schematic of the biofilm system cf. [18]: The aqueous phase is the domain $\Omega_1(t) = \{x\in \Omega: M(t;x) =0 \}$, the biofilm phase $\Omega_2(t) = \{x\in \Omega: M(t;x) >0 \}$. These regions change over time as the biofilm grows. Biofilm colonies form attached to the substratum, which is a part of the boundary of the domain
Figure 2. 2-D structural representation of the microbial floc growth for autoinducer production rate $\alpha=30.7$ and maximum dispersal rate $\eta_1=0.6$ for selected time instances $t$. Color coded is the biomass density $M$, iso-lines of the autoinducer concentration $A$ are plotted in grayscale
Figure 3. 1-D spatial representation of the development and dispersal of bacterial cells from the microbial floc: The snapshots are taken at different computational times $t$, with an autoinducer production rate $\alpha=30.7$ and a dispersal rate of $\eta_1=0.6$
Figure 4. Temporal plots of simulations computed for a non-quorum sensing producing microfloc (Non-QS) and a quorum sensing producing microfloc using seven different constitutive autoinducer production rates $\alpha = \{92.0,46.0,30.7,23.0,18.4,15.3,13.1\}$ and fixed maximum dispersal rate $\eta_1=0.6$. Shown are (a) the total sessile biomass fraction $M_{tot}$ in the floc, (b) the floc size $\omega$, (c) dispersed cells $N_{tot}$, (d) relative variation $R$, and (e) signal concentration $A_{ave}$
Figure 5. 2-D structural representation of the microbial biofilm growth for autoinducer production rate $\alpha=30.7$ and maximum dispersal rate $\eta_1=0.6$ for selected time instances $t$. Color coded is the biomass density $M$, iso-lines of the autoinducer concentration $A$ are plotted in grayscale
Figure 7. 
Temporal plots of simulations computed for a non-quorum sensing producing biofilm (Non-QS) and a quorum sensing producing biofilm using seven different constitutive autoinducer production rate $\alpha = \{92.0,46.0,30.7,23.0,18.4,15.3,13.1\}$ and fixed maximum dispersal rate $\eta_1=0.6$. Shown are (a) the total sessile biomass fraction $M_{tot}$ in the floc, (b) the floc size $\omega$ (c) dispersed cells $N_{tot}$, (d) relative variation $R$, and (e) signal concentration $A_{ave}$ Figure 8. Comparison of the sessile biomass $M_{tot}$ and the dispersed cells $N_{tot}$ under different boundary conditions for the signal molecule $A$: Homogenous Dirichlet conditions and Neumann conditions. The left panel is for a microbial floc while the right panel is for a biofilm Table 1. Parameters used in the numerical simulations Parameter Description Value Source $k_1$ half saturation concentration (growth) $0.4$ [44] $k_2$ lysis rate $0.067$ assumed $\sigma$ nutrient consumption rate $793.65$ [19] $\eta_1$ maximum dispersal rate varied [18] $\lambda$ quorum sensing abiotic decay rate $0.02218$ [39] $\alpha$ constitutive autoinducer production rate varied - $\beta$ induced autoinducer production rate $10 \times \alpha$ [19] $m$ degree of polymerization $2.5$ [19] $d_1$ constant diffusion coefficients for $N$ $4.1667$ assumed $d_2$ constant diffusion coefficients for $C$ $4.1667$ [15] $d_3$ constant diffusion coefficients for $A$ $3.234$ [15] $d$ biomass motility coefficient $4.2 \times 10^{-8}$ [13] $a$ biofilm diffusion exponent $4.0$ [13] $b$ biofilm diffusion exponent $4.0$ [13] $L$ system length $1.0$ [15] $H$ system height $1.0$ assumed Download as excel Richard L Buckalew. Cell cycle clustering and quorum sensing in a response / signaling mediated feedback model. Discrete & Continuous Dynamical Systems - B, 2014, 19 (4) : 867-881. doi: 10.3934/dcdsb.2014.19.867 Hassan Khassehkhan, Messoud A. Efendiev, Hermann J. Eberl. 
A degenerate diffusion-reaction model of an amensalistic biofilm control system: Existence and simulation of solutions. Discrete & Continuous Dynamical Systems - B, 2009, 12 (2) : 371-388. doi: 10.3934/dcdsb.2009.12.371 Jacques A. L. Silva, Flávia T. Giordani. Density-dependent dispersal in multiple species metapopulations. Mathematical Biosciences & Engineering, 2008, 5 (4) : 843-857. doi: 10.3934/mbe.2008.5.843 François James, Nicolas Vauchelet. One-dimensional aggregation equation after blow up: Existence, uniqueness and numerical simulation. Networks & Heterogeneous Media, 2016, 11 (1) : 163-180. doi: 10.3934/nhm.2016.11.163 Paolo Fergola, Marianna Cerasuolo, Edoardo Beretta. An allelopathic competition model with quorum sensing and delayed toxicant production. Mathematical Biosciences & Engineering, 2006, 3 (1) : 37-50. doi: 10.3934/mbe.2006.3.37 Jean-Paul Chehab. Damping, stabilization, and numerical filtering for the modeling and the simulation of time dependent PDEs. Discrete & Continuous Dynamical Systems - S, 2021, 14 (8) : 2693-2728. doi: 10.3934/dcdss.2021002 Nalin Fonseka, Ratnasingham Shivaji, Jerome Goddard, Ⅱ, Quinn A. Morris, Byungjae Son. On the effects of the exterior matrix hostility and a U-shaped density dependent dispersal on a diffusive logistic growth model. Discrete & Continuous Dynamical Systems - S, 2020, 13 (12) : 3401-3415. doi: 10.3934/dcdss.2020245 Messoud A. Efendiev, Sergey Zelik, Hermann J. Eberl. Existence and longtime behavior of a biofilm model. Communications on Pure & Applied Analysis, 2009, 8 (2) : 509-531. doi: 10.3934/cpaa.2009.8.509 Francisco Guillén-González, Mamadou Sy. Iterative method for mass diffusion model with density dependent viscosity. Discrete & Continuous Dynamical Systems - B, 2008, 10 (4) : 823-841. doi: 10.3934/dcdsb.2008.10.823 Jennifer Weissen, Simone Göttlich, Dieter Armbruster. Density dependent diffusion models for the interaction of particle ensembles with boundaries. 
Kinetic & Related Models, 2021, 14 (4) : 681-704. doi: 10.3934/krm.2021019 Nadia Loy, Luigi Preziosi. Stability of a non-local kinetic model for cell migration with density dependent orientation bias. Kinetic & Related Models, 2020, 13 (5) : 1007-1027. doi: 10.3934/krm.2020035 Vikram Krishnamurthy, William Hoiles. Information diffusion in social sensing. Numerical Algebra, Control & Optimization, 2016, 6 (3) : 365-411. doi: 10.3934/naco.2016017 Andrea L. Bertozzi, Dejan Slepcev. Existence and uniqueness of solutions to an aggregation equation with degenerate diffusion. Communications on Pure & Applied Analysis, 2010, 9 (6) : 1617-1637. doi: 10.3934/cpaa.2010.9.1617 Vicent Caselles. An existence and uniqueness result for flux limited diffusion equations. Discrete & Continuous Dynamical Systems, 2011, 31 (4) : 1151-1195. doi: 10.3934/dcds.2011.31.1151 Tracy L. Stepien, Hal L. Smith. Existence and uniqueness of similarity solutions of a generalized heat equation arising in a model of cell migration. Discrete & Continuous Dynamical Systems, 2015, 35 (7) : 3203-3216. doi: 10.3934/dcds.2015.35.3203 Weiping Yan. Existence of weak solutions to the three-dimensional density-dependent generalized incompressible magnetohydrodynamic flows. Discrete & Continuous Dynamical Systems, 2015, 35 (3) : 1359-1385. doi: 10.3934/dcds.2015.35.1359 Tracy L. Stepien, Erica M. Rutter, Yang Kuang. A data-motivated density-dependent diffusion model of in vitro glioblastoma growth. Mathematical Biosciences & Engineering, 2015, 12 (6) : 1157-1172. doi: 10.3934/mbe.2015.12.1157 Kaigang Huang, Yongli Cai, Feng Rao, Shengmao Fu, Weiming Wang. Positive steady states of a density-dependent predator-prey model with diffusion. Discrete & Continuous Dynamical Systems - B, 2018, 23 (8) : 3087-3107. doi: 10.3934/dcdsb.2017209 Gong Chen, Peter J. Olver. Numerical simulation of nonlinear dispersive quantization. Discrete & Continuous Dynamical Systems, 2014, 34 (3) : 991-1008. 
doi: 10.3934/dcds.2014.34.991 Nicolas Vauchelet. Numerical simulation of a kinetic model for chemotaxis. Kinetic & Related Models, 2010, 3 (3) : 501-528. doi: 10.3934/krm.2010.3.501 Blessing O. Emerenini Stefanie Sonner Hermann J. Eberl
CommonCrawl
Gröbner bases and convex polytopes

"This book is a state-of-the-art account of the rich interplay between combinatorics and geometry of convex polytopes and computational commutative algebra via the tool of Gröbner bases. It is an essential introduction for those who wish to perform research in this fast-developing, interdisciplinary …"

Gröbner bases of toric ideals have applications in many research areas. Among them, one of the most important topics is the correspondence to triangulations of convex polytopes. It is very interesting that, not only do Gröbner bases give triangulations, but also "good" Gröbner bases give "good" triangulations (unimodular triangulations). (Author: Hidefumi Ohsugi)

This book is about the interplay of computational commutative algebra and the theory of convex polytopes. It centers around a special class of ideals in a polynomial ring: the class of toric ideals. [Bernd Sturmfels]

"A very carefully crafted introduction to the theory and some of the applications of Gröbner bases … contains a wealth of illustrative examples and a wide variety of useful exercises, the discussion is everywhere well-motivated, and further developments and important issues are well sign-posted … has many solid virtues and is an ideal text for beginners in the subject … certainly an …"

Contents: Gröbner basics; The state polytope; Variation of term orders; Toric ideals; Enumeration, sampling and integer programming; Primitive partition identities; Universal Gröbner bases; Regular triangulations; The second hypersimplex; $\mathcal A$-graded algebras; Canonical subalgebra bases; Generators, Betti numbers and localizations; Toric varieties in …

Gröbner Bases and Convex Polytopes. Bernd Sturmfels, University of California, Berkeley, Berkeley, CA. Publication: University Lecture Series, No. 8, American Mathematical Society. Includes bibliography and index.

Mathematics Subject Classification — Secondary: 13P Gröbner bases; other bases for ideals and modules (e.g., Janet and border bases); 52B Lattice polytopes (including relations with …)
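The objects discussed above are easy to experiment with in software. As an illustrative sketch (not taken from the book), SymPy can compute a lexicographic Gröbner basis of a small toric ideal, here the ideal of 2×2 minors defining the twisted cubic curve:

```python
from sympy import groebner, symbols

a, b, c, d = symbols('a b c d')

# Generators of the toric ideal of the twisted cubic:
# the 2x2 minors of the matrix [[a, b, c], [b, c, d]].
gens = [a*c - b**2, b*d - c**2, a*d - b*c]

# Groebner basis with respect to the lexicographic order a > b > c > d.
G = groebner(gens, a, b, c, d, order='lex')
print(G.exprs)

# The basis gives an ideal-membership test:
print(G.contains(a*c*d - b**2*d))  # d*(a*c - b**2) lies in the ideal
```

Changing `order` (e.g. to `'grevlex'`) changes the basis, which is exactly the "variation of term orders" phenomenon the book's state polytope organizes.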
CommonCrawl
Modern approaches to the invariant subspace problem, by Isabelle Chalendar and Jonathan R. Partington, Cambridge University Press, 2011. International peer-reviewed journal articles [62] I. Chalendar, P. Gorkin and J.R. Partington. Inner functions and operator theory, North-Western European Journal of Mathematics, accepted. [61] I. Chalendar, J. Esterle and J.R. Partington. Dichotomy results for norm estimates in operator semigroups. In Semigroups Meet Complex Analysis, Harmonic Analysis and Mathematical Physics, Birkhäuser, Operator Theory: Advances and Applications, accepted. [60] I. Chalendar and J.R. Partington. Phragmén-Lindelöf principles for generalized analytic functions on unbounded domains, Complex Analysis and Operator Theory, accepted. [59] I. Chalendar, E. Gallardo and J.R. Partington. Weighted composition operators on the Dirichlet space: boundedness and spectral properties, Mathematische Annalen, accepted. [58] C. Avicou, I. Chalendar and J.R. Partington. A class of quasicontractive semigroups acting on Hardy and Dirichlet space, Journal of Evolution Equations, accepted. [57] I. Chalendar, S. Garcia, W. Ross and D. Timotin. An extremal problem for characteristic functions, Transactions of the American Mathematical Society, to appear. [56] I. Chalendar and J.R. Partington. Norm estimates for weighted composition operators on spaces of holomorphic functions, Complex Analysis and Operator Theory, 8, 1087-1095 (2014). [55] I. Chalendar and D. Timotin. Commutation relations for truncated Toeplitz operators, Operators and Matrices, 8, No 3, 877-888 (2014). [54] I. Chalendar, Gorkin, P. and Partington, J.R. Prime and Semiprime Inner Functions, J. London Math. Soc. 88, No 2, 779-800 (2013). [53] I. Chalendar and Partington, J.R., An overview of some recent developments on the Invariant Subspace Problem, Concrete Operators, 1 (2012), 1-10. [52] I. Chalendar, E. Fricain, M. Gürdal and M.
Karaev, Compactness and Berezin symbols, Acta Scientiarum Mathematicarum (Szeged), 78, No. 1-2, 315-329 (2012). [51] I. Chalendar, P. Gorkin and J.R. Partington, The group of invariants of an inner function with finite spectrum, J. Math. Anal. Appl., 389 (2012), 1259-1267. [50] I. Chalendar, P. Gorkin and J.R. Partington, Determination of inner functions by their value sets on the circle, Computational Methods and Function Theory, vol. 11, n.1 (2011). [49] I. Chalendar, J. Esterle and J.R. Partington, Boundary values of analytic semigroups and associated norm estimates, Proceedings of Banach Algebras 2009, Banach Center Publications 91 (2010), 87--103. [48] Baranov, A., Chalendar, I., Fricain, E., Mashreghi, J. and Timotin, D. Bounded symbols and reproducing kernel thesis for truncated Toeplitz operators. J. Funct. Anal. 259 (2010), no. 10, 2673--2701. [47] Chalendar, I., Fricain, E. and Timotin, D. Embedding theorems for Müntz spaces, Annales de l'Institut Fourier, 61 (2011), no. 6, 2291--2311. [46] Bendaoud, Z., Chalendar, I., Esterle, J. and Partington, J.R. Distances between elements of a semigroup and estimates for derivatives. Acta Mathematica Sinica, 26 (2010), 2239-2254. [45] Chalendar, I., Gorkin, P. and Partington, J.R. Boundary interpolation and approximation by infinite Blaschke products. Mathematica Scandinavica, 107 (2010), 305-319. [44] Chalendar, I., Partington, J.R. and Pozzi, E. Multivariable weighted composition operators: lack of point spectrum, and cyclic vectors. Operator Theory: Advances and Applications, Vol. 202, Birkhäuser (2010), 63-85. [43] Chalendar, I., Fricain, E. and Timotin, D. On an extremal problem of Garcia and Ross. Operators and Matrices, Vol. 3, no 4 (2009), 541-546. [42] Chalendar, I., Gorkin, P. and Partington, J.R. Numerical ranges of restricted shifts and unitary dilations. Operators and Matrices, vol 3 no 2 (2009), 271--281. [41] Chalendar, I., E. Fricain, A. I. Popov, D. Timotin and V. G.
Troitsky, Finitely strictly singular operators between James spaces, J. Funct. Anal., Vol 256, n.4 (2009), 1258-1268. [40] Chalendar, I., E. Fricain and D. Timotin, A note on the stability of linear combinations of algebraic operators, Extracta Mathematicae, Vol. 23, n.1 (2008), 43-48. [39] Chalendar, I., N. Chevrot and J.R. Partington, Nearly invariant subspaces for backward shifts on vector-valued Hardy spaces, Journal of Operator Theory, Vol. 63, n.2 (2010), 101-114. [38] Chalendar, I. and Partington, J.R. Invariant subspaces for products of Bishop operators, Acta Sci. Math. (Szeged) 74 (2008), 717--725. [37] Chalendar, I. and Partington, J.R. Doubly-invariant subspaces for the shift on the vector-valued Sobolev spaces of the disc and annulus, Integral Equations Operator Theory, Vol. 61 (2008), 149-158. [36] Chalendar, I., A. Flattot and N. Guillotin-Plantard. On the spectrum of multivariable weighted composition operators. Archiv der Mathematik, Vol. 90, n. 4 (2008), 353-359. [35] Chalendar, I., Chevrot, N. and Partington, J. Invariant subspaces for the shift on the vector-valued $L^2$ space of an annulus, Journal of Operator Theory, Vol. 61, n.2 (2009), 313-330. [34] Chalendar, I. and Partington, J. Application of moment problems to the overcompleteness of sequences, Math. Scand., Vol. 101 (2007), 249-260. [33] Chalendar, I., Flattot, A. and Partington, J. The method of minimal vectors applied to weighted composition operators, Operator Theory: Advances and Applications, Vol. 171, Birkhäuser (2007), 89-105. [32] Chalendar, I. and Partington, J. Multivariable approximate Carleman-type theorems for complex measures, Annals of Probability 2007, Vol. 35, No. 1, 384-396. [31] Chalendar, I. and Partington, J. Variations on Lomonosov's theorem via the technique of minimal vectors, Acta Sci. Math. (Szeged), vol. 71 (2005), 603-617. [30] Chalendar, I., Fricain, E. and Partington, J.
Overcompleteness of sequences of reproducing kernels in model spaces, Integral Equations Operator Theory, vol. 56 (2006), 45-56. [29] Chalendar, I. and Partington, J. An image problem for compact operators, Proc. Amer. Math. Soc., vol. 134 (2006), 1391-1396. [28] Chalendar, I., Partington, J. and Smith, R. $L^1$ factorizations, moment problems and invariant subspaces, Studia Mathematica, Vol. 167 (2005), 183-194. [27] Chalendar, I., Habsieger, L., Partington, J. and Ransford, T. Approximate Carleman theorems and a Denjoy-Carleman maximum principle, Arch. Math. (Basel), Vol. 83 (2004) 88-96. [26] Chalendar, I. and Partington, J. Convergence properties of minimal vectors for normal operators and weighted shifts, Proc. Amer. Math. Soc. 133 (2005), 501-510. [25] Chalendar, I. The operator-valued Poisson kernel and its applications, Irish Math. Soc. Bulletin, Vol. 51 (2003) 21-44. [24] Chalendar, I., Fricain, E. and Timotin, D. Functional models and asymptotically orthonormal sequences, Annales de l'Institut Fourier, Vol. 53 (5) (2003) 1527-1549. [23] Chalendar, I. and Partington, J.R. Constrained approximation and invariant subspaces, J. Math. Anal. Appl., Vol. 280 (2003) 176-187. [22] Chalendar, I., Partington, J. and Smith, M. Approximation in reflexive Banach spaces and applications to the invariant subspace problem, Proc. A. M. S., Vol. 132 (2004), 1133-1142. [21] Chalendar, I. and Partington, J.R. On the structure of invariant subspaces for isometric composition operators on $H^2 (D)$ and $H^2 (C_+)$, Arch. Math. (Basel), Vol. 81 (2003) 193-207. [20] Chalendar, I. and Partington, J.R. Spectral density for multiplication operators with applications to factorization of $L^1$ functions, J. Operator Theory, Vol. 50 (2003) 411-422. [19] Cassier, G., Chalendar, I. and Chevreau, B. A mapping theorem for the boundary set $X_T$ of an absolutely continuous contraction $T$, J. Operator Theory, Vol. 50 (2003) 331-343. [18] Chalendar, I., Leblond, J.
and Partington, J.R. Approximation problems in some holomorphic spaces, with applications. Systems, Approximation, Singular Integral Operators, and Related Topics, Proceedings of IWOTA 2001, eds. Alexander A. Borichev and Nikolai K. Nikolski, Birkhäuser. [17] Chalendar, I.; Mortini, R. When do finite Blaschke products commute? Bull. Austral. Math. Soc., Vol. 64 (2001) 189-200. [16] Chalendar, I. and Partington, J.R. $L^1$ factorizations for some perturbations of the unilateral shift. Note aux C.R.A.S., Tome 332, Série I, N.2, p.115-119 (2001). [15] Chalendar, I. and Partington, J.R. Interpolation between Hardy spaces on circular domains with applications to approximation. Arch. Math. (Basel), Vol. 78 (2002) 223-232. [14] Cassier, G.; Chalendar, I. The group of the invariants of a finite Blaschke product. Complex Variables 42 (2000) 193--206. [13] Chalendar, I.; Kellay, K.; Ransford, T. Binomial sums, moments and invariant subspaces. Israel Math. J. 115 (2000), 303--320. [12] Chalendar, I.; Partington, J.R. Approximation problems and representations of Hardy spaces in circular domains. Studia Math. 136 (1999), no. 3, 255--269. [11] Chalendar, I.; Esterle, J. Le problème du sous-espace invariant. In Development of Mathematics 1950-2000, edited by Jean-Paul Pier, Birkhäuser (Basel), 2000. [10] Cassier, G.; Chalendar, I.; Chevreau, B. Some mapping theorems for the class ${A}_{m,n}$. Proc. London Math. Soc. 79 (1999), no. 1, 222--240. [9] Cassier, G.; Chalendar, I.; Chevreau, B. New examples of contractions illustrating membership and non-membership in the classes ${A}_{m,n}$. Acta Sci. Math. (Szeged) 64 (1998), no. 3-4, 707--731. [8] Chalendar, I. Localized $L^1$-factorization for absolutely continuous contractions. Banach algebras '97 (Blaubeuren), 79--86, de Gruyter, Berlin, 1998. [7] Chalendar, I.; Esterle, J. $L^1$-factorization for $C_{00}$-contractions with isometric functional calculus. J. Funct. Anal. 154 (1998), no. 1, 174--194.
[6] Chalendar, I. A note on hyperinvariant subspaces for operators acting on a Banach space. Arch. Math. (Basel) 70 (1998), no. 5, 399--406. [5] Chalendar, I. Hyperinvariant subspaces for compact perturbations of operators whose spectrum has a Dini-smooth exposed arc. Indiana Univ. Math. J. 46 (1997), no. 4, 1125--1136. [4] Chalendar, Isabelle; Jaeck, Frédéric. On the contractions in the classes $\mathbf{A}_{n,m}$. J. Operator Theory 38 (1997), no. 2, 265--296. [3] Chalendar, Isabelle. Hyperinvariant subspaces for certain compact perturbations of an operator. J. Operator Theory 36 (1996), no. 1, 147--156. [2] Chalendar, Isabelle. Techniques d'algèbres duales et sous-espaces invariants. (French) [Dual algebra techniques and invariant subspaces] With a preface in Romanian by Bernard Chevreau. Monografii Matematice [Mathematical Monographs], 55. Universitatea de Vest din Timisoara, Facultatea de Matematica, Timisoara, 1995. iv+94 pp. [1] Chalendar, I. Représentations isométriques de $\mathcal{H}^\infty(G)$. (Isometric representations of $\mathcal{H}^\infty(G)$). (French) An. Univ. Timis., Ser. Mat.-Inform. 33, No.1, 19-44 (1995). [4] C. Avicou, I. Chalendar and J. R. Partington. Analyticity and compactness of semigroups of composition operators. [3] I. Chalendar, J. Esterle and J. R. Partington. Lower estimates near the origin for functional calculus on operator semigroups. [2] Chalendar, I., Fricain, E. and Timotin, D., A short note on the Feichtinger Conjecture, preprint, 2011. [1] Chalendar, I. and Partington, J.R. The cyclic vectors and invariant subspaces of rational Bishop operators. [5] Chalendar, I. Habilitation à Diriger des Recherches, Université Lyon I, spécialité Mathématiques Pures, Outils d'Analyse Complexe et Fonctionnelle pour le problème du sous-espace invariant, 2001. [4] Henry Helson. Et les séries de Fourier devinrent Analyse harmonique. Leçons de mathématiques d'aujourd'hui. CASSINI, Paris, 2000.
[3] Techniques d'analyse complexe appliquées au problème des moments et au problème du sous-espace invariant. Revue de l'association Femmes et Mathématique. Forum des Jeunes Mathématiciennes, 1999. [2] Chalendar, I. Thèse présentée à l'Université Bordeaux I, spécialité Mathématiques Pures, Autour du problème du sous-espace et théorie des algèbres duales. [1] Algèbres duales et classes $\mathbf{A}_{n,m}$. Revue de l'association Femmes et Mathématique (supplément au numéro 2). Premier Forum des Jeunes Mathématiciennes, 7-10, 1996. Page last modified 25 April 2009.
CommonCrawl
December 2017, 37 (12): 6165-6181. doi: 10.3934/dcds.2017266
$\mathcal{D}$-solutions to the system of vectorial Calculus of Variations in $L^∞$ via the singular value problem
Gisella Croce 1, Nikos Katzourakis 2,† and Giovanni Pisante 3
1. Normandie Univ, UNIHAVRE, LMAH, 76600 Le Havre, France
2. Department of Mathematics and Statistics, University of Reading, Whiteknights, PO Box 220, Reading RG6 6AX, UK
3. Università degli Studi della Campania "Luigi Vanvitelli", Scuola Politecnica e delle Scienze di Base, Dipartimento di Matematica e Fisica, Viale Lincoln, 5, 81100 Caserta, Italy
‡ Corresponding author
† N.K. has been partially financially supported by the EPSRC grant EP/N017412/1
Received December 2016 Revised July 2017 Published August 2017
Given $\mathrm{H}∈ C^2(\mathbb{R}^{N\times n})$ and a map $u :Ω \subseteq \mathbb{R}^n \longrightarrow \mathbb{R}^N$, consider the system
$\mathrm{A}_∞ u\, :=\,\Big(\mathrm{H}_P \otimes \mathrm{H}_P + \mathrm{H}[\mathrm{H}_P]^\bot \mathrm{H}_{PP}\Big)(\text{D} u):\text{D}^2u\, =\,0. \tag{1}$
We construct $\mathcal{D}$-solutions to the Dirichlet problem for (1), an apt notion of generalised solutions recently proposed for fully nonlinear systems. Our $\mathcal{D}$-solutions are $W^{1,∞}$-submersions and are obtained without any convexity hypotheses on $\mathrm{H}$, through a result of independent interest involving existence of strong solutions to the singular value problem for general dimensions $n≠ N$.
Keywords: Vectorial Calculus of Variations in $L^∞$, generalised solutions, fully nonlinear systems, $∞$-Laplacian, Young measures, singular value problem, Baire Category method, convex integration.
Mathematics Subject Classification: 35D99, 35F05.
Citation: Gisella Croce, Nikos Katzourakis, Giovanni Pisante. $\mathcal{D}$-solutions to the system of vectorial Calculus of Variations in $L^∞$ via the singular value problem.
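For orientation, note that in the scalar case $N=1$ with $\mathrm{H}(P)=\frac{1}{2}|P|^2$, the projection $[\mathrm{H}_P]^\bot$ vanishes wherever $\mathrm{D}u \neq 0$, and system (1) reduces to Aronsson's classical $∞$-Laplace equation:

```latex
\Delta_\infty u \,:=\, \mathrm{D}u \otimes \mathrm{D}u : \mathrm{D}^2 u
\,=\, \sum_{i,j=1}^{n} u_{x_i}\, u_{x_j}\, u_{x_i x_j} \,=\, 0,
```

which for $n=2$ is the equation $u_x^2 u_{xx} + 2 u_x u_y u_{xy} + u_y^2 u_{yy} = 0$ studied by Aronsson.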
Discrete & Continuous Dynamical Systems - A, 2017, 37 (12) : 6165-6181. doi: 10.3934/dcds.2017266 Felix Sadyrbaev. Nonlinear boundary value problems of the calculus of variations. Conference Publications, 2003, 2003 (Special) : 760-770. doi: 10.3934/proc.2003.2003.760 Nikos Katzourakis. Nonuniqueness in vector-valued calculus of variations in $L^\infty$ and some Linear elliptic systems. Communications on Pure & Applied Analysis, 2015, 14 (1) : 313-327. doi: 10.3934/cpaa.2015.14.313 Nikos Katzourakis. Corrigendum to the paper: Nonuniqueness in Vector-Valued Calculus of Variations in $ L^\infty $ and some Linear Elliptic Systems. Communications on Pure & Applied Analysis, 2019, 18 (4) : 2197-2198. doi: 10.3934/cpaa.2019098 G. Dal Maso, Antonio DeSimone, M. G. Mora, M. Morini. Time-dependent systems of generalized Young measures. Networks & Heterogeneous Media, 2007, 2 (1) : 1-36. doi: 10.3934/nhm.2007.2.1 Françoise Demengel, O. Goubet. Existence of boundary blow up solutions for singular or degenerate fully nonlinear equations. Communications on Pure & Applied Analysis, 2013, 12 (2) : 621-645. doi: 10.3934/cpaa.2013.12.621 Francesca Papalini. Strongly nonlinear multivalued systems involving singular $\Phi$-Laplacian operators. Communications on Pure & Applied Analysis, 2010, 9 (4) : 1025-1040. doi: 10.3934/cpaa.2010.9.1025 Patricio Cerda, Leonelo Iturriaga, Sebastián Lorca, Pedro Ubilla. Positive radial solutions of a nonlinear boundary value problem. Communications on Pure & Applied Analysis, 2018, 17 (5) : 1765-1783. doi: 10.3934/cpaa.2018084 Xiying Sun, Qihuai Liu, Dingbian Qian, Na Zhao. Infinitely many subharmonic solutions for nonlinear equations with singular $ \phi $-Laplacian. Communications on Pure & Applied Analysis, 2020, 19 (1) : 279-292. doi: 10.3934/cpaa.20200015 Chuanqiang Chen. On the microscopic spacetime convexity principle of fully nonlinear parabolic equations I: Spacetime convex solutions.
Discrete & Continuous Dynamical Systems - A, 2014, 34 (9) : 3383-3402. doi: 10.3934/dcds.2014.34.3383 Luisa Fattorusso, Antonio Tarsia. Regularity in Campanato spaces for solutions of fully nonlinear elliptic systems. Discrete & Continuous Dynamical Systems - A, 2011, 31 (4) : 1307-1323. doi: 10.3934/dcds.2011.31.1307 Bernard Dacorogna, Giovanni Pisante, Ana Margarida Ribeiro. On non quasiconvex problems of the calculus of variations. Discrete & Continuous Dynamical Systems - A, 2005, 13 (4) : 961-983. doi: 10.3934/dcds.2005.13.961 Daniel Faraco, Jan Kristensen. Compactness versus regularity in the calculus of variations. Discrete & Continuous Dynamical Systems - B, 2012, 17 (2) : 473-485. doi: 10.3934/dcdsb.2012.17.473 Chris Guiver. The generalised singular perturbation approximation for bounded real and positive real control systems. Mathematical Control & Related Fields, 2019, 9 (2) : 313-350. doi: 10.3934/mcrf.2019016 Mariane Bourgoing. Viscosity solutions of fully nonlinear second order parabolic equations with $L^1$ dependence in time and Neumann boundary conditions. Discrete & Continuous Dynamical Systems - A, 2008, 21 (3) : 763-800. doi: 10.3934/dcds.2008.21.763 Shigeaki Koike, Andrzej Świech. Local maximum principle for $L^p$-viscosity solutions of fully nonlinear elliptic PDEs with unbounded coefficients. Communications on Pure & Applied Analysis, 2012, 11 (5) : 1897-1910. doi: 10.3934/cpaa.2012.11.1897 Alberto Cabada, J. Ángel Cid. Heteroclinic solutions for non-autonomous boundary value problems with singular $\Phi$-Laplacian operators. Conference Publications, 2009, 2009 (Special) : 118-122. doi: 10.3934/proc.2009.2009.118 Yu-Feng Sun, Zheng Zeng, Jie Song. Quasilinear iterative method for the boundary value problem of nonlinear fractional differential equation. Numerical Algebra, Control & Optimization, 2019, 0 (0) : 0-0. doi: 10.3934/naco.2019045 Luca Codenotti, Marta Lewicka. 
Visualization of the convex integration solutions to the Monge-Ampère equation. Evolution Equations & Control Theory, 2019, 8 (2) : 273-300. doi: 10.3934/eect.2019015 Ioan Bucataru, Matias F. Dahl. Semi-basic 1-forms and Helmholtz conditions for the inverse problem of the calculus of variations. Journal of Geometric Mechanics, 2009, 1 (2) : 159-180. doi: 10.3934/jgm.2009.1.159 Ivar Ekeland. From Frank Ramsey to René Thom: A classical problem in the calculus of variations leading to an implicit differential equation. Discrete & Continuous Dynamical Systems - A, 2010, 28 (3) : 1101-1119. doi: 10.3934/dcds.2010.28.1101 HTML views (22) Gisella Croce Nikos Katzourakis Giovanni Pisante
Does the energy of ground and/or excited states have uncertainty? In this question about absorption of continuous energies by discrete atom states, one of the reasons given to explain the width of spectral lines is the uncertainty principle (natural broadening): the decay of an electron from an excited state emits a photon with an uncertain energy (until observed) within a certain range. In order to respect energy conservation, does that mean that the energy levels of ground and/or excited states in an atom also have uncertainty (and thus a continuous range of possible energies for each level, instead of a unique energy value)? As the electron can decay between two excited states, not necessarily to the ground state, I would expect that to be at least true for excited states. However, this page gives energy values for the ground and first excited states of the hydrogen electron without any range. I'm not sure if that means that there is no range (and thus no uncertainty), or if there is simply no need to indicate it (not useful, can be calculated from the theory, ...). Wikipedia states that both ground and excited states are quantum states, but I don't know if that necessarily implies that all associated "properties" are subject to uncertainty. energy atomic-physics ground-state OxTaz Yes, excited states have a finite lifetime $\tau$ due to spontaneous emission, and therefore an energy uncertainty (linewidth) given roughly by $\hbar/\tau$. That is exactly what natural broadening means. The atomic ground state cannot decay to a lower energy state (it has an infinite lifetime), and so there is no energy uncertainty. Presumably the page you linked quotes values for the hydrogen energies that are derived assuming an atom in isolation, i.e. with no electromagnetic field. If you also include the EM field in your model then it is possible to calculate the lifetimes of the excited states and therefore their natural linewidth. 
Mark Mitchison $\begingroup$ Oh, I see! I somehow misunderstood the lifetime, thinking it was about the emitted photon, and trying to find a consequence on the electron energy, while it's the other way around: the uncertainty is initially already there in the excited electron energy, because its state is unstable, and is then "transferred" to the emitted photon when the electron decays. $\endgroup$ – OxTaz Jan 6 '15 at 19:33 In order to characterise the energy of a state precisely you need to monitor it for an infinitely long time. You can show this easily for a free particle described by a plane wave, and while I don't know of an easy explanation, it's also true of more complicated states like atomic wavefunctions. So if you excite an atom from the ground state to an excited state, you don't know the energy of either state precisely, because neither state lasts an infinitely long time. However the ground state usually lasts a lot longer than the excited state, so it's normally safe to take the ground state as exact and only worry about the uncertainty in the excited state. This means the energy of the absorbed photon will have some variability, as will the energy of the emitted photon when the atom relaxes. However the energy of the absorbed and emitted photons will be the same to a very good approximation. The uncertainty principle is involved because the uncertainty in the energy $\Delta E$ is related to the lifetime of the excited state $\Delta t$ by the time-energy form of the uncertainty principle: $$ \Delta E \Delta t \ge \frac{\hbar}{2} $$ $\begingroup$ Thanks for your answer! Sadly, I can only accept one answer :) However, the part about the ground state made me curious: I thought the ground state was the "stable" state, with a theoretical infinite lifetime. Is it wrong? $\endgroup$ – OxTaz Jan 6 '15 at 20:37 $\begingroup$ @OxTaz: the ground state lifetime can usually be safely treated as infinite. 
It isn't of course, because as soon as you excite an atom the ground state disappears, i.e. its life has ended! When the atom relaxes a new ground state is created and the lifetime counter starts again. $\endgroup$ – John Rennie Jan 7 '15 at 6:10
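The time-energy relation quoted in the answers, $\Delta E \Delta t \ge \hbar/2$, can be illustrated with a few lines of arithmetic. The sketch below estimates the natural linewidth as $\Delta E \approx \hbar/\tau$; the 1.6 ns lifetime is an assumed, textbook-style value for a hydrogen excited state, not a figure taken from this thread.

```python
# Estimate the natural linewidth of an excited state from its lifetime,
# using Delta_E ~ hbar / tau (the time-energy uncertainty relation).
# The 1.6 ns lifetime below is an assumed illustrative value.

HBAR_J_S = 1.054571817e-34          # reduced Planck constant [J*s]
EV_PER_J = 1.0 / 1.602176634e-19    # joules -> electronvolts

def natural_linewidth_eV(lifetime_s):
    """Energy uncertainty (linewidth) in eV for a state of given lifetime."""
    return HBAR_J_S / lifetime_s * EV_PER_J

tau = 1.6e-9                        # assumed excited-state lifetime [s]
print(natural_linewidth_eV(tau))    # ~4e-7 eV
```

The resulting linewidth, a few times $10^{-7}$ eV, is tiny compared with the ~10 eV scale of the hydrogen levels, which is why tables can quote the level energies without an explicit range.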
The Transactions of The Korean Institute of Electrical Engineers (전기학회논문지), published by the Korean Institute of Electrical Engineers. The first issue of The Transactions of the Korean Institute of Electrical Engineers (KIEE) appeared in October 1948. The Transactions of the KIEE is a monthly periodical published on the 1st day of each month, and is an official publication of the Korean Institute of Electrical Engineers. The aim of the journal is to publish the highest-quality manuscripts dedicated to the advancement of electrical engineering technologies. The journal contains papers related to electrical power engineering, electrical machinery and energy conversion systems, electro-physics and applications, information and controls, as well as electrical facilities. Currently, the KIEE Transactions is divided into: "A" Sector (Power Engineering); "B" Sector (Electrical Machinery and Energy Conversion Systems); "C" Sector (Electro-physics and Application); "D" Sector (Information and Control); and "E" Sector (Electrical Facilities). 
With an aim to collect and publish outstanding scientific papers related to electrical engineering, the topics of papers published in the KIEE Transactions are further divided into: power systems, transmission and distribution systems, electric power system economics, micro-grids, power system protection and automation, pumped-storage hydro power generation, and electric energy storage systems in the "A" Sector; electrical equipment and appliances, power electronics, electrical traffic systems, new and renewable energy systems, eco-friendly electric power equipment, super-conductive equipment, and E-mobility in the "B" Sector; electrical materials and semiconductors, smart large power and high voltage technology, optoelectronics and electromagnetic waves, Micro Electro Mechanical Systems (MEMS), and power asset risk management in the "C" Sector; instrumentation and control (I&C), robotics and automation, computer intelligence and intelligent systems, signal processing and embedded systems, autonomous moving object information processing, and biomedical engineering in the "D" Sector; and engineering and supervision, construction technology, power distribution system diagnosis, safety technology, electric railroads, technical standards, LVDC facilities, electric facility fusion, etc. in the "E" Sector. http://www.trans.kiee.or.kr/

Analysis of Global Oscillation via Sync Search in Power Systems Shim, Kwan-Shik;Nam, Hae-Kon;Kim, Yong-Gu;Moon, Young-Hoan;Kim, Sang-Tae 1255 This study explains the phenomenon that low-frequency oscillations observed in discrete data obtained from a wide-area system are synchronized with one another, and presents a sync search method. When a disturbance occurs in a power system, various controllers operate in order to maintain synchronization. If the system's damping is poor, low-frequency oscillations persist for a long time and synchronize with one another at a specific frequency. 
The present study estimated the dominant modes, magnitudes, and phases of signals by applying parameter estimation methods to discrete signals obtained from a power system, and performed sync search among wide-area signals by comparing the estimated data. Sync search was performed by selecting, from the dominant oscillation modes contained in a large number of signals, those with the same frequency and damping constant, and comparing their magnitudes and phases. In addition, we defined sync indexes in order to indicate the degree of sync between areas in a wide-area system. Furthermore, we proposed a wide-area sync search method by normalizing the mode magnitude in discrete data obtained from the critical generator of the wide area. By applying the proposed sync search method and sync indexes to two-area systems, we demonstrated that sync search can be performed on discrete signals obtained from power systems.

The Maximum Installable DG Capacity According to Operation Methods of Voltage Regulator in Distribution Systems Kim, Mi-Young 1263 A stable and sustainable power supply means maintaining a certain level of power quality and service while securing energy resources and resolving environmental issues. Distributed generation (DG) has become an indispensable element from environmental and energy security perspectives. It is known that voltage violation is the most important constraint with respect to load variation and the maximum allowable DG. In a distribution system, the sending voltage from the distribution substation is regulated by an ULTC (Under Load Tap Changer) designed to maintain a predetermined voltage level. The ULTC is controlled by the LDC (Line Drop Compensation) method, which compensates the line voltage drop for a varying load, and the sending voltage of the ULTC is determined by the LDC parameters. Consequently, feasible LDC parameters that account for variations in load and DG output are necessary. 
In this paper, we design LDC parameters that determine a sending voltage satisfying the required voltage level, reducing the number of ULTC tap movements, or increasing DG introduction. Moreover, the maximum installable DG capacity based on each set of LDC parameters is estimated.

A Comparative Study for Reliability of Single and Radial Power Distribution System considering Momentary Interruption Lee, Hee-Tae;Kim, Jae-Chul 1270 The structure of the power distribution system will change to a loop configuration, as in the case of a smart grid. If the distribution system changes from radial to loop form, its structure may have to change significantly. Therefore, we analyzed the reliability indices and calculated the CIC (Customer Interruption Cost) for the loop power distribution system. Power distribution system reliability depends on the protection scheme, so this study applies the current protection scheme and compares each model. Most previous studies evaluated the CIC only for sustained interruptions. In actuality, however, momentary interruptions occur more frequently than sustained ones and thus add to the CIC. Therefore, we evaluated the CIC, including momentary interruptions, for each model, and then compared the results with the MAIFI (Momentary Average Interruption Frequency Index).

A Study on the Fault Diagnosis Expert System for 765kV Substations Lee, Heung-Jae;Kang, Hyun-Jae 1276 This paper presents a fault diagnosis expert system for 765kV substations. The proposed system includes topology processor and intelligent alarm processing subsystems. The expert system estimates the fault section through an inference process using heuristic knowledge and the output of the topology processor and the intelligent alarm processing system. The rule base of the expert system is composed of basic rules suggested by Korea Electric Power Corporation and heuristic rules. The expert system is developed in the PROLOG language. 
A user-friendly graphical user interface is also developed using Visual Basic programming in the Windows XP environment. The proposed expert system showed promising performance in several case studies.

A Study on the Protective Coordination of Distribution Automation System under Loop Operation Lee, Hee-Tae;Moon, Jong-Fil;Kim, Jae-Chul 1281 Although power distribution systems generally have a radial configuration, various studies have investigated changing the radial configuration to a network one, such as smart, intelligent, and micro grids, for loop operation. If a radial configuration changes to a network, protective coordination becomes the biggest problem. When a typical protective algorithm is applied to loop distribution system protection, the interrupted section expands and reliability therefore worsens. This paper presents a new protective method, applicable to a loop distribution system with a Distribution Automation System (DAS), which isolates the minimal faulted section. Through contingency analysis of sustained and momentary faults, we analyzed the influence on the radial and loop configurations in terms of the interrupted area and proved the effectiveness of the proposed method.

An Application of Harmony Search Algorithm for Operational Cost Minimization of MicroGrid System Rhee, Sang-Bong;Kim, Kyu-Ho;Kim, Chul-Hwan 1287 This paper presents an application of the Harmony Search (HS) meta-heuristic optimization algorithm to the optimal operation of a microgrid system. The microgrid system considered in this paper consists of a wind turbine, a diesel generator, and a fuel cell. A one-day load profile divided into 20-minute intervals and the wind resource for the wind turbine generator were used for the study. In the optimization, the HS algorithm is used to solve the microgrid operation problem, in which various generation resources are available to meet the customer load demand at minimum operating cost. 
The application of the HS algorithm to optimal microgrid operation proves its effectiveness in optimally determining the generating resources without any load mismatch, and it converges faster than other optimization methods.

A Numerical Algorithm for Fault Location Estimation and Arc Faults Detection for Auto-Reclosure Kim, Byeong-Man;Chae, Myeong-Suk;Zheng, Tai-Ying;Kang, Yong-Cheol 1294 This paper presents a new numerical algorithm for fault discrimination and fault location estimation for arcing ground and arcing line-to-line faults on transmission lines. The objective is a new numerical algorithm that calculates the fault distance and simultaneously distinguishes between transient and permanent faults; the first goal of the proposed algorithm is therefore to distinguish permanent from transient faults. An arcing fault is discriminated if the calculated arc voltage amplitude is greater than the product of the arc voltage gradient and the length of the arc path, which is equal to or greater than the flashover length of a suspension insulator string [1-3]. In addition, separate algorithms are used for short and long distances, the difference being whether the line capacitance is taken into account. To test the validity of the proposed algorithms, results from various computer simulations are given. The tests were simulated in the EMTP/ATP simulator under a number of scenarios, and the algorithm calculations were performed in MATLAB. 
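The Harmony Search metaheuristic used in the microgrid operation study above can be sketched generically. This is a minimal, self-contained implementation run on a toy cost function (the sphere function), not the authors' dispatch model; the memory size, HMCR, PAR, and bandwidth values are assumptions chosen for illustration.

```python
import random

def harmony_search(cost, bounds, hms=10, hmcr=0.9, par=0.3,
                   bw=0.05, iters=2000, seed=0):
    """Minimal Harmony Search: minimize `cost` over the box `bounds`."""
    rng = random.Random(seed)
    # Harmony memory: `hms` random candidate solutions and their costs.
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [cost(h) for h in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                 # memory consideration
                x = memory[rng.randrange(hms)][d]
                if rng.random() < par:              # pitch adjustment
                    x += rng.uniform(-bw, bw) * (hi - lo)
            else:                                   # random consideration
                x = rng.uniform(lo, hi)
            new.append(min(max(x, lo), hi))         # keep within bounds
        s = cost(new)
        worst = max(range(hms), key=scores.__getitem__)
        if s < scores[worst]:                       # replace the worst harmony
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=scores.__getitem__)
    return memory[best], scores[best]

# Toy demo: the minimum of sum(x_i^2) over [-5, 5]^3 is 0 at the origin.
solution, value = harmony_search(lambda x: sum(v * v for v in x),
                                 bounds=[(-5, 5)] * 3)
```

In the paper's setting, the cost function would instead be the total operating cost of the wind turbine, diesel generator, and fuel cell over the dispatch horizon, with the bounds given by each unit's output limits.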
A Study on Measures to Boost the Development of Distributed Generation through Analysis and Assessment of the District Electricity Power Business Environment Kim, Soo-Chul;Yoo, Wang-Jin 1304 The purpose of this study is to build promotion measures and to develop alternative policies for DG (Distributed Generation) by finding and analysing the effects on boosting DG of four business environment factors related to the DEPB (District Electricity Power Business). In this study, the four business environment factors, namely electric power industry restructuring, electricity tariff and pricing structure, regulations for the DEPB, and conflicts among stake-holding groups, are considered as independent variables. Promotion factors of DG, including small CHP (Combined Heat and Power) generation, which is an outcome of the DEPB, are considered as dependent variables. Dependent variables including the growth of new renewable energy generation due to green energy pricing incentives, electric power industry restructuring, and electricity tariff and pricing policies were considered separately. Policies are proposed reflecting the results of empirical analysis, previous studies, overseas cases, etc.

Comparison of Characteristics for Variable Operation using Doubly-fed Induction Generator and Fixed Speed Operation in Wind Turbine System Ro, Kyoung-Soo;Kim, Tae-Ho 1313 This paper analyzes the steady-state operating characteristics of the doubly-fed induction generator (DFIG) and the fixed-speed induction generator (FSIG) in wind turbine systems. It also presents the modeling and simulation of a grid-connected wind turbine generation system for dynamics analysis in MATLAB/Simulink, and compares the responses of DFIG and FSIG wind turbine systems with respect to wind speed variation, a 3-phase fault, and a 1-phase ground fault of the network. Simulation results show the variations of the generator's active/reactive output, rotor speed, terminal voltage, fault current, etc. 
Case studies demonstrate that the DFIG shows better performance than the FSIG.

Design of Electromagnetic Actuator with Three-Link Mechanism for Air Circuit Breaker Kim, Rae-Eun;Kwak, Sang-Yeop;Jung, Hyun-Kyo 1321 In this paper, an electromagnetic force driving actuator (EMFA) and a three-link mechanism are proposed as the driving mechanism and connection device for a low-voltage air circuit breaker (ACB). Based on dynamic characteristic analysis, the actuator and link mechanism are designed through simulation and then manufactured. The magnetic field of the EMFA is analyzed using the finite element method (FEM). The dynamic characteristic analysis, which solves the circuit equation together with the kinetic equation, is performed by the time difference method (TDM). The analysis results are verified through experiments on the fabricated model. The EMFA here is smaller than actuators for high-voltage circuit breakers; the dynamic characteristics are therefore analyzed with the end-winding inductance, calculated by the same method applied to the circular end-windings of motors. The designed model for a 1600 ampere-frame ACB and the three-link mechanism connecting the contact part with the actuating part are manufactured. It is confirmed that the three-link mechanism can improve circuit breaker efficiency and reduce the size of the EMFA, and it is shown that the improved 2-D analysis is more accurate than the established method.

Analysis of Flow Characteristics and Experiment of Conductive Liquid Metal Coupling Lorentz Force with Fluid Equation Jeon, Mun-Ho;Lee, Suk-Won;Kim, Chang-Eob 1329 This paper presents the flow characteristics of a fluid circulation loop using a tubular-type linear induction motor (TLIM) electromagnetic pump. A TLIM pump was designed using the equivalent circuit and a genetic algorithm for a 40 [l/min] flow system. 
The flow characteristics are analyzed by coupling the Maxwell equations with the Navier-Stokes equation. The analysis algorithm also takes account of the effects of the thrust. The flow characteristics obtained with the proposed method are compared with those from a commercial program and from experiment, and discussed.

A Study on Reduction Method of Electromagnetic Noise of PCB for Vehicle Cluster Kim, Byeong-Woo;Hur, Jin 1336 In this paper, EMI reduction effects measured in an EMC chamber are described and reduction methods are proposed. General electronic components operate at low frequencies, but the vehicle cluster considered here runs at a main clock frequency of 75 MHz and is vulnerable to noise because a TFT LCD is attached to it. Since the outer case installed in the vehicle is made of plastic, the noise is radiated unless it is suppressed in the PCB itself. This paper therefore explains the theoretical basis and validity of a guide for PCB design considering EMC, through the reduction of PCB noise.

Current Control Method of Distribution Static Compensator Considering Non-Linear Loads Kim, Dong-Geun;Choi, Jong-Woo;Kim, Heung-Geun 1342 The DSTATCOM (distribution static compensator) is one of the custom power devices; it protects a distribution line from the unbalanced and harmonic currents caused by non-linear and unbalanced loads. Research on the DSTATCOM is mainly divided into two parts: the calculation of the compensating current, and the current control. This paper proposes a proportional-resonant-repetitive current controller. Improved performance of instantaneous power compensation is shown by simulations and experiments.

A Study on PFC AC-DC Converter of High Efficiency added in Electric Isolation Kwak, Dong-Kurl;Kim, Sang-Roan 1349 This paper studies a novel high-efficiency power factor correction (PFC) AC-DC converter based on a soft-switching technique. 
The input current waveform of the proposed converter becomes sinusoidal, composed of many discontinuous pulses proportional to the magnitude of the AC input voltage at a constant switching frequency. Therefore, the input power factor is nearly unity and the control method is simple. The proposed converter, which adds electrical isolation, operates the reactor in discontinuous current mode (DCM) in order to obtain the merits of simpler control, such as a fixed switching frequency, without the synchronization control circuit required in continuous current mode (CCM). To achieve soft switching (ZCS or ZVS) of the control devices, the converter is constructed with a new loss-less snubber forming a partial resonant circuit. As a result, the switching losses are very low and the efficiency of the converter is high; in particular, the energy stored in the loss-less snubber capacitor is recovered to the input side and increases the input current through the resonant operation. Consequently, the input power factor of the proposed converter is higher than that of a conventional PFC converter. This paper deals mainly with the circuit operation and the theoretical, simulated, and experimental results of the proposed PFC AC-DC converter in comparison with a conventional PFC AC-DC converter.

Implementation of IEC 61400-25 based communication system for wind power plants Lee, Jung-Hoon;Kim, Tae-O;Lee, Hong-Hee 1356 IEC 61400-25 (Communications for monitoring and control of wind power plants) was established as an international standard by IEC TC 88 in 2007, with the growth of the wind power industry. In this paper, the MMS service defined in IEC 61400-25 Part 4 is implemented, and the wind power generator is represented as an object using the VMD of the MMS service. We also implemented an MMS communication system between the wind power generator and the local control center using the proposed VMD. 
The performance of the developed MMS communication system is verified through an XML-based user interface in a web browser.

The Characteristics and Technical Trends of Power MOSFET Bae, Jin-Yong;Kim, Yong 1363 This paper reviews the characteristics and technical trends in power MOSFET technology that are leading to improvements in power loss for power electronic systems. The silicon bipolar power transistor has been displaced by silicon power MOSFETs in low- and high-voltage systems. Power electronic technology requires the marriage of power device technology with MOS-gated devices and bipolar analog circuits. The technology challenges involved in combining power handling capability with finger gates, trench arrays, super-junction structures, and SiC transistors are described, together with examples of solutions for telecommunications, motor control, and switch-mode power supplies.

Characteristics of Insulation Aging in Large Generator Stator Windings Kim, Hee-Dong;Lee, Young-Jun;Ju, Young-Ho 1375 Insulation tests have been performed on two generator stator bars under accelerated aging in a laboratory environment. Electrical stress was applied to stator bar No. 1, and electrical and thermal stresses were applied to stator bar No. 2. Nondestructive stator insulation tests, including the AC current, dissipation factor ($\tan{\delta}$), and partial discharge tests, were performed on both bars as they were aged for 11460 hours. The experimental results show that ${\Delta}I$, ${\Delta}\tan{\delta}$, and the partial discharge of the No. 1 and No. 2 stator bars increased with aging time. It is concluded from the tests that the stator insulation of the two generators is in good condition. 
An Analysis of UV Detected Images and Safety Standards in Discharging Model Shong, Kil-Mok;Kim, Young-Seok;Jung, Jin-Soo 1380 This paper studies an aging judgment method based on ultraviolet images of discharges occurring in electric power equipment, captured with an ultraviolet camera. The aging judgment criteria are established as follows: an insulation state with a risk factor within 20%, for which the ultraviolet image shows no discharge, is rated "good or recognition"; a risk factor of 30~50% is rated "check"; 50~60% is rated "inspection"; and above 60% is rated "replacement". This method can be utilized for the inspection of electrical facilities.

Frequency Characteristics of Electronic Mixing Optical Detection using APD for Radio over Fiber Network Choi, Young-Kyu 1386 An analysis is presented for super-high-speed optical demodulation by an avalanche photodiode (APD) with electric mixing. A normalized gain is defined to evaluate the performance of the optical mixing detection. Unlike previous work, we include the effect of the nonlinear variation of the APD capacitance with bias voltage as well as the effects of parasitic and amplifier input capacitance. As a result, the normalized gain depends on the signal frequency and on the frequency difference between the signal and the local oscillator. However, the current through the equivalent resistance of the APD is almost independent of signal frequency. The mixing output is mainly attributed to the nonlinearity of the multiplication factor. We also show that there is an optimal local oscillator voltage at which the normalized gain is maximized for a given avalanche photodiode. 
Construction of High-Speed Wavelength Swept Mode-Locked Laser Based on Oscillation Characteristics of Fiber Fabry-Perot Tunable Filter Lee, Eung-Je;Kim, Yong-Pyung 1393 A high-speed wavelength-swept laser based on the oscillation characteristics of a fiber Fabry-Perot tunable filter is described. The laser is constructed from a semiconductor optical amplifier, a fiber Fabry-Perot tunable filter, and a 3.348 km fiber ring cavity. The wavelength sweeps are generated repetitively at a rate of 61 kHz, which is the first parallel oscillation frequency of the Fabry-Perot tunable filter, for low power consumption. Mode-locking is implemented by the 3.348 km fiber ring cavity, which matches the fundamental cavity round-trip time to the sweep period. The wavelength tuning range of the laser is 87 nm (FWHM) and the average output power is 1.284 mW.

A Vehicle Speed Detector Using AMR Sensors Kang, Moon-Ho;Park, Yoon-Chang 1398 This paper proposes a vehicle speed detector based on anisotropic magnetoresistive (AMR) sensors and presents experimental results showing its performance. The detector consists of two AMR sensors together with mechanical and electronic apparatus. An AMR sensor senses the disturbance of the Earth's magnetic field caused by a vehicle moving over it and produces an output indicating the moving vehicle. In this paper, vehicle speeds are calculated using two AMR sensors built on a board: the speed of a vehicle is obtained by dividing the known distance between the two sensors by the time difference between the two output signals from the sensors, captured sequentially as the vehicle drives over them. Field tests have been carried out to show the performance of the proposed detector and its usefulness. 
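The two-sensor speed computation described in the AMR detector abstract reduces to one division plus a unit conversion. A minimal sketch follows; the 1 m sensor spacing and the timestamps are made-up example values, not figures from the paper.

```python
def vehicle_speed_kmh(sensor_gap_m, t_first_s, t_second_s):
    """Speed from two AMR detection timestamps a known distance apart."""
    dt = t_second_s - t_first_s
    if dt <= 0:
        raise ValueError("second sensor must trigger after the first")
    return sensor_gap_m / dt * 3.6   # m/s -> km/h

# Example: sensors 1.0 m apart, detections 60 ms apart.
print(round(vehicle_speed_kmh(1.0, 0.000, 0.060), 3))  # -> 60.0
```

In practice the accuracy of such a detector is limited by how precisely the two trigger instants can be time-stamped relative to the sensor spacing.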
A Disturbance Observer-Based Output Feedback Controller for a DC/DC Boost Converter with Load Variation Jeong, Goo-Jong;Kim, In-Hyuk;Son, Young-Ik 1405 The output voltage of a DC/DC power converter is likely to be distorted if variable loads exist at the output terminal. This paper presents a new disturbance observer (DOB) approach to maintain robust regulation of the output voltage of a boost-type DC/DC converter. Unlike the buck-type converter case, the regulation problem for the boost converter is complicated by the fact that, with respect to the output voltage to be regulated, the system is non-minimum phase. Owing to the non-minimum phase property, the classical DOB approach has not been applied to the boost converter. Motivated by a recent result on the application of the DOB to non-minimum phase systems, an output feedback control law is proposed using a parallel feedforward compensator. Simulation results using Simulink SimPowerSystems demonstrate the performance of the proposed controller against load variation.

Model-free $H_{\infty}$ Control of Linear Discrete-time Systems using Q-learning and LMI Based on I/O Data Kim, Jin-Hoon;Lewis, F.L. 1411 In this paper, we consider the design of $H_{\infty}$ control for linear discrete-time systems having no mathematical model. The basic approach is to use Q-learning, a reinforcement learning method based on the actor-critic structure. The model-free control design uses not the mathematical model of the system but information on its states and inputs. As a result, the derived iterative algorithm is expressed as linear matrix inequalities (LMIs) in data measured from the system states and inputs. It is shown that, for a sufficiently rich disturbance, this algorithm converges to the standard $H_{\infty}$ control solution obtained using the exact system model. A simple numerical example is given to show the usefulness of our result for practical applications. 
A Reconfigurable Image Processing SoC Based on LEON 2 Core Lee, Bong-Kyu 1418 This paper describes the design and implementation of a System-on-a-Chip (SoC) for image processing applications for use in wearable/mobile products. The target SoC consists of a LEON 2 core, AMBA/APB bus systems, and custom-designed controllers. A new FPGA-based prototyping platform is implemented and used for the design and verification of the target SoC. To ensure that the implemented SoC satisfies the required performance, an image processing application is executed on it.

Efficient Implementing of DNA Computing-inspired Pattern Classifier Using GPU Choi, Sun-Wook;Lee, Chong-Ho 1424 DNA computing-inspired pattern classification based on the hypernetwork model is a novel approach to pattern classification problems. The hypernetwork model has been shown to be a powerful tool for multi-class data analysis. However, the ordinary hypernetwork model has limitations, such as operating only sequentially. In this paper, we propose an efficient GPU implementation of a DNA computing-inspired pattern classifier. For performance evaluation, we show simulation results for multi-class pattern classification on hand-written digit data, DNA microarray data, and 8-category scene data, and we also compare the operation time of the proposed classifier in each operating environment (CPU and GPU). Experimental results show diagnosis performance competitive with other conventional machine learning algorithms. We confirm that the proposed DNA computing-inspired pattern classifier, implemented on a GPU using the CUDA platform, is suitable for multi-class data classification, and that its operating speed is fast enough for point-of-care diagnostic purposes, real-time scene categorization, and hand-written digit classification. 
A Music Recommendation System for a Driver in Vehicle Choi, Goon-Ho;Kim, Yoon-Sang 1435 This paper proposes a music recommendation system for a driver in a vehicle. The proposed system selects and plays music for the driver in real time by inferring his or her preference from physical, environmental, and personal information. Pulse data as physical information, age and biorhythm as personal information, and time as environmental information are used to infer a driver's preference and thus recommend music. Experimental results showed that the proposed system could provide better driver satisfaction with the recommended music compared to the conventional approach. A Wide-Window Superscalar Microprocessor Profiling Performance Model Using Multiple Branch Prediction Lee, Jong-Bok 1443 This paper presents a profiling model of a wide-window superscalar microprocessor using multiple branch prediction. The key idea is to apply a statistical profiling technique to a superscalar microprocessor with a wide instruction window and a multiple branch predictor. The statistical profiling data are used to obtain a synthetic instruction trace, and the consecutive multiple branch prediction rates are utilized for running trace-driven simulation on the synthesized instruction trace. We describe our design and evaluate it with the SPEC 2000 integer benchmarks. Our performance model achieves an accuracy of 8.5% on average. A CCD Camera Lens Degradation Caused by High Dose-Rate Gamma Irradiation Cho, Jai-Wan;Lee, Joon-Koo;Hur, Seop;Koo, In-Soo;Hong, Seok-Boong 1450 Assuming that an IPTV camera system is to be used as an ad-hoc sensor for the surveillance and diagnostics of safety-critical equipment installed in the in-containment building of a nuclear power plant, a major problem is the presence of high dose-rate gamma irradiation fields inside the building.
To use an IPTV camera in the intense gamma radiation environment of the in-containment building, the radiation-sensitive devices, including the CCD imaging sensor, FPGA, ASIC and microprocessors, must be properly shielded from high dose-rate gamma radiation using a high-density material such as lead or tungsten. But the passive elements such as mirrors, lenses and windows, which are placed in the optical path of the CCD imaging sensor, are exposed to a high dose-rate gamma-ray source directly. So the gamma-ray irradiation characteristics of these passive elements need to be tested. A CCD camera lens, made of glass, has been gamma irradiated at a dose rate of 4.2 kGy/h for an hour, up to a total dose of 4 kGy. Radiation-induced color centers in the glass lens are observed, and the degradation of the gamma-irradiated lens is explained using a color component analysis.
Duke Mathematical Journal, Volume 164, Number 14 (2015), 2643-2722. Serrin's overdetermined problem and constant mean curvature surfaces Manuel Del Pino, Frank Pacard, and Juncheng Wei For all $N\ge 9$, we find smooth entire epigraphs in $\mathbb{R}^{N}$, namely, smooth domains of the form $\Omega:=\{x\in \mathbb{R}^{N} \mid x_{N}>F(x_{1},\ldots,x_{N-1})\}$, which are not half-spaces and in which a problem of the form $\Delta u+f(u)=0$ in $\Omega$ has a positive, bounded solution with $0$ Dirichlet boundary data and constant Neumann boundary data on $\partial\Omega$. This answers negatively, for large dimensions, a question by Berestycki, Caffarelli, and Nirenberg. In 1971, Serrin proved that a bounded domain where such an overdetermined problem is solvable must be a ball, in analogy to a famous result by Alexandrov stating that an embedded compact surface with constant mean curvature (CMC) in Euclidean space must be a sphere. In lower dimensions we succeed in providing examples of domains whose boundary is close to large dilations of a given CMC surface and in which Serrin's overdetermined problem is solvable. Duke Math. J., Volume 164, Number 14 (2015), 2643-2722. Revised: 9 November 2014. First available in Project Euclid: 26 October 2015. https://projecteuclid.org/euclid.dmj/1445865570 Primary: 35J25: Boundary value problems for second-order elliptic equations. Secondary: 35J67: Boundary values of solutions to elliptic equations. Keywords: overdetermined elliptic equation; constant mean curvature surface; entire minimal graph. Del Pino, Manuel; Pacard, Frank; Wei, Juncheng.
Serrin's overdetermined problem and constant mean curvature surfaces. Duke Math. J. 164 (2015), no. 14, 2643-2722. doi:10.1215/00127094-3146710. https://projecteuclid.org/euclid.dmj/1445865570
[1] A. D. Alexandrov, Uniqueness theorems for surfaces in the large, I (in Russian), Vestnik Leningrad Univ. 11 (1956), 5-17.
[2] L. Ambrosio and X. Cabré, Entire solutions of semilinear elliptic equations in $\mathbb{R}^{3}$ and a conjecture of De Giorgi, J. Amer. Math. Soc. 13 (2000), 725-739.
[3] J. L. Barbosa, M. do Carmo, and J. Eschenburg, Stability of hypersurfaces of constant mean curvature in Riemannian manifolds, Math. Z. 197 (1988), 123-138. doi:10.1007/BF01161634
[4] H. Berestycki, L. A. Caffarelli, and L. Nirenberg, Monotonicity for elliptic equations in unbounded Lipschitz domains, Comm. Pure Appl. Math. 50 (1997), 1089-1111. doi:10.1002/(SICI)1097-0312(199711)50:11<1089::AID-CPA2>3.0.CO;2-6
[5] E. Bombieri, E. De Giorgi, and E. Giusti, Minimal cones and the Bernstein problem, Invent. Math. 7 (1969), 243-268.
[6] C. J. Costa, Example of a complete minimal immersion in $\mathbb{R}^{3}$ of genus one and three embedded ends, Bol. Soc. Brasil. Mat. 15 (1984), 47-54.
[7] E. De Giorgi, "Convergence problems for functionals and operators" in Proceedings of the International Meeting on Recent Methods in Nonlinear Analysis (Rome, 1978), Pitagora, Bologna, 1979, 131-188.
[8] M. del Pino, M. Kowalczyk, and J. Wei, On De Giorgi's conjecture in dimension $N\ge9$, Ann. of Math. (2) 174 (2011), 1485-1569. doi:10.4007/annals.2011.174.3.3
[9] M. del Pino, M. Kowalczyk, and J. Wei, Entire solutions of the Allen-Cahn equation and complete embedded minimal surfaces of finite total curvature in $\mathbb{R}^{3}$, J. Differential Geom. 93 (2013), 67-131.
[10] A. Farina, L. Mari, and E. Valdinoci, Splitting theorems, symmetry results and overdetermined problems for Riemannian manifolds, Comm. Partial Differential Equations 38 (2013), 1818-1862. doi:10.1080/03605302.2013.795969
[11] A. Farina and E. Valdinoci, Flattening results for elliptic PDEs in unbounded domains with applications to overdetermined problems, Arch. Ration. Mech. Anal. 195 (2010), 1025-1058.
[12] A. Farina and E. Valdinoci, 1D symmetry for solutions of semilinear and quasilinear elliptic equations, Trans. Amer. Math. Soc. 363 (2011), no. 2, 579-609. doi:10.1090/S0002-9947-2010-05021-4
[13] N. Ghoussoub and C. Gui, On a conjecture of De Giorgi and some related problems, Math. Ann. 311 (1998), 481-491.
[14] B. Gidas, W. M. Ni, and L. Nirenberg, Symmetry and related properties via the maximum principle, Comm. Math. Phys. 68 (1979), 209-243.
[15] L. Hauswirth, F. Hélein, and F. Pacard, On an overdetermined elliptic problem, Pacific J. Math. 250 (2011), 319-334. doi:10.2140/pjm.2011.250.319
[16] D. Hoffman and W. H. Meeks III, Embedded minimal surfaces of finite topology, Ann. of Math. (2) 131 (1990), 1-34. doi:10.2307/1971506
[17] R. Mazzeo and F. Pacard, Constant mean curvature surfaces with Delaunay ends, Comm. Anal. Geom. 9 (2001), 169-237.
[18] F. Morabito, Index and nullity of the Gauss map of the Costa-Hoffman-Meeks surfaces, Indiana Univ. Math. J. 58 (2009), 677-707. doi:10.1512/iumj.2009.58.3476
[19] S. Nayatani, Morse index and Gauss maps of complete minimal surfaces in Euclidean $3$-space, Comment. Math. Helv. 68 (1993), 511-537.
[20] F. Pacard and M. Ritoré, From constant mean curvature hypersurfaces to the gradient theory of phase transitions, J. Differential Geom. 64 (2003), 359-423.
[21] A. Ros and P. Sicbaldi, Geometry and topology of some overdetermined elliptic problems, J. Differential Equations 255 (2013), 951-977. doi:10.1016/j.jde.2013.04.027
[22] O. Savin, Regularity of flat level sets in phase transitions, Ann. of Math. (2) 169 (2009), 41-78. doi:10.4007/annals.2009.169.41
[23] F. Schlenk and P. Sicbaldi, Bifurcating extremal domains for the first eigenvalue of the Laplacian, Adv. Math. 229 (2012), 602-632.
[24] J. Serrin, A symmetry problem in potential theory, Arch. Ration. Mech. Anal. 43 (1971), 304-318.
[25] P. Sicbaldi, New extremal domains for the first eigenvalue of the Laplacian in flat tori, Calc. Var. Partial Differential Equations 37 (2010), 329-344. doi:10.1007/s00526-009-0264-z
[26] J. Simons, Minimal varieties in Riemannian manifolds, Ann. of Math. (2) 88 (1968), 62-105.
[27] M. Traizet, Classification of the solutions to an overdetermined elliptic problem in the plane, Geom. Funct. Anal. 24 (2014), 690-720.
[28] B. White, The space of minimal submanifolds for varying Riemannian metrics, Indiana Univ. Math. J. 40 (1991), 161-200. doi:10.1512/iumj.1991.40.40008
Proceedings of the 29th International Conference on Genome Informatics (GIW 2018): bioinformatics DWNN-RLS: regularized least squares method for predicting circRNA-disease associations Cheng Yan1,2, Jianxin Wang1 & Fang-Xiang Wu3 BMC Bioinformatics volume 19, Article number: 520 (2018) Much evidence has demonstrated that circRNAs (circular RNAs) play important roles in controlling gene expression in human, mouse and nematode. More importantly, circRNAs are also involved in many diseases through the fine tuning of post-transcriptional gene expression by sequestering miRNAs associated with diseases. Therefore, identifying circRNA-disease associations is very appealing, yet challenging, for comprehensively understanding the mechanisms, treatment and diagnosis of diseases. Because of the complex mechanisms between circRNAs and diseases, wet-lab experiments for discovering novel circRNA-disease associations are expensive and time-consuming. Therefore, there is a dire need to employ computational methods to discover novel circRNA-disease associations. In this study, we develop a method (DWNN-RLS) to predict circRNA-disease associations based on regularized least squares with a Kronecker product kernel. The similarity of circRNAs is computed from the Gaussian Interaction Profile (GIP) kernel based on known circRNA-disease associations. In addition, the similarity of diseases is integrated as the mean of the GIP similarity and the semantic similarity, which is computed from the directed acyclic graph (DAG) representation of diseases. The kernels of circRNA-disease pairs are constructed from the Kronecker product of the kernels of circRNAs and diseases. The DWNN (decreasing weight k-nearest neighbor) method is adopted to calculate the initial relational scores for new circRNAs and diseases. The Kronecker product kernel based regularized least squares approach is then used to predict new circRNA-disease associations.
We adopt 5-fold cross validation (5CV), 10-fold cross validation (10CV) and leave-one-out cross validation (LOOCV) to assess the prediction performance of our method, and compare it with six competing methods (RLS-avg, RLS-Kron, NetLapRLS, KATZ, NBI, WP). Conclusion The experimental results show that DWNN-RLS reaches AUC values of 0.8854, 0.9205 and 0.9701 in 5CV, 10CV and LOOCV, respectively, which illustrates that DWNN-RLS is superior to the competing methods RLS-avg, RLS-Kron, NetLapRLS, KATZ, NBI and WP. In addition, case studies also show that DWNN-RLS is an effective method for predicting new circRNA-disease associations. Circular RNAs (circRNAs) are a class of endogenous noncoding RNAs with distinct properties and diverse cellular functions; unlike linear RNAs, whose 5' and 3' termini reflect the start and stop of RNA polymerase on the DNA template, circRNAs are generated by back splicing (3'-5') or from lariat introns [1-4]. CircRNAs are not easily degraded by exoribonucleases because they lack free ends [5, 6]. As the formation of a circRNA is usually considered a rare event in cells, it was suggested that they might be errors of the normal splicing process [4, 7]. Therefore, despite their existence in both unicellular and multicellular organisms, they were previously even disregarded as transcriptional noise or artifacts [8]. Nevertheless, with the advances of high-throughput deep sequencing and functional genomics, knowledge of circRNAs has recently grown substantially [9, 10]. To date, circRNAs have been found in various tissues and cell lines of plants, animals and other organisms [4, 11, 12]. Some circRNAs can be translated into protein in certain tissues, in a splicing-dependent, cap-independent manner, or under other specific conditions [13]. Furthermore, circRNAs are expected to have other functions independent of their host genes because they have a much longer half-life than other, linear RNA transcripts [10].
Many circRNAs can regulate gene expression because they have strong potential to act as miRNA sponges or decoys [14]. In addition, some circRNAs can also function as protein sponges or decoys; the best example is that the protein MBL is prevented from binding to other targets when tethered to a circRNA [15]. The circRNA circFoxo3 can also act as a protein scaffold, which binds to sites for mouse MDM2 and p53 [16]. While the above functions of circRNAs rely on their location in the cytoplasm, some circRNAs such as exon-intron circRNAs are retained in the nucleus, where they may promote transcription [17]. Through the understanding of the functions of circRNAs, much evidence has shown that circRNAs play an important role in the occurrence of complex human diseases, such as cancer [18]. CircRNA ciRS-7 has significant implications for diseases through efficiently regulating the activity of miRNA miR-7 [19]. Likewise, by sponging miR-7, miR-17 and miR-214, cir-ITCH can increase the level of ITCH, which further inhibits the Wnt pathway that is frequently aberrant in cancers [20, 21]. SRY, which is a sponge of miR-138 and can strongly suppress its level, can affect the proliferation, migration and invasion of cholangiocarcinoma cells [22, 23]. The circRNA-MYLK level is elevated and correlated with BC (bladder carcinoma) progression, and circRNA-MYLK plays an oncogenic role in BC in vitro and in vivo [24]. Circ-Foxo3 was minimally expressed in patient tumor samples and in a panel of cancer cells, and its expression was found to be significantly increased during cancer cell apoptosis [16, 25]. Circular RNA MTO1 can suppress hepatocellular carcinoma progression by acting as a sponge of miR-9 [26]. In addition, the aberrant expression of circCCDC66 is also associated with late-stage diagnosis and metastases [27]. In recent years, some databases about circRNAs have been developed to further study the functional mechanisms of circRNAs.
CircBase is the first database about circRNAs; it merges and unifies data sets of circRNAs and provides the interface to access, download, and browse the evidence supporting their expression within the genomic context [28]. CircRNADb is a comprehensively annotated human circular RNA database, which contains 32,914 human exonic circRNAs from diversified sources and provides the genomic information, exon splicing, genome sequence, internal ribosome entry site (IRES), open reading frame (ORF) and references of these circRNAs [29]. PlantcircBase is a database of plant circRNAs, which also provides other functions such as visualization of the structures of circRNAs based on their genomic positions [12]. Likewise, PlantCircNet is a database of plant circRNAs whose main feature is to provide visualized plant circRNA-miRNA-mRNA regulatory networks; it can also identify metabolic effects of circRNAs [30]. ExoRBase is a web-accessible database which provides circRNA, lncRNA and mRNA information from RNA-seq data analyses of human blood exosomes [31]. CircNet provides tissue-specific circRNA expression profiles and circRNA-miRNA-gene regulatory networks by utilizing sequencing datasets to systematically identify the expression of circRNAs in RNA-seq samples [32]. TSTD also provides tissue-specific circRNAs and further characterizes the functions of these circRNAs [33]. The cancer somatic mutations that alter miRNA targeting and functioning are provided by the SomamiR 2.0 database, which also collects the associations between miRNAs and other competing endogenous RNAs such as mRNAs, circRNAs and lncRNAs [34]. The CSCD is a cancer-specific circRNA database which identifies cancer-specific circRNAs by analyzing RNA-seq samples and further predicts the miRNA response element sites and RNA binding protein sites of each circRNA [35].
Circ2Traits is a circRNA-disease association database, which is constructed from circRNA-miRNA associations, miRNA-disease associations and disease-SNP associations [18]. To our knowledge, CircR2Disease is the first manually curated database of circRNA-disease associations, built by reviewing the existing literature, and it provides an important foundation for studying the associations between circRNAs and diseases [36]. In general, significant progress has been made in understanding the features and functions of circRNAs. In addition, some databases about circRNAs have also been constructed. However, current studies of circRNA-disease associations mainly focus on biomedical experiments that are notoriously expensive and time-consuming. Therefore, there is a very urgent need to predict circRNA-disease associations by computational methods. To our knowledge, the development of computational approaches has been very limited because the databases of circRNA-disease associations are incomplete. However, CircR2Disease provides the chance to effectively predict novel circRNA-disease associations through developing computational methods. In this study, we develop a novel method (called DWNN-RLS) to predict new circRNA-disease associations. Firstly, DWNN-RLS computes the Gaussian interaction profile (GIP) kernel similarities of circRNAs and diseases based on the known circRNA-disease associations. By considering their directed acyclic graph (DAG) representation, the semantic similarity of diseases is also calculated. We further obtain the final similarity of diseases as the mean of the GIP similarity and the semantic similarity. Then the association possibility scores of circRNA-disease pairs are predicted by the Kronecker product kernel based regularized least squares approach. The kernels of circRNA-disease pairs are calculated by the Kronecker product of the kernels of circRNAs and diseases.
Furthermore, the decreasing weight k-nearest neighbor (DWNN) method is used to calculate the initial relational scores of new circRNAs and new diseases. In order to assess the prediction performance of DWNN-RLS and compare it with other competing methods, we conduct 5-fold cross validation (5CV), 10-fold cross validation (10CV) and leave-one-out cross validation (LOOCV). The experimental results demonstrate that DWNN-RLS outperforms six other competing methods (RLS-avg, RLS-Kron, NetLapRLS, KATZ, NBI, WP) in terms of AUC (area under the ROC curve) values. Specifically, the AUC values of DWNN-RLS in 5CV, 10CV and LOOCV reach 0.8854, 0.9205 and 0.9701, respectively, which are superior to the second best results (KATZ: 0.8224 and 0.8343, RLS-avg: 0.9169). Furthermore, the prediction ability of DWNN-RLS is also illustrated by the case studies. In this study, we download the known circRNA-disease association data from the CircR2Disease database (http://bioinfo.snnu.edu.cn/CircR2Disease/). These associations were curated from the existing literature prior to 31 March 2018. After removing duplicated data, we obtain the benchmark dataset that includes 725 circRNA-disease associations, 676 circRNAs and 100 diseases. In addition, the MeSH database [37] (https://www.nlm.nih.gov/bsd/disted/meshtutorial/themeshdatabase/) is used to compute the semantic similarity of diseases. Similarity of circRNAs Following the successful application of the GIP kernel similarity in other related areas [38-42], we also use it to calculate the similarities of circRNAs. The GIP kernel is computed from the known circRNA-disease associations. Let \(C=\left \{c_{1},c_{2},...,c_{N_{c}}\right \}\) be the set of Nc circRNAs and \(D=\left \{d_{1},d_{2},...,d_{N_{d}}\right \}\) be the set of Nd diseases.
Let matrix \(\phantom {\dot {i}\!}Y \in R^{N_{c} \times N_{d}}\) represent the known circRNA-disease associations, in which the value of yij is 1 if there is a known association between circRNA ci and disease dj, and 0 otherwise. Then the GIP similarity of circRNA ci and circRNA cj can be computed as follows: $$\begin{array}{@{}rcl@{}} S_{c}\left(c_{i},c_{j}\right)= G_{c}\left(c_{i},c_{j}\right) = exp\left(-\gamma_{c} {||y_{c_{i}}-y_{c_{j}}||}^{2}\right) \end{array} $$ $$ \gamma_{c} = 1 /\left(\frac{1}{N_{c}}\sum\limits_{i=1}^{N_{c}}{||y_{c_{i}}||}^{2}\right), $$ where \(y_{c_{i}}=\left \{y_{i1},y_{i2},...,y_{{i}{N_{d}}}\right \}\) and \(y_{c_{j}}=\left \{y_{j1},y_{j2},...,y_{{j}{N_{d}}}\right \}\) are the association profiles of circRNA ci and circRNA cj, respectively. Since the GIP kernel is a decaying function of the distance between the profile vectors, it has the form of a bell-shaped curve. Because a larger value of γc yields a narrower bell while a smaller value of γc yields a wider bell, the parameter γc can be used to regulate the bandwidth of the kernel. In this study, the parameter γc is computed as the reciprocal of the average number of associations per circRNA. Similarity of diseases Firstly, we compute the GIP similarity of disease di and disease dj as follows: $$\begin{array}{@{}rcl@{}} G_{d}\left(d_{i},d_{j}\right) = exp\left(-\gamma_{d} {||y_{d_{i}}-y_{d_{j}}||}^{2}\right) \end{array} $$ $$ \gamma_{d} = 1 /\left(\frac{1}{N_{d}}\sum\limits_{i=1}^{N_{d}}{||y_{d_{i}}||}^{2}\right), $$ where \(y_{d_{i}}=\left \{y_{1i},y_{2i},...,y_{{N_{c}}{i}}\right \}^{T}\) is the association profile of disease di while \(y_{d_{j}}=\left \{y_{1j},y_{2j},...,y_{{N_{c}}{j}}\right \}^{T}\) is the association profile of disease dj. The parameter γd likewise regulates the bandwidth of the kernel. Secondly, we use the MeSH descriptions of diseases to compute the semantic similarity.
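The GIP kernel computation above (which has the same form for circRNAs and diseases) can be sketched as follows; the function name, toy matrix and variable names are illustrative, not from the paper's implementation:

```python
import numpy as np

def gip_kernel(Y, axis=0):
    """Gaussian Interaction Profile kernel from a binary association matrix Y.

    axis=0 -> kernel over rows (circRNAs), axis=1 -> kernel over columns (diseases).
    """
    P = Y if axis == 0 else Y.T           # interaction profiles as rows
    norms = np.sum(P ** 2, axis=1)        # ||y_i||^2 for every profile
    gamma = 1.0 / norms.mean()            # bandwidth: reciprocal of the mean profile norm
    # squared Euclidean distance between every pair of profiles
    d2 = norms[:, None] + norms[None, :] - 2.0 * P @ P.T
    return np.exp(-gamma * d2)

# toy association matrix: 4 circRNAs x 3 diseases
Y = np.array([[1, 0, 0],
              [1, 1, 0],
              [0, 0, 1],
              [0, 1, 1]], dtype=float)
Sc = gip_kernel(Y, axis=0)   # circRNA GIP similarity, 4 x 4
Gd = gip_kernel(Y, axis=1)   # disease GIP similarity, 3 x 3
```

For binary profiles, \(||y_{c_i}||^2\) is simply the number of known associations of circRNA ci, which is why γc equals the reciprocal of the average number of associations per circRNA.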
Specifically, disease A can be represented by a DAG, $DAG_{A}=(T_{A},E_{A})$, in the MeSH database. Set TA includes the ancestor disease nodes of A and A itself, while set EA includes the direct edges between the disease nodes in TA. The similarity of diseases A and B can be calculated as follows: $$\begin{array}{@{}rcl@{}} {D_{semsim}(A,B)} = \frac{\sum\limits_{t \in {T_{A}}\cap{T_{B}}}\left({SV}_{A}(t)+{SV}_{B}(t)\right)}{Sem(A)+Sem(B)}, \end{array} $$ where SVA(t) (SVB(t)) is the semantic value, with respect to disease A (B), of node t ranging over the common ancestors of diseases A and B. In addition, Sem(A) and Sem(B) are the semantic values of diseases A and B, respectively. For disease A, Sem(A) and SVA(t) can be calculated as follows: $$\begin{array}{@{}rcl@{}} {Sem(A)} = {\sum\limits_{t \in {T_{A}}}{SV}_{A}(t)}, \end{array} $$ $$\begin{array}{@{}rcl@{}} {{SV}_{A}(t)} = \left\{ \begin{aligned} 1&, \ t=A \\ \Delta^{w}&, \ t=\text{the smallest-}w\text{-layer ancestor node of } A \\ \end{aligned} \right. \end{array} $$ where Δ is the layer contribution factor between a disease node and its direct ancestor nodes in the DAG. The value of Δ is set to 0.5 in this study [37]. After computing the GIP similarity and the semantic similarity of diseases, we integrate them into the final similarity of diseases as their mean: $$\begin{array}{@{}rcl@{}} S_{d} = \frac{G_{d}+D_{semsim}}{2}, \end{array} $$ DWNN for new circRNAs and diseases The good performance of a prediction method largely depends on the quality of the known circRNA-disease associations. In fact, new circRNAs (or new diseases) have no associations with diseases (or circRNAs). In this study, we use DWNN to compute the initial association scores based on the similarities of circRNAs and diseases.
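The DAG-based semantic similarity above can be sketched as follows, using a hypothetical MeSH-like hierarchy (the `parents` dictionary and all disease names are invented for illustration; a breadth-first search gives the smallest layer w to each ancestor):

```python
from collections import deque

DELTA = 0.5  # layer contribution factor, as in the paper

def semantic_values(disease, parents):
    """SV_A(t) = DELTA**w, where w is the smallest number of edges from
    `disease` up to ancestor t (w = 0 for the disease itself)."""
    sv = {disease: 1.0}
    queue = deque([(disease, 0)])
    while queue:
        node, w = queue.popleft()
        for p in parents.get(node, []):
            if p not in sv:                  # BFS: first visit is the smallest layer
                sv[p] = DELTA ** (w + 1)
                queue.append((p, w + 1))
    return sv

def semantic_sim(a, b, parents):
    """D_semsim(A, B): shared semantic value over total semantic value."""
    sva, svb = semantic_values(a, parents), semantic_values(b, parents)
    common = set(sva) & set(svb)
    num = sum(sva[t] + svb[t] for t in common)
    return num / (sum(sva.values()) + sum(svb.values()))

# hypothetical hierarchy: child -> list of parents
parents = {"breast cancer": ["cancer"], "lung cancer": ["cancer"],
           "cancer": ["disease"]}
sim = semantic_sim("breast cancer", "lung cancer", parents)
```

A disease is always maximally similar to itself (the numerator then equals the denominator), and the contribution of shared ancestors decays by Δ per layer.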
Specifically, the initial association score between a new circRNA ci and disease dj can be calculated as follows: $$ y\left(c_{i},d_{j}\right) = \frac{\sum G{^{il}_{c}}y_{lj}}{\sum G{^{il}_{c}}}, c_{l} \in N{(c_{i})} $$ where N(ci) is the set of \(k_{c_{i}}\) nearest neighbors of the new circRNA ci. The parameter \(k_{c_{i}}\) is calculated as follows: $$\begin{array}{@{}rcl@{}} {k_{c_{i}}} = \left\{ \begin{aligned} max(k)&, \ if \ \frac{1-simset(c_{i})_{l}}{l}\le \epsilon^{l}, \ 1\le l \le k \\ 0&, \ otherwise \\ \end{aligned} \right. \end{array} $$ where simset(ci)l is the l-th value of the vector of similarities between circRNA ci and the other circRNAs, ranked from high to low. Furthermore, the parameter ε is used to control the range of εl, which is used to select the k nearest neighbors for each new circRNA and disease. In this study, the value of ε is set to 1, so the value of εl is 1 and all neighbors are used to calculate the initial association scores. Similarly, we compute the initial association score between circRNA ci and a new disease dj as follows: $$ y\left(c_{i},d_{j}\right) = \frac{\sum G{^{jl}_{d}}y_{il}}{\sum G{^{jl}_{d}}}, d_{l} \in N{(d_{j})} $$ where N(dj) is the set of \(k_{d_{j}}\) nearest neighbors of the new disease dj. The parameter \(k_{d_{j}}\) is calculated as follows: $$\begin{array}{@{}rcl@{}} {k_{d_{j}}} = \left\{ \begin{aligned} max(k)&, \ if \ \frac{1-simset(d_{j})_{l}}{l}\le \epsilon^{l}, \ 1\le l \le k \\ 0&, \ otherwise \\ \end{aligned} \right. \end{array} $$ where simset(dj)l is the l-th value of the vector of similarities between disease dj and the other diseases, ranked from high to low. The parameter ε is again used to control the range for selecting neighbors. Kronecker product kernel based regularized least squares (RLS-Kron) In this study, we use the RLS-Kron method to predict new circRNA-disease associations [38, 39, 43].
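The DWNN initialization described above, for the case ε = 1 in which every other circRNA acts as a neighbor, can be sketched as follows (the toy similarity matrix and names are illustrative):

```python
import numpy as np

def dwnn_init_row(Sc, Y, i):
    """Initial association profile for a new circRNA i (an all-zero row of Y),
    as the similarity-weighted average of its neighbours' profiles
    (epsilon = 1, i.e. all other circRNAs are used as neighbours)."""
    w = Sc[i].copy()
    w[i] = 0.0                     # the new circRNA itself is not a neighbour
    return w @ Y / w.sum()

# toy circRNA similarity matrix and association matrix
Sc = np.array([[1.0, 0.8, 0.2],
               [0.8, 1.0, 0.4],
               [0.2, 0.4, 1.0]])
Y = np.array([[0.0, 0.0],          # circRNA 0 is new: no known associations
              [1.0, 0.0],
              [0.0, 1.0]])
y0 = dwnn_init_row(Sc, Y, 0)       # weighted mix of the two known profiles
```

The same computation, applied column-wise with the disease similarity matrix, gives the initial scores for a new disease.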
Based on the kernel K, the predicted circRNA-disease association matrix has a simple closed-form solution as follows: $$ vec\left({\hat Y}^{T}\right) = K{\left(K+\sigma I\right)}^{-1}vec\left(Y^{T}\right) $$ in which the parameter σ is a regularization parameter, set to 0.2 in this study. Note that Kron-RLS has no prediction ability when σ is set to 0. The kernel K is calculated from the Kronecker product Kc⊗Kd of the circRNA kernel and the disease kernel, which is defined as follows: $$ K\left(\left(c_{i},d_{j}\right),\left(c_{u},d_{v}\right)\right) = K_{c}\left(c_{i},c_{u}\right)K_{d}\left(d_{j},d_{v}\right) $$ where the matrices Kc and Kd are the similarity matrices of circRNAs and diseases, respectively. In order to calculate the predicted matrix directly, Kron-RLS needs to compute the inverse of an NcNd×NcNd matrix. Therefore, we use an efficient method based on matrix eigenvalue decomposition. According to matrix theory, the eigenvalues (eigenvectors) of a Kronecker product are the Kronecker products of the eigenvalues (eigenvectors). Specifically, the kernel can be calculated as follows: $$ K = K_{c} \otimes K_{d} = \vee \wedge {\vee}^{T} $$ where ∧=∧c⊗∧d and ∨=∨c⊗∨d are derived from the eigenvalue decompositions of the two kernel matrices Kc and Kd. As Kc and Kd are real symmetric matrices, their eigenvalue decompositions are defined as follows: $$ K_{c}={\vee}_{c}{\wedge}_{c}{\vee}{_{c}^{T}} $$ $$ K_{d}={\vee}_{d}{\wedge}_{d}{\vee}{_{d}^{T}} $$ where ∨c and ∨d are orthogonal matrices whose columns are the eigenvectors of Kc and Kd, respectively, and ∧c and ∧d are diagonal matrices whose diagonal entries are the eigenvalues of Kc and Kd, respectively.
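The identity this relies on — the eigendecomposition of a Kronecker product is the Kronecker product of the eigendecompositions — can be checked numerically, and it lets the regularized solve work with the two small kernels instead of inverting the NcNd×NcNd matrix. A sketch with random positive definite stand-in kernels (not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_psd(n):
    """A random symmetric positive definite matrix standing in for a kernel."""
    A = rng.standard_normal((n, n))
    return A @ A.T / n + np.eye(n)

Nc, Nd, sigma = 3, 4, 0.2
Kc, Kd = random_psd(Nc), random_psd(Nd)

# eigendecompositions of the two small kernels
lc, Vc = np.linalg.eigh(Kc)
ld, Vd = np.linalg.eigh(Kd)

# eigenpairs of a Kronecker product are Kronecker products of eigenpairs
K_direct = np.kron(Kc, Kd)
K_rebuilt = np.kron(Vc, Vd) @ np.diag(np.kron(lc, ld)) @ np.kron(Vc, Vd).T
err = np.abs(K_rebuilt - K_direct).max()     # should be ~ 0

# efficient Kron-RLS solve: scale each eigen-coefficient by lam / (lam + sigma)
Y = rng.random((Nc, Nd))                     # toy circRNA x disease score matrix
L = np.outer(lc, ld)                         # lam_c[i] * lam_d[j] for every eigenpair
Y_hat = Vc @ ((L / (L + sigma)) * (Vc.T @ Y @ Vd)) @ Vd.T

# agrees with the direct closed form that inverts the (Nc*Nd) x (Nc*Nd) matrix
direct_vec = K_direct @ np.linalg.solve(K_direct + sigma * np.eye(Nc * Nd),
                                        Y.reshape(-1))
```

Only the two small eigendecompositions and a few small matrix products are needed, which is what makes the approach practical for 676 circRNAs and 100 diseases.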
Therefore, the final predicted circRNA-disease association matrix \({\hat Y}\) can be calculated as follows: $$\begin{array}{@{}rcl@{}} {\hat Y} = {\vee}_{c}{Z^{T}}{\vee}{_{d}^{T}} \end{array} $$ $$ vec(Z) =({\wedge}_{c} \otimes {\wedge}_{d})({\wedge}_{c} \otimes {\wedge}_{d}+ \sigma I)^{-1}vec\left({\vee}{_{d}^{T}}Y^{T}{\vee}{_{c}}\right) $$ In this study, we conduct 5CV, 10CV and LOOCV to evaluate the performance of DWNN-RLS for predicting new circRNA-disease associations. The AUC (area under the ROC curve) value is used as the evaluation metric. We perform 10 repetitions of 10CV and 5CV. That is, under 10CV, the known circRNA-disease association data are divided into 10 folds, and each fold is taken in turn as the test set with the rest as the training set. Similarly, under 5CV the data set is randomly divided into 5 folds and each fold is taken in turn as the test set with the rest as the training set. In LOOCV, each known circRNA-disease association is in turn chosen as the test set while the remaining known circRNA-disease associations form the training set. Larger AUC values indicate better prediction ability, while an AUC value less than or equal to 0.5 indicates that a method has no prediction ability. Comparison with other methods As there is no competing computational method for predicting circRNA-disease associations in the literature, to assess the performance of our method we compare DWNN-RLS against six effective methods from other relevant prediction problems. These methods include RLS-avg [38], RLS-Kron [38], NetLapRLS [44], KATZ [45, 46], NBI [47] and WP [47, 48]. We briefly review them here. RLS-avg uses the average of the output values computed from the two kernels. RLS-Kron computes the prediction scores by the Kronecker product kernel based regularized least squares approach. NetLapRLS predicts circRNA-disease associations by exploiting information on the similarities of links and nodes.
KATZ is a network-based method which considers the number and lengths of walks between network nodes in a heterogeneous network to predict associations. NBI is also a network-based method to infer new associations, which only uses circRNA-disease bipartite network topology similarity. WP and DBSI are recommendation models which directly use the similarities of circRNAs and diseases. Figure 1 shows the AUC curves of the seven prediction methods on the CircR2Disease data set in terms of 5CV. The AUC value of DWNN-RLS is the highest among the seven methods, indicating that the prediction performance of DWNN-RLS is better than that of the other methods. The AUC curves of seven methods in the 5CV Figure 2 shows the AUC curves of the seven prediction methods in terms of 10CV on the CircR2Disease dataset. The AUC value of DWNN-RLS reaches 0.9205, which is better than the other methods (RLS-avg: 0.7477, RLS-Kron: 0.8103, NetLapRLS: 0.6744, KATZ: 0.8343, NBI: 0.6648, WP: 0.6198). The AUC curves of seven methods in the 10CV Figure 3 shows the comparison between DWNN-RLS and the other six methods in terms of LOOCV on the CircR2Disease data set. We can see from Fig. 3 that the prediction performance of DWNN-RLS (0.9701) is superior to the other methods in terms of AUC values (RLS-avg: 0.9169, RLS-Kron: 0.9088, NetLapRLS: 0.6905, KATZ: 0.8432, NBI: 0.699, WP: 0.6362). The AUC curves of seven methods in the LOOCV Note that the advantage in prediction performance is more obvious in 10CV and LOOCV than in 5CV, indicating that DWNN-RLS can achieve good results given more known circRNA-disease associations. In addition, the semantic similarity of diseases can improve the prediction performance of DWNN-RLS. When only the GIP similarity is used, the AUCs of DWNN-RLS are 0.8368, 0.8819 and 0.9423 in 5CV, 10CV and LOOCV, respectively. When the GIP similarity is combined with the disease semantic similarity, DWNN-RLS obtains increased AUCs of 0.8854, 0.9205 and 0.9701 in 5CV, 10CV and LOOCV.
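The AUC metric used throughout these comparisons can be computed directly from the rank-sum (Mann-Whitney) statistic; a small self-contained sketch, illustrative rather than the authors' evaluation code:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum statistic, with tied scores
    receiving their average rank."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    order = np.argsort(scores)
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    for s in np.unique(scores):          # average ranks over ties
        mask = scores == s
        ranks[mask] = ranks[mask].mean()
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

labels = np.array([1, 1, 0, 0, 0])       # 1 = known association (test fold)
perfect = auc([0.9, 0.8, 0.3, 0.2, 0.1], labels)   # positives ranked on top
```

In the cross-validation protocol, the held-out known associations are the positives and the remaining unlabeled circRNA-disease pairs are treated as negatives when ranking the predicted scores.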
Compared with the RLS-Kron method alone, the DWNN step also improves the prediction performance. Compared with the KATZ, NBI and WP methods, DWNN-RLS is a machine learning model with an explicit objective function and solution process, which we believe is beneficial for obtaining better prediction performance. Parameter analysis for ε and σ To further understand the robustness of the DWNN-RLS method, we analyze the influence of the parameters ε and σ on the prediction performance in 10CV. The parameter ε controls the range for selecting the k nearest neighbors of circRNAs and diseases. The parameter σ is the regularization parameter of the DWNN-RLS method. The value of ε is set to 1.0 when analyzing the parameter σ, and the default value of σ is set to 0.2 when analyzing the parameter ε. With σ fixed at 0.2, Table 1 shows the prediction performance of the DWNN-RLS method in 10CV when ε ranges from 0.1 to 1.0 in increments of 0.1. The prediction performance of DWNN-RLS is best when ε is set to 1.0, indicating that all neighbors of circRNAs and diseases are involved in calculating their initial association scores. Table 1 The 10CV prediction performance for values of ε ranging from 0.1 to 1.0 in increments of 0.1; the best result is in bold face Furthermore, Table 2 shows the prediction performance of DWNN-RLS for different values of σ when ε is set to 1.0. We can see from Table 2 that DWNN-RLS obtains the best prediction performance when σ is set to 0.2. Therefore, in this study, we set the default value of σ to 0.2. Table 2 The 10CV prediction performance for values of σ ranging from 0.1 to 1.0 in increments of 0.1; the best result is in bold face After confirming the prediction performance and robustness of the DWNN-RLS method in 10CV, 5CV and LOOCV, we further analyze its ability to discover new circRNA-disease associations.
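For context, the DWNN step that ε tunes can be sketched as follows. The exact weighting scheme is not restated in this section, so the sketch assumes a common decreasing-weight nearest-neighbour form in which the i-th nearest neighbour is weighted by \(\varepsilon^{i-1}\) times its similarity; under that assumption, ε = 1.0 keeps every neighbour at full similarity weight, matching the finding above that all neighbours are involved. All names here are illustrative:

```python
import numpy as np

def dwnn_profile(sim_row, Y_known, eps=1.0, k=None):
    """Initial association profile for a node with no known associations.

    sim_row : similarities to the n nodes with known profiles, shape (n,)
    Y_known : (n, m) association profiles of those nodes
    eps     : decay in (0, 1]; the i-th nearest neighbour gets weight
              eps**(i-1) times its similarity (eps = 1.0: all neighbours)
    k       : optionally restrict to the k nearest neighbours
    """
    order = np.argsort(sim_row)[::-1]           # neighbours, nearest first
    if k is not None:
        order = order[:k]
    w = (eps ** np.arange(len(order))) * sim_row[order]
    if w.sum() == 0:
        return np.zeros(Y_known.shape[1])
    # weighted average of the neighbours' association profiles
    return (w[:, None] * Y_known[order]).sum(axis=0) / w.sum()
```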
In predicting new circRNA-disease associations, all known circRNA-disease associations in the CircR2Disease dataset are used as the training set, and all other circRNA-disease pairs are treated as candidate associations. We apply DWNN-RLS to compute prediction scores for these candidate circRNA-disease pairs. Here, we analyze the prediction results for atherosclerotic vascular disease and breast cancer. Atherosclerotic vascular disease is responsible for the majority of cases of CVD (cardiovascular disease) in both developing and developed countries. It encompasses coronary heart disease, cerebrovascular disease, and peripheral arterial disease, and CVD is the leading cause of death and disability all over the world [49, 50]. Table 3 shows that 2 of the top 10 predicted associations are confirmed in the previous literature. Elevated cANRIL expression could lead to worse EC (endothelial cell) inflammation, exacerbating AS (atherosclerosis) [51]. cANRIL is transcribed at a locus associated with atherosclerotic cardiovascular disease on chromosome 9p21; it induces nucleolar stress and apoptosis and inhibits proliferation in smooth muscle cells and macrophages [52]. cZNF292 is also associated with atherosclerotic cardiovascular disease, stimulating angiogenesis through vascular sprouting and cell proliferation [53]. Table 3 The validation results of the top 10 predicted new circRNA-disease associations for atherosclerotic vascular disease Approximately 1 in 12 women in Western Europe and the United States develop breast cancer, which is characterized by a distinct metastatic pattern involving the regional lymph nodes, bone marrow, lung and liver [54, 55]. Table 4 shows the validation results of the top 10 new circRNA-disease associations predicted by DWNN-RLS; 3 of the top 10 predicted associations can be validated in previous studies.
CircRNAs circGFRA1 and GFRA1 act as ceRNAs in triple-negative breast cancer by regulating miR-34a [56]. When the human breast cancer cell line MDA-MB-231 is stably transfected with circ-Foxo3, ectopic expression of the Foxo3 circular RNA can suppress tumor growth, cancer cell proliferation and survival [25]. CDR1as contains more than 70 selectively conserved target sites of miR-7, which can directly downregulate oncogenes in cancers such as breast cancer [57]. Table 4 The validation results of the top 10 predicted new circRNA-disease associations for breast cancer The above case studies show that a number of prediction results have not yet been confirmed by previous literature. A possible reason is that the CircR2Disease database is still limited and new studies have not yet been published. In summary, these predicted circRNA-disease associations deserve to be studied and considered in the future. With the advances of RNA-Seq, high-throughput sequencing and other techniques, important progress has been made in understanding the characteristics and functions of circRNAs. CircRNAs may play key roles in diseases as miRNA sponges or decoys, protein sponges or decoys, and regulators of gene transcription. Therefore, systematically understanding the associations between circRNAs and diseases has become an important issue in bioinformatics research, which is beneficial to disease diagnosis and treatment. Although some circRNA databases have been established in recent years, they rarely focus on the associations between circRNAs and diseases. Computational methods for predicting circRNA-disease associations are also lacking because of these limitations. To our knowledge, CircR2Disease is the first database of circRNA-disease associations, which provides the opportunity to develop effective methods to identify novel associations between circRNAs and diseases.
The DWNN-RLS method is developed to predict new associations between circRNAs and diseases on the CircR2Disease dataset. Firstly, DWNN-RLS computes the GIP similarities of circRNAs and diseases based on the known circRNA-disease associations. Secondly, we compute the semantic similarity of diseases and obtain the final disease similarity as the mean of the GIP similarity and the semantic similarity. Finally, Kron-RLS is used to predict novel circRNA-disease associations based on these similarities. 10CV, 5CV and LOOCV are used to evaluate the prediction performance of DWNN-RLS. In addition, we use DWNN to calculate the initial association scores for new circRNAs and diseases. We also compare our method with six other methods; in terms of 10CV, 5CV and LOOCV, DWNN-RLS achieves the best prediction performance in all cases. We also show that DWNN-RLS may achieve better prediction performance with more known circRNA-disease associations. Case studies further illustrate the prediction ability of DWNN-RLS. However, there still exist some limitations in DWNN-RLS. CircRNAs can function as miRNA sponges or decoys and protein sponges or decoys, but in this study we only use the GIP similarity of circRNAs. In the future, the similarity computation of circRNAs could consider more relevant biological network information, such as circRNA-miRNA associations and sequence information. Similarly, disease functional information should also be considered [58–60]. Other recent matrix factorization methods such as NRLMF [61], SRMF [62] and DRRS [63] could be applied to predict circRNA-disease associations when we integrate more biological network information such as circRNA-miRNA associations, circRNA sequence information and disease functional information.
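As a reference for the first step of the pipeline, the GIP (Gaussian interaction profile) similarity is a Gaussian kernel over the rows of the association matrix, with the bandwidth normalised by the mean squared profile norm, following the standard formulation of van Laarhoven et al. The sketch below assumes that standard form; it is not the authors' code:

```python
import numpy as np

def gip_kernel(Y, gamma_prime=1.0):
    """Gaussian interaction profile kernel over the rows of Y.

    Y : (n, m) binary association matrix; row i is node i's interaction
        profile. K[i, j] = exp(-gamma * ||y_i - y_j||^2), where gamma is
        gamma_prime divided by the mean squared profile norm.
    """
    sq_norms = (Y ** 2).sum(axis=1)
    gamma = gamma_prime / sq_norms.mean()
    # squared Euclidean distances between all pairs of rows
    d2 = sq_norms[:, None] + sq_norms[None, :] - 2.0 * Y @ Y.T
    return np.exp(-gamma * np.maximum(d2, 0.0))
```

The same function applied to the rows of Y gives the circRNA similarity, and applied to the columns (Y transposed) gives the disease GIP similarity.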
Therefore, to further improve the prediction performance, we would develop a more effective approach to discover new circRNA-disease associations by reasonably integrating more biological network information. Nigro JM, Cho KR, Fearon ER, Kern SE, Ruppert JM, Oliner JD, Kinzler KW, Vogelstein B. Scrambled exons. Cell. 1991; 64(3):607–13. Zhang Y, Zhang X-O, Chen T, Xiang J-F, Yin Q-F, Xing Y-H, Zhu S, Yang L, Chen L-L. Circular intronic long noncoding rnas. Mol Cell. 2013; 51(6):792–806. Knupp D, Miura P. Circrna accumulation: A new hallmark of aging? Mech Ageing Dev. 2018; 173:71–9. Memczak S, Jens M, Elefsinioti A, Torti F, Krueger J, Rybak A, Maier L, Mackowiak SD, Gregersen LH, Munschauer M, et al. Circular rnas are a large class of animal rnas with regulatory potency. Nature. 2013; 495(7441):333. Jeck WR, Sorrentino JA, Wang K, Slevin MK, Burd CE, Liu J, Marzluff WF, Sharpless NE. Circular rnas are abundant, conserved, and associated with alu repeats. Rna. 2013; 19(2):141–57. Enuka Y, Lauriola M, Feldman ME, Sas-Chen A, Ulitsky I, Yarden Y. Circular rnas are long-lived and display only minimal early alterations in response to a growth factor. Nucleic Acids Res. 2015; 44(3):1370–83. Cocquerelle C, Mascrez B, Hetuin D, Bailleul B. Mis-splicing yields circular rna molecules. FASEB J. 1993; 7(1):155–60. Lasda E, Parker R. Circular rnas: diversity of form and function. Rna. 2014; 20(12):1829–42. Ye C-Y, Chen L, Liu C, Zhu Q-H, Fan L. Widespread noncoding circular rnas in plants. New Phytol. 2015; 208(1):88–95. Kristensen L, Hansen T, Venø M, Kjems J. Circular rnas in cancer: opportunities and challenges in the field. Oncogene. 2018; 37(5):555. Danan M, Schwartz S, Edelheit S, Sorek R. Transcriptome-wide discovery of circular rnas in archaea. Nucleic Acids Res. 2011; 40(7):3131–42. Chu Q, Zhang X, Zhu X, Liu C, Mao L, Ye C, Zhu Q-H, Fan L. Plantcircbase: a database for plant circular rnas. Mol Plant. 2017; 10(8):1126–8.
Legnini I, Di Timoteo G, Rossi F, Morlando M, Briganti F, Sthandier O, Fatica A, Santini T, Andronache A, Wade M, et al. Circ-znf609 is a circular rna that can be translated and functions in myogenesis. Mol Cell. 2017; 66(1):22–37. Qu S, Yang X, Li X, Wang J, Gao Y, Shang R, Sun W, Dou K, Li H. Circular rna: a new star of noncoding rnas. Cancer Lett. 2015; 365(2):141–8. Ashwal-Fluss R, Meyer M, Pamudurti NR, Ivanov A, Bartok O, Hanan M, Evantal N, Memczak S, Rajewsky N, Kadener S. circrna biogenesis competes with pre-mrna splicing. Mol Cell. 2014; 56(1):55–66. Du WW, Fang L, Yang W, Wu N, Awan FM, Yang Z, Yang BB. Induction of tumor apoptosis through a circular rna enhancing foxo3 activity. Cell Death Differ. 2017; 24(2):357. Li Z, Huang C, Bao C, Chen L, Lin M, Wang X, Zhong G, Yu B, Hu W, Dai L, et al. Exon-intron circular rnas regulate transcription in the nucleus. Nat Struct Mol Biol. 2015; 22(3):256. Ghosal S, Das S, Sen R, Basak P, Chakrabarti J. Circ2traits: a comprehensive database for circular rna potentially associated with disease and traits. Front Genet. 2013; 4:283. Li J, Zheng Y, Sun G, Xiong S. Restoration of mir-7 expression suppresses the growth of lewis lung cancer cells by modulating epidermal growth factor receptor signaling. Oncol Rep. 2014; 32(6):2511–6. Li F, Zhang L, Li W, Deng J, Zheng J, An M, Lu J, Zhou Y. Circular rna itch has inhibitory effect on escc by suppressing the wnt/ β-catenin pathway. Oncotarget. 2015; 6(8):6001. Anastas JN, Moon RT. Wnt signalling pathways as therapeutic targets in cancer. Nat Rev Cancer. 2013; 13(1):11. Hansen TB, Jensen TI, Clausen BH, Bramsen JB, Finsen B, Damgaard CK, Kjems J. Natural rna circles function as efficient microrna sponges. Nature. 2013; 495(7441):384. Wang Q, Tang H, Yin S, Dong C. Downregulation of microrna-138 enhances the proliferation, migration and invasion of cholangiocarcinoma cells through the upregulation of rhoc/p-erk/mmp-2/mmp-9. Oncol Rep. 2013; 29(5):2046–52. 
Zhong Z, Huang M, Lv M, He Y, Duan C, Zhang L, Chen J. Circular rna mylk as a competing endogenous rna promotes bladder cancer progression through modulating vegfa/vegfr2 signaling pathway. Cancer Lett. 2017; 403:305–17. Yang W, Du W, Li X, Yee A, Yang B. Foxo3 activity promoted by non-coding effects of circular rna and foxo3 pseudogene in the inhibition of tumor growth and angiogenesis. Oncogene. 2016; 35(30):3919. Han D, Li J, Wang H, Su X, Hou J, Gu Y, Qian C, Lin Y, Liu X, Huang M, et al. Circular rna mto1 acts as the sponge of mir-9 to suppress hepatocellular carcinoma progression. Hepatology. 2017; 66:1151–64. Hsiao K-Y, Lin Y-C, Gupta SK, Chang N, Yen L, Sun HS, Tsai S-J. Noncoding effects of circular rna ccdc66 promote colon cancer growth and metastasis. Cancer Res. 2017; 77(9):2339–50. Glažar P, Papavasileiou P, Rajewsky N. circbase: a database for circular rnas. Rna. 2014; 20(11):1666–70. Chen X, Han P, Zhou T, Guo X, Song X, Li Y. circrnadb: a comprehensive database for human circular rnas with protein-coding annotations. Sci Rep. 2016; 6:34985. Zhang P, Meng X, Chen H, Liu Y, Xue J, Zhou Y, Chen M. Plantcircnet: a database for plant circrna–mirna–mrna regulatory networks. Database. 2017; 2017:1–6. Li S, Li Y, Chen B, Zhao J, Yu S, Tang Y, Zheng Q, Li Y, Wang P, He X, et al. exorbase: a database of circrna, lncrna and mrna in human blood exosomes. Nucleic Acids Res. 2017; 46(D1):106–12. Liu Y-C, Li J-R, Sun C-H, Andrews E, Chao R-F, Lin F-M, Weng S-L, Hsu S-D, Huang C-C, Cheng C, et al. Circnet: a database of circular rnas derived from transcriptome sequencing data. Nucleic Acids Res. 2015; 44(D1):209–15. Xia S, Feng J, Lei L, Hu J, Xia L, Wang J, Xiang Y, Liu L, Zhong S, Han L, et al. Comprehensive characterization of tissue-specific circular rnas in the human and mouse genomes. Brief Bioinform. 2016; 18(6):984–92. Bhattacharya A, Cui Y. Somamir 2.0: a database of cancer somatic mutations altering microrna–cerna interactions. Nucleic Acids Res.
2015; 44(D1):1005–10. Xia S, Feng J, Chen K, Ma Y, Gong J, Cai F, Jin Y, Gao Y, Xia L, Chang H, et al. Cscd: a database for cancer-specific circular rnas. Nucleic Acids Res. 2017; 46(D1):925–9. Fan C, Lei X, Fang Z, Jiang Q, Wu F-X. Circr2disease: a manually curated database for experimentally supported circular rnas associated with various diseases. Database. 2018; 2018:1–8. Wang D, Wang J, Lu M, Song F, Cui Q. Inferring the human microrna functional similarity and functional network based on microrna-associated diseases. Bioinformatics. 2010; 26(13):1644–50. van Laarhoven T, Nabuurs SB, Marchiori E. Gaussian interaction profile kernels for predicting drug–target interaction. Bioinformatics. 2011; 27(21):3036–43. Yan C, Wang J, Lan W, Wu F-X, Pan Y. Sdtrls: Predicting drug-target interactions for complex diseases based on chemical substructures. Complexity. 2017; 2017(Article ID 2713280):10. Yan C, Wang J, Ni P, Lan W, Wu FX, Pan Y. Dnrlmf-mda: predicting microrna-disease associations based on similarities of micrornas and diseases. IEEE/ACM Trans Comput Biol Bioinforma. 2017 (to be published). https://doi.org/10.1109/TCBB.2017.2776101. Lan W, Li M, Zhao K, Liu J, Wu F-X, Pan Y, Wang J. Ldap: a web server for lncrna-disease association prediction. Bioinformatics. 2016; 33(3):458–60. Lu C, Yang M, Luo F, Wu F-X, Li M, Pan Y, Li Y, Wang J. Prediction of lncrna-disease associations based on inductive matrix completion. Bioinformatics. 2018; 1:8. Raymond R, Kashima H. Fast and scalable algorithms for semi-supervised link prediction on static and dynamic graphs. In: Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Heidelberg: Springer: 2010. p. 131–47. Xia Z, Wu LY, Zhou X, et al. Semi-supervised drug-protein interaction prediction from heterogeneous biological spaces. BMC Syst Biol. 2010; 4:S6. Chen X, Huang Y-A, You Z-H, Yan G-Y, Wang X-S.
A novel approach based on katz measure to predict associations of human microbiota with non-infectious diseases. Bioinformatics. 2016; 33(5):733–9. Qu Y, Zhang H, Liang C, Dong X. Katzmda: prediction of mirna-disease associations based on katz model. IEEE Access. 2018; 6:3943–50. Cheng F, Liu C, Jiang J, Lu W, Li W, Liu G, Zhou W, Huang J, Tang Y. Prediction of drug-target interactions and drug repositioning via network-based inference. PLoS Comput Biol. 2012; 8(5):1002503. Yamanishi Y, Araki M, Gutteridge A, Honda W, Kanehisa M. Prediction of drug–target interaction networks from the integration of chemical and genomic spaces. Bioinformatics. 2008; 24(13):232–40. World Health Organization. The World Health Report 2002. http://www.who.int/whr/en. Accessed 24 Aug 2018. Hackam DG, Anand SS. Emerging risk factors for atherosclerotic vascular disease: a critical review of the evidence. JAMA. 2003; 290(7):932–40. Song C-L, Wang J-P, Xue X, Liu N, Zhang X-H, Zhao Z, Liu J-G, Zhang C-P, Piao Z-H, Liu Y, et al. Effect of circular anril on the inflammatory response of vascular endothelial cells in a rat model of coronary atherosclerosis. Cell Physiol Biochem. 2017; 42(3):1202–12. Li C-Y, Ma L, Yu B. Circular rna hsa_circ_0003575 regulates oxldl induced vascular endothelial cells proliferation and angiogenesis. Biomed Pharmacother. 2017; 95:1514–9. Devaux Y. Transcriptome of blood cells as a reservoir of cardiovascular biomarkers. Biochim Biophys Acta (BBA) - Mol Cell Res. 2017; 1864(1):209–16. Wooster R, Bignell G, Lancaster J, Swift S, Seal S, Mangion J, Collins N, Gregory S, Gumbs C, Micklem G, et al. Identification of the breast cancer susceptibility gene brca2. Nature. 1995; 378(6559):789. Müller A, Homey B, Soto H, Ge N, Catron D, Buchanan ME, McClanahan T, Murphy E, Yuan W, Wagner SN, et al. Involvement of chemokine receptors in breast cancer metastasis. Nature. 2001; 410(6824):50. He R, Liu P, Xie X, Zhou Y, Liao Q, Xiong W, Li X, Li G, Zeng Z, Tang H.
circgfra1 and gfra1 act as cernas in triple negative breast cancer by regulating mir-34a. J Exp Clin Cancer Res. 2017; 36(1):145. Dong Y, He D, Peng Z, Peng W, Shi W, Wang J, Li B, Zhang C, Duan C. Circular rnas in cancer: an emerging key player. J Hematol Oncol. 2017; 10(1):2. Cheng L, Li J, Ju P, Peng J, Wang Y. Semfunsim: a new method for measuring disease similarity by integrating semantic and gene functional association. PLoS ONE. 2014; 9(6):99415. Lan W, Wang J, Li M, Liu J, Wu F-X, Pan Y. Predicting microrna-disease associations based on improved microrna and disease similarities. IEEE/ACM Trans Comput Biol Bioinforma. 2016 (to be published). https://doi.org/10.1109/TCBB.2016.2586190. Ni P, Wang J, Zhong P, Li Y, Wu F, Pan Y. Constructing disease similarity networks based on disease module theory. IEEE/ACM Trans Comput Biol Bioinforma. 2018 (to be published). https://doi.org/10.1109/TCBB.2018.2817624. Liu Y, Wu M, Miao C, Zhao P, Li X-L. Neighborhood regularized logistic matrix factorization for drug-target interaction prediction. PLoS Comput Biol. 2016; 12(2):1004760. Wang L, Li X, Zhang L, Gao Q. Improved anticancer drug response prediction in cell lines using matrix factorization with similarity regularization. BMC Cancer. 2017; 17(1):513. Luo H, Li M, Wang S, Liu Q, Li Y, Wang J. Computational drug repositioning using low-rank matrix approximation and randomized algorithms. Bioinformatics. 2018; 34(11):1904–1912. The authors are very grateful to the anonymous reviewers for their constructive comments, which have helped significantly in revising this work. The authors would like to express their gratitude for the support from the National Natural Science Foundation of China under Grant No. 61772552, No. 61420106009, No. 61622213 and No. 61732009. Publication costs are funded by the National Natural Science Foundation of China under Grant No. 61420106009.
About this supplement This article has been published as part of BMC Bioinformatics Volume 19 Supplement 19, 2018: Proceedings of the 29th International Conference on Genome Informatics (GIW 2018): bioinformatics. The full contents of the supplement are available online at https://bmcbioinformatics.biomedcentral.com/articles/supplements/volume-19-supplement-19. School of Information Science and Engineering, Central South University, 932 South Lushan Rd, ChangSha, 410083, China Cheng Yan & Jianxin Wang School of Computer and Information, Qiannan Normal University for Nationalities, Longshan Road, DuYun, 558000, China Cheng Yan Biomedical Engineering and Department of Mechanical Engineering, University of Saskatchewan, Saskatoon, SK S7N 5A9, Canada Fang-Xiang Wu Jianxin Wang JW conceived the project; CY designed and performed the experiments; CY and FXW wrote the paper. All authors read and approved the final manuscript. Correspondence to Jianxin Wang. The data used in this study are provided by Fan et al. [36]. Please download the data from http://bioinfo.snnu.edu.cn/CircR2Disease/ or contact the authors for data requests. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated. Yan, C., Wang, J. & Wu, FX. DWNN-RLS: regularized least squares method for predicting circRNA-disease associations. BMC Bioinformatics 19, 520 (2018). https://doi.org/10.1186/s12859-018-2522-6 CircRNA CircRNA-disease association Gaussian interaction profile Kron-RLS
How do electrical signals propagate or dissipate through a dendritic tree? This post assumes a solid background in the biophysics of the membrane potential, in particular the "equivalent circuit" for the neuron, and the leaky integrate-and-fire neuron model. Go back and read those if you get lost below! Extending Our Neuron Models In previous posts on, e.g., the ionic basis of membrane potentials I've made a few simplifying assumptions about neurons. One of the most common and least explicit involved the shape of neurons. In order to make our derivation of the Nernst equation possible, we assumed that the neuron was a sphere – even more, we assumed that we could treat it as flat from the perspective of an ion (just as we assume the Earth is flat from the perspective of a dropped ball when we do physics problems). Neurons, like the Earth, are not flat (pace B.o.B.). They are also not even approximately spherical. They extend through space in a complicated fashion. Fig. 1 A cartoon neuron, the Earth, and a flat sheet. None of these are actually spherical, though some are better approximated by spheres than others. Photo of the Earth from nasa.gov. The complexity of these shapes makes it much harder to make confident, general statements, as we have before, about the flow of currents and the effects of forces. The simplicity of our model also makes it impossible to say anything about things like currents passing through dendrites. But we need not throw our hands up in despair! We just need to replace our rough, spherical model with a more sophisticated model! We'd like our model to still be easy to work with: in particular, we'd like our equivalent circuits to fit nicely into it. Luckily, there's just such a model available: we can use cylinders instead of spheres! (Why are the electronics of cylinders well-described and easy to work with? Think wires!) 
The figure below shows our cartoon neuron on the left, and then our new, better approximation, constructed from cylinders and spheres, on the right. We call each cylinder or sphere a "compartment". Each compartment has its own tiny copy of the equivalent circuit, and directly connected compartments have connected circuits. Current can flow between these compartments, with the strength depending on how they are connected. A Refresher on the Leaky Integrator Equation We can see how currents flow in our new-and-improved neuron models by reusing the leaky integrator equations that we used to describe our simplest computational neuron model. The basic leaky integrator equation appears below. I've modified it a bit from its form in the original post. $$ \tau*\frac{dV}{dt} = g_{leak}\left(E_{leak}-V \right) +\sum_{channels}g_{ch}\left(E_{ch} - V\right) $$ This is a differential equation. That means that it describes how a variable changes over time. The symbol for "changes over time" is \(\frac{d}{dt}\), where d stands for "tiny change". The variable that changes is V. The right side of the equation describes just how V changes with time. Each g stands for a conductance, which is the inverse of a resistance. Our conductances are our ion channels; the more that are open, the higher the value of g. Each E stands for an equilibrium potential, or the potential at which a current driven by some electrochemical potential will have no net flow. With these terms defined, we're now ready to determine what happens to V. The first term involves our "leak" conductances and their equilibrium potential. Since these "leak" conductances are open at rest, their equilibrium potential is called the "resting membrane potential". The first term says that whenever V is less than \(E_{leak}\), V goes up, and it goes up faster the larger \(g_{leak}\) is – the more leak channels we have open. If V is higher than \(E_{leak}\), V goes down.
In both cases, the first term is 0 whenever V is equal to \(E_{leak}\). So this term just captures what we mean by "resting membrane potential": it's the potential at which the neuron "rests", or doesn't change its voltage. The second term looks a bit more intimidating, but it's just as simple as the first. Now, instead of concerning ourselves with the "leak" conductances, we're concerned with other ionic conductances, like the AMPA current for excitatory inputs or the chloride current for inhibitory inputs. For each of these conductances, we look at how far V is from the equilibrium potential, E, of the conductance, and then we weight that by how many channels are open, g. We do this for all of our different kinds of channels, and then add the result up. There's one last piece: on the left-hand side, we have a number \(\tau\). \(\tau\) controls how quickly V changes. If \(\tau\) is big, then V changes slowly. \(\tau\) is determined by the overall conductance and capacitance of the neuron. Putting It Together: Leaky Compartment Models Let's extend our leaky integrator equation from above to handle our new-and-improved compartmentalized neuron model. The figure below shows a cartoon model, the compartment approximation, and then a representation of the compartment model as a graph. The nodes of the graph are the compartments, and the edges represent connections. We call a set of directly connected nodes a "neighborhood". For now, each node from the dendrites is just a leaky integrator, while the soma has the ability to fire action potentials – it's an "LIF" or "leaky integrate-and-fire" node.
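Before wiring compartments together, it helps to see the single leaky-integrator equation above in code. The sketch below integrates it with the simple forward-Euler method; every numerical value is illustrative rather than fitted to any real neuron:

```python
import numpy as np

# Forward-Euler integration of the leaky-integrator equation
#   tau * dV/dt = g_leak*(E_leak - V) + sum over channels of g_ch*(E_ch - V)

def simulate(V0=-70.0, E_leak=-70.0, g_leak=1.0, tau=20.0,
             channels=(), dt=0.1, t_max=200.0):
    """channels: iterable of (g, E) pairs, e.g. an AMPA-like input."""
    n = int(t_max / dt)
    V = np.empty(n)
    V[0] = V0
    for i in range(1, n):
        dV = g_leak * (E_leak - V[i - 1])
        for g, E in channels:
            dV += g * (E - V[i - 1])
        V[i] = V[i - 1] + dt * dV / tau   # one small Euler step
    return V

# With an excitatory conductance open, V settles between E_leak and E_ch:
trace = simulate(channels=[(0.5, 0.0)])
```

With a single conductance open, V settles at the conductance-weighted average of the equilibrium potentials, \((g_{leak}E_{leak} + g_{ch}E_{ch})/(g_{leak} + g_{ch})\) – exactly the balance of the two terms on the right-hand side.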
With this representation in hand, we can add a term to our leaky integrator equation to represent the influence of neighboring compartments: $$ \tau*\frac{dV}{dt} = g_{leak}\left(E_{leak}-V \right) +\sum_{channels}g_{ch}\left(E_{ch} - V\right) +\sum_{n \in neighbors}c_{n}\left(V_n-V\right) $$ Our additional term is very similar to our original two: whenever the compartment we care about has a voltage different from its neighbors, the compartment's voltage will move towards the neighbors. We weight the connection to each neighbor n with a value \(c_n\), which tells us how well-connected two compartments are. If the compartments aren't very wide where they connect, \(c_n\) will be small. Note also that the time constant, \(\tau\), will be different for each compartment. The value of \(\tau\) is determined by the shape, size, and electrical properties of the compartment. How Do Electrical Signals Propagate or Dissipate Through the Dendritic Tree? The equation above answers this question in detail for any given dendritic tree: currents change the voltages of neighboring pieces of the neuron, dissipating more when the branches are thin and long, or when certain pieces are poorly connected to each other, as in dendritic spines. Before we declare victory, let's note that there's no action potential generation in our model. Because LIF neurons have no natural spike-generating mechanism, we can't capture back-propagating action potentials or dendritic spikes, which are important phenomena for learning and memory, and can generally substantially increase the computational power of a single neuron. Our compartmental model would still be central to any approach for incorporating these phenomena, but our neuron model would have to become more complicated.
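The compartment equation above can be integrated the same way. Here's a sketch on a toy three-node chain (dendrite tip, dendrite, soma); all connection weights and parameters are made-up illustrative values, and it shows the attenuation described above: an input at the tip depolarizes the soma less than the tip itself.

```python
import numpy as np

def simulate_tree(edges, tau, g_leak, E_leak=-70.0, inputs=None,
                  dt=0.05, t_max=100.0):
    """Forward-Euler integration of the compartment equation on a graph.

    edges  : dict {(i, j): c} of symmetric connection strengths c_n
    tau    : per-compartment time constants, shape (n,)
    g_leak : per-compartment leak conductances, shape (n,)
    inputs : dict {compartment: (g_ch, E_ch)} of open channels
    """
    n = len(tau)
    inputs = inputs or {}
    V = np.full(n, E_leak)
    for _ in range(int(t_max / dt)):
        dV = g_leak * (E_leak - V)                # leak term, all nodes
        for i, (g, E) in inputs.items():
            dV[i] += g * (E - V[i])               # channel term
        for (i, j), c in edges.items():
            dV[i] += c * (V[j] - V[i])            # neighbor terms,
            dV[j] += c * (V[i] - V[j])            # both directions
        V = V + dt * dV / tau
    return V

# Excitatory input at the dendritic tip (node 0); soma is node 2.
V = simulate_tree(edges={(0, 1): 0.5, (1, 2): 0.5},
                  tau=np.array([5.0, 10.0, 20.0]),
                  g_leak=np.array([1.0, 1.0, 1.0]),
                  inputs={0: (1.0, 0.0)})
```

Running this, the tip ends up most depolarized and the soma least, with the middle compartment in between – the signal dissipates as it spreads through the chain.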
Looking for a dictionary of math/CS notation There is an at-times dizzying array of symbols used in math and CS papers. Yet many assume basic familiarity that seems rarely taught in one place. I am looking for a dictionary something like the following, especially from a CS perspective. It would list all the basic mathematical symbols and give their meanings and examples. It would talk about symbols that are sometimes used in equivalent ways. It would note common beginner mistakes. It would talk about the subtleties surrounding different meanings of a single symbol (much like multiple definitions of the same word in a dictionary). It would not merely be a very terse description of each symbol, such as one-word descriptions like "subset". It would show how symbols are sometimes "overloaded". For example, $\binom{x}{y}$ could have $x$ as an integer, but sometimes $x$ can be a set with this notation and it means to choose elements from this set. $[n]$ sometimes means a set of integers $1 \ldots n$, or other times it's a one-element array. It might talk about how to describe all kinds of different "objects" in terms of different symbols or equivalent ways of referring to them (but which are more clear) and the operations possible on those objects. In other words, kind of like an API for math objects. That is, it would also at times be a "style manual" for different nuances in how to present mathematical writing. This would be a very helpful resource for anyone writing questions in mathematical Stack Exchanges, where many questions fail to make sense based on not fitting into tricky mathematical conventions. Some book introductions have many of these features. However, ideally it would be a separate treatment. Also, ideally of course it would be online. There are tables of LaTeX symbols, but they don't really fulfill many of the above criteria. Has anyone seen a "dictionary of symbols" that matches these features?
(Alternatively, it seems like an excellent wiki or FAQ project if good references like this don't exist.) terminology reference-request education vzn Wikipedia's list of mathematical symbols is a good start but still quite a ways from the comprehensive resource you describe. Still, it's been enough to get me through most of the theoretical CS papers I've read. – Kyle Jones Dec 13 '12 at 4:45 Notation is not always standard. There is "common practice" notation, but it varies geographically and throughout time. When using non-standard or less-common notation, sources would indicate the meaning of the notation. – Yuval Filmus Dec 13 '12 at 12:16 see also good examples of how to write well in cs, tcs.se, although this is more general style and not so much focused on notation – vzn Dec 13 '12 at 15:49 You can use this sheet. It has a lot of handy formulas and some definitions. You can also use Wikipedia's list of mathematical symbols, proposed by Kyle Jones in the comments. There is also Wolfram MathWorld, which is a great resource, but you need to know what you are looking for. When it comes to printed resources, each volume of The Art of Computer Programming has a great reference of mathematical notations, especially volume 1. Bartosz Przybylski
The Handbook of Mathematical Discourse is a dictionary of words and phrases used in semi-formal mathematical writing, especially words like "where" and "all" that are used quite differently from non-mathematical writing. – Heatsink
Stress in crisis managers: evidence from self-report and psychophysiological assessments

Version 1 Released on 25 May 2016 under Creative Commons Attribution 4.0 International License

Amelie Janka 1, Christine Adler 2, Pandelis Perakakis 3, Pedro Guerra 3, Stefan Duschek 1

1 Institute of Psychology, Department of Psychology and Medical Sciences - University for Health Sciences, Medical Informatics and Technology (UMIT)
2 Department Psychologie, Fakultät für Psychologie und Pädagogik - Ludwig-Maximilians-Universität München
3 Departamento de Personalidad, Evaluación y Tratamiento Psicológico - Universidad de Granada

Psychological measurement

Directing disaster operations represents a major professional challenge. Despite its importance to health and professional performance, research on stress in crisis management remains scarce. The present study aimed to investigate self-reported stress and psychophysiological stress responses in crisis managers. For this purpose, 30 crisis managers were compared with 30 managers from other disciplines in terms of self-reported stress, health status and psychophysiological reactivity to crisis-related and non-specific visual and acoustic aversive stimuli and cognitive challenge. Crisis managers reported lower stress levels, a more positive strain-recuperation balance, greater social resources, reduced physical symptoms, as well as more physical exercise and less alcohol consumption. They exhibited diminished electrodermal and heart rate responses to crisis-related and non-specific stressors. The results indicate reduced stress and physical complaints, diminished psychophysiological stress reactivity, and a healthier lifestyle in crisis managers. Improved stress resistance may limit vulnerability to stress-related performance decline and facilitate preparedness for major incidents.
Major incidents are characterized as sudden events, affecting large numbers of people, which overwhelm local health care and infrastructure and have a serious immediate and long-term impact on human welfare, public health and the environment. Major incidents encompass natural catastrophes such as floods, earthquakes and forest fires, in addition to human-made disasters including terror attacks or railway accidents, and industrial incidents like chemical spills or nuclear accidents. Some evidence confirms a global increase in the frequency and severity of major incidents in recent years (e.g., [27,45]); the importance of adequate preparedness is therefore paramount, and a number of guidelines have been developed that aim to optimize crisis management, provide adequate medical and social support to victims and limit deleterious public health consequences (e.g., [56,54,16]). Working in crisis environments represents a major challenge for professionals such as emergency workers, firefighters and medical or paramedical staff ([1]). This particularly applies to executive personnel engaged in directing disaster operations, i.e. crisis managers; their duties include the mobilization and coordination of first responders, allocation of tasks, communication with authorities, evaluation of the immediate needs of the affected population and ongoing risk assessment in order to maintain the security of victims and emergency personnel. Crisis management involves high-level decision-making and principal responsibility for personnel, thereby requiring distinct organizational and leadership skills. The present study is concerned with self-reported stress and psychophysiological stress responses in crisis managers. While previous stress research has focused almost exclusively on victims of disasters and first responders (e.g., [5,31,17]), research on executive personnel remains scarce.
Similarly to first responders, during operations crisis managers are exposed to severe suffering, injury and death, which in many cases can be regarded as traumatic stressors (e.g., [6]). Holding a leadership position also entails further challenges; in addition to great responsibility for personnel and human lives, crisis managers may be faced with conflicts of interest, insufficient manpower, ambiguous or conflicting roles and extremely high expectations from others. The stress burden is considerably lower during normal working activities between actual disaster relief operations, during which managers perform executive functions involving administration and personnel management. These tasks are frequently associated with occupational or organisational stressors such as conflict with authorities and co-workers, time-consuming administrative duties, overwork or work-life conflicts (e.g., [4]). Institutions responsible for disaster management typically comprise various units, pertaining for example to civil protection and fire and rescue services, and are characterized by complex organisational and hierarchical structures. Thus, in terms of size and complexity, disaster management organisations may be comparable to companies in the fields of economy and industry. The impact of sustained stress on mental and physical health is well documented (e.g., [13]). Several studies indicate reduced emotional wellbeing and increased self-reported stress, posttraumatic symptoms, depression and alcohol abuse in emergency medical personnel and firefighters [1,5,31,24]. However, it appears that only a subgroup of rescue workers is affected; moreover, various predictors have been identified that modify symptom occurrence. High levels of social support are associated with reduced posttraumatic and other psychological symptoms [51,63], while active task-oriented coping styles are connected with lower emotional and bodily stress [14,38].
Further factors reducing distress include job satisfaction and an internal locus of control, as well as sufficient recovery time between operations [1,14]. The specific professional conditions encountered by crisis managers may contribute to the development of coping skills and psychophysiological stress resistance, in the context of which flexibility may play an important role. Psychological flexibility refers to an individual's ability to efficiently adapt to changing situational demands, to adjust mental and behavioral resources to suit current requirements and to cope with negative emotions and burdensome experiences [33,55]. While inflexibility has been implicated in various psychopathological conditions, a high degree of flexibility may mitigate the deleterious effects of stress, thereby promoting health and wellbeing [9,55,33]. The working conditions of crisis managers involve rapid switching between regular management duties and extremely demanding disaster relief operations, which requires sudden activation of mental and physical resources followed by a quick return to the initial state. It could be hypothesized that these challenges improve crisis managers' ability to adapt flexibly, even when situational demands are extreme. It has furthermore been argued that intermittent acute stress can foster a state of psychophysiological toughness associated, inter alia, with emotional stability, suppression of cortisol release and improved immune system function [20]. In conclusion, a work rhythm that alternates between limited periods of high strain and sufficiently long phases of moderate load may foster behavioral and psychophysiological adjustment to stress, thereby facilitating the maintenance of health and wellbeing. In addition to its clinical relevance, research on stress in crisis managers addresses the issue of professional performance.
It is well established that performance in cognitive domains such as attention, memory, logical reasoning and decision-making varies with the degree of psychophysiological activation: optimal functioning is expected at midrange arousal, with both overarousal and underarousal accompanied by declines in performance [44]. The connection between stress and performance has been investigated, for instance, in the fields of high-risk industry, the military, and aviation [59,29,36]. Furthermore, medical performance appears to be compromised by acute stress. The authors of [37] showed that, in paramedics, experimental stress induced by a challenging scenario involving a human patient simulator reduced the accuracy of drug dosage calculations. Similarly, paramedics made more mistakes during patient care documentation under conditions of high vs. low simulated clinical stress [39]. Research on stress as a possible cause of impaired crisis management performance is still lacking. However, given the potential consequences of stress-related mistakes or incorrect decisions for individual human lives and public safety, this issue is highly relevant. In the current study, a group of crisis managers from Tyrol (Austria) was compared with a matched control group comprised of managers drawn from other disciplines, in terms of self-reported stress and psychophysiological stress responses. Managers were chosen as the reference group to control for more general effects that may arise from the tenure of an executive position per se. In this way, possible group differences in the assessed parameters may be attributed to specific factors inherent in crisis management. Participants were presented with multidimensional questionnaires quantifying perceived stress, resources enabling recuperation from burdensome events or stressful periods, and subjective health status.
Furthermore, various experimental stressors were applied to investigate psychophysiological and emotional stress responses. These included short- and long-duration visual and acoustic stimuli resembling stressors specifically related to major incidents and crisis management, as well as non-specific emotional stressors. A mental arithmetic task was additionally employed in order to evoke cognitive stress. Psychophysiological response parameters comprised heart rate and electrodermal activity (EDA). While stress-related changes in heart rate occur due to alterations in the activity of both the sympathetic and parasympathetic nervous systems, EDA constitutes a purely sympathetic measure [18,8]. In addition, the high frequency band of the heart rate variability (HRV) spectrum was also assessed. High frequency HRV is a well-established index of parasympathetic influences on heart activity (c.f. [7]). Acute and chronic stress is associated with a reduction in HRV, and a large body of evidence confirms low levels of high frequency HRV in physical and mental disease. Diminishment has been reported, for instance, in posttraumatic stress [60], anxiety disorders [41], depression [15] and physical conditions such as hypertension, cardiac disease or chronic pain [43,47,52]. Reduced HRV is furthermore linked to general morbidity and increased mortality [2]. This contribution constitutes the first controlled study on self-reported stress and psychophysiological stress reactivity in crisis managers. As such, to some extent it has to remain exploratory, and definite predictions are therefore difficult to make. On one hand, the aforementioned research on paramedics and firefighters suggests increased stress and stress-related health problems in individuals working in disaster environments [1,5,31,24]. However, these data are not necessarily meaningful for crisis managers, because the respective professional groups clearly differ in their duties and professional roles.
On the other hand, crisis managers commonly undergo a careful selection process, are highly trained and can resort to elaborate guidelines [54,56]. Their specific professional conditions may facilitate the development of flexible strategies to control stress and to adjust behaviorally and psychophysiologically to the profound challenges of their role. Thirty crisis managers (24 men, 6 women) participated in the study. The group was comprised of gold and silver commanders of institutions involved in the crisis management of major incidents. In their middle and higher management positions, these individuals were required to have staff and decision-making responsibilities, and furthermore to possess experience in the management of actual crises (and not merely exercises). The participants' mean duration of crisis management experience was 10.79 years (SD = 7.74 years), with involvement in an average of 73.77 operations (SD = 271.20 operations). Crisis managers currently involved in a crisis intervention were excluded. In order to recruit this group, the study was presented to the government of the Federal State of Tyrol (Austria), which established contacts with the Civil and Disaster Protection Tyrol, the Red Cross Kufstein and the Alpine Rescue Service Tyrol. The heads of these organisations received comprehensive information on the study; they all endorsed the participation of their crisis managers and invited them to do so via email or personal communication. Those who agreed to participate contacted the research team by phone. The control group included 30 mid- and high-level managers (23 men, 7 women) from the sectors of economy, industry, education and public administration. As with the crisis managers, these subjects were also required to operate in a leadership capacity and be responsible for staff. This group was recruited via large and medium-sized enterprises and public bodies.
The exclusion criteria for both groups included any kind of serious physical disease or psychiatric disorder, as well as use of medication affecting the cardiovascular or nervous systems. Health status was assessed based on anamnestic interviews and a comprehensive medical questionnaire. Table 1 delineates the organisations and positions of all participants. Demographic data are given in Table 2. The groups did not differ significantly in age, work experience, educational level or body mass index (c.f. Table 2). The study was approved by the ethics committee of UMIT - University of Health Sciences, Medical Informatics and Technology; all participants provided written informed consent.

Crisis managers
  Organisation: Civil defense 11 (37%); Red Cross 13 (43%); other services (e.g., emergency dispatch center, mountain rescue service) 6 (20%)
  Position: Operation controller 18 (60%); Chief of the office 6 (20%); Chief executive officer 6 (20%)

Control group
  Organisation: Small and medium-sized enterprise (e.g., business consultancy, IT service, architectural firm) 11 (37%); Large-scale enterprise (e.g., bank, pharmaceutical industry) 9 (30%); Educational institution (e.g., elementary school, special school, university) 7 (23%); Public administration (e.g., social assistance office, municipal administration) 3 (10%)
  Position: Head of department 12 (40%); Managing director 8 (27%); Project manager 4 (13%); President/director 3 (10%); Rector/vice-rector 3 (10%)

Table 1. Crisis managers and control group: organizations and positions.

                                 Crisis managers   Control group   F[1, 58]   p     Partial $\eta^2$
  Age (years)                    43.00 (10.45)     42.57 (10.05)   0.03       .87   <.001
  Total work experience (years)  22.83 (10.96)     19.03 (11.31)   1.75       .19   .03
  Time of education (years)      15.67 (4.83)      17.75 (3.88)    3.39       .07   .06
  Body mass index (kg/m$^2$)     25.27 (3.74)      23.97 (3.33)    2.01       .16   .03

Table 2. Demographic data. Mean values (standard deviations in brackets); values of F[1, 58], p and Partial Eta Squared of the group comparison.
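The demographic comparisons in Table 2 are univariate F-tests with partial eta squared as the effect size. As a minimal illustrative sketch (not the authors' analysis script; the multivariate MANOVA machinery is omitted), such a two-group comparison can be computed as follows:

```python
import numpy as np
from scipy.stats import f_oneway

def two_group_f(a, b):
    """One-way F-test for two independent groups, plus partial eta squared.

    With a single between-subjects factor, partial eta squared equals
    SS_between / (SS_between + SS_within).
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    f_val, p_val = f_oneway(a, b)
    grand = np.concatenate([a, b]).mean()
    ss_between = len(a) * (a.mean() - grand) ** 2 + len(b) * (b.mean() - grand) ** 2
    ss_within = ((a - a.mean()) ** 2).sum() + ((b - b.mean()) ** 2).sum()
    eta2 = ss_between / (ss_between + ss_within)
    return f_val, p_val, eta2
```

For two groups the F-test is equivalent to an independent-samples t-test (F = t²), and the statistics are linked by F = (η² / (1 − η²)) · df_within, which makes the reported F and partial eta squared values easy to cross-check.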
To determine self-reported stress, the German version of the Perceived Stress Questionnaire was applied [40]. This instrument aims to quantify currently perceived stress using four subscales, each comprising five items. The Worries scale quantifies worries and anxious concern regarding the future, as well as feelings of desperation and frustration (possible range: 5-20); the Tension scale is concerned with agitation, fatigue and the inability to relax (possible range: 5-20); the Joy scale indexes positive feelings of joy, energy and security (possible range: 5-20); the Demands scale relates to perceived environmental stressors such as time pressure and overload (possible range: 5-20). In addition, a sum score is provided (possible range: 0-100). Joy scale items are inversely coded, such that higher values on all scales reflect increased stress. The short version of the Questionnaire for Recuperation and Strain ([Erholungs-Belastungs-Fragebogen], EBF-24 A/3, Kallus, 1995) was also administered. This questionnaire estimates an individual's current recuperation-strain balance; its 24 items refer to events inducing strain and those which facilitate recuperation, per the following 12 subscales (possible range of each: 0-12): (1) General Strain, (2) Emotional Strain, (3) Social Strain, (4) Unresolved Problems, (5) Fatigue, (6) Lack of Energy, (7) Physical Problems, (8) Success, (9) Social Recreation, (10) Physical Relaxation, (11) General Content and (12) Sleep. While higher values on scales 1 to 7 indicate increased levels of strain, higher values on scales 8 to 12 indicate the availability of recuperative and coping resources.
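To make the scoring logic concrete, here is a minimal sketch. It assumes, as the stated ranges imply, five items per subscale rated 1-4; the item groupings below are hypothetical placeholders rather than the actual PSQ scoring key, and the transformation of the raw sum to the 0-100 range is omitted.

```python
# Hypothetical scoring sketch for a PSQ-like questionnaire: four subscales
# of five items each, rated 1-4; "joy" items are reverse-coded so that
# higher scores on every subscale indicate more stress.
SCALES = {              # hypothetical item groupings, not the real PSQ key
    "worries": [0, 1, 2, 3, 4],
    "tension": [5, 6, 7, 8, 9],
    "joy":     [10, 11, 12, 13, 14],   # reverse-coded subscale
    "demands": [15, 16, 17, 18, 19],
}
REVERSED = {"joy"}

def score(responses):
    """responses: list of 20 item ratings in 1..4 -> dict of subscale scores."""
    out = {}
    for name, items in SCALES.items():
        vals = [responses[i] for i in items]
        if name in REVERSED:
            vals = [5 - v for v in vals]   # maps 1<->4 and 2<->3
        out[name] = sum(vals)              # each subscale spans 5..20
    out["total"] = sum(out[name] for name in SCALES)
    return out
```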
Subjective health status was assessed using the revised form of the von Zerssen Symptom Checklist ([Beschwerden-Liste], BL-R, [64]). This instrument is widely used in German-speaking countries to quantify current burden with somatic complaints. The 20 items of the questionnaire include frequently experienced general symptoms (e.g., fatigue, tiredness) as well as more specific complaints (e.g., chest pain, nausea, insomnia). Higher values represent greater burden with bodily complaints (possible range: 0-60). In addition to these questionnaires, the demographic data sheet included questions about basic features of health behavior, i.e. smoking status, alcohol consumption and physical exercise.

Experimental Stress Induction

Three types of stimuli were used to experimentally evoke acute emotional and cognitive stress. Participants were initially presented with images from the International Affective Picture System (IAPS) [35], a picture-viewing paradigm popular in emotion and stress research. The following three image categories were applied (15 images per category): (1) images resembling stressors related to major incidents (e.g., injuries and mutilations), (2) images representing general (non-specific) stressors (e.g., threatening or anxiety-related situations), and (3) pleasant stimuli (e.g., positive social situations and erotic images). The pleasant images served as a control condition allowing investigation of a possible group difference in general emotional responsiveness. Normative ratings of the stimuli on the arousal and valence dimensions were as follows (Lang et al., 1997): crisis-related images, arousal M = 6.24, SD = 0.64, valence M = 7.08, SD = 0.35; non-specific stress images, arousal M = 6.49, SD = 0.41, valence M = 6.39, SD = 0.65; pleasant images, arousal M = 6.17, SD = 0.83, valence M = 1.81, SD = 0.56 (higher values denote higher arousal and more negative valence).
The images were presented in a pseudorandom order, where stimuli of all three categories (crisis-related stressors, non-specific stressors, pleasant images) were intermixed. Stimulus duration was 5 s each, with interstimulus intervals (white cross on the screen) ranging between 8 and 12 s. Two acoustic stressors were additionally applied. First, a series of noises was compiled based on stimuli taken from the International Affective Digitized Sounds database (IADS) [11]. The selected noises included a human scream, the take-off and landing of a helicopter and a ringing phone, all of which are purportedly associated with major incidents or crisis management. The three sounds were repeatedly presented in a pseudorandom order over 1 min. Normative ratings were as follows [11]: arousal M = 6.48, SD = 1.17, valence M = 3.59, SD = 1.97. As a non-specific acoustic stressor, a piece of self-programmed, non-melodic synthesizer music was used. Each of the acoustic stressors was presented for 1 min via earphones (sound pressure level 59 - 63 dB). The stimulation phases were preceded by 1 min resting periods. Participants rated the affective experience elicited by the emotional stimuli (IAPS pictures, IADS noises), in terms of arousal and valence, using the Self-Assessment Manikin scales (SAM) [10] (possible range of both scales: 1-9). For this purpose, the IAPS stimuli were repeated in the same sequence following the initial presentation. Evaluation of the IADS noises was accomplished directly after they were presented. The Montreal Imaging Stress Task (MIST) [19] was applied as a cognitive stressor. The MIST requires resolving arithmetic problems under time pressure. The problems are presented in conjunction with an unpleasant noise, whose intensity increases until the correct solution is entered. The total task time was 3 min, preceded by a 1 min resting period and followed by a 2 min recovery phase.
All visual stimuli were displayed on a 19-inch monitor (distance between subject and monitor approx. 1 m). Presentation of the IAPS and IADS stimuli was controlled using the Presentation software package (ver. 17.1, Neurobehavioral Systems, USA). The order of task presentation was identical for all participants: 1. visual stimuli; 2. crisis-related acoustic stressor; 3. non-specific acoustic stressor; 4. cognitive stressor.

Psychophysiological Recordings

Heart rate and electrodermal activity (EDA) were assessed using a Biopac system (MP 150, Biopac Systems Inc., USA). Heart rate was derived from the ECG, which was recorded from two electrodes placed at the right mid-clavicle and the lowest left rib. EDA electrodes were attached to the third and fourth fingers of the non-dominant hand. All data were digitized at a sampling rate of 1000 Hz. Prior to the presentation of the stimuli, recordings were made under resting conditions for 10 min. For this part of the procedure, participants were asked to fixate upon a white cross presented in the center of a black computer screen. Subjects were requested not to drink alcohol or beverages containing caffeine for at least 2 h prior to the experimental session. To aggregate the psychophysiological data, the AcqKnowledge (ver. 4, Biopac Systems Inc., USA) and KARDIA [49] software packages were employed. In a first step, values of heart cycle duration - defined as the RR interval - were computed from the raw ECG. The ECG data were visually screened, and artifacts were corrected by linear interpolation. Subsequently, the beat-to-beat values were transformed to time-based data with a sample rate of 2 Hz. The EDA data were resampled at the same frequency. Mean heart rate values were computed for the 10 min resting period, during which high frequency HRV was also obtained. To this end, the software Kubios HRV (ver.
2) [46] was employed, which follows the guidelines of the Task Force (1996). HRV was derived from the series of RR intervals by means of fast Fourier transformation (extraction through a Hamming window). Spectral power density was expressed in absolute units. High frequency HRV was indexed by spectral power density in the frequency range between 0.15–0.40 Hz. To quantify psychophysiological responses to the presentation of the stressors, (absolute) changes from baseline were computed for heart rate and EDA. Baseline duration was set at 3 s for the IAPS stimuli, and at 5 s for the acoustic stimuli and the MIST. For the IAPS stimuli, the data were averaged across the 15 images of each category. Peak amplitudes of the psychophysiological responses were computed for all stressors. To this end, maximum changes were determined in specific response intervals, defined according to visual inspection of the data. Regarding the IAPS stimuli, this time window comprised the first 5 s after stimulus onset for heart rate and EDA. As the acoustic and cognitive stressors elicited the strongest heart rate and EDA modulations during the initial stimulation phase, peak amplitudes were determined in the first 10 seconds following stimulus onset for both acoustic stimuli and in the first 15 seconds for the MIST. In addition, stress-induced changes in high frequency HRV were quantified for the MIST. As the execution of this task involved relatively long recording periods (1 min baseline, 3 min task execution, 2 min recovery), which are recommended for reliable determination of HRV [7], this stressor was the most appropriate for this purpose. As the main instrument of statistical analysis, a MANOVA was computed with study group (crisis managers vs. control group) as a between-subjects factor. Dependent variables comprised the demographic and questionnaire data as well as the psychophysiological indices.
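As a concrete illustration of the two processing steps described above, the sketch below (an assumption-laden reimplementation, not the Kubios/AcqKnowledge pipeline used in the study) resamples a beat-to-beat RR series to an evenly spaced 2 Hz signal, estimates high-frequency power in the 0.15-0.40 Hz band with a Hamming window, and extracts a baseline-corrected peak amplitude from a stimulus-locked response:

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import periodogram
from scipy.integrate import trapezoid

def hf_hrv_power(rr_ms, fs=2.0, band=(0.15, 0.40)):
    """High-frequency HRV power (absolute units, ms^2) from RR intervals (ms)."""
    t = np.cumsum(rr_ms) / 1000.0            # beat times in seconds
    t = t - t[0]
    grid = np.arange(0.0, t[-1], 1.0 / fs)   # evenly spaced 2 Hz time base
    rr_even = interp1d(t, rr_ms, kind="cubic")(grid)
    rr_even = rr_even - rr_even.mean()       # remove the mean before the FFT
    freqs, psd = periodogram(rr_even, fs=fs, window="hamming")
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return trapezoid(psd[in_band], freqs[in_band])

def peak_amplitude(signal, fs, onset_s, baseline_s, window_s):
    """Maximum absolute change from the pre-stimulus baseline.

    The baseline is the mean over `baseline_s` seconds before `onset_s`;
    the peak is searched within `window_s` seconds after onset.
    """
    i = int(round(onset_s * fs))
    baseline = signal[i - int(baseline_s * fs):i].mean()
    window = signal[i:i + int(window_s * fs)] - baseline
    return window[np.argmax(np.abs(window))]
```

The window lengths and interpolation choices here are illustrative; dedicated HRV software additionally handles detrending, artifact rejection and windowed averaging of the spectrum.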
Data analysis showed that crisis managers engaged in markedly more physical exercise compared to the control group. To test whether physical exercise moderated group differences in the remaining dependent variables, a second MANOVA was computed with physical exercise (hours/week) used as a covariate. To evaluate changes in HRV during execution of the MIST, an ANOVA was conducted with study group as a between-subjects factor and condition (baseline vs. task execution vs. recovery) as a within-subjects factor. The alpha level was set at .05 for all analyses. Multivariate testing revealed a significant effect of group (F[38, 21] = 2.70, p = .009, Partial Eta Squared = .83). According to univariate testing, crisis managers exhibited a lower total score on the Perceived Stress Questionnaire than the control group (F[1, 58] = 4.39, p = .041, Partial Eta Squared = .070) (c.f. Figure 1). Regarding the Questionnaire for Recuperation and Strain, crisis managers showed lower scores on the Social Strain and Physical Problems scales, and higher scores on the Social Recreation scale (c.f. Table 3). In addition, lower values for crisis managers vs. the control group were observed on the von Zerssen Symptom Checklist (F[1, 58] = 4.47, p = .039, Partial Eta Squared = .072) (c.f. Figure 1). Figure 1.
Mean values of the Perceived Stress Scale (total score) and the von Zerssen Symptom Checklist (bars denote standard errors of the mean).

                                Crisis managers  Control group   F[1, 58]  p    Partial $\eta^2$
Perceived Stress
  Worries                       11.78 (11.30)    16.67 (12.84)   2.45      .12  .04
  Tension                       21.33 (12.67)    26.44 (16.47)   1.82      .18  .03
  Joy                           82.44 (14.38)    77.11 (16.11)   1.83      .18  .03
  Demands                       37.11 (17.65)    52.22 (51.08)   2.35      .13  .04
Questionnaire for Recuperation and Strain
  General Strain                0.32 (0.40)      0.52 (0.76)     1.62      .21  .03
  Emotional Strain              0.55 (0.55)      0.75 (0.70)     1.51      .22  .03
  Social Strain                 0.80 (0.58)      1.25 (0.87)     5.56      .02  .03
  Unresolved Problems           1.72 (1.01)      1.93 (1.30)     0.52      .48  .01
  Fatigue                       0.32 (0.40)      0.52 (0.67)     1.62      .21  .03
  Lack of Energy                1.12 (0.63)      0.98 (0.84)     0.49      .49  .01
  Physical Problems             0.70 (0.43)      1.30 (1.18)     6.87      .01  .11
  Success                       3.48 (0.97)      3.37 (1.16)     0.18      .67  <.01
  Social Recreation             3.98 (0.83)      3.30 (1.52)     4.67      .04  .08
  Physical Relaxation           4.47 (0.93)      4.08 (1.23)     1.87      .18  .03
  General Content               4.82 (0.71)      4.55 (0.82)     1.80      .19  .03
  Sleep                         5.03 (1.06)      5.12 (0.93)     0.11      .75  <.01

Concerning health behaviors, the groups did not differ in self-reported smoking status (crisis managers: 5 smokers (17%); control group: 8 smokers (27%); Chi Squared = 0.88, p = .35), but crisis managers reported consuming alcohol on fewer days per week (crisis managers: M = 1.94 days/week, SD = 1.07 days/week; control group: M = 3.50 days/week, SD = 1.83 days/week; F[1, 58] = 13.20, p = .001, partial Eta Squared = .26) and engaging in more physical exercise than controls (crisis managers: M = 6.67 hours/week, SD = 6.79 hours/week; control group: M = 3.50 hours/week, SD = 2.40 hours/week; F[1, 58] = 5.67, p = .021, partial Eta Squared = .089). Table 3 presents the values revealed by the SAM. Crisis managers' ratings of unpleasantness and arousal for IAPS crisis-related images were lower than those of the control group.
                                    Crisis managers  Control group   F[1, 58]  p     Partial $\eta^2$
Self-Assessment Manikin
  Crisis-related images
    Valence                         6.52 (1.51)      7.48 (0.78)     9.64      <.01  .14
    Arousal                         3.71 (1.44)      5.15 (1.55)     13.84     <.01  .19
  Non-specific stress images
    Valence                         6.49 (1.56)      6.99 (0.87)     2.35      .13   .04
    Arousal                         3.51 (1.56)      3.01 (0.87)     2.35      .13   .04
  Pleasant images
    Valence                         2.93 (1.61)      2.94 (1.44)     <.01      .98   <.01
  Crisis-related noises
    Valence                         7.83 (1.39)      7.97 (1.45)     0.13      .72   <.01
Response amplitudes
  EDA ($\mu$S)
    IAPS: crisis-related stress     0.05 (0.07)      0.08 (0.14)     1.12      .29   .02
    IAPS: non-specific stress       0.04 (0.05)      0.08 (0.10)     4.63      .04   .07
    IAPS: positive stimuli          0.04 (0.07)      0.05 (0.01)     0.03      .86   <.01
    Acoustic: crisis-related stress 0.82 (0.59)      1.54 (1.15)     9.48      <.01  .14
    Acoustic: non-specific stress   0.40 (0.42)      0.85 (0.78)     7.74      <.01  .12
    MIST                            0.93 (0.73)      1.38 (0.85)     5.00      .03   .08
  Heart rate (beats/min)
    IAPS: crisis-related stress     -1.21 (0.81)     -2.04 (2.04)    4.30      .04   .07
    IAPS: non-specific stress       -1.22 (0.85)     -2.08 (1.85)    5.37      .02   .09
    IAPS: positive stimuli          -1.91 (1.17)     -2.40 (1.43)    2.14      .15   .04

Table 3. Ratings on the Self-Assessment Manikin scales (higher values denote higher arousal and more positive valence) and amplitudes of the psychophysiological stress responses; mean values (standard deviations in brackets); values of F[1, 58], p and Partial Eta Squared of the group comparison.

Heart rate and values of high frequency HRV obtained during the 10 min resting phase did not differ between groups (heart rate: crisis managers, M = 74.19 beats/min, SD = 11.33 beats/min; control group, M = 74.93 beats/min, SD = 8.95 beats/min; F[1, 58] = 0.08, p = .78, partial Eta Squared = .001; high frequency HRV: crisis managers, M = 593.85 ms$^2$, SD = 902.06 ms$^2$; control group, M = 393.02 ms$^2$, SD = 546.22 ms$^2$; F[1, 58] = 1.09, p = .30, partial Eta Squared = .018). Figure 2 presents the modulations in EDA and heart rate elicited by the IAPS images depicting crisis-related and non-specific stressors.
EDA increased during picture presentation and peaked between the third and fourth second after stimulus onset. For both picture categories, amplitudes were lower in crisis managers than in the control group. Heart rate decelerated during the early stimulation phase and subsequently increased above baseline level. The magnitude of the deceleration component was smaller in crisis managers for both picture categories. The EDA responses to the acoustic stressors are displayed in Figure 3. Crisis-related IADS sounds and non-specific aversive noises triggered a steep increase in the signal, which reached its maximum five seconds after stimulus onset and returned to baseline around second 10. A similar pattern was observed during execution of the MIST (c.f. Figure 4), where the maximum occurred slightly later. The initial responses were weaker in crisis managers than in controls for both acoustic stressors as well as the MIST.

Figure 2. Changes in EDA and heart rate during exposure to IAPS images depicting crisis-related and non-specific stressors.
Figure 3. Changes in EDA during exposure to crisis-related IADS noises and non-specific acoustic stressors.
Figure 4. Changes in EDA during execution of the MIST.

The additional MANOVA, which was conducted with physical exercise as a covariate, revealed a multivariate group effect (F[37, 20] = 2.93, p = .006, Partial Eta Squared = .85). The multivariate effect of the covariate was not significant (F[37, 20] = 1.38, p = .22, Partial Eta Squared = .72). In univariate testing, the group difference in von Zerssen Symptom Checklist scores no longer reached significance (F[1, 57] = 3.60, p = .063, Partial Eta Squared = .059). In the remaining univariate group comparisons, inclusion of the covariate had no effect on the results. The numeric amplitude values of the EDA and heart rate responses are displayed in Table 3.
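The response curves in Figures 2-4 are baseline-corrected, stimulus-locked averages. A minimal sketch of that aggregation step is shown below; the window lengths are illustrative placeholders, not the exact values used in the study.

```python
import numpy as np

def event_locked_average(signal, onsets, fs=2.0, baseline_s=3.0, epoch_s=8.0):
    """Baseline-corrected event-locked average of a physiological signal.

    signal: 1-D array sampled at fs Hz; onsets: stimulus onset times (s).
    For each onset, the mean of the preceding `baseline_s` seconds is
    subtracted, and the epochs are averaged across events.
    """
    nb = int(baseline_s * fs)
    ne = int(epoch_s * fs)
    epochs = []
    for t0 in onsets:
        i = int(round(t0 * fs))
        if i - nb < 0 or i + ne > len(signal):
            continue  # skip events without a full baseline or epoch
        base = signal[i - nb:i].mean()
        epochs.append(signal[i:i + ne] - base)
    return np.mean(epochs, axis=0)
```

Averaging over the 15 images of a category in this way yields one mean response curve per condition and participant, from which the peak amplitudes in Table 3 can then be extracted.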
With respect to EDA, group differences reached significance for the IAPS pictures depicting non-specific stressors, for both acoustic stressors, as well as for the MIST. The amplitude of heart rate decline was significantly reduced in crisis managers, for the crisis-related and non-specific stressor pictures. No significant group differences emerged for the positive pictures, nor for heart rate responses to the acoustic and cognitive stressors (data not shown). Analysis of high frequency HRV for the MIST revealed, in both groups, a decrease during task execution and an increase during the recovery period (crisis managers: baseline M = 610.37 ms2, SD = 1143.94 ms2, task execution M = 381.32 ms2, SD = 650.06 ms2, recovery M = 626.16 ms2, SD = 1174.47 ms2; control group: baseline M = 559.61 ms2, SD = 870.42 ms2, task execution M = 465.07 ms2, SD = 700.23 ms2, recovery M = 566.55 ms2, SD = 930.99 ms2). In the ANOVA this was reflected by a main effect of condition (F[2, 116] = 3.77, p = .026, partial Eta squared = .061). However, neither the group effect, nor the interaction between both factors, reached significance (group: F[1, 58] = 0.002, p = .97, partial Eta squared <.001; interaction: F[2, 116] = 0.65, p = .53, partial Eta squared = .011). In the present study, a group of Austrian crisis managers was compared with managers from other disciplines on measures of self-reported stress and recuperative resources, in addition to self-reported health status and psychophysiological and emotional responses to experimental stressors. Crisis managers exhibited a lower sum score on the Perceived Stress Questionnaire, suggesting an overall lower stress burden. Results concerning the Questionnaire for Recuperation and Strain furthermore point toward a more positive recuperation-strain-balance in crisis managers, indexed by lower scores on the Social Strain and Physical Problems scales and higher scores on the Social Recreation scale. 
The finding of reduced somatic complaints is corroborated by the lower values on the von Zerssen Symptom Checklist. The self-report data are complemented by evidence of diminished psychophysiological reactivity to experimental stressors in crisis managers. Specifically, they were characterized by smaller heart rate modulations when exposed to images depicting crisis-related stressors and non-specific threatening situations, as well as weaker EDA responses to non-specific threatening images. Moreover, initial EDA increases during a series of noises associated with crisis situations, as well as non-specific aversive sounds, were markedly lower in crisis managers than in controls. Finally, crisis managers exhibited reduced EDA responses during cognitive stress induced by the MIST. The smaller increases in skin conductance observed in crisis managers indicate lower stress-related sympathetic nervous system activation [18]. It is important to note that the reduced EDA reactivity was not restricted to stressors specifically associated with disaster scenarios and crisis management. Instead, the lower amplitudes elicited by non-specific aversive cues and the cognitive stressor support the notion of generally reduced sympathetic reactivity during stress from various sources. On the other hand, the study groups did not differ in their EDA responses to pleasant images, suggesting that the crisis managers' lower signal increases were not due to generally blunted autonomic responsiveness. However, it should not be overlooked that the group difference in EDA reactions to crisis-related images also failed to reach significance. In the entire sample, EDA modulations were far weaker for the visual stimuli than for both the noise series and the cognitive stressor. The latter conditions were apparently more capable of eliciting sympathetic activation, thereby more clearly differentiating the study groups.
It is certainly noteworthy that crisis managers' average initial responses to both acoustic stressors were only approximately half as strong as those of the controls. The pattern of heart rate modulation during presentation of the crisis-related and non-specific threatening images represents the typical response to affective stimuli of negative valence [12]. The initial heart rate deceleration is believed to relate to orientation toward the stimulus and heightened sensory processing [12]. The smaller amplitude of the heart rate decrease exhibited by crisis managers while viewing the pictures may therefore reflect a reduction in the attentional and sensory processing resources devoted to crisis-related and non-specific aversive cues. In addition, crisis managers experienced less unpleasantness and arousal in response to the crisis-related images. This may be due to habituation following repeated exposure to disaster environments, which can be viewed as an adaptive coping mechanism facilitating behavioral adjustment to these highly challenging situations. The reduction in high frequency HRV during execution of the MIST reflects parasympathetic withdrawal, which is commonly elicited by conditions of acute stress [7]. Reductions in heart rate variability have been observed during exposure to various experimental stressors, including cognitive load (e.g., [22,21,53]). However, the study groups did not differ in the magnitude of stress-induced HRV modulation. It may therefore be concluded that the crisis managers' reduced physiological reactivity during cognitive activation was confined to the sympathetic system. In addition, no significant group differences emerged in HRV recorded at rest. High frequency HRV during resting state is a well-established risk marker of numerous physical and mental disorders, as well as a predictor of general morbidity [43,2,60,15].
The present data therefore do not support the view that there is a generally elevated health risk associated with the duties of crisis management. Several factors could account for the superior subjective health, lower self-reported stress and reduced psychophysiological stress reactivity of the crisis managers. First, greater interpersonal resources, which facilitate coping with crisis-related burden and general occupational stressors, should be considered. This is supported by the higher scores on the Social Recreation scale of the Questionnaire for Recuperation and Strain, which aims to quantify the presence of positive social contacts associated with pleasant feelings and easing of tension (Kallus, 1995). Moreover, crisis managers' lower scores on the Social Strain scale suggest a reduced burden from conflicts, and a more positive perception of the social environment. Social support and a positive social network undoubtedly represent powerful recuperative resources, with empirical evidence confirming that social support reduces stress and facilitates coping [61]. The positive effects of social support on physical and mental health are also beyond question (e.g., [62,51,48]). In the present context it is also notable that social support has been shown to diminish psychophysiological reactivity to experimental stressors (e.g., [58,61]). As mentioned in the Introduction section, psychological flexibility may furthermore contribute to improved stress-resistance in crisis managers. Their working conditions, involving significant but time-limited stressor exposure and longer phases of moderate strain, may foster greater flexibility in behavioral, cognitive and energetic adjustments, thereby conferring health benefits [33]. In terms of "stress inoculation", confrontation with intermittent, strong stressors may furthermore promote psychophysiological toughness and resilience [20].
In contrast, more constant exposure to relatively uniform stressors, which may be more typical of the conditions under which managers in other occupational sectors operate, may be less helpful with respect to the acquisition of coping flexibility and toughness. In some crisis managers, a tendency to pursue high levels of stimulation and limit experiences, i.e. sensation seeking [65], may also play a role. The possible contribution of health behaviors, in particular physical activity, to group differences should also be discussed. Regular exercise is known to increase stress tolerance [28], improve affective wellbeing [50], and confer benefits to mental and physical health (e.g., [23]). Various studies have furthermore demonstrated that higher levels of physical exercise and fitness are associated with reduced psychophysiological stress reactivity (e.g., [26]). The crisis managers investigated did indeed report almost twice as much physical exercise as the control group. However, the role of this variable in moderating group differences is challenged by the finding that almost all of the outcome measures remained significant after its inclusion as a covariate in the MANOVA. The group effect for physical health, assessed using the von Zerssen Symptom Checklist, disappeared when physical exercise was held constant, which is put into perspective by the finding that the difference in the Physical Problems scale of the Questionnaire for Recuperation and Strain remained significant. However, it should be noted that habitual health behaviors were assessed only approximately, by two questions referring to the time per week that participants spent on sports and the days per week on which they consumed alcohol. In future studies, it would be useful to include a more comprehensive health behavior assessment, using a validated psychometric instrument.
Moreover, in addition to physical exercise, physical fitness, for example in terms of physical working capacity or maximum oxygen consumption, should also be taken into consideration (cf. [3]). Furthermore, processes of professional selection should be considered when interpreting the present findings. The work history of many crisis managers includes substantial activity within rescue services, where they are repeatedly exposed to disaster environments and associated stressors. It may be that it is predominantly these types of individuals who reach leading positions in the field, by demonstrating the ability to cope efficiently with such burdens. Successful acquisition of psychological flexibility may also be of relevance in this selection process. The investigated sample was characterized by a relatively high level of professional experience, i.e. approximately 11 years of crisis management and 23 years' total work experience. Therefore, a "survivor effect" cannot be ruled out, in which only those individuals who are more stress-resistant continue to work in crisis management for such a long period of time; those who are less resilient are more likely to have changed jobs earlier in their careers. As such, the results may apply specifically to senior crisis managers, and may not necessarily generalize to those with less experience. It should not be overlooked that some studies have suggested greater strain, and reduced wellbeing, in other professionals working in disaster environments, particularly paramedics and rescue workers [1,5,31]. By definition, crisis managers must be regarded as a specific occupational group that clearly differs from rescue personnel in terms of their duties and particular role within their team. It has been demonstrated in other occupational fields that holding a leadership position does not confer increased strain and health risk, but rather that perceived occupational stress is inversely related to status in the professional hierarchy [42,57].
This may be explained within the framework of the classical job demand-control model [32], in which mental strain due to demanding work is reduced if an individual can exert greater job control in terms of decisional latitude and scope for action. To further clarify this issue, our research approach could be extended in future studies. While in the present design crisis management was contrasted with other leading positions, comparisons between crisis managers and action forces or medical personnel would allow for assessment of differences in self-reported stress and psychophysiological stress responses associated with different emergency aid roles and functions: the contributions of specific occupational and organizational factors (e.g., command structure, allocation of discretionary competencies and management responsibilities, workload or job content) to stress levels could be systematically analyzed. Naturally, the present findings primarily reflect crisis managers in Austria and cannot be generalized to other countries. Due to the limited number and availability of local crisis managers, the sample employed was relatively small. This is particularly relevant to the self-report measures, where a larger sample size may have enabled a more efficient investigation of the specific pattern of stress components and psychosocial resources. Moreover, even though the study groups did not differ significantly in terms of relevant demographics, a case-control design specifically matching each crisis manager to a non-crisis manager on these variables may have been superior to the present one. As a further limitation, possible distortion of the self-report data due to demand effects of the questionnaires should be taken into account. It may be that some crisis managers' self-concept includes, for example, particularly high stress resistance and recuperative resources.
These capacities may be more inherent to the occupational image of crisis managers compared to leaders in other fields. It is therefore possible that a part of this group exhibited a response tendency toward presenting themselves as particularly robust and resilient. Selection bias in the recruitment of participants should also be taken into account. Because only a subgroup of crisis managers, who were invited to participate by their supervisors, responded to this offer, it cannot be ruled out that those with a low subjective stress burden and pronounced resistance were more likely to participate. As the first systematic analysis of stress in crisis managers, this study revealed evidence of lower stress burden, a more positive recuperation-strain-balance and improved self-rated health status. Moreover, crisis managers' reduced autonomic reactivity to both crisis-related and non-specific stressors and cognitive challenge points towards elevated bodily stress resistance. These data underline crisis managers' increased behavioral and psychophysiological adjustment to the extreme demands of their vocation. In addition to its importance for the conservation of physical and mental health, improved stress tolerance may also reduce vulnerability to stress-related performance decline, thereby helping to ensure adequate preparedness for major incidents and maintenance of public security. The study was supported by the European Commission (project PsyCris, FP7-SEC-2012-1). A. Janka, C. Adler, L. Fischer, P. Perakakis, P. Guerra and S. Duschek declared that they have no conflict of interest. Human and animal rights and Informed Consent: All procedures followed were in accordance with the ethical standards of the responsible committee on human experimentation (institutional and national) and with the Helsinki Declaration of 1975, as revised in 2000. Informed consent was obtained from all participants for being included in the study. Alexander, D. A. , & Klein, S. (2001).
Ambulance personnel and critical incidents. The British Journal of Psychiatry, 178(1), 76–81. doi:10.1192/bjp.178.1.76 Almoznino-Sarafian, D. , Sarafian, G. , Zyssman, I. , Shteinshnaider, M. , Tzur, I. , Kaplan, B.-Z. , … Gorelik, O. (2009). Application of HRV-CD for estimation of life expectancy in various clinical disorders. European Journal of Internal Medicine, 20(8), 779–783. doi:10.1016/j.ejim.2009.08.006 American College of Sports Medicine. (2013). ACSM's guidelines for exercise testing and prescription. Lippincott Williams & Wilkins. Beaton, R. , Murphy, S. , & Pike, K. (1996). Work and nonwork stressors, negative affective states, and pain complaints among firefighters and paramedics. International Journal of Stress Management, 3(4), 223–237. doi:10.1017/S1049023X00040218 Bennett, P. , Williams, Y. , Page, N. , Hood, K. , & Woollard, M. (2004). Levels of mental health problems among UK emergency ambulance workers. Emergency Medicine Journal, 21(2), 235–236. doi:10.1136/emj.2003.005645 Berger, W. , Coutinho, E. S. F. , Figueira, I. , Marques-Portella, C. , Luz, M. P. , Neylan, T. C. , … Mendlowicz, M. V. (2012). Rescuers at risk: a systematic review and meta-regression analysis of the worldwide current prevalence and correlates of PTSD in rescue workers. Social Psychiatry and Psychiatric Epidemiology, 47(6), 1001–1011. doi:10.1007/s00127-011-0408-2 Berntson, G. G. , Bigger, J. T. , Eckberg, D. L. , Grossman, P. , Kaufmann, P. G. , Malik, M. , … van der Molen, M. W. (1997). Heart rate variability: Origins, methods, and interpretive caveats. Psychophysiology, 34, 623–648. doi:10.1111/j.1469-8986.1997.tb02140.x Berntson, G. G. , Quigley, K. S. , & Lozano, D. (2007). Cardiovascular psychophysiology. In Cacioppo, J.T., Tassinary, L.G., & Berntson, G.G. (Eds.). Handbook of psychophysiology. Bonanno, G. A. , Papa, A. , Lalande, K. , Westphal, M. , & Coifman, K. (2004).
The importance of being flexible: the ability to both enhance and suppress emotional expression predicts long-term adjustment. Psychological Science, 15(7), 482–487. doi:10.1111/j.0956-7976.2004.00705.x Bradley, M. M. , & Lang, P. J. (1994). Measuring emotion: the self-assessment manikin and the semantic differential. Journal of behavior therapy and experimental psychiatry, 25(1), 49–59. doi:10.1016/0005-7916(94)90063-9 Bradley, M. M. , & Lang, P. J. (2007a). The International Affective Digitized Sounds (2nd; IADS-2): Affective ratings of sounds and instruction manual. University of Florida, Gainesville, FL, Tech. Rep. B-3. Bradley, M. M. , & Lang, P. J. (2007b). Emotion and Motivation. In Cacioppo, J.T., Tassinary, L.G., & Berntson, G.G. (Eds.). Handbook of Psychophysiology. Brannon, L. , Feist, J. , & Updegraff, J. (2007). Health psychology: An introduction to behavior and health. Cengage Learning. Brown, J. , Mulhern, G. , & Joseph, S. (2002). Incident-related stressors, locus of control, coping, and psychological distress among firefighters in Northern Ireland. Journal of Traumatic Stress, 15(2), 161–168. doi:10.1023/A:1014816309959 Bylsma, L. M. , Salomon, K. , Taylor-Clift, A. , Morris, B. H. , & Rottenberg, J. (2014). RSA Reactivity in Current and Remitted Major Depressive Disorder. Psychosomatic medicine, 76(1), 66. doi:10.1097/PSY.0000000000000019 Critical Response in Security and Safety Emergencies. (2011). Retrieved from http://www.2020horizon.com/CRISYS-Critical-Response-in-Security-and-Safety-Emergencies%28CRISYS%29-s5246.html. Cukor, J. , Wyka, K. , Mello, B. , Olden, M. , Jayasinghe, N. , Roberts, J. , … Difede, J. (2011). The longitudinal course of PTSD among disaster workers deployed to the World Trade Center following the attacks of September 11th. Journal of Traumatic Stress, 24(5), 506–514. doi:10.1002/jts.20672 Dawson, M. E. , Schell, A. M. , & Filion, D. L. (2007). The electrodermal system.
In Cacioppo, J.T., Tassinary, L.G., & Berntson, G.G. (Eds.). Handbook of psychophysiology. Dedovic, K. , Renwick, R. , Mahani, N. K. , Engert, V. , Lupien, S. J. , & Pruessner, J. N. (2005). The Montreal Imaging Stress Task: using functional imaging to investigate the effects of perceiving and processing psychosocial stress in the human brain. Journal of psychiatry & neuroscience: JPN, 30(5), 319. Dienstbier, R. A. (1989). Arousal and physiological toughness: implications for mental and physical health. Psychological review, 96(1), 84. Duschek, S. , Muckenthaler, M. , Werner, N. , & Reyes del Paso, G. A. (2009). Relationships between features of autonomic cardiovascular control and cognitive performance. Biological psychology, 81(2), 110–117. doi:10.1016/j.biopsycho.2009.03.003 Duschek, S. , Werner, N. , Kapan, N. , & Reyes del Paso, G. A. (2008). Patterns of cerebral blood flow and systemic hemodynamics during arithmetic processing. Journal of Psychophysiology, 22(2), 81–90. doi:10.1027/0269-8803.22.2.81 Dylewicz, P. , Borowicz-Bienkowska, S. , Deskur-Smielecka, E. , Kocur, P. , Przywarska, I. , & Wilk, M. (2005). Value of exercise capacity and physical activity in the prevention of cardiovascular diseases – brief review of the current literature. Journal of Public Health, 13(6), 313–317. doi:10.1007/s10389-005-0127-9 Essex, B. , & Scott, L. B. (2008). Chronic stress and associated coping strategies among volunteer EMS personnel. Prehospital Emergency Care, 12(1), 69–75. doi:10.1080/10903120701707955 Fliege, H. , Rose, M. , Arck, P. , Levenstein, S. , & Klapp, B. (2001). Validierung des "Perceived Stress Questionnaire" (PSQ) an einer deutschen Stichprobe. Diagnostica, 47(3), 142–152. doi:10.1026//0012-1924.47.3.142 Forcier, K. , Stroud, L. R. , Papandonatos, G. D. , Hitsman, B. , Reiches, M. , Krishnamoorthy, J. , & Niaura, R. (2006). Links between physical fitness and cardiovascular reactivity and recovery to psychological stressors: A meta-analysis.
Health Psychology, 25(6), 723. doi:10.1037/0278-6133.25.6.723 Guha-Sapir, D. , Vos, F. , Below, R. , & Penserre, S. (2012). Annual disaster statistical review 2011: the Numbers and Trends (Tech. Rep.). UCL. doi:10.1097/PSY.0b013e318148c4c0 Hamer, M. , & Steptoe, A. (2007). Association between physical fitness, parasympathetic control, and proinflammatory responses to mental stress. Psychosomatic medicine, 69(7), 660–666. Harris, W. C. , Hancock, P. , & Harris, S. C. (2005). Information processing changes following extended stress. Military Psychology, 17(2), 115. doi:10.1207/s15327876mp1702_4 Hoffman, B. M. , Babyak, M. A. , Craighead, W. E. , Sherwood, A. , Doraiswamy, P. M. , Coons, M. J. , & Blumenthal, J. A. (2011). Exercise and pharmacotherapy in patients with major depression: one-year follow-up of the smile study. Psychosomatic Medicine, 73(2), 127. doi:10.1097/PSY.0b013e31820433a5 Kalemoglu, M. , & Keskin, O. (2006). Burnout syndrome at the emergency service. Scand J Trauma Resusc Emerg Med, 14, 37–40. Karasek Jr, R. A. (1979). Job demands, job decision latitude, and mental strain: Implications for job redesign. Administrative science quarterly, 24, 285–308. Kashdan, T. B. , & Rottenberg, J. (2010). Psychological flexibility as a fundamental aspect of health. Clinical psychology review, 30(7), 865–878. doi:10.1016/j.cpr.2010.03.001 Klaperski, S. , von Dawans, B. , Heinrichs, M. , & Fuchs, R. (2014). Effects of a 12-week endurance training program on the physiological response to psychosocial stress in men: a randomized controlled trial. Journal of behavioral medicine, 37(6), 1118–1133. Lang, P. J. , Bradley, M. M. , & Cuthbert, B. N. (1997). International affective picture system (IAPS): Technical manual and affective ratings. NIMH Center for the Study of Emotion and Attention.. LeBlanc, V. R. (2009). The effects of acute stress on performance: implications for health professions education. Academic Medicine, 84(10), S25–S33. LeBlanc, V. R. , MacDonald, R. 
D. , McArthur, B. , King, K. , & Lepine, T. (2005). Paramedic performance in calculating drug dosages following stressful scenarios in a human patient simulator. Prehospital Emergency Care, 9(4), 439–444. doi:10.1080/10903120500255255 LeBlanc, V. R. , Regehr, C. , Birze, A. , King, K. , Scott, A. K. , MacDonald, R. , & Tavares, W. (2011). The association between posttraumatic stress, coping, and acute stress responses in paramedics. Traumatology, 17(4), 10. doi:10.1177/1534765611429078 LeBlanc, V. R. , Regehr, C. , Tavares, W. , Scott, A. K. , MacDonald, R. , & King, K. (2012). The impact of stress on paramedic performance during simulated critical events. Prehospital and disaster medicine, 27(04), 369–374. doi:10.1017/S1049023X12001021 Levenstein, S. , Prantera, C. , Varvo, V. , Scribano, M. L. , Berto, E. , Luzi, C. , & Andreoli, A. (1993). Development of the Perceived Stress Questionnaire: a new tool for psychosomatic research. Journal of psychosomatic research, 37(1), 19–32. doi:10.1016/0022-3999(93)90120-5 Licht, C. M. , De Geus, E. J. , Van Dyck, R. , & Penninx, B. W. (2009). Association between anxiety disorders and heart rate variability in The Netherlands Study of Depression and Anxiety (NESDA). Psychosomatic medicine, 71(5), 508–518. doi:10.1097/PSY.0b013e3181a292a6 Marmot, M. G. , Stansfeld, S. , Patel, C. , North, F. , Head, J. , White, I. , … Smith, G. D. (1991). Health inequalities among British civil servants: the Whitehall II study. The Lancet, 337(8754), 1387–1393. doi:10.1016/0140-6736(91)93068-K Masi, C. M. , Hawkley, L. C. , Rickett, E. M. , & Cacioppo, J. T. (2007). Respiratory sinus arrhythmia and diseases of aging: Obesity, diabetes mellitus, and hypertension. Biological psychology, 74(2), 212–223. doi:10.1016/j.biopsycho.2006.07.006 McClelland, D. C. (2000). Human motivation. CUP Archive. Munich Re. (2014). doi:10.1016/j.cmpb.2004.03.004 Peltola, M. , Tulppo, M. P. , Kiviniemi, A. , Hautala, A. J. , Seppänen, T. , Barthel, P.
… Mäkikallio, T. H. (2008). Respiratory sinus arrhythmia as a predictor of sudden cardiac death after myocardial infarction. Annals of medicine, 40(5), 376–382. Penwell, L. , & Larkin, K. (2010). Social support and risk for cardiovascular disease and cancer: A qualitative review examining the role of inflammatory processes. Health Psychology Review, 4, 42–55. doi:10.1080/17437190903427546 Perakakis, P. , Joffily, M. , Taylor, M. , Guerra, P. , & Vila, J. (2010). KARDIA: A Matlab software for the analysis of cardiac interbeat intervals. Computer methods and programs in biomedicine, 98(1), 83–89. doi:10.1016/j.cmpb.2009.10.002 Reed, J. , & Buck, S. (2009). The effect of regular aerobic exercise on positive-activated affect: A meta-analysis. Psychology of Sport and Exercise, 10(6), 581–594. doi:10.1016/j.psychsport.2009.05.009 Reinhard, F. , & Maercker, A. (2004). Sekundäre traumatisierung, posttraumatische belastungsstörung, burnout und soziale unterstützung bei medizinischem rettungspersonal. Zeitschrift für Medizinische Psychologie, 13(1), 29–36. Reyes del Paso, G. A. , Garrido, S. , Pulgar, Á. , Martín-Vázquez, M. , & Duschek, S. (2010). Aberrances in autonomic cardiovascular regulation in fibromyalgia syndrome and their relevance for clinical pain reports. Psychosomatic Medicine, 72(5), 462–470. doi:10.1097/PSY.0b013e3181da91f1 Reyes del Paso, G. A. , Langewitz, W. , Mulder, L. J. , Roon, A. , & Duschek, S. (2013). The utility of low frequency heart rate variability as an index of sympathetic cardiac tone: a review with emphasis on a reanalysis of previous studies. Psychophysiology, 50(5), 477–487. doi:10.1111/psyp.12027 Ritchie, E. C. , Watson, P. J. , & Friedman, M. J. (2006). Interventions following mass violence and disasters: Strategies for mental health practice. Guilford Publications. Rozanski, A. , & Kubzansky, L. D. (2005). Psychologic functioning and physical health: a paradigm of flexibility. Psychosomatic medicine, 67, S47–S53.
doi:10.1097/01.psy.0000164253.69550.49 Saynaeve, G. (2001). Psycho-social support in situations of mass emergency: A european policy paper concerning different aspects of psychosocial support and social accompaniment for people involved in major accidents and disasters. Brussels: Ministry of Public Health. Schnall, P. L. , Landsbergis, P. A. , & Baker, D. (1994). Job strain and cardiovascular disease. Annual review of public health, 15(1), 381–411. doi:10.1146/annurev.pu.15.050194.002121 Smith, A. M. , Loving, T. J. , Crockett, E. E. , & Campbell, L. (2009). What's closeness got to do with it? Men's and women's cortisol responses when providing and receiving support. Psychosomatic Medicine, 71(8), 843–851. doi:10.1097/PSY.0b013e3181b492e6 Svensson, E. , Angelborg-Thanderz, M. , & Sjöberg, L. (1993). Mission challenge, mental workload and performance in military aviation. Aviation, space, and environmental medicine. Tan, G. , Dao, T. K. , Farmer, L. , Sutherland, R. J. , & Gevirtz, R. (2011). Heart rate variability (HRV) and posttraumatic stress disorder (PTSD): A pilot study. Applied psychophysiology and biofeedback, 36(1), 27–35. doi:10.1007/s10484-010-9141-y Taylor, S. E. (2010). Social support: A review. In Friedman, H.S. (Eds.). The Oxford handbook of health psychology. Uchino, B. N. , Cacioppo, J. T. , & Kiecolt-Glaser, J. K. (1996). The relationship between social support and physiological processes: a review with emphasis on underlying mechanisms and implications for health. Psychological bulletin, 119(3), 488. Van Der Ploeg, E. , & Kleber, R. J. (2003). Acute and chronic job stressors among ambulance personnel: predictors of health symptoms. Occupational and environmental medicine, 60(suppl 1), i40–i46. doi:10.1136/oem.60.suppl_1.i40 von Zerssen, D. , & Petermann, F. (2011). Beschwerden-Liste - Revidierte Fassung (BL-R). Hogrefe: Göttingen, Germany. Zuckerman, M. (1990). The psychophysiology of sensation seeking. Journal of personality, 58(1), 313–345. 1. 
The following stimuli were selected: crisis-related images 3001, 3010, 3030, 3051, 3059, 3064, 3071, 3101, 3103, 3120, 3130, 3180, 3181, 3185, 3191; general stress images 1525, 1930, 6211, 6231, 6250, 6300, 6315, 6350, 6510, 6520, 6560, 6563, 6570, 6838, 6840; pleasant images 2071, 2080, 2340, 4220, 4520, 4599, 4607, 4608, 4652, 4658, 4659, 4660, 4668, 4676, 4800. 2. Stimuli 277, 319 and 403 were selected. Amelie Janka, Christine Adler, Pandelis Perakakis, Pedro Guerra, Stefan Duschek. "Stress in crisis managers: evidence from self-report and psychophysiological assessments - Version 1". SJS (25 May 2016)
Physics Stack Exchange is a question and answer site for active researchers, academics and students of physics. Relativity of simultaneity example in Resnick My question is a follow-up to this question about simultaneity. I would have posted it as a comment to the replies for that question, but I wasn't allowed to. When Resnick introduces relativity of simultaneity, he gives the following example (see figure): S & S' are two inertial frames with a relative velocity v, each with its own synchronised clocks and meter sticks. Two events leave marks, at A & B in reference frame S and at A' & B' in reference frame S'. The observers in the two frames are located at O (equidistant from A, B) and O' (equidistant from A', B'), respectively. When the event happens at A, A' coincides with A, and when the event happens at B, B' coincides with B. Resnick goes on to show that the events can't be simultaneous for both observers because, if the events are simultaneous for O, then O' will see the light pulses at slightly different times (viewed from the S frame). I'm missing something in this argument: What basic inconsistency will arise if the events were simultaneous in both the frames? What will happen if the clocks in S at A, B show the same time for the events, and the clocks in S' at A', B' show the same time for the events? Also, a related question: Will the synchronised clocks of S' appear unsynchronised (to each other) to the observer in S? How will the observer in S check this? special-relativity KartMan $\begingroup$ KartMan: "When the event happens at A, A' coincides with A, and when the event happens at B, B' coincides with B."
-- Rather, the one event is the coincidence event of A and A' "meeting in passing" (consisting explicitly of (A)'s indication of "being passed by (A')", and of (A')'s indication of "being passed by (A)"); and the other event is the coincidence event of B and B' "meeting in passing" (consisting explicitly of (B)'s indication of "being passed by (B')", and of (B')'s indication of "being passed by (B)"). You notice that "primes" as part of names are inconvenient, btw. $\endgroup$ $\begingroup$ Nevertheless: +1 for picking Resnick's pics which explicitly identify participant A' in distinction to participant A, and B' in distinction to B. Because: this allows (also in accordance with Einstein's definition) for instance to conclude and say that (A)'s indication of "being passed by (A')" was simultaneous to (B)'s indication of "being passed by (B')" as well as (A')'s indication of "being passed by (A)" was not simultaneous to (B')'s indication of "being passed by (B)". (I plan to elaborate this and submit as an answer, but I can get to that only in more than a week.) $\endgroup$ I'm missing something in this argument: what basic inconsistency will arise if the events were simultaneous in both the frames? The one-way speed of light will not be c in all inertial frames of reference. Essentially, the clocks at rest in each frame are synchronized according to the Einstein synchronization convention. Indeed, the Lorentz transformation assumes Einstein synchronization. Thus, Einstein synchronization guarantees that the one-way speed of light is c and thus, that simultaneity is relative - synchronized spatially separated clocks at rest in one frame are not synchronized according to a relatively moving reference frame. This is most easily seen by direct application of the Lorentz transformation.
Let clock A be located at $x= 0$ and clock B be located at $x = 1$ in the unprimed reference frame and assume that the clocks are synchronized in that frame: $$t_A = t_B$$ Assuming standard configuration, according to a relatively moving reference frame with velocity $v$, when clock A reads $t_A = 0$, clock B reads $$t_B = \frac{v\,x_B}{c^2} = \frac{v}{c^2}$$ Thus, clocks A & B are not synchronized in the relatively moving frame of reference. Alfred Centauri $\begingroup$ Yes, the synchronisation procedure (along with invariance of c) is really key to this. Once you concede that this is the correct way to synchronise clocks in an inertial frame, the other things follow. However, can you show that any other consistent way of synchronising clocks is equivalent to this? $\endgroup$ – KartMan $\begingroup$ @KartMan, you might find this interesting: Synchronization Gauges and the Principles of Special Relativity arxiv.org/abs/gr-qc/0409105 $\endgroup$ – Alfred Centauri $\begingroup$ @AlfredCentauri You say "the Lorentz transformation assumes Einstein synchronization". Is it really necessary to have exactly Einstein's synchronization or would other consistent synchronizations provide the same kind of equations? $\endgroup$ – Marco Disce First, from the point of view of $O$. The lightning strikes at points $A$ and $B$ happen simultaneously. Light propagates away from those points, and since $O$ is halfway between $A$ and $B$, the light fronts reach him at the same moment (equal distances and equal velocities give equal times). Now, for things as $O'$ sees them. The important thing to remember is that by the postulates of special relativity, $O'$ must measure the speed of light as being $c$. It doesn't matter that he's moving towards $B$ and away from $A$ - the light fronts must be moving at speed $c$.
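The clock-reading claim above can be checked numerically. The following sketch is my own illustration (not part of the answer); it assumes units with $c = 1$, standard configuration, and simply applies the Lorentz transformation $t' = \gamma\,(t - vx/c^2)$ to the two clock events:

```python
# Sketch (not from the answer itself): applying the Lorentz transformation
# t' = gamma*(t - v*x/c^2) to the two clock events.
# Units with c = 1 are an assumption made for simplicity.

def lorentz_t(t, x, v, c=1.0):
    """Time coordinate in a frame moving with velocity v (standard configuration)."""
    gamma = 1.0 / (1.0 - (v / c) ** 2) ** 0.5
    return gamma * (t - v * x / c ** 2)

c, v = 1.0, 0.6
# Clock A at x = 0 and clock B at x = 1, synchronized in the unprimed frame.
# Find the unprimed-frame reading of clock B that is simultaneous (in the
# primed frame) with clock A reading t_A = 0.
t_A = 0.0
t_B = v * 1.0 / c ** 2          # predicted reading: v/c^2
# Check: both events have the same primed time coordinate.
assert abs(lorentz_t(t_A, 0.0, v) - lorentz_t(t_B, 1.0, v)) < 1e-12
```

The two events (clock A reading $0$ at $x = 0$, clock B reading $v/c^2$ at $x = 1$) receive the same primed time coordinate, i.e. they are simultaneous in the moving frame even though the clock readings differ.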
So the two light fronts have equal speeds, and they started at equal distances from him (remember that he says that the lightning bolts hit $A'$ and $B'$, which are equidistant from him). So they should take equal amounts of time to reach him. And they do - if you're just looking at the time intervals. But they don't reach him at the same moment of time. How can we resolve this? They must have started at different times. The lightning bolt at $B'$ must have struck earlier than the lightning bolt at $A'$, which is why the $B'$ light front reaches him first. To clarify one point which may not be obvious: $O$ says the two light spheres are centered about $A$ and $B$, which are stationary with respect to him/her. $O'$ says that the light spheres are centered about $A'$ and $B'$, which are stationary with respect to him ($O'$). Regarding the synchronization question - whenever we talk about an inertial reference frame (say $S$), we actually mean a family of observers that all have the same motion. All the observers in the same inertial reference frame have their clocks synchronized. How do they do it? They know their distances from each other, and they know the speed of light is constant. They start with a specific observer at the (designated) origin, and he sends out light pulses every second. The observers who are $1\,\mbox{m}$ away from him know that it will take $\frac{1\,\mbox{m}}{3\times 10^{8}\mbox{m}/\mbox{s}}$ for the signal to reach them, and offset their clocks accordingly so that all their clocks are synchronized with the clock at the origin. And so forth. When you ask how the clocks look from another reference frame (say $S'$), you have to specify - do you mean just one particular observer in $S'$ (say the observer at $O'$)? Or do you mean the family of observers in $S'$? The family of observers in $S'$ would state that the clocks in $S$ are all running slow at the same rate, but that they are offset from one another (the relativity of simultaneity again).
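The clock-setting rule just described is a one-liner; here is a minimal sketch of it (mine, not the answerer's; the helper name `synchronized_reading` is made up for illustration):

```python
# Illustrative sketch of the clock-setting rule described above: an observer
# at distance d from the origin receives the origin's time signal and sets
# their clock to (signal timestamp) + d/c. Numbers here are arbitrary.

c = 3e8  # speed of light in m/s

def synchronized_reading(signal_timestamp, distance_m):
    """Clock reading an observer should adopt on receiving the signal."""
    return signal_timestamp + distance_m / c

# An observer 1 m away adds the light travel time 1/(3e8) s.
offset = synchronized_reading(0.0, 1.0)
assert abs(offset - 1.0 / 3e8) < 1e-24
```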
For a single observer, you'd have to take into account light travel times and the like - I'm not sure what the answer to that would be. G. Paily What I find interesting is that one event can be seen as two events by another. Can it not? If one bolt of lightning struck the ground near the little red man, and the light expanded outward until eventually striking mirrors that are located at the ends of a train passing by, the light will eventually return to the little red man via these reflections. Thus he gets a second view of the lightning strike. But to those aboard the train, the light did not reach the opposite ends at the same time, thus to them it seemed to be 2 separate lightning strikes. Am I missing something? $\begingroup$ Sean: "If one bolt of lightning struck the ground near the little red man, and the light expanded outward until eventually striking mirrors that are located at the ends of a train passing by [...] to those aboard the train, the light did not reach the opposite ends at the same time" -- Correct. (Details can be supplied. Note: there are several photons required.) "thus to them it seemed to be 2 separate lightning strikes." -- No; but two separate reflections of one signal (lightning strike). "Am I missing something?" -- For one: the "blue signal fronts" seem unreal/unphysical. $\endgroup$ $\begingroup$ The two blue signal fronts indicate what would not actually occur, meaning the light reflected off the mirrors would not actually reach the green man located on the train simultaneously. Therefore, if he used two small mirrors such that he could observe both ends of the train at the same time, he would see the 2 reflected images of the lightning strike, but not simultaneously. Thus to him, the 2 reflected images of the lightning strike do not seem to be one and the same, but seem to be two separate events. $\endgroup$ $\begingroup$ Sean: "The two blue signal fronts indicate what would not actually occur" -- O.k.
"the light reflected off the mirrors would not actually reach the green man located on the train simultaneously." -- It's incorrect to speak of "simultaneity" in this case. Instead: The light reflected off the mirrors would not actually reach the green man located on the train in coincidence. "Thus to him, the 2 reflected images of the lightning strike do not seem to be one and the same, but seem to be two separate events." -- Sure: two separate reflection events; in addition to the initial strike event. $\endgroup$ Before trying to address your questions, let's discuss some general features of the figure and setup: Two events leave marks, at A & B in reference frame S and at A' & B' in reference frame S'. The observers in the two frames are located at O (equidistant from A,B) and O' (equidistant from A',B'). Along with O and O' the named elements A, B, A', and B' should be considered ("locations of") observers, too, since each is supposed to determine (and remember) with whom they took part in coincidence events, and in which order. Foremost, they are all likewise identifiable participants of the setup. (It is of course beyond the scope of this question how to determine which, if any, such participants were and remained at rest to each other, allowing them to be attributed some particular "distance" from each other, and permitting the comparison of "distances".) O' will see the light pulses at slightly different times (viewed from the S frame). The figure, especially comparing pictures (2) and (4), shows that O' saw the two signal events of "A and A' meeting in passing" and of "B and B' meeting in passing" not in coincidence (i.e. unlike O saw these two signal events) but first "B and B' meeting in passing" and afterwards "A and A' meeting in passing". (Indeed, in between, O' observed the coincidence (in passing) with B; if the figure is meant to be so precise, and if $\frac{v}{c} = \beta \gt \frac{1}{2}$.)
These determinations (or setup prescriptions) are not subject to any secondary arbitrary "view" such as "from the S frame", but they are proper and unambiguous. However, the individual pictures in Resnick's figure are apparently suggestive of the "perspective from the S frame", or rather: with particular attention to the relations between participants A, O, and B (instead of the relations between participants A', O', and B'). For instance: The first picture shows the three coincidence events of "A and A' meeting in passing", "O and O' meeting in passing", and "B and B' meeting in passing" together. And, consistent with the overall setup prescription (if I understand it correctly), there is something in common (which suggests putting these three coincidence events in the same picture): A's indication of "being passed by A'" and O's indication of "being passed by O'" and B's indication of "being passed by B'" were (mutually) simultaneous with each other. But in turn, A''s indication of "being passed by A" and O''s indication of "being passed by O" and B''s indication of "being passed by B" were (pairwise) not simultaneous with each other. The other pictures are therefore presumably also meant to illustrate corresponding relations between participants A, O, and B. Since, according to the overall setup prescription, O's indication of "being passed by A'" and B's indication of "being passed by O'" were simultaneous to each other, therefore picture 4 (with $\beta \gt \frac{1}{2}$) displays consistently some particular indication of O after having been passed by A'. In contrast, if $\frac{\sqrt{5} - 1}{2} \gt \beta \gt \frac{1}{2}$ then O''s indication of "seeing the coincidence event of A and A' having met in passing" was before A''s indication of "being passed by O"; therefore picture (4) is certainly not representative of relations between participants A', O', and B', for all values $\beta \gt \frac{1}{2}$.
if the events are simultaneous for O Simultaneity is not defined for entire events (which in general have many different participants who coincide at either one of these events); but simultaneity (or otherwise: "temporal order") is defined for indications of certain pairs of participants at events (namely those who were and remained at rest to each other). To repeat the conclusions concerning simultaneity (or "temporal order") for the two signal events of the setup given above: A's indication of "being passed by A'" and B's indication of "being passed by B'" were simultaneous with each other. And: A''s indication of "being passed by A" and B''s indication of "being passed by B" were (pairwise) not simultaneous with each other; indeed B''s indication of "being passed by B" had been before A''s indication of "being passed by A". But simultaneity (or dis-simultaneity) is not attributable to the entire two signal events "A and A' meeting in passing", and "B and B' meeting in passing". Now to your concrete questions (in an order which I find convenient): What will happen if the clocks in S at A,B show the same time for the events, and the clocks in S' at A', B' show the same time for the events? Then such clocks associated with A' and B' would be called not synchronized. Will the synchronised clocks of S' appear unsynchronised (to each other) to the observer in S? Having been and remained synchronized (or un-synchronized) is a proper attribute of any given system of clocks (which were and remained at rest to each other) themselves; it's not a matter of "appearance" to anyone who does not belong to that system. How will the observer in S check this? At least in principle anyone would consult the observations and (proper) conclusions obtained by the members of the system under consideration themselves. The required observational data, namely identifying who took part together in coincidence events, and who saw which signal events in which order, is presumably so basic as to be unambiguously comprehensible by anyone who is conscientious enough to wonder "how to check?".
what basic inconsistency will arise if the events were simultaneous in both the frames? First of all note again that simultaneity is a matter of indications of individual participants in events, not a matter of entire events. If we require (different from the conclusions listed above) that A''s indication of "being passed by A" and B''s indication of "being passed by B" were simultaneous with each other, and also (as above) that A's indication of "being passed by A'" and B's indication of "being passed by B'" were simultaneous with each other, too, then, yes, this requirement is inconsistent with the setup prescription given above (if I understand it correctly). (If one setup condition could be dropped or modified, in order to restore consistency with the "new requirements", then the "direction of motion, $\vec v$" might be changed from "along A, O, and B" and likewise in turn "along B', O', and A'" into "perpendicular motion". Accordingly, the first picture could remain; but the other three would no longer apply at all.)
Coexistency on Hilbert Space Effect Algebras and a Characterisation of Its Symmetry Transformations György Pál Gehér & Peter Šemrl Communications in Mathematical Physics, volume 379, pages 1077–1112 (2020) The Hilbert space effect algebra is a fundamental mathematical structure which is used to describe unsharp quantum measurements in Ludwig's formulation of quantum mechanics. Each effect represents a quantum (fuzzy) event. The relation of coexistence plays an important role in this theory, as it expresses when two quantum events can be measured together by applying a suitable apparatus. This paper's first goal is to answer a very natural question about this relation, namely: when are two effects coexistent with exactly the same effects? The other main aim is to describe all automorphisms of the effect algebra with respect to the relation of coexistence. In particular, we will see that they can differ quite a lot from usual standard automorphisms, which appear for instance in Ludwig's theorem. As a byproduct of our methods we also strengthen a theorem of Molnár. On the classical mathematical formulation of quantum mechanics Throughout this paper H will denote a complex, not necessarily separable, Hilbert space with dimension at least 2. In the classical mathematical formulation of quantum mechanics such a space is used to describe experiments at the atomic scale. For instance, the famous Stern–Gerlach experiment (which was one of the first experiments showing the reality of the quantum spin) can be described using the two-dimensional Hilbert space \({\mathbb {C}}^2\). In the classical formulation of quantum mechanics, the space of all rank-one projections \({{\mathcal {P}}}_1(H)\) plays an important role, as its elements represent so-called quantum pure states (in particular, in the Stern–Gerlach experiment they represent the quantum spin).
The so-called transition probability between two pure states \(P, Q \in {{\mathcal {P}}}_1(H)\) is the number \(\mathrm{tr}PQ\), where \(\mathrm{tr}\) denotes the trace. For the physical meaning of this quantity we refer the interested reader to e.g. [33]. A very important cornerstone of the mathematical foundations of quantum mechanics is Wigner's theorem, which states the following. Wigner's Theorem Given a bijective map \(\phi :{{\mathcal {P}}}_1(H)\rightarrow {{\mathcal {P}}}_1(H)\) that preserves the transition probability, i.e. \(\mathrm{tr}\phi (P)\phi (Q) = \mathrm{tr}PQ\) for all \(P,Q\in {{\mathcal {P}}}_1(H)\), one can always find either a unitary, or an antiunitary operator \(U:H\rightarrow H\) that implements \(\phi \), i.e. we have \(\phi (P) = UPU^*\) for all \(P\in {{\mathcal {P}}}_1(H)\). For an elementary proof see [11]. As explained thoroughly by Simon in [29], this theorem plays a crucial role (together with Stone's theorem and some representation theory) in obtaining the general time-dependent Schrödinger equation that describes quantum systems evolving in time (and which is usually written in the form \(i \hslash \tfrac{d}{dt} |\Psi (t)\rangle = {\hat{H}} |\Psi (t)\rangle \), where \(\hslash \) is the reduced Planck constant, \({\hat{H}}\) is the Hamiltonian operator, and \(|\Psi (t)\rangle \) is the unit vector that describes the system at time t). One of the main objectives of quantum mechanics is the study of measurement. In the classical formulation an observable (such as the position/momentum of a particle, or a component of a particle's spin) is represented by a self-adjoint operator. Equivalently, we could say that an observable is represented by a projection-valued measure \(E:{{\mathcal {B}}}_{\mathbb {R}}\rightarrow {{\mathcal {P}}}(H)\) (i.e. 
the spectral measure of the representing self-adjoint operator), where \({{\mathcal {B}}}_{\mathbb {R}}\) denotes the set of all Borel sets in \({\mathbb {R}}\) and \({{\mathcal {P}}}(H)\) the space of all projections (also called sharp effects) acting on H. If \(\Delta \) is a Borel set, then the quantum event that we get a value in \(\Delta \) corresponds to the projection \(E(\Delta )\). However, this mathematical formulation of observables implicitly assumes that measurements are perfectly accurate, which is far from being the case in real life. This was the crucial thought which led Ludwig to give an alternative axiomatic formulation of quantum mechanics which was introduced in his famous books [18] and [19]. On Ludwig's mathematical formulation of quantum mechanics This paper is related to Ludwig's formulation of quantum mechanics, more precisely, we shall examine one of the theory's most important relations, called coexistence (see the definition later). The main difference compared to the classical formulation is that (due to the fact that no perfectly accurate measurement is possible in practice) quantum events are not sharp, but fuzzy. Therefore, according to Ludwig, a quantum event is not necessarily a projection, but rather a self-adjoint operator whose spectrum lies in [0, 1]. Such an operator is called an effect, and the set of all such operators is called the Hilbert space effect algebra, or simply the effect algebra, which will be denoted by \({{\mathcal {E}}}(H)\). Clearly, we have \({{\mathcal {P}}}(H)\subset {{\mathcal {E}}}(H)\). A fuzzy or unsharp quantum observable corresponds to an effect-valued measure on \({{\mathcal {B}}}_{\mathbb {R}}\), which is often called a normalised positive operator-valued measure, see e.g. [13] for more details on this. We point out that the role of effects and positive operator-valued measures was already emphasised in the earlier book [8] of Davies. 
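To make the earlier objects tangible, here is a quick numerical check (my own illustration, not from the paper) that conjugation by a unitary preserves the transition probability \(\mathrm{tr}\,PQ\) between rank-one projections, which is exactly the invariance appearing in the "conversely" reading of Wigner's theorem above. The vectors and the unitary below are arbitrary choices:

```python
# Numerical sanity check: conjugation by a unitary U preserves the
# transition probability tr(PQ) between rank-one projections.
import numpy as np

def proj(v):
    """Rank-one projection onto the (normalised) vector v."""
    v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())

P = proj(np.array([1.0, 2.0j]))
Q = proj(np.array([3.0, 1.0 + 1.0j]))
theta = 0.7
# A unitary on C^2: a real rotation composed with a diagonal phase.
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]]) @ np.diag([1.0, np.exp(0.3j)])
assert np.allclose(U @ U.conj().T, np.eye(2))           # U is unitary
lhs = np.trace(U @ P @ U.conj().T @ U @ Q @ U.conj().T)  # tr( UPU* UQU* )
assert np.isclose(lhs, np.trace(P @ Q))                  # equals tr(PQ)
```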
For some of the subsequent contributions to the theory we refer the reader to the work of Kraus [17] and the recent book of Busch–Lahti–Pellonpää–Ylinen [3]. Let us point out that, contrary to its name, \({{\mathcal {E}}}(H)\) is obviously not an actual algebra. There are a number of operations and relations on the effect algebra that are relevant in mathematical physics. First of all, the usual partial order \(\le \), defined by \(A\le B\) if and only if \(\langle Ax, x \rangle \le \langle B x , x \rangle \) for all \(x \in H\), expresses that the occurrence of the quantum event A implies the occurrence of B. We emphasise that \(({{\mathcal {E}}}(H),\le )\) is not a lattice, because usually there is no largest effect C whose occurrence implies both A and B (see [1, 25, 31] for more details on this). Note that, as can be easily shown, we have \({{\mathcal {E}}}(H) = \{A \in {{\mathcal {B}}}(H) :A=A^*, 0\le A\le I\}\), where \({{\mathcal {B}}}(H)\) denotes the set of all bounded operators on H, \(A^*\) the adjoint of A, and I the identity operator. Hence sometimes the literature refers to \({{\mathcal {E}}}(H)\) as the operator interval [0, I]. Second, the so-called ortho-complementation \(\perp \) is defined by \(A^\perp = I - A\), and it can be thought of as the complement event (or negation) of A, i.e. A occurs if and only if \(A^\perp \) does not. We are mostly interested in the relation of coexistence. Ludwig called two effects coexistent if they can be measured together by applying a suitable apparatus. In the language of mathematics (see [18, Theorem IV.1.2.4]), this translates into the following definition: Definition 1.1 \(A, B \in {{\mathcal {E}}}(H)\) are said to be coexistent, in notation \(A \sim B\), if there are effects \(E,F,G \in {{\mathcal {E}}}(H)\) such that $$\begin{aligned} A = E + G, \quad B = F + G \quad \text {and}\quad E+F+G \in {{\mathcal {E}}}(H).
\end{aligned}$$ We point out that in the earlier work [8] Davies examined the simultaneous measurement of unsharp position and momentum, which is closely related to coexistence. It is apparent from the definition that coexistence is a symmetric relation. Although it is not trivial from the above definition, two sharp effects \(P,Q \in {{\mathcal {P}}}(H)\) are coexistent if and only if they commute (see Sect. 2), which corresponds to the classical formulation. We will denote the set of all effects that are coexistent with \(A\in {{\mathcal {E}}}(H)\) by $$\begin{aligned} A^\sim := \{ C\in {{\mathcal {E}}}(H):C\sim A \}, \end{aligned}$$ and more generally, if \({\mathcal {M}} \subset {{\mathcal {E}}}(H)\), then \({\mathcal {M}}^\sim := \cap \{ A^\sim :A\in {\mathcal {M}}\}\). The relation of order in \({{\mathcal {E}}}(H)\) is fairly well-understood. However, the relation of coexistence is very poorly understood. In the case of qubit effects (i.e. when \(\dim H=2\)) the recent papers of Busch–Schmidt [4], Stano–Reitzner–Heinosaari [30] and Yu–Liu–Li–Oh [36] provide some (rather complicated) characterisations of coexistence. Although there are no similar results in higher dimensions, it was pointed out by Wolf–Perez-Garcia–Fernandez in [35] that the question of coexistence of pairs of effects can be phrased as a so-called semidefinite program, which is a manageable numerical mathematical problem. We also mention that Heinosaari–Kiukas–Reitzner in [14] generalised the qubit coexistence characterisation to pairs of effects in arbitrary dimensions that belong to the von Neumann algebra generated by two projections. To illustrate how poorly the relation of coexistence is understood, we note that the following very natural question has not been answered before—not even for qubit effects: What does it mean for two effects A and B to be coexistent with exactly the same effects? As our first main result we answer this very natural question.
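For commuting effects the decomposition required by Definition 1.1 can be written down explicitly. The following sketch is my own illustration (not a construction from the paper; the helper name `coexistence_witness` is made up): for two effects diagonal in the same basis, take \(G = \min(A,B)\) entrywise, \(E = A - G\), \(F = B - G\), so that \(E + F + G = \max(A,B) \le I\) is again an effect.

```python
# Explicit coexistence witness for two commuting (here: diagonal) effects,
# given by their eigenvalue lists. Illustration only.
import numpy as np

def coexistence_witness(a, b):
    """a, b: 1-D arrays of eigenvalues in [0,1] of two commuting effects.
    Returns eigenvalue lists of E, F, G with A = E+G, B = F+G."""
    g = np.minimum(a, b)
    return a - g, b - g, g

a = np.array([0.7, 0.2])
b = np.array([0.4, 0.9])
e, f, g = coexistence_witness(a, b)
assert np.allclose(e + g, a) and np.allclose(f + g, b)
s = e + f + g                               # equals max(a, b) entrywise
assert np.all(s >= 0) and np.all(s <= 1)    # so E + F + G is again an effect
```

This is consistent with the remark above that sharp effects (projections) are coexistent exactly when they commute; for non-commuting effects no such entrywise recipe is available.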
Namely, we will show the following theorem, where \({{\mathcal {F}}}(H)\) and \(\mathcal {SC}(H)\) denote the set of all finite-rank and scalar effects on H, respectively. Theorem 1.1 For any effects \(A,B\in {{\mathcal {E}}}(H)\) the following are equivalent: (i) \(B\in \{A, A^\perp \}\) or \(A,B \in \mathcal {SC}(H)\); (ii) \(A^\sim = B^\sim \). Moreover, if H is separable, then the above statements are also equivalent to (iii) \(A^\sim \cap {{\mathcal {F}}}(H) = B^\sim \cap {{\mathcal {F}}}(H)\). Physically speaking, the above theorem says that the (unsharp) quantum events A and B can be measured together with exactly the same quantum events if and only if they are the same, or they are each other's negation, or both of them are scalar effects. Automorphisms of \({{\mathcal {E}}}(H)\) with respect to two relations Automorphisms of mathematical structures related to quantum mechanics are important to study because they provide the right tool to understand the time-evolution of certain quantum systems (see e.g. [18, Chapters V-VII] or [29]). In the case when this mathematical structure is \({{\mathcal {E}}}(H)\), we call a map \(\phi :{{\mathcal {E}}}(H)\rightarrow {{\mathcal {E}}}(H)\) a standard automorphism of the effect algebra if there exists a unitary or antiunitary operator \(U:H\rightarrow H\) that (similarly to Wigner's theorem) implements \(\phi \), i.e. we have $$\begin{aligned} \phi (A) = UAU^*\qquad (A\in {{\mathcal {E}}}(H)). \end{aligned}$$ Obviously, standard automorphisms are automorphisms with respect to the relation of order (\(\le \)), of ortho-complementation (\(\perp \)), and also of coexistence (\(\sim \)). One of the fundamental theorems in the mathematical foundations of quantum mechanics, first stated by Ludwig, says that every ortho-order automorphism is a standard automorphism. Ludwig's Theorem (1954, Theorem V.5.23 in [18]). Let H be a Hilbert space with \(\dim H \ge 2\).
Assume that \(\phi :{{\mathcal {E}}}(H) \rightarrow {{\mathcal {E}}}(H)\) is a bijective map satisfying (\(\le \)) and (\(\perp \)). Then \(\phi \) is a standard automorphism of \({{\mathcal {E}}}(H)\). Conversely, every standard automorphism satisfies (\(\le \)) and (\(\perp \)). We note that Ludwig's proof was incomplete and that he formulated his theorem under the additional assumption that \(\dim H \ge 3\). The reader can find a rigorous proof of this version for instance in [5]. Let us also point out that the two-dimensional case of Ludwig's theorem was only proved in 2001 in [22]. It is very natural to ask whether the conclusion of Ludwig's theorem remains true if one replaces either (\(\le \)) by (\(\sim \)), or (\(\perp \)) by (\(\sim \)). Note that in light of Theorem 1.1, in the former case the condition (\(\perp \)) becomes almost redundant, except on \(\mathcal {SC}(H)\). However, as scalar effects are exactly those that are coexistent with every effect (see Sect. 2), this problem basically reduces to the characterisation of automorphisms with respect to coexistence only, which we shall consider later on. In 2001, Molnár answered the other question affirmatively under the assumption that \(\dim H \ge 3\). Molnár's Theorem (2001, Theorem 1 in [20]). Let H be a Hilbert space with \(\dim H \ge 3\). Assume that \(\phi :{{\mathcal {E}}}(H) \rightarrow {{\mathcal {E}}}(H)\) is a bijective map satisfying (\(\le \)) and (\(\sim \)). Then \(\phi \) is a standard automorphism of \({{\mathcal {E}}}(H)\). Conversely, every standard automorphism satisfies (\(\le \)) and (\(\sim \)). In this paper we shall prove the two-dimensional version of Molnár's theorem. Theorem 1.2 Assume that \(\phi :{{\mathcal {E}}}({\mathbb {C}}^2) \rightarrow {{\mathcal {E}}}({\mathbb {C}}^2)\) is a bijective map satisfying (\(\le \)) and (\(\sim \)). Then \(\phi \) is a standard automorphism of \({{\mathcal {E}}}({\mathbb {C}}^2)\).
Conversely, every standard automorphism satisfies (\(\le \)) and (\(\sim \)). Note that Molnár used the fundamental theorem of projective geometry to prove the aforementioned result; therefore his proof indeed works only if \(\dim H \ge 3\). Here, as an application of Theorem 1.1, we shall give an alternative proof of Molnár's theorem that does not use this dimensionality constraint, hence filling this dimensionality gap. More precisely, we will reduce Molnár's theorem and Theorem 1.2 to Ludwig's theorem (see the end of Sect. 2). Automorphisms of \({{\mathcal {E}}}(H)\) with respect to only one relation It is certainly a much more difficult problem to describe the general form of automorphisms with respect to only one relation. Of course, here we mean either order preserving (\(\le \)), or coexistence preserving (\(\sim \)) maps, as it is easy (and not at all interesting) to describe bijective transformations that satisfy (\(\perp \)). It has been known for quite some time that automorphisms with respect to the order relation on \({{\mathcal {E}}}(H)\) may differ a lot from standard automorphisms, although they are at least always continuous with respect to the operator norm. We do not state the related result here, but only mention that the answer finally has been given by the second author in [26, Corollary 1.2] (see also [28]). The other main purpose of this paper is to give the characterisation of all automorphisms of \({{\mathcal {E}}}(H)\) with respect to the relation of coexistence. As can be seen from our result below, these maps can also differ a lot from standard automorphisms, moreover, unlike in the case of (\(\le \)) they are in general not even continuous. Theorem 1.3 Let H be a Hilbert space with \(\dim H \ge 2\), and \(\phi :{{\mathcal {E}}}(H) \rightarrow {{\mathcal {E}}}(H)\) be a bijective map that satisfies (\(\sim \)).
Then there exists a unitary or antiunitary operator \(U:H\rightarrow H\) and a bijective map \(g:[0,1] \rightarrow [0,1]\) such that we have $$\begin{aligned} \{\phi (A), \phi (A^\perp )\} = \{UAU^*, UA^\perp U^*\} \qquad (A\in {{\mathcal {E}}}(H)\setminus \mathcal {SC}(H)) \end{aligned}$$ $$\begin{aligned} \phi (tI) = g(t)I \qquad (t\in [0,1]). \end{aligned}$$ Conversely, every map of the above form preserves coexistence in both directions. Observe that in the above theorem if we assume that our automorphism is continuous with respect to the operator norm, then up to unitary-antiunitary equivalence we obtain that \(\phi \) is either the identity map, or the ortho-complementation: \(A\mapsto A^\perp \). Also note that the converse statement of the theorem follows easily by Theorem 1.1. As we mentioned earlier, the description of all automorphisms with respect to (\(\sim \)) and (\(\perp \)) now follows easily, namely, we get the same conclusion as in the above theorem, except that now g further satisfies \(g(1-t) = 1-g(t)\) for all \(0\le t\le 1\). Quantum mechanical interpretation of automorphisms of \({{\mathcal {E}}}(H)\) In order to explain the above automorphism theorems' physical interpretation, let us go back first to Wigner's theorem. Assume there are two physicists who analyse the same quantum mechanical system using the same Hilbert space H, but possibly they might associate different rank-one projections to the same quantum (pure) state. However, we know that they always agree on the transition probabilities. Then according to Wigner's theorem, there must be either a unitary, or an antiunitary operator with which we can transform from one analysis into the other (like a "coordinate transformation"). For the interpretation of Ludwig's theorem, let us say there are two physicists who analyse the same quantum fuzzy measurement, but they might associate different effects to the same quantum fuzzy event. 
If we at least know that both of them agree on which pairs of effects are ortho-complemented, and which effect is larger than the other (i.e. implies the occurrence of the other), then by Ludwig's theorem there must exist either a unitary, or an antiunitary operator that gives us the way to transform from one analysis into the other. As for the interpretation of our Theorem 1.3, if we only know that our physicists agree on which pairs of effects are coexistent (i.e. which pairs of quantum events can be measured together), then there is a map \(\phi \) satisfying (2) and (3) that transforms the first physicist's analysis into the other's. The outline of the paper In the next section we will prove our first main result, Theorem 1.1, and as an application, we prove Molnár's theorem in an alternative way that works for qubit effects as well. This will be followed by Sect. 3 where we prove our other main result, Theorem 1.3, in the case when \(\dim H = 2\). Then in Sect. 4, using the two-dimensional case, we shall prove the general version of our result. Let us point out once more that, unless otherwise stated, H is not assumed to be separable. We will close our paper with some discussion on the qubit case and some open problems in Sects. 5–6. Proofs of Theorems 1.1, 1.2, and Molnár's Theorem We start with some definitions. The symbol \({{\mathcal {P}}}(H)\) will stand for the set of all projections (idempotent and self-adjoint operators) on H, and \({{\mathcal {P}}}_1(H)\) will denote the set of all rank-one projections. The commutant of an effect A intersected with \({{\mathcal {E}}}(H)\) will be denoted by $$\begin{aligned} A^c := \{ C\in {{\mathcal {E}}}(H) :CA=AC\}, \end{aligned}$$ and more generally, for a subset \({\mathcal {M}}\subset {{\mathcal {E}}}(H)\) we will use the notation \({\mathcal {M}}^c := \cap \{ A^c :A\in {\mathcal {M}} \}\). Also, we set \(A^{cc} := (A^c)^c\) and \({\mathcal {M}}^{cc} := ({\mathcal {M}}^c)^c\). 
We continue with three known lemmas on the structure of coexistent pairs of effects that can all be found in [27]. The first two have been proved earlier, see [4, 21].

Lemma 2.1 For any \(A\in {{\mathcal {E}}}(H)\) and \(P \in {{\mathcal {P}}}(H)\) the following statements hold:
(a) \(A^\sim = {{\mathcal {E}}}(H)\) if and only if \(A \in \mathcal {SC}(H)\);
(b) \(P^\sim = P^c\);
(c) \(A^c \subseteq A^\sim \).

Lemma 2.2 Let \(A,B \in {{\mathcal {E}}}(H)\) be such that their matrices are diagonal with respect to some orthogonal decomposition \(H = \oplus _{i\in \mathcal {I}} H_i\), i.e. \(A = \oplus _{i\in \mathcal {I}} A_i\) and \(B = \oplus _{i\in \mathcal {I}} B_i \in {{\mathcal {E}}}(\oplus _{i\in \mathcal {I}} H_i)\). Then \(A\sim B\) if and only if \(A_i\sim B_i\) for all \(i\in \mathcal {I}\).

Lemma 2.3 Let \(A,B \in {{\mathcal {E}}}(H)\). Then the following are equivalent:
(i) \(A \sim B\);
(ii) there exist effects \(M,N \in {{\mathcal {E}}}(H)\) such that \(M \le A\), \(N \le I-A\), and \(M+N = B\).

We continue with a corollary of Lemma 2.1.

Corollary 2.4 For any effect A and projection \(P\in A^\sim \) we have \(P\in A^c\). In particular, we have \(A^\sim \cap {{\mathcal {P}}}(H) = A^c\cap {{\mathcal {P}}}(H)\).

Proof Since coexistence is a symmetric relation, we obtain \(A\in P^\sim \), which implies \(AP=PA\). \(\square \)

The next four statements are easy consequences of Lemma 2.3; we only prove two of them.

Corollary 2.5 For any effect A we have \(A^\sim = \left( A^\perp \right) ^\sim \).

Corollary 2.6 Let \(A\in {{\mathcal {E}}}(H)\) be such that either \(0\notin \sigma (A)\), or \(1\notin \sigma (A)\). Then there exists an \(\varepsilon >0\) such that \(\{C\in {{\mathcal {E}}}(H):C\le \varepsilon I\} \subseteq A^\sim \).

We recall the definition of the strength function of \(A\in {{\mathcal {E}}}(H)\): $$\begin{aligned} \Lambda (A,P) = \max \{ \lambda \ge 0 :\lambda P \le A \} \qquad (P\in {{\mathcal {P}}}_1(H)), \end{aligned}$$ see [2] for more details and properties.
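Since the strength function is defined through an operator inequality, it can also be evaluated numerically by bisecting on \(\lambda \). The following minimal sketch (ours, not from the paper; the matrix and vector are purely illustrative) computes \(\Lambda (A,P_x)\) for a qubit effect this way, and compares the result with the closed form \(\Vert A^{-1/2}x\Vert ^{-2}\) recalled below in the Busch–Gudder theorem.

```python
import numpy as np

def strength(A, x, tol=1e-12):
    # Largest lambda with lambda * P_x <= A in the positive semidefinite order,
    # found by bisection; P_x is the rank-one projection onto span{x}.
    x = np.asarray(x, dtype=float)
    x = x / np.linalg.norm(x)
    P = np.outer(x, x)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        # lambda * P <= A  iff  the smallest eigenvalue of A - lambda * P is >= 0
        if np.linalg.eigvalsh(A - mid * P)[0] >= -1e-13:
            lo = mid
        else:
            hi = mid
    return lo

# Illustrative invertible qubit effect A = diag(0.7, 0.3) and a unit vector x.
A = np.diag([0.7, 0.3])
alpha = 0.6
x = np.array([np.cos(alpha), np.sin(alpha)])
# Closed form ||A^{-1/2} x||^{-2}, valid here since A is invertible.
closed = 1.0 / (np.cos(alpha) ** 2 / 0.7 + np.sin(alpha) ** 2 / 0.3)
print(abs(strength(A, x) - closed) < 1e-8)  # True
```

The agreement up to the bisection tolerance is a quick sanity check of the definition against the explicit formula.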
Corollary 2.7 Assume that \(A\in {{\mathcal {E}}}(H)\), \(0 < t \le 1\), and \(P\in {{\mathcal {P}}}_1(H)\). Then the following conditions are equivalent:
(i) \(A\sim tP\);
(ii) $$\begin{aligned} t \le \Lambda (A,P) + \Lambda (A^\perp ,P). \end{aligned}$$

Proof By (ii) of Lemma 2.3 we have \(A\sim tP\) if and only if there exist \(t_1, t_2 \ge 0\) such that \(t = t_1 + t_2\), \(t_1 P \le A\) and \(t_2 P \le A^\perp \), which is of course equivalent to (4). \(\square \)

Corollary 2.8 Let \(A,B \in {{\mathcal {E}}}(H)\) be such that \(A^\sim \subseteq B^\sim \). Assume that with respect to the orthogonal decomposition \(H = H_1 \oplus H_2\) the two effects have the following block-diagonal matrix forms: $$\begin{aligned} A = \left[ \begin{matrix} A_1 &{} 0 \\ 0 &{} A_2 \end{matrix}\right] \qquad \text {and} \qquad B = \left[ \begin{matrix} B_1 &{} 0 \\ 0 &{} B_2 \end{matrix}\right] \in {{\mathcal {E}}}(H_1 \oplus H_2). \end{aligned}$$ Then we also have $$\begin{aligned} A_1^\sim \subseteq B_1^\sim \qquad \text {and} \qquad A_2^\sim \subseteq B_2^\sim . \end{aligned}$$ In particular, if \(A^\sim = B^\sim \), then \(A_1^\sim = B_1^\sim \) and \(A_2^\sim = B_2^\sim \).

Proof Let \(P_1\) be the orthogonal projection onto \(H_1\). By Lemma 2.2 we observe that $$\begin{aligned}&\Bigg \{ \left[ \begin{matrix} C &{} 0 \\ 0 &{} D \end{matrix}\right] \in {{\mathcal {E}}}\left( H_1\oplus H_2 \right) :C \sim A_1, D \sim A_2 \Bigg \} = P_1^c \cap A^\sim \\&\qquad \subseteq P_1^c \cap B^\sim = \Bigg \{ \left[ \begin{matrix} E &{} 0 \\ 0 &{} F \end{matrix}\right] \in {{\mathcal {E}}}\left( H_1\oplus H_2 \right) :E \sim B_1, F \sim B_2 \Bigg \}, \end{aligned}$$ which immediately implies (5). \(\square \)

Next, we recall the Busch–Gudder theorem about the explicit form of the strength function, which we shall use frequently here.
We also adopt their notation, so whenever it is important to emphasise that the range of a rank-one projection P is \({\mathbb {C}}\cdot x\) with some \(x\in H\) such that \(\Vert x\Vert =1\), we write \(P_x\) instead. Furthermore, the symbol \(A^{-1/2}\) denotes the algebraic inverse of the bijective restriction \(A^{1/2}|_{(\mathrm{Im\,}A)^-}:(\mathrm{Im\,}A)^-\rightarrow \mathrm{Im\,}(A^{1/2})\), where \(\cdot ^-\) stands for the closure of a set. In particular, for all \(x \in \mathrm{Im\,}(A^{1/2})\) the vector \(A^{-1/2} x\) is the unique element in \((\mathrm{Im\,}A)^-\) which \(A^{1/2}\) maps to x.

Busch–Gudder Theorem (1999, Theorem 4 in [2]) For every effect \(A \in {{\mathcal {E}}}(H)\) and unit vector \(x\in H\) we have $$\begin{aligned} \Lambda (A,P_x) = \left\{ \begin{matrix} \Vert A^{-1/2} x \Vert ^{-2}, &{} \text {if} \; x \in \mathrm{Im\,}(A^{1/2}), \\ 0, &{} \text {otherwise.} \end{matrix} \right. \end{aligned}$$

We proceed with proving some new results which will be crucial in the proofs of our main theorems. The first lemma is probably well-known, but as we did not find it in the literature, we state and prove it here. Recall that WOT and SOT stand for the weak- and strong operator topologies, respectively.

Lemma 2.9 For any effect \(A\in {{\mathcal {E}}}(H)\), the set \(A^\sim \) is convex and WOT-compact, hence it is also SOT- and norm-closed. Moreover, if H is separable, then the subset \(A^\sim \cap {{\mathcal {F}}}(H)\) is SOT-dense, hence also WOT-dense, in \(A^\sim \).

Proof Let \(t\in [0,1]\) and \(B_1,B_2\in A^\sim \). By Lemma 2.3 there are \(M_1,M_2,N_1,N_2\in {{\mathcal {E}}}(H)\) such that \(M_1+N_1 = B_1\), \(M_2+N_2 = B_2\), \(M_1\le A, N_1\le I-A\) and \(M_2\le A, N_2\le I-A\). Hence setting \(M = tM_1+(1-t)M_2\in {{\mathcal {E}}}(H)\) and \(N = tN_1+(1-t)N_2\in {{\mathcal {E}}}(H)\) gives \(M+N = tB_1+(1-t)B_2\) and \(M\le A, N\le I-A\), thus \(tB_1+(1-t)B_2\sim A\), so \(A^\sim \) is indeed convex.
Next, we prove that \(A^\sim \) is WOT-compact. Clearly, \({{\mathcal {E}}}(H)\) is WOT-compact, as it is a bounded WOT-closed subset of \({{\mathcal {B}}}(H)\) (see [7, Proposition IX.5.5]), therefore it is enough to show that \(A^\sim \) is WOT-closed. Let \(\{B_\nu \}_\nu \subseteq A^\sim \) be an arbitrary net that WOT-converges to B; we shall show that \(B\sim A\) holds. For every \(\nu \) we can find two effects \(M_\nu \) and \(N_\nu \) such that \(M_\nu +N_\nu = B_\nu \), \(M_\nu \le A\) and \(N_\nu \le I-A\). By WOT-compactness of \({{\mathcal {E}}}(H)\), there exists a subnet \(\{B_\xi \}_\xi \) such that \(M_\xi \rightarrow M\) in WOT with some effect M. Again, by WOT-compactness of \({{\mathcal {E}}}(H)\), there exists a subnet \(\{B_\eta \}_\eta \) of the subnet \(\{B_\xi \}_\xi \) such that \(N_\eta \rightarrow N\) in WOT with some effect N. Obviously we also have \(B_\eta \rightarrow B\) and \(M_\eta \rightarrow M\) in WOT. Therefore we have \(M+N = B\), and by definition of WOT convergence we also obtain \(M\le A\), \(N\le I-A\), hence indeed \(B\sim A\). Closedness with respect to the other topologies is straightforward. Concerning our last statement for separable spaces, first we point out that for every effect C there exists a net of finite rank effects \(\{C_\nu \}_\nu \) such that \(C_\nu \le C\) holds for all \(\nu \) and \(C_\nu \rightarrow C\) in SOT. Denote by \(E_C\) the projection-valued spectral measure of C, and set \(C_n = \sum _{j=0}^{n} \frac{j}{n} E_C\left( \left[ \frac{j}{n},\frac{j+1}{n}\right) \right) \) for every \(n\in {\mathbb {N}}\). Clearly, each \(C_n\) has finite spectrum, satisfies \(C_n\le C\), and \(\Vert C_n - C\Vert \rightarrow 0\) as \(n\rightarrow \infty \).
For each spectral projection \(E_C\left( \left[ \frac{j}{n},\frac{j+1}{n}\right) \right) \) we can take a sequence of finite-rank projections \(\{P_k^{j,n}\}_{k=1}^\infty \) such that \(P_k^{j,n}\le E_C\left( \left[ \frac{j}{n},\frac{j+1}{n}\right) \right) \) for all k and \(P_k^{j,n} \rightarrow E_C\left( \left[ \frac{j}{n},\frac{j+1}{n}\right) \right) \) in SOT as \(k\rightarrow \infty \). Define \(C_{n,k} := \sum _{j=0}^{n} \frac{j}{n} P_k^{j,n}\). It is apparent that \(C_{n,k} \le C_n\) for all n and k, and that for each n we have \(C_{n,k} \rightarrow C_n\) in SOT as \(k\rightarrow \infty \). Therefore the SOT-closure of \(\{C_{n,k}:n,k\in {\mathbb {N}}\}\) contains each \(C_n\), hence also C, thus we can construct a net \(\{C_\nu \}_\nu \) with the required properties. Now, let \(B\in A^\sim \) be arbitrary, and consider two other effects \(M,N\in {{\mathcal {E}}}(H)\) that satisfy the conditions of Lemma 2.3 (ii). Set \(C:= M\oplus N \in {{\mathcal {E}}}(H\oplus H)\), and denote by \(E_M\) and \(E_N\) the projection-valued spectral measures of M and N, respectively. Clearly, \(E_C\left( \left[ \frac{j}{n},\frac{j+1}{n}\right) \right) = E_M\left( \left[ \frac{j}{n},\frac{j+1}{n}\right) \right) \oplus E_N\left( \left[ \frac{j}{n},\frac{j+1}{n}\right) \right) \) for each j and n. In the above construction we can choose finite-rank projections of the form \(P_k^{j,n} = Q_k^{j,n}\oplus R_k^{j,n} \in {{\mathcal {P}}}(H\oplus H)\) where \(Q_k^{j,n}, R_k^{j,n} \in {{\mathcal {P}}}(H)\), \(Q_k^{j,n}\le E_M\left( \left[ \frac{j}{n},\frac{j+1}{n}\right) \right) \) and \(R_k^{j,n}\le E_N\left( \left[ \frac{j}{n},\frac{j+1}{n}\right) \right) \) holds for all k, n. Then each element \(C_\nu \) of the convergent net is an orthogonal sum of the form \(M_\nu \oplus N_\nu \in {{\mathcal {E}}}(H\oplus H)\).
It is apparent that \(M_\nu , N_\nu \in {{\mathcal {F}}}(H)\), \(M_\nu \le M\) and \(N_\nu \le N\) for all \(\nu \), and that \(M_\nu \rightarrow M\), \(N_\nu \rightarrow N\) holds in SOT. Therefore \(M_\nu +N_\nu \in {{\mathcal {F}}}(H)\cap A^\sim \) and \(M_\nu +N_\nu \) converges to \(M+N = B\) in SOT, so the proof is complete. \(\square \)

We proceed to investigate when the equality \(A^\sim = B^\sim \) holds for two effects A and B, which will take several steps. We will denote the set of all rank-one effects by \({{\mathcal {F}}}_1(H) := \{tP :P\in {{\mathcal {P}}}_1(H), 0<t\le 1\}\).

Lemma 2.10 Let \(H = H_1\oplus H_2\) be an orthogonal decomposition and assume that \(A, B\in {{\mathcal {E}}}(H)\) have the following matrix decompositions: $$\begin{aligned} A = \left[ \begin{matrix} \lambda _1 I_1 &{} 0 \\ 0 &{} \lambda _2 I_2 \end{matrix} \right] \quad \text {and} \quad B = \left[ \begin{matrix} \mu _1 I_1 &{} 0 \\ 0 &{} \mu _2 I_2 \end{matrix} \right] \; \in {{\mathcal {E}}}(H_1\oplus H_2) \end{aligned}$$ where \(\lambda _1, \lambda _2, \mu _1, \mu _2 \in [0,1]\), and \(I_1\) and \(I_2\) denote the identity operators on \(H_1\) and \(H_2\), respectively. Then the following are equivalent:
(i) \(\Lambda (A,P) + \Lambda (A^\perp ,P) = \Lambda (B,P) + \Lambda (B^\perp ,P)\) holds for all \(P\in {{\mathcal {P}}}_1(H)\);
(ii) \(A^\sim \cap {{\mathcal {F}}}_1(H) = B^\sim \cap {{\mathcal {F}}}_1(H)\);
(iii) either \(\lambda _1=\lambda _2\) and \(\mu _1=\mu _2\), or \(\lambda _1=\mu _1\) and \(\lambda _2=\mu _2\), or \(\lambda _1 + \mu _1 = \lambda _2 + \mu _2 =1\).

Proof The directions (iii)\(\Longrightarrow \)(ii)\(\iff \)(i) are trivial by Lemma 2.1 (a) and Corollaries 2.5, 2.7, so we shall only consider the direction (i)\(\Longrightarrow \)(iii).
First, a straightforward calculation using the Busch–Gudder theorem gives the following for every \(x_1\in H_1, x_2\in H_2, \Vert x_1\Vert = \Vert x_2\Vert = 1\) and \(0\le \alpha \le \tfrac{\pi }{2}\): $$\begin{aligned} \Lambda \left( A, P_{\cos \alpha x_1 + \sin \alpha x_2} \right) = \frac{1}{ \left( \tfrac{1}{\lambda _1}\right) \cdot \cos ^2\alpha + \left( \tfrac{1}{\lambda _2}\right) \cdot \sin ^2\alpha }, \end{aligned}$$ where we use the interpretations \(\tfrac{1}{0} = \infty \), \(\tfrac{1}{\infty } = 0\), \(\infty \cdot 0 = 0\), \(\infty + \infty = \infty \), and \(\infty + a = \infty \), \(\infty \cdot a = \infty \) (\(a>0\)), in order to make the formula valid also for the case when \(\lambda _1 = 0\) or \(\lambda _2 = 0\). Clearly, (8) depends only on \(\alpha \), but not on the specific choices of \(x_1\) and \(x_2\). We define the following two functions $$\begin{aligned} T_A:\left[ 0,\tfrac{\pi }{2}\right] \rightarrow [0,1], \qquad T_A(\alpha ) = \Lambda \left( A, P_{\cos \alpha x_1 + \sin \alpha x_2} \right) + \Lambda \left( A^\perp , P_{\cos \alpha x_1 + \sin \alpha x_2} \right) \end{aligned}$$ $$\begin{aligned} T_B:\left[ 0,\tfrac{\pi }{2}\right] \rightarrow [0,1], \qquad T_B(\alpha ) = \Lambda \left( B, P_{\cos \alpha x_1 + \sin \alpha x_2} \right) + \Lambda \left( B^\perp , P_{\cos \alpha x_1 + \sin \alpha x_2} \right) , \end{aligned}$$ which are the same by our assumptions. 
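Formula (8), together with the stated conventions for \(\tfrac{1}{0}\), can be sanity-checked numerically in the simplest case \(\dim H_1 = \dim H_2 = 1\). In the sketch below (ours, not from the paper; all numbers are illustrative), lam_formula implements the right-hand side of (8), and lam_bisect evaluates the definition \(\max \{\lambda \ge 0 :\lambda P \le A\}\) directly.

```python
import numpy as np

def lam_bisect(A, x, tol=1e-12):
    # Direct evaluation of the strength function: largest lambda with
    # lambda * P_x <= A, via bisection on positive semidefiniteness.
    x = x / np.linalg.norm(x)
    P = np.outer(x, x)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if np.linalg.eigvalsh(A - mid * P)[0] >= -1e-13:
            lo = mid
        else:
            hi = mid
    return lo

def lam_formula(l1, l2, a):
    # Right-hand side of (8), with the conventions 1/0 = inf and 1/inf = 0.
    c, s = np.cos(a) ** 2, np.sin(a) ** 2
    t1 = c / l1 if l1 > 0 else (np.inf if c > 0 else 0.0)
    t2 = s / l2 if l2 > 0 else (np.inf if s > 0 else 0.0)
    denom = t1 + t2
    return 0.0 if np.isinf(denom) else 1.0 / denom

a = 0.9
for l1, l2 in [(0.6, 0.2), (0.0, 0.5)]:  # second pair exercises the 1/0 case
    A = np.diag([l1, l2])
    x = np.array([np.cos(a), np.sin(a)])
    print(abs(lam_bisect(A, x) - lam_formula(l1, l2, a)) < 1e-8)  # True
```

In the singular case \(\lambda _1 = 0\) both evaluations return 0, matching the Busch–Gudder theorem, since the vector then lies outside \(\mathrm{Im\,}(A^{1/2})\).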
By (8), for all \(0\le \alpha \le \tfrac{\pi }{2}\) we have $$\begin{aligned} T_A(\alpha )&= \frac{1}{ \left( \tfrac{1}{\lambda _1}\right) \cdot \cos ^2\alpha + \left( \tfrac{1}{\lambda _2}\right) \cdot \sin ^2\alpha } + \frac{1}{ \left( \tfrac{1}{1-\lambda _1}\right) \cdot \cos ^2\alpha + \left( \tfrac{1}{1-\lambda _2}\right) \cdot \sin ^2\alpha } \nonumber \\&= \frac{1}{ \left( \tfrac{1}{\mu _1}\right) \cdot \cos ^2\alpha + \left( \tfrac{1}{\mu _2}\right) \cdot \sin ^2\alpha } + \frac{1}{ \left( \tfrac{1}{1-\mu _1}\right) \cdot \cos ^2\alpha + \left( \tfrac{1}{1-\mu _2}\right) \cdot \sin ^2\alpha } =T_B(\alpha ). \end{aligned}$$ Next, we observe the following implications:
- if \(\lambda _1 = \lambda _2\), then \(T_A(\alpha )\) is the constant 1 function,
- if \(\lambda _1 = 0\) and \(\lambda _2 = 1\), then \(T_A(\alpha )\) is the characteristic function \(\chi _{\{0,\pi /2\}}(\alpha )\),
- if \(\lambda _1 = 0\) and \(0< \lambda _2 < 1\), then \(T_A(\alpha )\) is continuous on \(\left[ 0,\tfrac{\pi }{2}\right) \), but has a jump at \(\tfrac{\pi }{2}\), namely \(\lim _{\alpha \rightarrow \tfrac{\pi }{2}-} T_A(\alpha ) = 1-\lambda _2\) and \(T_A(\tfrac{\pi }{2}) = 1\),
- if \(\lambda _1 = 1\) and \(0< \lambda _2 < 1\), then \(T_A(\alpha )\) is continuous on \(\left[ 0,\tfrac{\pi }{2}\right) \), but has a jump at \(\tfrac{\pi }{2}\), namely \(\lim _{\alpha \rightarrow \tfrac{\pi }{2}-} T_A(\alpha ) = \lambda _2\) and \(T_A(\tfrac{\pi }{2}) = 1\),
- if \(\lambda _1,\lambda _2 \in (0,1)\), then \(T_A(\alpha )\) is continuous on \(\left[ 0,\tfrac{\pi }{2}\right] \),
- if \(\lambda _1 \ne \lambda _2\), then we have \(T_A(0) = T_A(\tfrac{\pi }{2}) = 1\) and \(T_A(\alpha ) < 1\) for all \(0< \alpha < \tfrac{\pi }{2}\).

All of the above statements are rather straightforward computations using the formula (9); let us only show the last one here. The equality \(T_A(0) = T_A(\tfrac{\pi }{2}) = 1\) is obvious.
As for the other assertion, if \(\lambda _1,\lambda _2 \in (0,1)\), then we can use the strict version of the weighted harmonic-arithmetic mean inequality: $$\begin{aligned}&\frac{1}{ \tfrac{1}{\lambda _1} \cos ^2\alpha + \tfrac{1}{\lambda _2} \sin ^2\alpha } + \frac{1}{ \tfrac{1}{1-\lambda _1} \cos ^2\alpha + \tfrac{1}{1-\lambda _2} \sin ^2\alpha } \\&\quad< (\lambda _1 \cos ^2\alpha + \lambda _2 \sin ^2\alpha ) + ((1-\lambda _1) \cos ^2\alpha + (1-\lambda _2) \sin ^2\alpha ) = 1 \quad (0<\alpha <\tfrac{\pi }{2}). \end{aligned}$$ If \(\lambda _1 = 0< \lambda _2 < 1\), then we calculate in the following way: $$\begin{aligned}&\frac{1}{ \left( \tfrac{1}{0}\right) \cos ^2\alpha + \tfrac{1}{\lambda _2} \sin ^2\alpha } + \frac{1}{ \cos ^2\alpha + \tfrac{1}{1-\lambda _2} \sin ^2\alpha }\\&\quad = \frac{1}{ 1-\sin ^2\alpha + \tfrac{1}{1-\lambda _2} \sin ^2\alpha }< 1 \quad (0<\alpha <\tfrac{\pi }{2}). \end{aligned}$$ The remaining cases are very similar. The above observations together with Corollary 2.5 and (9) readily imply the following:
- \(A\in \mathcal {SC}(H)\) if and only if \(B\in \mathcal {SC}(H)\),
- \(A\in {{\mathcal {P}}}(H)\setminus \mathcal {SC}(H)\) if and only if \(B\in {{\mathcal {P}}}(H)\setminus \mathcal {SC}(H)\), in which case \(B \in \{A, A^\perp \}\),
- there exists a \(P\in {{\mathcal {P}}}(H)\setminus \mathcal {SC}(H)\) and a \(t\in (0,1)\) with \(A\in \{tP, I-tP\}\) if and only if \(B\in \{tP, I-tP\}\),
- \(\lambda _1,\lambda _2 \in (0,1)\) and \(\lambda _1\ne \lambda _2\) if and only if \(\mu _1, \mu _2 \in (0,1)\) and \(\mu _1\ne \mu _2\).

So what remains is to show that in the last case we further have \(B \in \{A, A^\perp \}\), which is what we shall do below.
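The dichotomy in the bullet points is easy to verify numerically as well. A throwaway sketch (ours; the values of \(\lambda _1, \lambda _2\) are illustrative) checks, for \(\lambda _1 \ne \lambda _2\) in (0,1), that \(T_A(0) = T_A(\tfrac{\pi }{2}) = 1\) while \(T_A(\alpha ) < 1\) strictly inside the interval.

```python
import numpy as np

def T(l1, l2, a):
    # T_A(alpha) from (9) for 0 < l1, l2 < 1 (A and its complement invertible).
    c, s = np.cos(a) ** 2, np.sin(a) ** 2
    return 1.0 / (c / l1 + s / l2) + 1.0 / (c / (1 - l1) + s / (1 - l2))

l1, l2 = 0.35, 0.8  # illustrative values with l1 != l2
alphas = np.linspace(1e-3, np.pi / 2 - 1e-3, 1000)
print(np.all(T(l1, l2, alphas) < 1.0))       # strictly below 1 inside (0, pi/2)
print(abs(T(l1, l2, 0.0) - 1.0) < 1e-12)     # equals 1 at the left endpoint
```

This mirrors the harmonic-arithmetic mean argument: the two weighted harmonic means are strictly dominated by the corresponding arithmetic means, which sum to 1.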
Let us introduce the following functions: $$\begin{aligned}&{\mathcal {T}}_A:[0,1]\rightarrow [0,1], \qquad {\mathcal {T}}_A(s) := T_A(\arcsin \sqrt{s}) = \frac{\lambda _1\lambda _2}{\lambda _1 s + \lambda _2 (1-s)} \\&\quad + \frac{(1-\lambda _1)(1-\lambda _2)}{(1-\lambda _1) s + (1-\lambda _2) (1-s)} \end{aligned}$$ $$\begin{aligned}&{\mathcal {T}}_B:[0,1]\rightarrow [0,1], \qquad {\mathcal {T}}_B(s) := T_B(\arcsin \sqrt{s}) = \frac{\mu _1\mu _2}{\mu _1 s + \mu _2 (1-s)}\\&\quad + \frac{(1-\mu _1)(1-\mu _2)}{(1-\mu _1) s + (1-\mu _2) (1-s)}. \end{aligned}$$ Our aim is to prove that \({\mathcal {T}}_A(s) = {\mathcal {T}}_B(s)\) (\(s\in [0,1]\)) implies either \(\lambda _1=\mu _1\) and \(\lambda _2=\mu _2\), or \(\lambda _1 + \mu _1 = \lambda _2 + \mu _2 =1\). The derivative of \({\mathcal {T}}_A\) is $$\begin{aligned} \tfrac{d {\mathcal {T}}_A}{ds}(s) = (\lambda _1-\lambda _2)\left( \frac{-\lambda _1\lambda _2}{(\lambda _1 s + \lambda _2 (1-s))^2} + \frac{(1-\lambda _1)(1-\lambda _2)}{((1-\lambda _1) s + (1-\lambda _2) (1-s))^2}\right) , \end{aligned}$$ from which we calculate $$\begin{aligned} \tfrac{d {\mathcal {T}}_A}{ds}(0) = -\frac{(\lambda _1-\lambda _2)^2}{(1-\lambda _2)\lambda _2} \quad \text {and} \quad \tfrac{d {\mathcal {T}}_A}{ds}(1) = \frac{(\lambda _1-\lambda _2)^2}{(1-\lambda _1)\lambda _1}. \end{aligned}$$ Therefore, if we manage to show that the function $$\begin{aligned} F:(0,1)^2 \rightarrow {\mathbb {R}}^{2}, \quad F(x,y) = \left( \tfrac{(x-y)^2}{(1-x)x}, \tfrac{(x-y)^2}{(1-y)y} \right) \end{aligned}$$ is injective on the set \(\Delta := \{(x,y)\in {\mathbb {R}}^{2}:0<y<x<1 \}\), then we are done (note that \(F(x,y) = F(1-x,1-y)\)). To this end, assume that for some \(c,d>0\) we have $$\begin{aligned} \frac{(x-y)^2}{(1-x)x} = \frac{1}{c} \quad \text {and} \quad \frac{(x-y)^2}{(1-y)y} = \frac{1}{d}, \end{aligned}$$ or equivalently, $$\begin{aligned} (1-x)x = c(x-y)^2 \quad \text {and} \quad (1-y)y = d(x-y)^2.
\end{aligned}$$ If we substitute \(u = \tfrac{x-y}{2}\) and \(v = \tfrac{x+y}{2}\), then we get $$\begin{aligned} (u+v)^2-(u+v) = -4cu^2 \quad \text {and} \quad (v-u)^2-(v-u) = -4du^2. \end{aligned}$$ Now, taking the sum and difference of these two equations and manipulating them a bit gives $$\begin{aligned} v^2-v = -(2c+2d+1)u^2 \quad \text {and} \quad v = (d-c) u + \tfrac{1}{2}. \end{aligned}$$ From these latter equations we conclude $$\begin{aligned} x-y = 2u = \sqrt{\tfrac{1}{(d-c)^2+2c+2d+1}} \quad \text {and} \quad x+y = 2v = 2(d-c) u + 1, \end{aligned}$$ which clearly implies that F is globally injective on \(\Delta \), and the proof is complete. \(\square \)

We have an interesting consequence in finite dimensions.

Corollary 2.11 Assume that \(2\le \dim H < \infty \) and \(A,B \in {{\mathcal {E}}}(H)\). Then the following are equivalent:
(i) \(\Lambda (A, P) + \Lambda (A^\perp , P) = \Lambda (B, P) + \Lambda (B^\perp , P)\) for all \(P\in {{\mathcal {P}}}_1(H)\);
(ii) \(A^\sim \cap {{\mathcal {F}}}_1(H) = B^\sim \cap {{\mathcal {F}}}_1(H)\);
(iii) either \(A,B \in \mathcal {SC}(H)\), or \(A=B\), or \(A=B^\perp \).

Proof The directions (i)\(\iff \)(ii)\(\Longleftarrow \)(iii) are trivial, so we shall only prove the (ii)\(\Longrightarrow \)(iii) direction. First, let us consider the two-dimensional case. As we saw in the proof of Lemma 2.10, we have \(A^\sim \cap {{\mathcal {F}}}_1(H) = {{\mathcal {F}}}_1(H)\) if and only if A is a scalar effect (see the first set of bullet points there). Therefore, without loss of generality we may assume that neither A nor B is a scalar effect. Notice that by Lemma 2.1, A and B commute with exactly the same rank-one projections, hence A and B possess the forms in (7) with some one-dimensional subspaces \(H_1\) and \(H_2\), and an easy application of Lemma 2.10 gives (iii).
As for the general case, since again A and B commute with exactly the same rank-one projections, we can jointly diagonalise them with respect to some orthonormal basis \(\{e_j\}_{j=1}^n\), where \(n = \dim H\): $$\begin{aligned} A = \left[ \begin{matrix} \lambda _1 &{} 0 &{} \dots &{} 0 &{} 0 \\ 0 &{} \lambda _2 &{} \dots &{} 0 &{} 0 \\ \vdots &{} &{} \ddots &{} &{} \vdots \\ 0 &{} 0 &{} \dots &{} \lambda _{n-1} &{} 0 \\ 0 &{} 0 &{} \dots &{} 0 &{} \lambda _n \\ \end{matrix}\right] \quad \text {and} \quad B = \left[ \begin{matrix} \mu _1 &{} 0 &{} \dots &{} 0 &{} 0 \\ 0 &{} \mu _2 &{} \dots &{} 0 &{} 0 \\ \vdots &{} &{} \ddots &{} &{} \vdots \\ 0 &{} 0 &{} \dots &{} \mu _{n-1} &{} 0 \\ 0 &{} 0 &{} \dots &{} 0 &{} \mu _n \\ \end{matrix}\right] . \end{aligned}$$ Of course, for any two distinct \(i, j \in \{1,\dots , n\}\) we have the following equation for the strength functions: $$\begin{aligned} \Lambda (A, P) + \Lambda (A^\perp , P) = \Lambda (B, P) + \Lambda (B^\perp , P) \qquad (P \in {{\mathcal {P}}}_1({\mathbb {C}}\cdot e_i + {\mathbb {C}}\cdot e_j)), \end{aligned}$$ which instantly implies $$\begin{aligned} \left[ \begin{matrix} \mu _i &{} 0 \\ 0 &{} \mu _j \end{matrix}\right] ^\sim \cap {{\mathcal {F}}}_1({\mathbb {C}}\cdot e_i + {\mathbb {C}}\cdot e_j) = \left[ \begin{matrix} \lambda _i &{} 0 \\ 0 &{} \lambda _j \end{matrix}\right] ^\sim \cap {{\mathcal {F}}}_1({\mathbb {C}}\cdot e_i + {\mathbb {C}}\cdot e_j). \end{aligned}$$ By the two-dimensional case this means that we have one of the following cases:
- \(\lambda _i = \lambda _j\) and \(\mu _i = \mu _j\),
- \(\lambda _i \ne \lambda _j\) and either \(\mu _i = \lambda _i\) and \(\mu _j = \lambda _j\), or \(\mu _i = 1-\lambda _i\) and \(\mu _j = 1-\lambda _j\).

From here it is easy to conclude (iii).
\(\square \)

The commutant of an operator \(T\in {{\mathcal {B}}}(H)\) will be denoted by \(T' := \{ S\in {{\mathcal {B}}}(H) :ST=TS \}\), and more generally, if \({\mathcal {M}}\subseteq {{\mathcal {B}}}(H)\), then we set \({\mathcal {M}}' := \cap \{ T' :T\in {\mathcal {M}}\}\). We shall use the notations \(T'' := (T')'\) and \({\mathcal {M}}'' := ({\mathcal {M}}')'\) for the double commutants.

Lemma 2.12 For any \(A,B \in {{\mathcal {E}}}(H)\) the following three assertions hold:
(a) If \(A^\sim \subseteq B^\sim \), then \(B\in A''\).
(b) If \(\dim H \le \aleph _0\) and \(A^\sim \subseteq B^\sim \), then there exists a Borel function \(f:[0,1] \rightarrow [0,1]\) such that \(B = f(A)\).
(c) If B is a convex combination of \(A, A^\perp , 0\) and I, then \(A^\sim \subseteq B^\sim \).

Proof (a): Assume that \(C\in A'\). Our goal is to show \(B\in C'\). We express C in the following way: $$\begin{aligned} C = C_{\mathfrak {R}} + i C_{\mathfrak {I}}, \; C_{\mathfrak {R}} = \frac{C+C^*}{2}, \; C_{\mathfrak {I}} = \frac{C-C^*}{2i} \end{aligned}$$ where \(C_{\mathfrak {R}}\) and \(C_{\mathfrak {I}}\) are self-adjoint (they are usually called the real and imaginary parts of C). Since A is self-adjoint, \(C^* \in A'\), hence \(C_{\mathfrak {R}}, C_{\mathfrak {I}} \in A'\). Let \(E_{\mathfrak {R}}\) and \(E_{\mathfrak {I}}\) denote the projection-valued spectral measures of \(C_{\mathfrak {R}}\) and \(C_{\mathfrak {I}}\), respectively. By the spectral theorem ([7, Theorem IX.2.2]), Lemma 2.1 and Corollary 2.4, we have \(E_{\mathfrak {R}}(\Delta ), E_{\mathfrak {I}}(\Delta ) \in A^c \subseteq A^\sim \subseteq B^\sim \), therefore also \(E_{\mathfrak {R}}(\Delta ), E_{\mathfrak {I}}(\Delta ) \in B'\) for all \(\Delta \in {{\mathcal {B}}}_{\mathbb {R}}\), which gives \(C\in B'\).

(b): This is an easy consequence of [7, Proposition IX.8.1 and Lemma IX.8.7].

(c): If \(A\sim C\), then also \(A^\perp \sim C\), \(0\sim C\) and \(I\sim C\). Hence by the convexity of \(C^\sim \) we obtain \(B\sim C\).
\(\square \)

Now, we are in the position to prove our first main result.

Proof of Theorem 1.1 If H is separable, then the equivalence (ii)\(\iff \)(iii) is straightforward by Lemma 2.9. For general H the direction (i)\(\Longrightarrow \)(ii) is obvious, therefore we shall only prove (ii)\(\Longrightarrow \)(i), first in the separable, and then in the general case. By Lemma 2.1, we may assume throughout the rest of the proof that A and B are non-scalar effects. We will denote the spectral subspace of a self-adjoint operator T associated to a Borel set \(\Delta \subseteq {\mathbb {R}}\) by \(H_T(\Delta )\).

(ii)\(\Longrightarrow \)(i) in the separable case: We split this part into two steps.

STEP 1: Here, we establish two estimations, (11) and (12), for the strength functions of A and B on certain subspaces of H. Let \(\lambda _1, \lambda _2 \in \sigma (A), \lambda _1 \ne \lambda _2\) and \(0< \varepsilon < \tfrac{1}{2}|\lambda _1-\lambda _2|\). Then the spectral subspaces \(H_1 = H_A\left( (\lambda _1-\varepsilon , \lambda _1+\varepsilon ) \right) \) and \(H_2 = H_A\left( (\lambda _2-\varepsilon , \lambda _2+\varepsilon ) \right) \) are non-trivial and orthogonal. Set \(H_3\) to be the orthogonal complement of \(H_1\oplus H_2\); then the matrix of A written in the orthogonal decomposition \(H = H_1\oplus H_2 \oplus H_3\) is diagonal: $$\begin{aligned} A = \left[ \begin{matrix} A_1 &{} 0 &{} 0 \\ 0 &{} A_2 &{} 0 \\ 0 &{} 0 &{} A_3 \end{matrix}\right] \in {{\mathcal {B}}}(H_1\oplus H_2 \oplus H_3). \end{aligned}$$ Note that \(H_3\) might be a trivial subspace. Since by Corollary 2.4, A and B commute with exactly the same projections, the matrix of B in \(H = H_1\oplus H_2 \oplus H_3\) is also diagonal: $$\begin{aligned} B = \left[ \begin{matrix} B_1 &{} 0 &{} 0 \\ 0 &{} B_2 &{} 0 \\ 0 &{} 0 &{} B_3 \end{matrix}\right] \in {{\mathcal {B}}}(H_1\oplus H_2 \oplus H_3).
\end{aligned}$$ At this point, let us emphasise that of course \(H_j, A_j\) and \(B_j\) (\(j=1,2,3\)) all depend on \(\lambda _1, \lambda _2\) and \(\varepsilon \), but in order to keep our notation as simple as possible, we will stick with these symbols. However, if at any point it becomes important to point out this dependence, we shall use for instance \(B_j^{(\lambda _1, \lambda _2, \varepsilon )}\) instead of \(B_j\). Similar conventions apply later on. Observe that by Corollary 2.8 we have $$\begin{aligned} \left[ \begin{matrix} A_1 &{} 0 \\ 0 &{} A_2 \end{matrix}\right] ^\sim = \left[ \begin{matrix} B_1 &{} 0 \\ 0 &{} B_2 \end{matrix}\right] ^\sim . \end{aligned}$$ Now, we pick two arbitrary points \(\mu _1\in \sigma (B_1)\) and \(\mu _2\in \sigma (B_2)\). Then obviously, the following two subspaces are non-zero subspaces of \(H_1\) and \(H_2\), respectively: $$\begin{aligned} {\widehat{H}}_1 := (H_1)_{B_1}\big ( (\mu _1-\varepsilon , \mu _1+\varepsilon ) \big ), \;\; {\widehat{H}}_2 := (H_2)_{B_2}\big ( (\mu _2-\varepsilon , \mu _2+\varepsilon ) \big ). \end{aligned}$$ Similarly as above, we have the following matrix forms where \({\check{H}}_j = H_j \ominus {\widehat{H}}_j\) \((j=1,2)\): $$\begin{aligned} B_1 = \left[ \begin{matrix} {\widehat{B}}_1 &{} 0 \\ 0 &{} {\check{B}}_1 \end{matrix}\right] \in {{\mathcal {B}}}({\widehat{H}}_1 \oplus {\check{H}}_1) \;\; \text {and} \;\; B_2 = \left[ \begin{matrix} {\widehat{B}}_2 &{} 0 \\ 0 &{} {\check{B}}_2 \end{matrix}\right] \in {{\mathcal {B}}}({\widehat{H}}_2 \oplus {\check{H}}_2) \end{aligned}$$ $$\begin{aligned} A_1 = \left[ \begin{matrix} {\widehat{A}}_1 &{} 0 \\ 0 &{} {\check{A}}_1 \end{matrix}\right] \in {{\mathcal {B}}}({\widehat{H}}_1 \oplus {\check{H}}_1) \;\; \text {and} \;\; A_2 = \left[ \begin{matrix} {\widehat{A}}_2 &{} 0 \\ 0 &{} {\check{A}}_2 \end{matrix}\right] \in {{\mathcal {B}}}({\widehat{H}}_2 \oplus {\check{H}}_2). 
\end{aligned}$$ Note that \({\check{H}}_1\) or \({\check{H}}_2\) might be trivial subspaces. Again by Corollary 2.8, we have $$\begin{aligned} \left[ \begin{matrix} {\widehat{A}}_1 &{} 0 \\ 0 &{} {\widehat{A}}_2 \end{matrix}\right] ^\sim = \left[ \begin{matrix} {\widehat{B}}_1 &{} 0 \\ 0 &{} {\widehat{B}}_2 \end{matrix}\right] ^\sim . \end{aligned}$$ Let us point out that by construction \(\sigma ({\widehat{A}}_j) \subseteq [\lambda _j-\varepsilon ,\lambda _j+\varepsilon ]\) and \(\sigma ({\widehat{B}}_j) \subseteq [\mu _j-\varepsilon ,\mu _j+\varepsilon ]\). Corollary 2.7 gives the following identity for the strength functions, where \({\widehat{I}}_j\) denotes the identity on \({\widehat{H}}_j\) \((j=1,2)\): $$\begin{aligned}&\Lambda \left( \left[ \begin{matrix} {\widehat{A}}_1 &{} 0 \\ 0 &{} {\widehat{A}}_2 \end{matrix}\right] , P \right) + \Lambda \left( \left[ \begin{matrix} {\widehat{I}}_1 - {\widehat{A}}_1 &{} 0 \\ 0 &{} {\widehat{I}}_2 - {\widehat{A}}_2 \end{matrix}\right] , P \right) \nonumber \\&\quad = \Lambda \left( \left[ \begin{matrix} {\widehat{B}}_1 &{} 0 \\ 0 &{} {\widehat{B}}_2 \end{matrix}\right] , P \right) + \Lambda \left( \left[ \begin{matrix} {\widehat{I}}_1 - {\widehat{B}}_1 &{} 0 \\ 0 &{} {\widehat{I}}_2 - {\widehat{B}}_2 \end{matrix}\right] , P \right) \quad \left( \forall \; P\in {{\mathcal {P}}}_1\left( {\widehat{H}}_1\oplus {\widehat{H}}_2\right) \right) . \end{aligned}$$ Let us define the truncation function $$\begin{aligned} \Theta :{\mathbb {R}}\rightarrow [0,1], \quad \Theta (t) = \left\{ \begin{matrix} 0 &{} \text {if} \; t< 0 \\ t &{} \text {if} \; 0 \le t \le 1 \\ 1&{} \text {if} \; 1 < t \end{matrix} \right.
, \end{aligned}$$ and notice that we have the following two estimations for all rank-one projections P: $$\begin{aligned}&\Lambda \left( \left[ \begin{matrix} \Theta (\lambda _1-\varepsilon ){\widehat{I}}_1 &{} 0 \\ 0 &{} \Theta (\lambda _2-\varepsilon ){\widehat{I}}_2 \end{matrix}\right] , P \right) \nonumber \\&\qquad + \Lambda \left( \left[ \begin{matrix} \Theta (1-\lambda _1-\varepsilon ){\widehat{I}}_1 &{} 0 \\ 0 &{} \Theta (1-\lambda _2-\varepsilon ){\widehat{I}}_2 \end{matrix}\right] , P \right) \nonumber \\&\quad \le \text {the expression in }(10) \nonumber \\&\quad \le \Lambda \left( \left[ \begin{matrix} \Theta (\lambda _1+\varepsilon ){\widehat{I}}_1 &{} 0 \\ 0 &{} \Theta (\lambda _2+\varepsilon ){\widehat{I}}_2 \end{matrix}\right] , P \right) \nonumber \\&\qquad + \Lambda \left( \left[ \begin{matrix} \Theta (1-\lambda _1+\varepsilon ){\widehat{I}}_1 &{} 0 \\ 0 &{} \Theta (1-\lambda _2+\varepsilon ){\widehat{I}}_2 \end{matrix}\right] , P \right) \end{aligned}$$ $$\begin{aligned}&\Lambda \left( \left[ \begin{matrix} \Theta (\mu _1-\varepsilon ){\widehat{I}}_1 &{} 0 \\ 0 &{} \Theta (\mu _2-\varepsilon ){\widehat{I}}_2 \end{matrix}\right] , P \right) \nonumber \\&\qquad + \Lambda \left( \left[ \begin{matrix} \Theta (1-\mu _1-\varepsilon ){\widehat{I}}_1 &{} 0 \\ 0 &{} \Theta (1-\mu _2-\varepsilon ){\widehat{I}}_2 \end{matrix}\right] , P \right) \nonumber \\&\quad \le \text {the expression in} (10) \nonumber \\&\quad \le \Lambda \left( \left[ \begin{matrix} \Theta (\mu _1+\varepsilon ){\widehat{I}}_1 &{} 0 \\ 0 &{} \Theta (\mu _2+\varepsilon ){\widehat{I}}_2 \end{matrix}\right] , P \right) \nonumber \\&\qquad + \Lambda \left( \left[ \begin{matrix} \Theta (1-\mu _1+\varepsilon ){\widehat{I}}_1 &{} 0 \\ 0 &{} \Theta (1-\mu _2+\varepsilon ){\widehat{I}}_2 \end{matrix}\right] , P \right) . 
\end{aligned}$$ Note that the above estimations hold for any arbitrarily small \(\varepsilon \) and for all suitable choices of \(\mu _1\) and \(\mu _2\) (which of course depend on \(\varepsilon \)). STEP 2: Here we show that \(B\in \{A,A^\perp \}\). Let us define the following set that depends only on \(\lambda _j\): $$\begin{aligned} {\mathcal {C}}_j= & {} {\mathcal {C}}_j^{(\lambda _j)} := \bigcap \left\{ \sigma \left( B_j^{(\lambda _1, \lambda _2, \varepsilon )} \right) :0< \varepsilon< \tfrac{1}{2}|\lambda _1 - \lambda _2| \right\} \\= & {} \bigcap \left\{ \sigma \left( B|_{H_A((\lambda _j-\varepsilon ,\lambda _j+\varepsilon ))} \right) :0 < \varepsilon \right\} \quad (j=1,2). \end{aligned}$$ Notice that as this set is an intersection of monotonically decreasing (as \(\varepsilon \searrow 0\)), compact, non-empty sets, it must contain at least one element. Also, observe that if \(\mu _1\in {\mathcal {C}}_1\) and \(\mu _2\in {\mathcal {C}}_2\), then (11) and (12) hold for all \(\varepsilon >0\). We proceed with proving that either \({\mathcal {C}}_1 = \{\lambda _1\}\) and \({\mathcal {C}}_2 = \{\lambda _2\}\), or \({\mathcal {C}}_1 = \{1-\lambda _1\}\) and \({\mathcal {C}}_2 = \{1-\lambda _2\}\) hold. Fix two arbitrary elements \(\mu _1 \in {\mathcal {C}}_1\) and \(\mu _2 \in {\mathcal {C}}_2\), and assume that neither \(\lambda _1 = \mu _1\) and \(\lambda _2 = \mu _2\), nor \(\lambda _1 + \mu _1 = \lambda _2 + \mu _2 = 1\) hold. From here our aim is to get a contradiction. 
As we showed in the proof of Lemma 2.10, there exists an \(\alpha _0 \in \left( 0,\tfrac{\pi }{2}\right) \) such that we have $$\begin{aligned}&\frac{1}{ \left( \tfrac{1}{\lambda _1}\right) \cdot \cos ^2\alpha _0 + \left( \tfrac{1}{\lambda _2}\right) \cdot \sin ^2\alpha _0 } + \frac{1}{ \left( \tfrac{1}{1-\lambda _1}\right) \cdot \cos ^2\alpha _0 + \left( \tfrac{1}{1-\lambda _2}\right) \cdot \sin ^2\alpha _0 } \\&\quad \ne \frac{1}{ \left( \tfrac{1}{\mu _1}\right) \cdot \cos ^2\alpha _0 + \left( \tfrac{1}{\mu _2}\right) \cdot \sin ^2\alpha _0 } + \frac{1}{ \left( \tfrac{1}{1-\mu _1}\right) \cdot \cos ^2\alpha _0 + \left( \tfrac{1}{1-\mu _2}\right) \cdot \sin ^2\alpha _0 } \end{aligned}$$ where we interpret both sides as in (8). Notice that both summands on both sides depend continuously on \(\lambda _1,\lambda _2,\mu _1\) and \(\mu _2\). Therefore there exists an \(\varepsilon >0\) small enough and a rank-one projection \(P = P_{\cos \alpha _0 {\widehat{x}}_1+\sin \alpha _0 {\widehat{x}}_2}\), with \({\widehat{x}}_1\in {\widehat{H}}_1, {\widehat{x}}_2\in {\widehat{H}}_2, \Vert {\widehat{x}}_1\Vert =\Vert {\widehat{x}}_2\Vert =1\), such that the closed intervals bounded by the right- and left-hand sides of (11), and those of (12), are disjoint, which is a contradiction. Observe that as we can do the above for any two distinct elements of the spectrum \(\sigma (A)\), we can conclude that one of the following possibilities occurs: $$\begin{aligned} \{\lambda \} = \bigcap \left\{ \sigma \left( B|_{H_A((\lambda -\varepsilon , \lambda +\varepsilon ))} \right) :\varepsilon > 0 \right\} \qquad (\lambda \in \sigma (A)) \end{aligned}$$ $$\begin{aligned} \{1-\lambda \} = \bigcap \left\{ \sigma \left( B|_{H_A((\lambda -\varepsilon , \lambda +\varepsilon ))} \right) :\varepsilon > 0 \right\} \qquad (\lambda \in \sigma (A)). \end{aligned}$$ From here, we show that (13) implies \(A=B\), and (14) implies \(B=A^\perp \).
As the latter can be reduced to the case (13), by considering \(B^\perp \) instead of B, we may assume without loss of generality that (13) holds. By Lemma 2.12 and [7, Theorem IX.8.10], there exists a function \(f\in L^\infty (\mu )\), where \(\mu \) is a scalar-valued spectral measure of A, such that \(B = f(A)\). Moreover, we have \(B = A\) if and only if \(f(\lambda ) = \lambda \) \(\mu \)-a.e., so we only have to prove the latter equation. Let us fix an arbitrarily small number \(\delta > 0\). By the spectral mapping theorem ([7, Theorem IX.8.11]) and (13) we notice that for every \(\lambda \in \sigma (A)\) there exists an \(0< \varepsilon _{\lambda } < \delta \) such that $$\begin{aligned} \mu -\mathrm {essran} \left( f|_{(\lambda -\varepsilon _{\lambda },\lambda +\varepsilon _{\lambda })} \right) = \sigma \left( B|_{H_A((\lambda -\varepsilon _{\lambda },\lambda +\varepsilon _{\lambda }))} \right) \subseteq (\lambda -\delta ,\lambda +\delta ), \end{aligned}$$ where \(\mu -\mathrm {essran}\) denotes the essential range of a function with respect to \(\mu \) (see [7, Example IX.2.6]). Now, for every \(\lambda \in \sigma (A)\) we fix such an \(\varepsilon _{\lambda }\). Clearly, the intervals \(\{(\lambda -\varepsilon _{\lambda },\lambda +\varepsilon _{\lambda }) :\lambda \in \sigma (A)\}\) cover the whole spectrum \(\sigma (A)\), which is a compact set. Therefore we can find finitely many of them, say \(\lambda _1, \dots , \lambda _n\), such that $$\begin{aligned} \sigma (A) \subseteq \bigcup _{j=1}^{n} \left( \lambda _j-\varepsilon _{\lambda _j},\lambda _j+\varepsilon _{\lambda _j} \right) . \end{aligned}$$ Finally, we define the function $$\begin{aligned} h(\lambda ) = \lambda _j, \text { where } |\lambda -\lambda _{i}| \ge \varepsilon _{\lambda _i} \text { for all } 1\le i< j \text { and } |\lambda -\lambda _{j}| < \varepsilon _{\lambda _j}.
\end{aligned}$$ By definition we have \(\Vert h - \mathrm {id}_{\sigma (A)} \Vert _{\infty } \le \delta \) where the \(\infty \)-norm is taken with respect to \(\mu \) and \(\mathrm {id}_{\sigma (A)}(\lambda ) = \lambda \) \((\lambda \in \sigma (A))\). But notice that by (15) we also have \(\Vert h - f \Vert _{\infty } \le \delta \), and hence \(\Vert f - \mathrm {id}_{\sigma (A)} \Vert _\infty \le 2\delta \). As this inequality holds for all positive \(\delta \), we actually get that \(f(\lambda ) = \lambda \) for \(\mu \)-a.e. \(\lambda \). (ii)\(\Longrightarrow \)(i) in the non-separable case: It is well-known that there exists an orthogonal decomposition \(H = \oplus _{i\in \mathcal {I}} H_i\) such that each \(H_i\) is a non-trivial, separable, invariant subspace of A, see for instance [7, Proposition IX.4.4]. Since A and B commute with exactly the same projections, both are diagonal with respect to the decomposition \(H = \oplus _{i\in \mathcal {I}} H_i\): $$\begin{aligned} A = \oplus _{i\in \mathcal {I}} A_i \;\; \text {and} \;\; B = \oplus _{i\in \mathcal {I}} B_i \in {{\mathcal {E}}}(\oplus _{i\in \mathcal {I}} H_i). \end{aligned}$$ By Corollary 2.8 we have \(A_i^\sim = B_i^\sim \) for all \(i\in \mathcal {I}\), therefore the separable case implies $$\begin{aligned} \text {either} \; A_i = B_i, \;\; \text {or} \; B_i = A_i^\perp , \;\; \text {or} \; A_i, B_i \in \mathcal {SC}(H_i) \qquad (i\in \mathcal {I}). \end{aligned}$$ Without loss of generality we may assume from now on that there exists an \(i_0\in \mathcal {I}\) so that \(A_{i_0}\) is not a scalar effect. (In case all of them are scalar, we simply combine two subspaces \(H_{i_1}\) and \(H_{i_2}\) so that \(\sigma (A_{i_1})\ne \sigma (A_{i_2})\)). This implies either \(A_{i_0} = B_{i_0}\), or \(B_{i_0} = A_{i_0}^\perp \). By considering \(B^\perp \) instead of B if necessary, we may assume from now on that \(A_{i_0} = B_{i_0}\) holds. 
Finally, let \(i_1 \in \mathcal {I} \setminus \{i_0\}\) be arbitrary, and let us consider the orthogonal decomposition \(H = \oplus _{i\in \mathcal {I}\setminus \{i_0,i_1\}} H_i \oplus K\) where \(K = H_{i_0}\oplus H_{i_1}\). Similarly as above, we obtain either \(A_{i_0}\oplus A_{i_1} = B_{i_0} \oplus B_{i_1}\), or \(B_{i_0}\oplus B_{i_1} = A_{i_0}^\perp \oplus A_{i_1}^\perp \), but since \(A_{i_0} = B_{i_0}\), we must have \(A_{i_1} = B_{i_1}\). As this holds for arbitrary \(i_1\), the proof is complete. \(\square \) Now we are in a position to give an alternative proof of Molnár's theorem which also extends to the two-dimensional case. Proof of Theorem 1.2 and Molnár's theorem By (a) of Lemma 2.1 and (\(\sim \)) we obtain \(\phi (\mathcal {SC}(H)) = \mathcal {SC}(H)\), moreover, the property (\(\le \)) implies the existence of a strictly increasing bijection \(g:[0,1] \rightarrow [0,1]\) such that \(\phi (\lambda I) = g(\lambda ) I\) for every \(\lambda \in [0,1]\). By Theorem 1.1 we conclude $$\begin{aligned} \phi (A^\perp ) = \phi (A)^\perp \qquad (A\in {{\mathcal {E}}}(H)\setminus \mathcal {SC}(H)). \end{aligned}$$ We only have to show that the same holds for scalar operators, because then the theorem is reduced to Ludwig's theorem. For any effect A and any set of effects \({{\mathcal {S}}}\) let us define the following sets \(A^\le := \{B\in {{\mathcal {E}}}(H):A\le B\}\), \(A^\ge := \{B\in {{\mathcal {E}}}(H):A\ge B\}\) and \({{\mathcal {S}}}^\perp := \{B^\perp :B\in {{\mathcal {S}}}\}\). Observe that for any \(s,t\in [0,1]\) we have $$\begin{aligned} \left( (sI)^\le \cap (tI)^\ge \setminus \mathcal {SC}(H)\right) ^\perp = (sI)^\le \cap (tI)^\ge \setminus \mathcal {SC}(H) \ne \emptyset \end{aligned}$$ if and only if \(t=1-s\) and \(s< \tfrac{1}{2}\).
Thus for all \(s < \tfrac{1}{2}\) we obtain $$\begin{aligned} \emptyset&\ne \left( (g(s)I)^\le \cap (g(1-s)I)^\ge \setminus \mathcal {SC}(H)\right) ^\perp = \phi \left( \left( (sI)^\le \cap ((1-s)I)^\ge \setminus \mathcal {SC}(H)\right) ^\perp \right) \\&= \phi \left( (sI)^\le \cap ((1-s)I)^\ge \setminus \mathcal {SC}(H)\right) = (g(s)I)^\le \cap (g(1-s)I)^\ge \setminus \mathcal {SC}(H), \end{aligned}$$ which, by (16), implies \(g(1-s) = 1-g(s)\) and \(g(s)< \tfrac{1}{2}\), therefore we indeed have (\(\perp \)) for every effect. \(\square \) Proof of Theorem 1.3 in Two Dimensions In this section we prove our other main theorem for qubit effects. In order to do that we need to prove a few preparatory lemmas. We start with a characterisation of rank-one projections in terms of coexistence. For any \(A\in {{\mathcal {E}}}({\mathbb {C}}^2)\) the following are equivalent: (i) there are no effects \(B\in {{\mathcal {E}}}({\mathbb {C}}^2)\) such that \(B^\sim \subsetneq A^\sim \); (ii) \(A \in {{\mathcal {P}}}_1({\mathbb {C}}^2)\). The case when \(A\in \mathcal {SC}({\mathbb {C}}^2)\) is trivial, therefore we may assume otherwise throughout the proof. (i)\(\Longrightarrow \)(ii): Suppose that \(A \notin {{\mathcal {P}}}_1({\mathbb {C}}^2)\); then by Corollary 2.6 there exists an \(\varepsilon >0\) such that \(\{C\in {{\mathcal {E}}}({\mathbb {C}}^2) :C \le \varepsilon I\} \subseteq A^\sim \). Let \(B\in {{\mathcal {P}}}_1({\mathbb {C}}^2)\cap A^c\); then we have \(B^\sim = B^c = A^c\subseteq A^\sim \). But it is easy to find a \(C\in {{\mathcal {E}}}({\mathbb {C}}^2)\) such that \(C \le \varepsilon I\) and \(C\notin B^c\), therefore we conclude \(B^\sim \subsetneq A^\sim \). (ii)\(\Longrightarrow \)(i): If \(A \in {{\mathcal {P}}}_1({\mathbb {C}}^2)\), \(B\in {{\mathcal {E}}}({\mathbb {C}}^2)\) and \(B^\sim \subsetneq A^\sim \), then also \(B^c\subsetneq A^c\), which is impossible.
\(\square \) Note that the above statement does not hold in higher dimensions; see the final section of this paper for more details. We continue with a characterisation of rank-one and ortho-rank-one qubit effects in terms of coexistence. Let \(A\in {{\mathcal {E}}}({\mathbb {C}}^2)\setminus \mathcal {SC}({\mathbb {C}}^2)\). Then the following are equivalent: (i) A or \(A^\perp \in {{\mathcal {F}}}_1({\mathbb {C}}^2)\setminus {{\mathcal {P}}}_1({\mathbb {C}}^2)\); (ii) there exists at least one \(B\in {{\mathcal {E}}}({\mathbb {C}}^2)\) such that \(B^\sim \subsetneq A^\sim \), and for every such pair of effects \(B_1, B_2\) we have either \(B_1^\sim \subseteq B_2^\sim \), or \(B_2^\sim \subseteq B_1^\sim \). Moreover, if (i) holds, i.e. A or \(A^\perp = tP\) with \(P\in {{\mathcal {P}}}_1({\mathbb {C}}^2)\) and \(0<t<1\), then we have \(B^\sim \subseteq A^\sim \) if and only if B or \(B^\perp = sP\) with some \(t\le s\le 1\). First, notice that by Theorem 1.1 and Lemma 2.12 (c) we have $$\begin{aligned} (sP)^\sim \subseteq (tP)^\sim \quad \iff \quad t\le s \qquad (P\in {{\mathcal {P}}}_1({\mathbb {C}}^2),\; t, s \in (0,1]). \end{aligned}$$ (i)\(\Longrightarrow \)(ii): If we have \(B^\sim \subseteq (tP)^\sim \) with some rank-one projection P, \(t \in (0,1]\) and qubit effect B, then by Lemma 2.12 (b) we obtain \(P\in B^c\) and \(B\notin \mathcal {SC}({\mathbb {C}}^2)\). Furthermore, since \(B^\sim \cap {{\mathcal {F}}}_1({\mathbb {C}}^2) \subseteq (tP)^\sim \cap {{\mathcal {F}}}_1({\mathbb {C}}^2)\), by Corollary 2.7 we obtain $$\begin{aligned} T_B(\alpha ) \le T_{tP}(\alpha ) \qquad (0\le \alpha \le \tfrac{\pi }{2}), \end{aligned}$$ where we use the notation from the proof of Lemma 2.10. Thus, the discontinuity of \(T_{tP}(\alpha )\) at either \(\alpha = 0\), or \(\alpha = \tfrac{\pi }{2}\), implies the discontinuity of \(T_B(\alpha )\) at the same \(\alpha \). Whence we conclude either \(B=sP\), or \(B=I-sP\) with some \(t\le s\le 1\).
(ii)\(\Longrightarrow \)(i): By Lemma 3.1, (ii) cannot hold for elements of \({{\mathcal {P}}}_1({\mathbb {C}}^2)\), so we only have to check that if \(A, A^\perp \notin {{\mathcal {F}}}_1({\mathbb {C}}^2) \cup \mathcal {SC}({\mathbb {C}}^2)\), then (ii) fails. Suppose that the spectral decomposition of A is \(\lambda _1 P + \lambda _2 P^\perp \) where \(1> \lambda _1>\lambda _2 > 0\). Then by Lemma 2.12 (c) we find that \(\left( \lambda _1 P\right) ^\sim \subseteq A^\sim \) and \(\left( (1-\lambda _2) P^\perp \right) ^\sim \subseteq A^\sim \) (see Figure 1), but by the previous part neither \((\lambda _1 P)^\sim \subseteq \left( (1-\lambda _2) P^\perp \right) ^\sim \), nor \(\left( (1-\lambda _2) P^\perp \right) ^\sim \subseteq (\lambda _1 P)^\sim \) holds. \(\square \) The figure shows all effects commuting with \(A \in {{\mathcal {E}}}({\mathbb {C}}^2)\setminus \mathcal {SC}({\mathbb {C}}^2)\), whose spectral decomposition is \(A = \lambda _1 P + \lambda _2 P^\perp \) with \(1>\lambda _1>\lambda _2>0\). For a visualisation of \((tP)^\sim \cap {{\mathcal {F}}}_1({\mathbb {C}}^2)\) see Sect. 5. Before we proceed with the proof of Theorem 1.3 for qubit effects, we need a few more lemmas about rank-one projections acting on \({\mathbb {C}}^2\). For all \(P, Q\in {{\mathcal {P}}}_1({\mathbb {C}}^2)\) we have $$\begin{aligned} \Vert P-Q\Vert ^2 = -\det (P-Q) = 1-\mathrm{tr}PQ = 1 - \Vert P^\perp -Q\Vert ^2. \end{aligned}$$ Since \(\mathrm{tr}(P-Q) = 0\), the eigenvalues of the self-adjoint operator \(P-Q\) are \(\lambda \) and \(-\lambda \) with some \(\lambda \ge 0\). Hence we have \(\Vert P-Q\Vert ^2 = -\det (P-Q)\). Applying a unitary similarity if necessary, we may assume without loss of generality that \((1,0) \in \mathrm{Im\,}P\). Obviously, there exist \(0\le \vartheta \le \tfrac{\pi }{2}\) and \(0\le \mu \le 2\pi \) such that \((\cos \vartheta , e^{i\mu }\sin \vartheta ) \in \mathrm{Im\,}Q\).
Thus the matrix forms of P and Q in the standard basis are $$\begin{aligned} P = P_{(1,0)} = \left[ \begin{matrix} 1 \\ 0 \end{matrix} \right] \cdot \left[ \begin{matrix} 1 \\ 0 \end{matrix} \right] ^* = \left[ \begin{matrix} 1 &{} 0 \\ 0 &{} 0 \end{matrix} \right] \end{aligned}$$ $$\begin{aligned} Q = P_{(\cos \vartheta , e^{i\mu }\sin \vartheta )} = \left[ \begin{matrix} \cos \vartheta \\ e^{i\mu }\sin \vartheta \end{matrix} \right] \cdot \left[ \begin{matrix} \cos \vartheta \\ e^{i\mu }\sin \vartheta \end{matrix} \right] ^* = \left[ \begin{matrix} \cos ^2\vartheta &{} e^{-i\mu }\cos \vartheta \sin \vartheta \\ e^{i\mu }\cos \vartheta \sin \vartheta &{} \sin ^2\vartheta \end{matrix} \right] , \end{aligned}$$ where we used the notation of the Busch–Gudder theorem. Now, an easy calculation gives us \(\det (P-Q) = -\sin ^2\vartheta \) and \(\mathrm{tr}PQ = \cos ^2\vartheta \). Hence the second equation in (17) is proved, and the third one follows from \(\mathrm{tr}P^\perp Q = 1 - \mathrm{tr}PQ\). \(\square \) For \(P\in {{\mathcal {P}}}_1({\mathbb {C}}^2)\) and \(s\in [0,1]\), let us use the following notation: $$\begin{aligned} {\mathcal {M}}_{P,s} := \left\{ Q\in {{\mathcal {P}}}_1({\mathbb {C}}^2) :\Vert P-Q\Vert = s \right\} . \end{aligned}$$ Next, we examine this set. For all \(P\in {{\mathcal {P}}}_1({\mathbb {C}}^2)\) the following statements are equivalent: (i) \(s = \sin \tfrac{\pi }{4}\); (ii) there exists an \(R\in {\mathcal {M}}_{P,s}\) such that \(R^\perp \in {\mathcal {M}}_{P,s}\); (iii) for all \(R\in {\mathcal {M}}_{P,s}\) we also have \(R^\perp \in {\mathcal {M}}_{P,s}\). One could use the Bloch representation (see Sect. 5), however, let us give here a purely linear algebraic proof. Note that for any \(R_1, R_2 \in {{\mathcal {P}}}_1({\mathbb {C}}^2)\) we have \(\Vert R_1-R_2\Vert = 1\) if and only if \(R_2 = R_1^\perp \). Without loss of generality we may assume that P has the matrix form of (18).
Then for any \(0\le \vartheta \le \tfrac{\pi }{2}\) and \(R_1, R_2 \in {\mathcal {M}}_{P,\sin \vartheta }\) we have $$\begin{aligned} R_1= & {} \left[ \begin{matrix} \cos ^2\vartheta &{} e^{-i\mu _1}\cos \vartheta \sin \vartheta \\ e^{i\mu _1}\cos \vartheta \sin \vartheta &{} \sin ^2\vartheta \end{matrix} \right] \quad \text {and}\\ R_2= & {} \left[ \begin{matrix} \cos ^2\vartheta &{} e^{-i\mu _2}\cos \vartheta \sin \vartheta \\ e^{i\mu _2}\cos \vartheta \sin \vartheta &{} \sin ^2\vartheta \end{matrix} \right] \end{aligned}$$ with some \(\mu _1,\mu _2\in {\mathbb {R}}\). Hence, we get $$\begin{aligned} \Vert R_1-R_2\Vert&= \sqrt{1-\mathrm{tr}R_1R_2} = \sqrt{\sin ^2\vartheta \cos ^2\vartheta (2-e^{i(\mu _1-\mu _2)}-e^{i(\mu _2-\mu _1)})} \\&= |e^{i\mu _1}-e^{i\mu _2}| \cos \vartheta \sin \vartheta = \tfrac{1}{2} |e^{i\mu _1}-e^{i\mu _2}| \sin (2 \vartheta ). \end{aligned}$$ Notice that the right-hand side is always less than or equal to 1. Moreover, for any \(\mu _1\in {\mathbb {R}}\) there exists a \(\mu _2\in {\mathbb {R}}\) such that \(\Vert R_1-R_2\Vert =1\) if and only if \(\vartheta = \tfrac{\pi }{4}\). This completes the proof. \(\square \) Let \(P,Q\in {{\mathcal {P}}}_1({\mathbb {C}}^2)\) and \(s,t\in (0,1)\). Then the following are equivalent: (i) \(tP\sim sQ\); (ii) either \(Q=P\), or \(Q=P^\perp \), or $$\begin{aligned} s \le \frac{1}{\tfrac{1}{1-t}\Vert P^\perp -Q\Vert ^2+\Vert P-Q\Vert ^2}. \end{aligned}$$ The case when \(Q\in \{P,P^\perp \}\) is trivial, so from now on we assume otherwise. Recall that two rank-one effects with different images are coexistent if and only if their sum is an effect; see [20, Lemma 2]. Therefore, (i) is equivalent to \(I-tP-sQ \ge 0\). Since \(\mathrm{tr}(I-tP-sQ) = 2-t-s > 0\), the latter is further equivalent to \(\det (I-tP-sQ) \ge 0\). Without loss of generality we may assume that P and Q have the matrix forms written in (18) and (19) with \(0<\vartheta <\tfrac{\pi }{2}\).
Then a calculation gives $$\begin{aligned} \det (I-tP-sQ) = s(t-1)\sin ^2\vartheta - s\cos ^2\vartheta +1-t = 1-t-s+ts\Vert P-Q\Vert ^2. \end{aligned}$$ From the latter we get that \(\det (I-tP-sQ) \ge 0\) holds if and only if $$\begin{aligned} s\le \frac{1-t}{1-t\Vert P-Q\Vert ^2}, \end{aligned}$$ which, by (17), is equivalent to (ii). \(\square \) Note that we have $$\begin{aligned} 0< \frac{1}{\tfrac{1}{1-t}\Vert P^\perp -Q\Vert ^2+\Vert P-Q\Vert ^2} < 1 \qquad (t\in (0,1), P,Q\in {{\mathcal {P}}}_1({\mathbb {C}}^2), Q\notin \{P,P^\perp \}). \end{aligned}$$ We need one more lemma. Let \(P,Q\in {{\mathcal {P}}}_1({\mathbb {C}}^2)\). Then there exists a projection \(R\in {{\mathcal {P}}}_1({\mathbb {C}}^2)\) such that $$\begin{aligned} \Vert P-R\Vert = \Vert Q-R\Vert = \sin \tfrac{\pi }{4}. \end{aligned}$$ Again, one could use the Bloch representation, however, let us give here a purely linear algebraic proof. We may assume without loss of generality that P and Q are of the form (18) and (19). Then for any \(z\in {\mathbb {C}}\), \(|z|=1\) the rank-one projection $$\begin{aligned} R = \frac{1}{\sqrt{2}}\left[ \begin{matrix} 1 \\ z \end{matrix} \right] \cdot \left( \frac{1}{\sqrt{2}}\left[ \begin{matrix} 1\\ z \end{matrix} \right] \right) ^* = \frac{1}{2}\left[ \begin{matrix} 1 &{} {\overline{z}} \\ z &{} 1 \end{matrix} \right] \end{aligned}$$ satisfies \(\Vert P-R\Vert = \sin \tfrac{\pi }{4}\). In order to complete the proof we only have to find a z with \(|z|=1\) such that \(\mathrm{tr}RQ = \tfrac{1}{2}\), which is an easy calculation. Namely, we find that \(z=ie^{i\mu }\) is a suitable choice. \(\square \) Now we are in a position to prove our second main result in the low-dimensional case.
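The three lemmas above are easy to sanity-check numerically. The following sketch (all helper names are ours, not from the paper) verifies, for one concrete pair of rank-one projections in the Busch–Gudder parametrisation, the identity \(\Vert P-Q\Vert ^2 = 1-\mathrm{tr}PQ = 1-\Vert P^\perp -Q\Vert ^2\) of the first lemma, the determinant formula \(\det (I-tP-sQ) = 1-t-s+ts\Vert P-Q\Vert ^2\) with its coexistence threshold, and the choice \(z=ie^{i\mu }\) from the last lemma:

```python
import cmath
import math

def proj(theta, mu):
    # Rank-one projection P_{(cos t, e^{i mu} sin t)} in the
    # parametrisation of (18) and (19): entries v_i * conj(v_j).
    v = (math.cos(theta), cmath.exp(1j * mu) * math.sin(theta))
    return [[v[i] * v[j].conjugate() for j in range(2)] for i in range(2)]

def sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def tr_prod(A, B):
    # tr(AB) for 2x2 matrices
    return sum(A[i][k] * B[k][i] for i in range(2) for k in range(2))

def opnorm_sq(D):
    # Squared operator norm of a Hermitian 2x2 matrix via its eigenvalues.
    a, c, b = D[0][0].real, D[1][1].real, D[0][1]
    r = math.sqrt(((a - c) / 2) ** 2 + abs(b) ** 2)
    return max(abs((a + c) / 2 + r), abs((a + c) / 2 - r)) ** 2

I2 = [[1, 0], [0, 1]]
theta, mu, t = 0.7, 1.3, 0.4          # arbitrary sample parameters
P, Q = proj(0.0, 0.0), proj(theta, mu)
Pperp = sub(I2, P)

# ||P-Q||^2 should equal 1 - tr(PQ) and 1 - ||P^perp - Q||^2
d2 = opnorm_sq(sub(P, Q))

# Coexistence threshold s_max = (1-t)/(1 - t ||P-Q||^2); at s = s_max
# the determinant det(I - tP - s_max Q) should vanish.
s_max = (1 - t) / (1 - t * d2)
M = [[I2[i][j] - t * P[i][j] - s_max * Q[i][j] for j in range(2)] for i in range(2)]

# With z = i e^{i mu}, R = (1/2)[[1, conj(z)], [z, 1]] should satisfy
# tr(PR) = tr(QR) = 1/2, i.e. ||P-R|| = ||Q-R|| = sin(pi/4).
z = 1j * cmath.exp(1j * mu)
R = [[0.5, 0.5 * z.conjugate()], [0.5 * z, 0.5]]
```

Here \(P\) is the projection of (18) and \(Q\) that of (19) with \(\vartheta =0.7\), \(\mu =1.3\); the particular numerical values play no role.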
The proof is divided into the following three steps: in STEP 1 we show some basic properties of \(\phi \), in particular, that it preserves commutativity in both directions; in STEP 2 we show that \(\phi \) maps pairs of rank-one projections with distance \(\sin \tfrac{\pi }{4}\) into pairs of rank-one projections with the same distance; in STEP 3 we finish the proof by examining how \(\phi \) acts on rank-one projections and rank-one effects. STEP 1: First of all, the properties of \(\phi \) imply $$\begin{aligned} \phi (A)^\sim = \phi (A^\sim ) \quad (A\in {{\mathcal {E}}}({\mathbb {C}}^2)), \end{aligned}$$ $$\begin{aligned} B^\sim \subseteq A^\sim \;\;\iff \;\; \phi (B)^\sim \subseteq \phi (A)^\sim \quad (A,B\in {{\mathcal {E}}}({\mathbb {C}}^2)). \end{aligned}$$ Hence, it is straightforward from Lemma 2.1 that there exists a bijection \(g:[0,1] \rightarrow [0,1]\) such that $$\begin{aligned} \phi (t I) = g(t)I \quad (t\in [0,1]). \end{aligned}$$ Also, by Lemma 3.1 we easily infer $$\begin{aligned} \phi ({{\mathcal {P}}}_1({\mathbb {C}}^2)) = {{\mathcal {P}}}_1({\mathbb {C}}^2), \end{aligned}$$ thus, in particular, we get $$\begin{aligned} \phi (P^c) = \phi (P^\sim ) = \phi (P)^\sim = \phi (P)^c \quad (P\in {{\mathcal {P}}}_1({\mathbb {C}}^2)). \end{aligned}$$ By Theorem 1.1 we also obtain $$\begin{aligned} \phi (A^\perp ) = \phi (A)^\perp \quad (A\in {{\mathcal {E}}}({\mathbb {C}}^2)\setminus \mathcal {SC}({\mathbb {C}}^2)). \end{aligned}$$ Now, we observe that \(\phi \) preserves commutativity in both directions.
Indeed we have the following for every \(A,B \in {{\mathcal {E}}}({\mathbb {C}}^2)\setminus \mathcal {SC}({\mathbb {C}}^2)\): $$\begin{aligned} AB =&BA \,\iff \, A^\sim \cap {{\mathcal {P}}}_1({\mathbb {C}}^2) = B^\sim \cap {{\mathcal {P}}}_1({\mathbb {C}}^2) = \{P,P^\perp \} \text { for some } P\in {{\mathcal {P}}}_1({\mathbb {C}}^2) \\&\,\iff \, \phi (A)^\sim \cap {{\mathcal {P}}}_1({\mathbb {C}}^2) = \phi (B)^\sim \cap {{\mathcal {P}}}_1({\mathbb {C}}^2)\\ =&\{Q,Q^\perp \} \text { for some } Q\in {{\mathcal {P}}}_1({\mathbb {C}}^2) \\&\,\iff \, \phi (A)\phi (B) = \phi (B)\phi (A). \end{aligned}$$ Note that we easily get the same conclusion using (20) if any of the two effects is a scalar effect. Next, notice that Lemma 3.2 implies $$\begin{aligned} A \,\text {or}\, A^\perp \in {{\mathcal {F}}}_1({\mathbb {C}}^2)\setminus {{\mathcal {P}}}_1({\mathbb {C}}^2) \;\;\iff \;\; \phi (A) \,\text {or}\, \phi (A)^\perp \in {{\mathcal {F}}}_1({\mathbb {C}}^2)\setminus {{\mathcal {P}}}_1({\mathbb {C}}^2). \end{aligned}$$ Therefore, by interchanging the \(\phi \)-images of tP and \(I-tP\) for some \(0<t<1\) and \(P\in {{\mathcal {P}}}_1({\mathbb {C}}^2)\), we may assume without loss of generality that $$\begin{aligned} \phi \left( {{\mathcal {F}}}_1({\mathbb {C}}^2)\setminus {{\mathcal {P}}}_1({\mathbb {C}}^2)\right) = {{\mathcal {F}}}_1({\mathbb {C}}^2)\setminus {{\mathcal {P}}}_1({\mathbb {C}}^2). \end{aligned}$$ Hence we obtain the following for all rank-one projections P: $$\begin{aligned} \phi \left( \{tP, tP^\perp :0<t\le 1\}\right)= & {} \phi \left( P^c\cap {{\mathcal {F}}}_1({\mathbb {C}}^2)\right) = \phi (P)^c\cap {{\mathcal {F}}}_1({\mathbb {C}}^2) \\= & {} \{t\phi (P), t\phi (P)^\perp :0<t\le 1\}. 
\end{aligned}$$ Thus, again by interchanging the \(\phi \)-images of P and \(P^\perp \) for some \(P\in {{\mathcal {P}}}_1({\mathbb {C}}^2)\), and using Lemma 3.2, we may assume without loss of generality that for every \(P\in {{\mathcal {P}}}_1({\mathbb {C}}^2)\) there exists a strictly increasing bijective map \(f_P:(0,1]\rightarrow (0,1]\) such that $$\begin{aligned} \phi (tP) = f_P(t) \phi (P) \quad (0<t\le 1, P\in {{\mathcal {P}}}_1({\mathbb {C}}^2)). \end{aligned}$$ STEP 2: We define the following set for any qubit effect of the form tP, \(0<t<1, P\in {{\mathcal {P}}}_1({\mathbb {C}}^2)\): $$\begin{aligned} \ell _{tP} := \left\{ \frac{1}{ \tfrac{1}{1-t} \Vert P^\perp -Q\Vert ^2 + \Vert P-Q\Vert ^2 } Q :Q\in {{\mathcal {P}}}_1({\mathbb {C}}^2)\setminus \{P, P^\perp \} \right\} . \end{aligned}$$ (For a visualisation of \(\ell _{tP}\) see Sect. 5.) Using Lemma 3.5 we see that $$\begin{aligned} \ell _{tP} = \big ((tP)^\sim \setminus \cup \{(sP)^\sim :t<s<1\}\big ) \cap {{\mathcal {F}}}_1({\mathbb {C}}^2) \qquad (0<t<1, P\in {{\mathcal {P}}}_1({\mathbb {C}}^2)). \end{aligned}$$ By the properties of \(\phi \) we obtain $$\begin{aligned} \phi (\ell _{tP}) = \ell _{\phi (tP)} = \ell _{f_P(t)\phi (P)} \qquad (0<t<1, P\in {{\mathcal {P}}}_1({\mathbb {C}}^2)). \end{aligned}$$ Next, using the set introduced in (22), we prove the following property of \(\phi \): $$\begin{aligned} \Vert P-Q\Vert = \sin \tfrac{\pi }{4} \;\iff \; \Vert \phi (P)-\phi (Q)\Vert = \sin \tfrac{\pi }{4} \qquad (P,Q\in {{\mathcal {P}}}_1({\mathbb {C}}^2)). \end{aligned}$$ By a straightforward calculation we get that $$\begin{aligned} \ell _{tP}\cap \ell _{rP^\perp }= & {} \left\{ \frac{1-t}{1-t \cdot s(t,r)^2}Q :Q\in {{\mathcal {P}}}_1({\mathbb {C}}^2), \Vert P-Q\Vert = s(t,r) \right\} \\&(t,r\in (0,1), P\in {{\mathcal {P}}}_1({\mathbb {C}}^2)) \end{aligned}$$ $$\begin{aligned} s(t,r) := \sqrt{\frac{\tfrac{t}{1-t}}{\tfrac{t}{1-t}+\tfrac{r}{1-r}}}. 
\end{aligned}$$ Note that \(s(t,r) = \sin \tfrac{\pi }{4}\) holds if and only if \(t= r\). By Lemma 3.4, this is further equivalent to the following: $$\begin{aligned} \forall \; A_1 \in \ell _{tP}\cap \ell _{rP^\perp }, \; \exists \, A_2 \in \ell _{tP}\cap \ell _{rP^\perp }, A_1\ne A_2:(A_1)^\sim \cap {{\mathcal {P}}}_1({\mathbb {C}}^2) = (A_2)^\sim \cap {{\mathcal {P}}}_1({\mathbb {C}}^2). \end{aligned}$$ Notice that by (23) this is equivalent to the following: $$\begin{aligned}&\forall \; B_1 \in \ell _{f_P(t)\phi (P)}\cap \ell _{f_{P^\perp }(r)\phi (P)^\perp }, \; \exists \, B_2 \in \ell _{f_P(t)\phi (P)}\cap \ell _{f_{P^\perp }(r)\phi (P)^\perp }, B_1\ne B_2:\\&\quad (B_1)^\sim \cap {{\mathcal {P}}}_1({\mathbb {C}}^2) = (B_2)^\sim \cap {{\mathcal {P}}}_1({\mathbb {C}}^2), \end{aligned}$$ which is further equivalent to \(f_P(t) = f_{P^\perp }(r)\). Hence we can conclude a few important properties of \(\phi \). First, we have $$\begin{aligned} f_P(t) = f_{P^\perp }(t) \quad (0< t \le 1, P\in {{\mathcal {P}}}_1({\mathbb {C}}^2)). \end{aligned}$$ Second, for every \(0<t<1\) and \(P\in {{\mathcal {P}}}_1({\mathbb {C}}^2)\) we have $$\begin{aligned}&\left\{ f_Q\left( \tfrac{1-t}{1-t/2}\right) \phi (Q) :Q\in {{\mathcal {P}}}_1({\mathbb {C}}^2), \Vert P-Q\Vert = \sin \tfrac{\pi }{4} \right\} = \phi \left( \ell _{tP}\cap \ell _{tP^\perp }\right) \\&\quad = \ell _{f_P(t)\phi (P)}\cap \ell _{f_P(t)\phi (P)^\perp } = \left\{ \tfrac{1-f_P(t)}{1-f_P(t)/2}R :R\in {{\mathcal {P}}}_1({\mathbb {C}}^2), \Vert \phi (P)-R\Vert = \sin \tfrac{\pi }{4} \right\} , \end{aligned}$$ therefore using (21) gives (24). Furthermore, we also obtain $$\begin{aligned} f_Q\left( \tfrac{1-t}{1-t/2}\right) = \tfrac{1-f_P(t)}{1-f_P(t)/2} \qquad \left( 0< t < 1, P,Q\in {{\mathcal {P}}}_1({\mathbb {C}}^2), \Vert P-Q\Vert = \sin \tfrac{\pi }{4}\right) .
\end{aligned}$$ By Lemma 3.6, for all \(Q_1, Q_2\in {{\mathcal {P}}}_1({\mathbb {C}}^2)\) there exists a rank-one projection P such that $$\begin{aligned} \Vert Q_1-P\Vert = \Vert Q_2-P\Vert = \sin \tfrac{\pi }{4}. \end{aligned}$$ Therefore, applying (25) and noticing that \(t\mapsto \tfrac{1-t}{1-t/2}\) is a strictly decreasing bijection of (0, 1) gives that $$\begin{aligned} f_{Q_1}(t) = f_{Q_2}(t) \quad (t\in (0,1), Q_1, Q_2\in {{\mathcal {P}}}_1({\mathbb {C}}^2)). \end{aligned}$$ Thus we conclude that there exists a strictly increasing bijection \(f:(0,1]\rightarrow (0,1]\) such that $$\begin{aligned} \phi (tP) = f(t) \phi (P) \quad (0<t\le 1, P\in {{\mathcal {P}}}_1({\mathbb {C}}^2)). \end{aligned}$$ We also observe that (25) implies $$\begin{aligned} f\left( \tfrac{1-t}{1-t/2} \right) = \tfrac{1-f(t)}{1-f(t)/2}, \end{aligned}$$ therefore we notice that $$\begin{aligned} f\left( 2-\sqrt{2} \right) = 2-\sqrt{2}, \end{aligned}$$ which is a consequence of the fact that the unique solution of the equation \(t = \tfrac{1-t}{1-t/2}\), \(0<t<1\), is \(t = 2-\sqrt{2}\). STEP 3: Next, applying [12, Theorem 2.3] gives that there exists a unitary or antiunitary operator \(U:{\mathbb {C}}^2\rightarrow {\mathbb {C}}^2\) such that we have $$\begin{aligned} U^*\phi (P)U \in \{P,P^\perp \} \quad (P\in {{\mathcal {P}}}_1({\mathbb {C}}^2)). \end{aligned}$$ Since either both \(U^*\phi (\cdot )U\) and \(\phi (\cdot )\) satisfy our assumptions simultaneously, or neither does, we may assume without loss of generality that we have $$\begin{aligned} \phi (P) \in \{P,P^\perp \} \quad (P\in {{\mathcal {P}}}_1({\mathbb {C}}^2)). \end{aligned}$$ We now claim that $$\begin{aligned} \text {either}\; \phi (P) = P \;\; (P\in {{\mathcal {P}}}_1({\mathbb {C}}^2)), \; \text {or} \; \phi (P) = P^\perp \;\; (P\in {{\mathcal {P}}}_1({\mathbb {C}}^2)).
\end{aligned}$$ Assume otherwise; then there exist two rank-one projections P and Q such that \(\Vert P-Q\Vert < \sin \tfrac{\pi }{4}\), \(\phi (P) = P\) and \(\phi (Q) = Q^\perp \). Note that \(\Vert P-Q^\perp \Vert = \sqrt{1-\Vert P-Q\Vert ^2}> \sin \tfrac{\pi }{4} > \Vert P-Q\Vert \). By (23) and (28) we have $$\begin{aligned}&\left\{ \tfrac{\sqrt{2}-1}{1-(2-\sqrt{2})\Vert P-R\Vert ^2} R:R\in {{\mathcal {P}}}_1({\mathbb {C}}^2)\setminus \{P,P^\perp \} \right\} = \ell _{(2-\sqrt{2})P} = \phi \left( \ell _{(2-\sqrt{2})P}\right) \nonumber \\&\quad = \left\{ f\left( \tfrac{\sqrt{2}-1}{1-(2-\sqrt{2})\Vert P-R\Vert ^2}\right) \phi (R):R\in {{\mathcal {P}}}_1({\mathbb {C}}^2)\setminus \{P,P^\perp \} \right\} . \end{aligned}$$ Therefore putting first \(R=Q\) and then \(R=Q^\perp \) gives $$\begin{aligned} \phi \left( \tfrac{\sqrt{2}-1}{1-(2-\sqrt{2})\Vert P-Q\Vert ^2} Q \right)&= f\left( \tfrac{\sqrt{2}-1}{1-(2-\sqrt{2})\Vert P-Q\Vert ^2}\right) \phi (Q) \\&= f\left( \tfrac{\sqrt{2}-1}{1-(2-\sqrt{2})\Vert P-Q\Vert ^2}\right) Q^\perp = \tfrac{\sqrt{2}-1}{1-(2-\sqrt{2})\Vert P-Q^\perp \Vert ^2} Q^\perp \end{aligned}$$ $$\begin{aligned} \phi \left( \tfrac{\sqrt{2}-1}{1-(2-\sqrt{2})\Vert P-Q^\perp \Vert ^2} Q^\perp \right)&= f\left( \tfrac{\sqrt{2}-1}{1-(2-\sqrt{2})\Vert P-Q^\perp \Vert ^2}\right) \phi (Q^\perp ) \\&= f\left( \tfrac{\sqrt{2}-1}{1-(2-\sqrt{2})\Vert P-Q^\perp \Vert ^2}\right) Q = \tfrac{\sqrt{2}-1}{1-(2-\sqrt{2})\Vert P-Q\Vert ^2} Q. \end{aligned}$$ But this implies that f interchanges two different numbers, which contradicts its strict monotonicity, proving our claim (29).
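The contradiction just derived is easy to illustrate numerically. In the sketch below (names ours), g is the coefficient map \(x\mapsto \tfrac{\sqrt{2}-1}{1-(2-\sqrt{2})x}\) appearing in \(\ell _{(2-\sqrt{2})P}\), evaluated at \(x=\Vert P-Q\Vert ^2\) and at \(x=\Vert P-Q^\perp \Vert ^2 = 1-\Vert P-Q\Vert ^2\); since g is strictly increasing, the two numbers that f would have to interchange are indeed different. We also confirm that \(t=2-\sqrt{2}\) is the fixed point of the involution \(t\mapsto \tfrac{1-t}{1-t/2}\) used in STEP 2:

```python
import math

SQ2 = math.sqrt(2)

def g(x):
    # Coefficient of the rank-one term in ell_{(2-sqrt2)P},
    # where x plays the role of ||P-R||^2.
    return (SQ2 - 1) / (1 - (2 - SQ2) * x)

d2 = 0.3                 # any value of ||P-Q||^2 < 1/2 will do
a, b = g(d2), g(1 - d2)  # f would have to swap a and b: impossible if f increases

def h(t):
    # The map t -> (1-t)/(1-t/2) from STEP 2; it is an involution on (0,1).
    return (1 - t) / (1 - t / 2)

t_star = 2 - SQ2         # the unique solution of t = h(t) in (0,1)
```

With \(d2 < \tfrac{1}{2}\) we get \(a < b\), so a strictly increasing f cannot satisfy \(f(a)=b\) and \(f(b)=a\) simultaneously.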
Note that for every \(0\le \vartheta \le \tfrac{\pi }{2}\) and \(0\le \mu < 2\pi \) we have $$\begin{aligned} (P_{(\cos \vartheta , e^{i\mu } \sin \vartheta )})^\perp= & {} \left[ \begin{matrix} \sin ^2\vartheta &{} -e^{-i\mu } \cos \vartheta \sin \vartheta \\ -e^{i\mu } \cos \vartheta \sin \vartheta &{} \cos ^2\vartheta \\ \end{matrix}\right] \\= & {} \left[ \begin{matrix} 0 &{} 1 \\ -1 &{} 0 \\ \end{matrix}\right] (P_{(\cos \vartheta , e^{i\mu } \sin \vartheta )})^t \left[ \begin{matrix} 0 &{} 1 \\ -1 &{} 0 \\ \end{matrix}\right] ^* \end{aligned}$$ where \(\cdot ^t\) stands for the transposition, and we used the notation of the Busch–Gudder theorem. It is well-known, and can be verified by an easy computation, that we have \(A^t = KAK^*\) for every qubit effect A, where K is the coordinate-wise conjugation antiunitary operator: \(K(z_1,z_2) = (\overline{z_1},\overline{z_2})\) \((z_1,z_2\in {\mathbb {C}})\). Therefore from now on we may assume without loss of generality that we have $$\begin{aligned} \phi (P) = P \quad (P\in {{\mathcal {P}}}_1({\mathbb {C}}^2)), \end{aligned}$$ i.e. \(\phi \) fixes all rank-one projections. Finally, observe that (30) and (31) imply $$\begin{aligned} f\left( \tfrac{\sqrt{2}-1}{1-(2-\sqrt{2}) \tau }\right) = \tfrac{\sqrt{2}-1}{1-(2-\sqrt{2}) \tau } \quad (0<\tau <1), \end{aligned}$$ thus we obtain \(\phi (tP) = tP\) for all \(P\in {{\mathcal {P}}}_1({\mathbb {C}}^2)\) and \(\sqrt{2}-1< t < 1\). But this further implies $$\begin{aligned}&\left\{ \tfrac{1-t}{1-t\Vert P-Q\Vert ^2} Q:Q\in {{\mathcal {P}}}_1({\mathbb {C}}^2)\setminus \{P,P^\perp \} \right\} = \ell _{tP} = \phi \left( \ell _{tP}\right) \\&\quad = \left\{ f\left( \tfrac{1-t}{1-t\Vert P-Q\Vert ^2}\right) Q:Q\in {{\mathcal {P}}}_1({\mathbb {C}}^2)\setminus \{P,P^\perp \} \right\} \end{aligned}$$ for all \(\sqrt{2}-1< t < 1\), from which we conclude $$\begin{aligned} \phi (tP) = tP \quad (0<t<1), \end{aligned}$$ i.e. \(\phi \) fixes all rank-one effects.
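The two matrix identities used in this reduction, \(P^\perp = JP^tJ^*\) with the rotation matrix J displayed above, and \(A^t = KAK^*\) (which, for a self-adjoint A, is just entrywise conjugation), can be checked directly. A hedged numerical sketch (helper names ours, not from the paper):

```python
import cmath
import math

def proj(theta, mu):
    # Rank-one projection P_{(cos t, e^{i mu} sin t)}, Busch-Gudder notation.
    v = (math.cos(theta), cmath.exp(1j * mu) * math.sin(theta))
    return [[v[i] * v[j].conjugate() for j in range(2)] for i in range(2)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def conj_entries(A):
    # K A K for the coordinate-wise conjugation K; equals A^t whenever A = A^*.
    return [[A[i][j].conjugate() for j in range(2)] for i in range(2)]

J = [[0, 1], [-1, 0]]
Jstar = [[0, -1], [1, 0]]  # adjoint of J (J is real, so J^* = J^t)

P = proj(0.9, 2.1)         # arbitrary sample parameters
Pperp = [[(1 if i == j else 0) - P[i][j] for j in range(2)] for i in range(2)]
JPtJs = matmul(matmul(J, transpose(P)), Jstar)  # should equal P^perp
```

The first identity says that, up to the unitary J, orthocomplementation of rank-one projections is implemented by transposition; the second reduces transposition to the antiunitary K, which is why the assumption \(\phi (P)=P\) costs no generality.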
From here we only need to apply Corollary 2.11 and transform back to our original \(\phi \) to complete the proof. \(\square \) Proof of Theorem 1.3 in the General Case Here we prove the general case of our main theorem, utilising the low-dimensional case proved above. We start with two lemmas. Let \(P \in {{\mathcal {P}}}(H)\setminus \mathcal {SC}(H)\) and \(A \in {{\mathcal {E}}}(H) \setminus \{ P, P^\perp \}\). Then there exists a rank-one effect \(R\in {{\mathcal {F}}}_1(H)\) such that \(R \sim A\) but \(R \not \sim P\). Assume that \(A \in {{\mathcal {E}}}(H)\) is such that \(A^\sim \cap {{\mathcal {F}}}_1(H) \subseteq P^\sim = P^c\) holds. We have to show that then either \(A=P\), or \(A=P^\perp \). Clearly, A is not a scalar effect. By Corollary 2.7 we obtain that $$\begin{aligned} \Lambda (A,Q) + \Lambda (A^\perp ,Q) \le \Lambda (P,Q) + \Lambda (P^\perp ,Q) \qquad (Q\in {{\mathcal {P}}}_1(H)). \end{aligned}$$ Notice that the set $$\begin{aligned} \mathrm{supp}\left( \Lambda (P,\cdot ) + \Lambda (P^\perp ,\cdot )\right) := \left\{ Q\in {{\mathcal {P}}}_1(H) :\Lambda (P,Q) + \Lambda (P^\perp ,Q) > 0 \right\} \end{aligned}$$ has two connected components (with respect to the operator norm topology), namely $$\begin{aligned} \left\{ Q\in {{\mathcal {P}}}_1(H) :\mathrm{Im\,}Q \subset \mathrm{Im\,}P \right\} \;\; \text {and} \;\; \left\{ Q\in {{\mathcal {P}}}_1(H) :\mathrm{Im\,}Q \subset \mathrm{Ker}P \right\} . \end{aligned}$$ However, by the Busch–Gudder theorem we obtain that $$\begin{aligned}&\left\{ Q\in {{\mathcal {P}}}_1(H) :\mathrm{Im\,}Q \subset \mathrm{Im\,}A \cup \mathrm{Im\,}(I-A) \right\} \\&\quad \subseteq \left\{ Q\in {{\mathcal {P}}}_1(H) :\mathrm{Im\,}Q \subset \mathrm{Im\,}A^{1/2} \cup \mathrm{Im\,}(I-A)^{1/2} \right\} \\&\quad \subseteq \mathrm{supp}\left( \Lambda (A,\cdot ) + \Lambda (A^\perp ,\cdot )\right) \subseteq \mathrm{supp}\left( \Lambda (P,\cdot ) + \Lambda (P^\perp ,\cdot )\right) .
\end{aligned}$$ Since \(\mathrm{supp}\left( \Lambda (P,\cdot ) + \Lambda (P^\perp ,\cdot )\right) \) is a closed set, we obtain $$\begin{aligned}&\left\{ Q\in {{\mathcal {P}}}_1(H) :\mathrm{Im\,}Q \subset (\mathrm{Im\,}A)^- \cup (\mathrm{Im\,}(I-A))^- \right\} \nonumber \\&\quad = \left\{ Q\in {{\mathcal {P}}}_1(H) :\mathrm{Im\,}Q \subset (\mathrm{Ker}A)^\perp \cup (\mathrm{Ker}(I-A))^\perp \right\} \nonumber \\&\quad \subseteq \mathrm{supp}\left( \Lambda (P,\cdot ) + \Lambda (P^\perp ,\cdot )\right) . \end{aligned}$$ Notice that the left-hand side of (34) is connected if and only if A is not a projection, in which case it must be a subset of one of the components of the right-hand side. However, this is impossible because the left-hand side contains a maximal set of pairwise orthogonal rank-one projections. Therefore \(A\in {{\mathcal {P}}}(H)\), and in particular \(\mathrm{supp}\left( \Lambda (A,\cdot ) + \Lambda (A^\perp ,\cdot )\right) \) has two connected components. From here using (33) for both A and P we easily complete the proof. \(\square \) We introduce a new relation on \({{\mathcal {E}}}(H)\setminus \mathcal {SC}(H)\). For \(A,B \in {{\mathcal {E}}}(H)\setminus \mathcal {SC}(H)\) we write \(A \prec B\) if and only if for every \(C\in A^\sim \setminus \mathcal {SC}(H)\) there exists a \(D\in B^\sim \setminus \mathcal {SC}(H)\) such that \(C^\sim \subseteq D^\sim \). Clearly, for every non-scalar effect B we have \(B \prec B\) and \(B^\perp \prec B\). In particular \(\prec \) is a reflexive relation, but it is not antisymmetric. It is also straightforward from the definition that \(\prec \) is a transitive relation, i.e. \(A \prec B\) and \(B \prec C\) imply \(A \prec C\). We proceed with characterising non-trivial projections in terms of the relation of coexistence. Assume that \(A \in {{\mathcal {E}}}(H)\setminus \mathcal {SC}(H)\). 
Then the following two statements are equivalent: \(A \in {{\mathcal {P}}}(H)\), \(\# \{ B \in {{\mathcal {E}}}(H)\setminus \mathcal {SC}(H) :B \prec A \} = 2\). (i)\(\Longrightarrow \)(ii): Suppose that \(B \in {{\mathcal {E}}}(H)\setminus \mathcal {SC}(H)\), \(B\ne A\), \(B\ne A^\perp \) and \(B \prec A\). We need to show that this assumption leads to a contradiction. By Lemma 4.1 there exists a rank-one effect tQ, with some \(Q\in {{\mathcal {P}}}_1(H)\) and \(t \in (0,1]\), such that \(tQ \sim B\) but \(tQ\not \sim A\). From \(B \prec A\) we know that there exists a non-scalar effect D such that $$\begin{aligned} (tQ)^\sim \subseteq D^\sim \ \ \ \mathrm{and} \ \ \ D \sim A. \end{aligned}$$ By Lemma 2.12 (a) we have $$\begin{aligned} D \in (tQ)''\cap {{\mathcal {E}}}(H) = Q''\cap {{\mathcal {E}}}(H) = \left\{ sQ + r Q^\perp \in {{\mathcal {E}}}(H) :s,r \in [0,1] \right\} , \end{aligned}$$ where the latter equation is easy to see (even in non-separable Hilbert spaces). Since we also have \(D\in A^c\), we obtain \(Q\in A^c\), hence the contradiction \(tQ\in A^c = A^\sim \). (ii)\(\Longrightarrow \)(i): Here we use contraposition, so let us assume that \(A \in \left( {{\mathcal {E}}}(H)\setminus {{\mathcal {P}}}(H)\right) \setminus \mathcal {SC}(H)\). We shall construct a non-trivial projection P (which is obviously different from both A and \(A^\perp \)) such that \(P\prec A\). First, notice that there exists \(0< \varepsilon < \tfrac{1}{2}\) such that \(H_A\left( \left( \varepsilon , 1-\varepsilon \right] \right) \notin \left\{ \{0\}, H \right\} \). Indeed, otherwise an elementary examination of the spectrum gives that \(\sigma (A) \subseteq \{\varepsilon _0,1-\varepsilon _0\}\) holds with some \(0< \varepsilon _0 < \tfrac{1}{2}\). As A is non-scalar, we actually get \(\sigma (A) = \{\varepsilon _0,1-\varepsilon _0\}\), which implies that \(H_A\left( \left( \varepsilon _0, 1-\varepsilon _0 \right] \right) \) is a non-trivial subspace.
Let us now consider the orthogonal decomposition \(H = H_1 \oplus H_2 \oplus H_3\) where $$\begin{aligned} H_1 = H_A\left( \left[ 0, \varepsilon \right] \right) , \;\; H_2 = H_A\left( \left( \varepsilon , 1-\varepsilon \right] \right) \;\; \text {and} \;\; H_3 = H_A\left( \left( 1-\varepsilon , 1 \right] \right) . \end{aligned}$$ With respect to this orthogonal decomposition we have $$\begin{aligned} A = \left[ \begin{matrix} A_1 &{} 0 &{} 0 \\ 0 &{} A_2 &{} 0 \\ 0 &{} 0 &{} A_3 \end{matrix} \right] \in {{\mathcal {E}}}(H_1 \oplus H_2 \oplus H_3). \end{aligned}$$ Since coexistence is invariant under taking the ortho-complements, we may assume without loss of generality that \(H_3\ne \{0\}\). Let us set $$\begin{aligned} P = \left[ \begin{matrix} I &{} 0 &{} 0 \\ 0 &{} I &{} 0 \\ 0 &{} 0 &{} 0 \end{matrix} \right] \notin \mathcal {SC}(H_1 \oplus H_2 \oplus H_3). \end{aligned}$$ Our goal is to show that \(P \prec A\). Let C be an arbitrary non-scalar effect coexistent with P. Then, since C and P commute, the matrix form of C is $$\begin{aligned} C = \left[ \begin{matrix} C_{11} &{} C_{12} &{} 0 \\ C_{12}^* &{} C_{22} &{} 0 \\ 0 &{} 0 &{} C_{33} \end{matrix} \right] \in {{\mathcal {E}}}(H_1 \oplus H_2 \oplus H_3). \end{aligned}$$ Consider the effect \(D := \varepsilon \cdot C\) and notice that $$\begin{aligned} \varepsilon \cdot \left[ \begin{matrix} C_{11} &{} C_{12} &{} 0 \\ C_{12}^* &{} C_{22} &{} 0 \\ 0 &{} 0 &{} 0 \end{matrix} \right] \le I-A \;\; \text {and} \;\; \varepsilon \cdot \left[ \begin{matrix} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} C_{33} \end{matrix} \right] \le A. \end{aligned}$$ Clearly, by Lemmas 2.3 and 2.12 we have \(D\sim A\) and \(C^\sim \subseteq D^\sim \), which completes the proof. \(\square \) Next, we characterise commutativity preservers on \({{\mathcal {P}}}(H)\). We note that the following theorem has been proved before implicitly in [23] for separable spaces, and was stated explicitly in [24, Theorem 2.8]. 
In order to prove the theorem for general spaces, one only has to use the ideas of [23]; however, we decided to include the proof for the sake of completeness and clarity. Let H be a Hilbert space of dimension at least three and \(\phi :{{\mathcal {P}}}(H)\rightarrow {{\mathcal {P}}}(H)\) be a bijective mapping that preserves commutativity in both directions, i.e. $$\begin{aligned} PQ = QP \;\; \iff \;\; \phi (P)\phi (Q) = \phi (Q)\phi (P) \qquad (P,Q \in {{\mathcal {P}}}(H)). \end{aligned}$$ Then there exists a unitary or antiunitary operator \(U:H\rightarrow H\) such that $$\begin{aligned} \phi (P) \in \{ UPU^*, UP^\perp U^* \} \qquad (P \in {{\mathcal {P}}}(H)). \end{aligned}$$ For an arbitrary set \({\mathcal {M}} \subseteq {{\mathcal {P}}}(H)\) let us use the following notation: \({\mathcal {M}}^{\mathfrak {c}}:= {\mathcal {M}}^c \cap {{\mathcal {P}}}(H)\) and \({\mathcal {M}}^{{\mathfrak {c}}{\mathfrak {c}}} := ({\mathcal {M}}^{\mathfrak {c}})^{\mathfrak {c}}\). By the properties of \(\phi \) we immediately get \(\phi ({\mathcal {M}}^{\mathfrak {c}}) = \phi ({\mathcal {M}})^{\mathfrak {c}}\) and \(\phi ({\mathcal {M}}^{{\mathfrak {c}}{\mathfrak {c}}}) = \phi ({\mathcal {M}})^{{\mathfrak {c}}{\mathfrak {c}}}\) for every subset \({\mathcal {M}}\). Next, let P and Q be two arbitrary commuting projections.
Then (for instance by Halmos's two projections theorem) we have $$\begin{aligned} P = \left[ \begin{matrix} I &{} 0 &{} 0 &{} 0 \\ 0 &{} I &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 \\ \end{matrix}\right] \;\; \text {and} \;\; Q = \left[ \begin{matrix} I &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} I &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 \\ \end{matrix}\right] \in {{\mathcal {B}}}(H_1\oplus H_2 \oplus H_3 \oplus H_4) \end{aligned}$$ where \(H_1 = \mathrm{Im\,}P \cap \mathrm{Im\,}Q\), \(H_2 = \mathrm{Im\,}P \cap \mathrm{Ker}Q\), \(H_3 = \mathrm{Ker}P \cap \mathrm{Im\,}Q\), \(H_4 = \mathrm{Ker}P \cap \mathrm{Ker}Q\) and \(H = H_1\oplus H_2 \oplus H_3 \oplus H_4\). Note that some of these subspaces might be trivial. We observe that $$\begin{aligned} \{P,Q\}^{{\mathfrak {c}}{\mathfrak {c}}} = (\{P,Q\}^{\mathfrak {c}})^{\mathfrak {c}}&= \left\{ \left[ \begin{matrix} R_1 &{} 0 &{} 0 &{} 0 \\ 0 &{} R_2 &{} 0 &{} 0 \\ 0 &{} 0 &{} R_3 &{} 0 \\ 0 &{} 0 &{} 0 &{} R_4 \\ \end{matrix}\right] :R_j \in {{\mathcal {P}}}(H_j),\; j=1,2,3,4 \right\} ^{\mathfrak {c}}\\&= \left\{ \left[ \begin{matrix} \lambda _1 I &{} 0 &{} 0 &{} 0 \\ 0 &{} \lambda _2 I &{} 0 &{} 0 \\ 0 &{} 0 &{} \lambda _3 I &{} 0 \\ 0 &{} 0 &{} 0 &{} \lambda _4 I \\ \end{matrix}\right] :\lambda _j \in \{0,1\},\; j=1,2,3,4 \right\} . \end{aligned}$$ Hence we conclude that \(\#\{P,Q\}^{{\mathfrak {c}}{\mathfrak {c}}} = 2^{\#\{j:H_j \ne \{0\}\}}\). In particular, \(\#\{P,Q\}^{{\mathfrak {c}}{\mathfrak {c}}} = 2\) if and only if \(P,Q\in \{0,I\}\), and \(\#\{P,Q\}^{{\mathfrak {c}}{\mathfrak {c}}} = 4\) if and only if either \(P\notin \{0,I\}\) and \(Q\in \{I,0,P,P^\perp \}\), or \(Q\notin \{0,I\}\) and \(P\in \{I,0,Q,Q^\perp \}\).
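The cardinality formula \(\#\{P,Q\}^{{\mathfrak {c}}{\mathfrak {c}}} = 2^{\#\{j:H_j \ne \{0\}\}}\) can be sanity-checked in a small finite-dimensional example. The sketch below (our own; `commutant_basis` is a helper we define, working over real matrices, which suffices for these real diagonal projections) takes commuting projections on \(\mathbb{C}^4\) for which all four subspaces \(H_j\) are one-dimensional, computes the commutant and bicommutant numerically, and confirms that exactly \(2^4 = 16\) projections commute with everything in the commutant.

```python
import numpy as np
from itertools import product

def commutant_basis(mats, n):
    """Basis of {X : XM = MX for all M in mats} among real n x n matrices,
    computed as the kernel of the linear map X -> ([X, M1], [X, M2], ...)."""
    cols = []
    for k in range(n * n):
        X = np.zeros((n, n))
        X[divmod(k, n)] = 1.0
        cols.append(np.concatenate([(X @ M - M @ X).ravel() for M in mats]))
    _, s, vt = np.linalg.svd(np.array(cols).T)
    rank = int(np.sum(s > 1e-9))
    return [row.reshape(n, n) for row in vt[rank:]]

P = np.diag([1.0, 1.0, 0.0, 0.0])
Q = np.diag([1.0, 0.0, 1.0, 0.0])   # PQ = QP; all four H_j are 1-dimensional

comm = commutant_basis([P, Q], 4)   # {P,Q}'  (here: the diagonal matrices)
bicomm = commutant_basis(comm, 4)   # {P,Q}'' (again the diagonal matrices)
assert len(comm) == 4 and len(bicomm) == 4

# Projections in {P,Q}'' are the 0/1 diagonal matrices: 2^4 = 16 of them,
# and each commutes with every element of the commutant.
projs = [np.diag(np.array(bits, dtype=float)) for bits in product([0, 1], repeat=4)]
assert len(projs) == 16
assert all(np.allclose(R @ B, B @ R) for R in projs for B in comm)
print("# {P,Q}^cc =", len(projs))
```

With one of the \(H_j\) trivial the same computation yields \(2^3 = 8\) projections, matching the exponent \(\#\{j:H_j\ne\{0\}\}\) in the formula.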
Now, we easily conclude the following characterisation of rank-one and co-rank-one projections: $$\begin{aligned} P \;\text {or}\; P^\perp \in {{\mathcal {P}}}_1(H) \;\;\iff \;\; \#\{P,Q\}^{{\mathfrak {c}}{\mathfrak {c}}} \in \{4,8\} \;\;\text {holds for all}\; Q\in P^{\mathfrak {c}}. \end{aligned}$$ This implies that $$\begin{aligned} \phi (\{P :P \;\text {or}\; P^\perp \in {{\mathcal {P}}}_1(H)\}) = \{P :P \;\text {or}\; P^\perp \in {{\mathcal {P}}}_1(H)\}. \end{aligned}$$ Note that we also have \(\phi (P^\perp ) = \phi (P)^\perp \) for every \(P\in {{\mathcal {P}}}(H)\), as \(P^{\mathfrak {c}}= Q^{\mathfrak {c}}\) holds exactly when \(P=Q\) or \(P+Q = I\). Since changing the images of some pairs of ortho-complemented projections to their ortho-complements does not change the property (35), we may assume without loss of generality that \(\phi ({{\mathcal {P}}}_1(H)) = {{\mathcal {P}}}_1(H)\). It is easy to see that two rank-one projections commute if and only if either they coincide, or they are orthogonal to each other. Thus, as \(\dim H \ge 3\), Uhlhorn's theorem [32] gives that there exists a unitary or antiunitary operator \(U:H\rightarrow H\) such that $$\begin{aligned} \phi (P) = UPU^* \qquad (P \in {{\mathcal {P}}}_1(H)). \end{aligned}$$ Finally, note that for every projection \(Q\in {{\mathcal {P}}}(H)\) we have $$\begin{aligned} Q^{\mathfrak {c}}\cap {{\mathcal {P}}}_1(H) = \{ P\in {{\mathcal {P}}}_1(H) :\mathrm{Im\,}P \subset \mathrm{Im\,}Q \cup \mathrm{Ker}Q \}, \end{aligned}$$ from which we easily complete the proof. \(\square \) Before we prove Theorem 1.3 in the general case, we need one more technical lemma for non-separable Hilbert spaces. We will use the notation \({{\mathcal {E}}}_{fs}(H)\) for the set of all effects whose spectrum has finitely many elements. For all \(A\in {{\mathcal {E}}}_{fs}(H)\) we have $$\begin{aligned} A^{cc} = A'' \cap {{\mathcal {E}}}(H) = \{ p(A)\in {{\mathcal {E}}}(H) :p \text { is a polynomial} \}.
\end{aligned}$$ We only have to observe the following for all \(A\in {{\mathcal {E}}}(H)\) with \(\#\sigma (A) = n \in {\mathbb {N}}\), where \(E_1, \dots E_n\) are the spectral projections and \(H_j = \mathrm{Im\,}E_j\) \((j=1,2,\dots n)\): $$\begin{aligned} A^{cc}&= \left( \bigcap _{j=1}^n E_j^c \right) ^c = \left\{ \bigoplus _{j=1}^n B_j:B_j \in {{\mathcal {E}}}\left( H_j\right) \text { for all } j \right\} ^c \\&= \left\{ \sum _{j=1}^n \mu _j E_j :\mu _j \in [0,1] \text { for all } j \right\} \\&= \left\{ \bigoplus _{j=1}^n T_j :T_j \in {{\mathcal {B}}}(H_j) \text { for all } j \right\} ' \cap {{\mathcal {E}}}(H) = \left( \bigcap _{j=1}^n E_j' \right) ' \cap {{\mathcal {E}}}(H) = A'' \cap {{\mathcal {E}}}(H). \end{aligned}$$ \(\square \) Now, we are in the position to prove our second main theorem in the general case. Proof of Theorem 1.3 for spaces of dimension at least three The proof will be divided into the following steps: we show that \(\phi \) maps \({{\mathcal {E}}}_{fs}(H)\) onto itself, we prove that \(\phi \) has the form (2) on \({{\mathcal {E}}}_{fs}(H)\setminus \mathcal {SC}(H)\), we show that \(\phi \) has the form (2) on \({{\mathcal {E}}}(H)\setminus \mathcal {SC}(H)\). STEP 1: First, similarly as in the previous section, we easily get the existence of a bijective function \(g:[0,1] \rightarrow [0,1]\) such that $$\begin{aligned} \phi (t I) = g(t)I \qquad (t\in [0,1]). \end{aligned}$$ Of course, the properties of \(\phi \) imply \(\phi (A)^\sim = \phi (A^\sim )\) for all \(A\in {{\mathcal {E}}}(H)\), and also $$\begin{aligned} B^\sim \subseteq A^\sim \;\;\iff \;\; \phi (B)^\sim \subseteq \phi (A)^\sim \qquad (A,B\in {{\mathcal {E}}}(H)). \end{aligned}$$ From the latter it follows that $$\begin{aligned} B \prec A \;\;\iff \;\; \phi (B) \prec \phi (A) \qquad (A,B\in {{\mathcal {E}}}(H)\setminus \mathcal {SC}(H)). 
\end{aligned}$$ Hence by Lemma 4.2 we obtain $$\begin{aligned} \phi ({{\mathcal {P}}}(H)\setminus \{0,I\}) = {{\mathcal {P}}}(H)\setminus \{0,I\}, \end{aligned}$$ and therefore Lemma 2.1 (b) implies that the restriction \(\phi |_{{{\mathcal {P}}}(H)\setminus \{0,I\}}\) preserves commutativity in both directions. Applying Theorem 4.3 then gives that up to unitary–antiunitary equivalence and element-wise ortho-complementation, we have $$\begin{aligned} \phi (P) = P \qquad (P\in {{\mathcal {P}}}(H)\setminus \{0,I\}). \end{aligned}$$ From now on we may assume without loss of generality that this is the case. Next, by the spectral theorem [7, Theorem IX.2.2] we have $$\begin{aligned} A^c = \bigcap _{\Delta \in \mathcal {B}_{[0,1]}} E_A(\Delta )^c = \bigcap _{\Delta \in \mathcal {B}_{[0,1]}} E_A(\Delta )^\sim \qquad (A\in {{\mathcal {E}}}(H)). \end{aligned}$$ Therefore we obtain $$\begin{aligned} \phi (A^c) = \bigcap _{\Delta \in \mathcal {B}_{[0,1]}} \phi (E_A(\Delta ))^\sim = \bigcap _{\Delta \in \mathcal {B}_{[0,1]}} E_A(\Delta )^\sim = A^c \quad (A\in {{\mathcal {E}}}(H)), \end{aligned}$$ and thus also $$\begin{aligned} \phi (A^{cc}) = \phi \left( \bigcap _{B\in A^c} B^c\right) = \bigcap _{B\in A^c} \phi \left( B^c\right) = \bigcap _{B\in A^c} B^c = A^{cc} \qquad (A\in {{\mathcal {E}}}(H)). \end{aligned}$$ In particular, we have $$\begin{aligned} \phi (A)\in A^{cc} \qquad (A\in {{\mathcal {E}}}(H)). \end{aligned}$$ Hence for all \(A\in {{\mathcal {E}}}_{fs}(H)\) there exists a polynomial \(p_A\) such that \(p_A(\sigma (A)) \subset [0,1]\) and $$\begin{aligned} \phi (A) = p_A(A) \qquad (A\in {{\mathcal {E}}}_{fs}(H)). \end{aligned}$$ As a similar statement holds for \(\phi ^{-1}\), we immediately get \(\phi ({{\mathcal {E}}}_{fs}(H)) = {{\mathcal {E}}}_{fs}(H)\). Also, notice that \(\#\sigma (\phi (A)) = \#\sigma (p_A(A)) \le \#\sigma (A)\) and \(\#\sigma (\phi ^{-1}(A)) \le \#\sigma (A)\) hold for all \(A\in {{\mathcal {E}}}_{fs}(H)\).
Whence we obtain $$\begin{aligned} \#\sigma (\phi (A))= \#\sigma (A) \qquad (A\in {{\mathcal {E}}}_{fs}(H)). \end{aligned}$$ In particular, the restriction \(p_A|_{\sigma (A)}\) is injective. STEP 2: Now, let M be an arbitrary two-dimensional subspace of H and let \(P_M\in {{\mathcal {P}}}(H)\) be the orthogonal projection onto M. Consider two arbitrary effects \(A,B\in (P_M)^\sim \cap {{\mathcal {E}}}_{fs}(H)\) which therefore have the following matrix representations: $$\begin{aligned} A = \left[ \begin{matrix} A_M &{} 0 \\ 0 &{} A_{M^\perp } \end{matrix}\right] \quad \text {and}\quad B = \left[ \begin{matrix} B_M &{} 0 \\ 0 &{} B_{M^\perp } \end{matrix}\right] \in {{\mathcal {E}}}_{fs}(M \oplus M^\perp ). \end{aligned}$$ By (38) we have $$\begin{aligned} \phi (A) = p_A(A) = \left[ \begin{matrix} p_A(A_M) &{} 0 \\ 0 &{} p_A(A_{M^\perp }) \end{matrix}\right] \quad \text {and}\quad \phi (B) = p_B(B) = \left[ \begin{matrix} p_B(B_M) &{} 0 \\ 0 &{} p_B(B_{M^\perp }) \end{matrix}\right] . \end{aligned}$$ Note that by (39), the polynomial \(p_A\) acts injectively on \(\sigma (A)\), therefore $$\begin{aligned} A_M\in \mathcal {SC}(M) \;\;\iff \;\; p_A(A_M)\in \mathcal {SC}(M), \end{aligned}$$ and of course, similarly for B.
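To see concretely how \(p_A\) interacts with the block structure, here is a small numerical sketch (our own; the blocks, eigenvalues and interpolation values are made up): a polynomial taking prescribed injective values on \(\sigma(A)\), built by Lagrange interpolation, acts blockwise through the functional calculus and sends scalar blocks to scalar blocks and non-scalar blocks to non-scalar ones.

```python
import numpy as np

AM  = np.diag([0.2, 0.7])   # non-scalar block acting on M
AMp = 0.5 * np.eye(2)       # scalar block acting on M^perp
Z   = np.zeros((2, 2))
A   = np.block([[AM, Z], [Z, AMp]])   # block-diagonal, so A coexists with P_M

# Lagrange polynomial with p(0.2), p(0.5), p(0.7) prescribed injectively in [0,1].
nodes, vals = [0.2, 0.5, 0.7], [0.1, 0.4, 0.9]

def p(X):
    out = np.zeros_like(X)
    for nj, vj in zip(nodes, vals):
        term = vj * np.eye(X.shape[0])
        for nm in nodes:
            if nm != nj:
                term = term @ (X - nm * np.eye(X.shape[0])) / (nj - nm)
        out = out + term
    return out

pA = p(A)
assert np.allclose(pA[:2, :2], p(AM)) and np.allclose(pA[2:, 2:], p(AMp))
assert np.allclose(pA[:2, 2:], Z)               # block structure is preserved
assert np.allclose(p(AMp), 0.4 * np.eye(2))     # scalar block stays scalar
assert not np.allclose(p(AM), p(AM)[0, 0] * np.eye(2))  # non-scalar stays non-scalar
```

The last two assertions are exactly the displayed equivalence: since \(p\) is injective on \(\sigma(A)\), a block is scalar after applying \(p\) if and only if it was scalar before.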
We observe that by Lemma 2.2 the following two equations hold: $$\begin{aligned} A^\sim \bigcap \left[ \begin{matrix} I &{} 0 \\ 0 &{} 0 \end{matrix}\right] ^\sim \bigcap \left( \bigcap _{P\in {{\mathcal {P}}}_1(M^\perp )} \left[ \begin{matrix} 0 &{} 0 \\ 0 &{} P \end{matrix}\right] ^\sim \right) = \left\{ \left[ \begin{matrix} D &{} 0 \\ 0 &{} \lambda I \end{matrix}\right] :D\sim A_M, \lambda \in [0,1] \right\} \end{aligned}$$ $$\begin{aligned} \phi (A)^\sim \bigcap \left[ \begin{matrix} I &{} 0 \\ 0 &{} 0 \end{matrix}\right] ^\sim \bigcap \left( \bigcap _{P\in {{\mathcal {P}}}_1(M^\perp )} \left[ \begin{matrix} 0 &{} 0 \\ 0 &{} P \end{matrix}\right] ^\sim \right) = \left\{ \left[ \begin{matrix} D &{} 0 \\ 0 &{} \lambda I \end{matrix}\right] :D\sim p_A(A_M), \lambda \in [0,1] \right\} . \end{aligned}$$ It is important to observe that by (38) the set in (41) is the \(\phi \)-image of (40). Thus we obtain the following equivalence if \(A_M \notin \mathcal {SC}(M)\): $$\begin{aligned} B_M \in \{A_M, A_M^\perp \} \;&\iff \; A_M^\sim = B_M^\sim \nonumber \\&\iff \; \left\{ \left[ \begin{matrix} D &{} 0 \\ 0 &{} \lambda I \end{matrix}\right] :D\sim A_M, \lambda \in [0,1]\right\} \nonumber \\&= \left\{ \left[ \begin{matrix} E &{} 0 \\ 0 &{} \mu I \end{matrix}\right] :E\sim B_M, \mu \in [0,1]\right\} \nonumber \\&\iff \; \left\{ \left[ \begin{matrix} D &{} 0 \\ 0 &{} \lambda I \end{matrix}\right] :D\sim p_A(A_M), \lambda \in [0,1]\right\} \nonumber \\&= \left\{ \left[ \begin{matrix} E &{} 0 \\ 0 &{} \mu I \end{matrix}\right] :E\sim p_B(B_M), \mu \in [0,1]\right\} \nonumber \\&\iff \; \left( p_A(A_M)\right) ^\sim = \left( p_B(B_M)\right) ^\sim \nonumber \\&\iff \; p_B(B_M) \in \left\{ p_A(A_M), I-p_A(A_M) \right\} . \end{aligned}$$ Now, we are in the position to use the previously proved two-dimensional version. 
Let $$\begin{aligned} {\mathfrak {E}}(M) := \left\{ \left\{ D,D^\perp \right\} :D\in {{\mathcal {E}}}(M)\setminus \mathcal {SC}(M) \right\} \cup \{\mathcal {SC}(M)\}, \end{aligned}$$ and let us say that two elements of \({\mathfrak {E}}(M)\) are coexistent, in notation \(\approx \), if either one of them is \(\mathcal {SC}(M)\), or the two elements are \(\{D,D^\perp \}\) and \(\{E,E^\perp \}\) with \(D\sim E\). Clearly, the bijective restriction $$\begin{aligned} \phi |_{(P_M)^\sim \cap {{\mathcal {E}}}_{fs}(H)}:(P_M)^\sim \cap {{\mathcal {E}}}_{fs}(H) \rightarrow (P_M)^\sim \cap {{\mathcal {E}}}_{fs}(H) \end{aligned}$$ induces a well-defined bijection on \({\mathfrak {E}}(M)\) by $$\begin{aligned} \mathcal {SC}(M)\mapsto \mathcal {SC}(M), \; \{A_M,A_M^\perp \}\mapsto \{p_A(A_M), p_A(A_M)^\perp \} \qquad (A_M \notin \mathcal {SC}(M)). \end{aligned}$$ Notice that this map also preserves the relation \(\approx \) in both directions. Indeed, for all \(A,B\in (P_M)^\sim \cap {{\mathcal {E}}}_{fs}(H)\), \(A_M,B_M \notin \mathcal {SC}(M)\) we have $$\begin{aligned} \{A_M, A_M^\perp \} \approx \{B_M, B_M^\perp \}&\;\iff \; {\hat{A}} := \left[ \begin{matrix} A_M &{} 0 \\ 0 &{} 0 \end{matrix}\right] \sim {\hat{B}} := \left[ \begin{matrix} B_M &{} 0 \\ 0 &{} 0 \end{matrix}\right] \\&\;\iff \; \left[ \begin{matrix} p_{{\hat{A}}}(A_M) &{} 0 \\ 0 &{} p_{{\hat{A}}}(0) I \end{matrix}\right] \sim \left[ \begin{matrix} p_{{\hat{B}}}(B_M) &{} 0 \\ 0 &{} p_{{\hat{B}}}(0) I \end{matrix}\right] \\&\;\iff \; \{p_{{\hat{A}}}(A_M), p_{{\hat{A}}}(A_M)^\perp \} \approx \{p_{{\hat{B}}}(B_M), p_{{\hat{B}}}(B_M)^\perp \} \\&\;\iff \; \{p_{A}(A_M), p_{A}(A_M)^\perp \} \approx \{p_{B}(B_M), p_{B}(B_M)^\perp \}.
\end{aligned}$$ Therefore, using the two-dimensional version of Theorem 1.3, we obtain a unitary or antiunitary operator \(U_M:M\rightarrow M\) such that $$\begin{aligned} p_A(A_M) \in \{U_M (A_M) U_M^*, U_M (A_M)^\perp U_M^*\} \qquad (A\in (P_M)^\sim \cap {{\mathcal {E}}}_{fs}(H), \; A_M\notin \mathcal {SC}(M)) \end{aligned}$$ $$\begin{aligned} p_A(A_M) \in \mathcal {SC}(M) \qquad (A\in (P_M)^\sim \cap {{\mathcal {E}}}_{fs}(H), \; A_M\in \mathcal {SC}(M)). \end{aligned}$$ Observe that this implies the following: for any pair of orthogonal unit vectors \(x,y\in M\) we must have either \(U_M({\mathbb {C}}\cdot x) = {\mathbb {C}}\cdot x\) and \(U_M({\mathbb {C}}\cdot y) = {\mathbb {C}}\cdot y\), or \(U_M({\mathbb {C}}\cdot x) = {\mathbb {C}}\cdot y\) and \(U_M({\mathbb {C}}\cdot y) = {\mathbb {C}}\cdot x\). As \(U_M\) is continuous, we have either the first case for all orthogonal pairs \({\mathbb {C}}\cdot x,{\mathbb {C}}\cdot y\), or the second for every such pair. But a similar statement holds for all two-dimensional subspaces, therefore it is easy to show that the second possibility cannot occur. Consequently, we have \(U_M({\mathbb {C}}\cdot x) = {\mathbb {C}}\cdot x\) for all unit vectors \(x\in M\), from which it follows that \(U_M\) is a scalar multiple of the identity operator. Thus we obtain the following for every two-dimensional subspace M: $$\begin{aligned} p_A(A_M) \in \{A_M, A_M^\perp \} \qquad (A\in (P_M)^\sim \cap {{\mathcal {E}}}_{fs}(H), \; A_M\notin \mathcal {SC}(M)). \end{aligned}$$ From here it is rather straightforward to obtain $$\begin{aligned} \phi (A) = p_A(A) \in \{A, A^\perp \} \qquad (A\in {{\mathcal {E}}}_{fs}(H)\setminus \mathcal {SC}(H)). \end{aligned}$$ STEP 3: Observe that (43) holds for every \(A\in {{\mathcal {F}}}(H)\), therefore an application of Theorem 1.1 and Corollary 2.5 completes the proof in the separable case.
As for the general case, let us consider an arbitrary effect \(A\in {{\mathcal {E}}}(H)\setminus {{\mathcal {E}}}_{fs}(H)\) and an orthogonal decomposition \(H = \oplus _{i\in \mathcal {I}} H_i\) such that each \(H_i\) is a separable invariant subspace of A. By (38) and Lemma 2.1 (b), each \(H_i\) is an invariant subspace also for \(\phi (A)\), in particular, we have $$\begin{aligned} A = \oplus _{i\in \mathcal {I}} A_i, \;\;\text {and}\;\; \phi (A) = \oplus _{i\in \mathcal {I}} \mathcal {A}_i \in {{\mathcal {E}}}(\oplus _{i\in \mathcal {I}} H_i). \end{aligned}$$ Without loss of generality we may assume from now on that there exists an \(i_0\in \mathcal {I}\) such that \(A_{i_0}\) is not a scalar effect. Now, let \(i\in \mathcal {I}\), \(F\in {{\mathcal {F}}}(H)\) and \(\mathrm{Im\,}F \subseteq H_{i}\) be arbitrary. Then by (43) we have $$\begin{aligned} A_{i} \sim P_{i}F|_{H_{i}} \;\;\iff \;\; A\sim F \;\;\iff \;\; \phi (A)\sim F \;\;\iff \;\; \mathcal {A}_{i} \sim P_{i}F|_{H_{i}}. \end{aligned}$$ In particular, \(A_{i}^\sim \cap {{\mathcal {F}}}(H_{i}) = \mathcal {A}_{i}^\sim \cap {{\mathcal {F}}}(H_{i})\), therefore by Theorem 1.1 we get that for all i we have either \(A_i, \mathcal {A}_i\in \mathcal {SC}(H_i)\), or \(A_i = \mathcal {A}_i\), or \(\mathcal {A}_i = A_i^\perp \). By considering \(A^\perp \) instead of A if necessary, we may assume that we have \(A_{i_0} = \mathcal {A}_{i_0}\). Finally, for any \(i_1\in \mathcal {I}\setminus \{i_0\}\) let us consider the orthogonal decomposition \(H = \oplus _{i\in \mathcal {I}\setminus \{i_0,i_1\}} H_i \oplus (H_{i_0} \oplus H_{i_1}) \). Similarly as above, we then get \(A_{i_0}\oplus A_{i_1} = \mathcal {A}_{i_0} \oplus \mathcal {A}_{i_1}\), and the proof is complete. \(\square \) A Remark on the Qubit Case Here we visualise the set \(A^\sim \cap {{\mathcal {F}}}_1({\mathbb {C}}^2)\) for a general rank-one qubit effect A. First, let us introduce Bloch's representation.
Consider the following vector space isomorphism between the space of all \(2 \times 2\) Hermitian matrices \({{\mathcal {B}}}_{sa}({\mathbb {C}}^2)\) and \({\mathbb {R}}^4\), see also [4]: $$\begin{aligned} \rho :{{\mathcal {B}}}_{sa}({\mathbb {C}}^2) \rightarrow {\mathbb {R}}^4, \quad \rho (A) = \rho (x_0\sigma _0+x_1\sigma _1+x_2\sigma _2+x_3\sigma _3) = (x_0,x_1,x_2,x_3), \end{aligned}$$ $$\begin{aligned} \sigma _0 = \left[ \begin{matrix} 1 &{} 0 \\ 0 &{} 1 \end{matrix}\right] , \; \sigma _1 = \left[ \begin{matrix} 0 &{} 1 \\ 1 &{} 0 \end{matrix}\right] , \; \sigma _2 = \left[ \begin{matrix} 0 &{} -i \\ i &{} 0 \end{matrix}\right] , \; \sigma _3 = \left[ \begin{matrix} 1 &{} 0 \\ 0 &{} -1 \end{matrix}\right] \end{aligned}$$ are the Pauli matrices. Clearly, we have \(\rho (0) = (0,0,0,0)\), \(\rho (I) = (1,0,0,0)\). The Bloch representation is usually defined as the restriction \(\rho |_{{{\mathcal {P}}}_1({\mathbb {C}}^2)}\) which maps \({{\mathcal {P}}}_1({\mathbb {C}}^2)\) onto a sphere of the three-dimensional affine subspace \(\{ (1/2,x_1,x_2,x_3) :x_j\in {\mathbb {R}}, j=1,2,3 \}\) with centre at (1/2, 0, 0, 0) and radius 1/2. Indeed, as the general form of a rank-one projection in \({\mathbb {C}}^2\) is $$\begin{aligned} P_{(\cos \vartheta , e^{i\mu } \sin \vartheta )} = \left[ \begin{matrix} \cos \vartheta \\ e^{i\mu } \sin \vartheta \end{matrix}\right] \left[ \begin{matrix} \cos \vartheta \\ e^{i\mu } \sin \vartheta \end{matrix}\right] ^* = \left[ \begin{matrix} \cos ^2\vartheta &{} e^{-i\mu } \cos \vartheta \sin \vartheta \\ e^{i\mu } \cos \vartheta \sin \vartheta &{} \sin ^2\vartheta \\ \end{matrix}\right] \end{aligned}$$ where \(0\le \vartheta \le \tfrac{\pi }{2}\) and \(0\le \mu < 2\pi \), a not too hard calculation gives that $$\begin{aligned} \rho (P_{(\cos \vartheta , e^{i\mu } \sin \vartheta )}) = \tfrac{1}{2} \cdot ( 1, \cos \mu \sin 2\vartheta , \sin \mu \sin 2\vartheta , \cos 2\vartheta ). 
\end{aligned}$$ Recall the remarkable angle doubling property of the Bloch representation, namely, we have \(\Vert P-Q\Vert = \sin \theta \) if and only if the angle between the vectors \(\rho (P) - \tfrac{1}{2}e_0\) and \(\rho (Q) - \tfrac{1}{2}e_0\) is exactly \(2\theta \). Next, we call a positive (semi-definite) element of \({{\mathcal {B}}}_{sa}({\mathbb {C}}^2)\) a density matrix if its trace is 1, or in other words, if it is a convex combination of some rank-one projections. Therefore \(\rho \) maps the set of all \(2\times 2\) density matrices onto the closed ball of the three-dimensional affine subspace \(\{ (1/2,x_1,x_2,x_3) :x_j\in {\mathbb {R}}, j=1,2,3 \}\) with centre at (1/2, 0, 0, 0) and radius 1/2. Hence, we see that the cone of all positive (semi-definite) \(2\times 2\) matrices is mapped onto the infinite cone spanned by (0, 0, 0, 0) and the aforementioned ball. Thus \(\rho \) maps \({{\mathcal {E}}}({\mathbb {C}}^2)\) onto the intersection of this cone and its reflection through the point \(\rho (\tfrac{1}{2}I) = (\tfrac{1}{2},0,0,0)\). We can re-write (44) as follows: $$\begin{aligned} \rho (P_{(\cos \vartheta , e^{i\mu } \sin \vartheta )}) = \tfrac{1}{2} \cdot ( e_0 + \sin 2\vartheta \cdot e_\mu + \cos 2\vartheta \cdot e_3), \end{aligned}$$ $$\begin{aligned} e_0 := (1,0,0,0), \; e_\mu := (0, \cos \mu , \sin \mu , 0), \; e_3 := (0, 0, 0, 1) \end{aligned}$$ is an orthonormal system in \({\mathbb {R}}^4\). Let \(S_\mu \) be the three-dimensional subspace spanned by \(e_0, e_\mu , e_3\). Then the set \(\rho ({{\mathcal {E}}}({\mathbb {C}}^2))\cap S_\mu \) can be visualised as a double cone of \({\mathbb {R}}^3\), by regarding \(e_0, e_\mu , e_3\) as the standard basis of \({\mathbb {R}}^3\), see Figure 2. Note that \(\rho ({{\mathcal {P}}}_1({\mathbb {C}}^2))\cap S_\mu \) is the circle where the boundaries of the two cones meet. Illustration of \(\rho ({{\mathcal {E}}}({\mathbb {C}}^2))\cap S_\mu \). 
The circle is \(\rho ({{\mathcal {P}}}_1({\mathbb {C}}^2))\cap S_\mu \) We continue with visualising the set \((tP_{(1, 0)})^\sim \) for an arbitrary \(0<t<1\). Note that visualising \((tP)^\sim \) for a general rank-one projection P is then very similar; we simply have to apply a unitary similarity (which, by well-known properties of the Bloch representation, acts as a rotation on the sphere \(\rho ({{\mathcal {P}}}_1({\mathbb {C}}^2))\)). Equation (9) gives the following, where we use the convention \(\tfrac{1}{0} = \infty \): $$\begin{aligned}&\Lambda \left( tP_{(1, 0)}, P_{(\cos \vartheta , e^{i\mu } \sin \vartheta )} \right) + \Lambda \left( I-tP_{(1,0)}, P_{(\cos \vartheta , e^{i\mu } \sin \vartheta )} \right) \nonumber \\&\quad = \frac{1}{\tfrac{1}{t} \cos ^2\vartheta + \left( \tfrac{1}{0}\right) \sin ^2\vartheta } + \frac{1}{\tfrac{1}{1-t} \cos ^2\vartheta + \sin ^2\vartheta } = \left\{ \begin{array}{ll} \frac{1}{\tfrac{1}{1-t} \cos ^2\vartheta + \sin ^2\vartheta } &{} \text {if } \vartheta > 0 \\ 1 &{} \text {if } \vartheta = 0 \\ \end{array} \right. . \end{aligned}$$ Now, let us consider the vector $$\begin{aligned} u = (2-t) \cdot e_0 + t \cdot e_3, \end{aligned}$$ which is orthogonal to $$\begin{aligned} \rho \left( (1-t)P_{(1,0)}-P_{(1,0)}^\perp \right) = -\tfrac{1}{2}\left[ t\cdot e_0 + (t-2)\cdot e_3\right] . \end{aligned}$$ From here a somewhat tedious computation gives $$\begin{aligned} \left\langle u, \; \frac{1}{ \tfrac{1}{1-t} \cos ^2\vartheta + \sin ^2\vartheta } \cdot \rho \left( P_{(\cos \vartheta , e^{i\mu } \sin \vartheta )} \right) - \rho \left( P_{(1,0)}^\perp \right) \right\rangle = 0 \quad (0\le \vartheta \le \tfrac{\pi }{2}).
\end{aligned}$$ Therefore by Corollary 2.7 and (46) we conclude that \(\rho \left( (t P_{(1,0)})^\sim \cap {{\mathcal {F}}}_1({\mathbb {C}}^2)\right) \) is the union of the line segment \(\{\rho \left( s P_{(1,0)}\right) :0<s\le 1 \} = \{ \tfrac{s}{2} e_0 + \tfrac{s}{2} e_3 :0<s\le 1 \}\) and of the area on the boundary of \(\rho ({{\mathcal {E}}}({\mathbb {C}}^2))\) which is either on or below the affine hyperplane whose normal vector is u and which contains \(\rho (P_{(1,0)}^\perp )\), see Figure 3. We note that using the notation of (22), the ellipse on the boundary is exactly the set $$\begin{aligned} \left( \rho (\ell _{tP_{(1,0)}})\cup \big \{(1-t)\cdot \rho (P_{(1,0)}) ,\rho (P_{(1,0)}^\perp )\big \}\right) \cap S_\mu . \end{aligned}$$ Therefore \(\rho (\ell _{tP_{(1,0)}})\) is a punctured ellipsoid. Illustration of \(\rho \left( (tP_{(1, 0)})^\sim \right) \cap \rho \left( {{\mathcal {F}}}_1({\mathbb {C}}^2)\right) \cap S_\mu \) (thick ellipse, thick line segment and the shaded area). The dotted circle is \(\rho ({{\mathcal {P}}}_1({\mathbb {C}}^2))\cap S_\mu \) If one illustrates the set \(\rho \left( A^\sim \right) \cap \rho \left( {{\mathcal {F}}}_1({\mathbb {C}}^2)\right) \cap S_\mu \) with \(A, A^\perp \notin \mathcal {SC}({\mathbb {C}}^2) \cup {{\mathcal {F}}}_1({\mathbb {C}}^2)\) in the same way as above, then one gets a set on the boundary of the cone which is bounded by a continuous closed curve containing the \(\rho \)-images of the spectral projections. Final Remarks and Open Problems First, we prove the analogue of Lemma 3.1 for finite dimensional spaces of dimension at least three. Let H be a Hilbert space with \(2\le \dim H < \infty \) and \(A\in {{\mathcal {E}}}(H)\). Then the following are equivalent: \(0,1 \in \sigma (A)\), there exists no effect \(B\in {{\mathcal {E}}}(H)\) such that \(B^\sim \subsetneq A^\sim \). If \(\dim H = 2\), then (i)\(\iff \)(ii) was proved in Lemma 3.1, so from now on we will assume \(2< \dim H < \infty \).
Also, as the case when \(A\in \mathcal {SC}(H)\) is trivial, we assume otherwise throughout the proof. (i)\(\Longrightarrow \)(ii): Suppose that \(0,1 \in \sigma (A)\) and consider an arbitrary effect B with \(B^\sim \subseteq A^\sim \). By Lemma 2.12, A and B commute. If \(0 = \lambda _1 \le \lambda _2 \le \dots \le \lambda _{n-1} \le \lambda _n = 1\) are the eigenvalues of A, then the matrices of A and B written in an orthonormal basis of joint eigenvectors are the following: $$\begin{aligned} A = \left[ \begin{matrix} 0 &{} 0 &{} \dots &{} 0 &{} 0 \\ 0 &{} \lambda _2 &{} \dots &{} 0 &{} 0 \\ \vdots &{} &{} \ddots &{} &{} \vdots \\ 0 &{} 0 &{} \dots &{} \lambda _{n-1} &{} 0 \\ 0 &{} 0 &{} \dots &{} 0 &{} 1 \\ \end{matrix}\right] \quad \text {and} \quad B = \left[ \begin{matrix} \mu _1 &{} 0 &{} \dots &{} 0 &{} 0 \\ 0 &{} \mu _2 &{} \dots &{} 0 &{} 0 \\ \vdots &{} &{} \ddots &{} &{} \vdots \\ 0 &{} 0 &{} \dots &{} \mu _{n-1} &{} 0 \\ 0 &{} 0 &{} \dots &{} 0 &{} \mu _n \\ \end{matrix}\right] \end{aligned}$$ with some \(\mu _1,\dots \mu _n \in [0,1]\). Notice that by Corollary 2.8, for all \(1\le i < j \le n\) we have $$\begin{aligned} \left[ \begin{matrix} \mu _i &{} 0 \\ 0 &{} \mu _j \end{matrix}\right] ^\sim \subseteq \left[ \begin{matrix} \lambda _i &{} 0 \\ 0 &{} \lambda _j \end{matrix}\right] ^\sim . \end{aligned}$$ In particular, choosing \(i = 1, j = n\) implies either \(\mu _1 = 0\) and \(\mu _n = 1\), or \(\mu _1 = 1\) and \(\mu _n = 0\). Assume the first case. If we set \(i=1\), then Lemma 3.2 and (47) imply \(\mu _j \ge \lambda _j\) for all \(j = 2,\dots , n-1\). But on the other hand, setting \(j=n\) implies \(\mu _i \le \lambda _i\) for all \(i = 2,\dots , n-1\). Therefore we conclude \(B = A\). Similarly, assuming the second case implies \(B=A^\perp \). 
(ii)\(\Longrightarrow \)(i): Assume (i) does not hold, then there exists a positive number \(\varepsilon \) such that \(\sigma (A) \subseteq [0,1-\varepsilon ]\) or \(\sigma (A^\perp ) \subseteq [0,1-\varepsilon ]\). Suppose the first possibility holds, then \(\tfrac{1}{1-\varepsilon }A \notin \{A,A^\perp \}\) and \(\left( \tfrac{1}{1-\varepsilon }A\right) ^\sim \subseteq A^\sim \). The second case is very similar. \(\square \) We only proved the above lemma and Corollary 2.11 in the finite dimensional case. The following two questions would be interesting to examine: Question 6.2 Does the statement of Corollary 2.11 remain true for general infinite dimensional Hilbert spaces? Does the statement of Lemma 6.1 hold if \(\dim H \ge \aleph _0\)? Finally, our first main theorem characterises completely when \(A^\sim = B^\sim \) happens for two effects A and B. However, we gave only some partial results about when \(A^\sim \subseteq B^\sim \) occurs, e.g. Lemma 2.12. How can we characterise the relation \(A^\sim \subseteq B^\sim \) for effects A, B? We believe that a complete answer to this latter question would represent a substantial step towards the better understanding of coexistence.
Department of Mathematics and Statistics, University of Reading, Whiteknights, P.O. Box 220, Reading, RG6 6AX, UK: György Pál Gehér. Faculty of Mathematics and Physics, University of Ljubljana, Jadranska 19, SI-1000 Ljubljana, Slovenia, and Institute of Mathematics, Physics, and Mechanics, Jadranska 19, SI-1000 Ljubljana, Slovenia: Peter Šemrl. Correspondence to György Pál Gehér. Communicated by H. T. Yau.
György Pál Gehér was supported by the Leverhulme Trust Early Career Fellowship ECF-2018-125, and partly by the Hungarian National Research, Development and Innovation Office-NKFIH (K115383). Peter Šemrl was supported by Grants N1-0061, J1-8133, and P1-0288 from ARRS, Slovenia.
Gehér, G.P., Šemrl, P.: Coexistency on Hilbert Space Effect Algebras and a Characterisation of Its Symmetry Transformations. Commun. Math. Phys. 379, 1077–1112 (2020). https://doi.org/10.1007/s00220-020-03873-3
Write down all the morphisms in the category D. Apparently there is a difference between paths and morphisms. I believe the morphisms form an equivalence group over the paths. For example, to which morphisms do the following paths belong? $$ id_z $$ $$ s $$ $$ s.s $$ $$ s.id_z.s.s $$
Since the set of paths is isomorphic to \(\mathbb{N}\), let's see what the equation means in \(\mathbb{N}\). \(s^4=s^2\) corresponds to \(n\equiv n-2\) for \(n\geq4\). This means that every even number except 0 is equivalent to 2, and every odd number except 1 is equivalent to 3. So, there are 4 morphisms in \(\mathcal{D}\): \(\mathrm{id}_z,s,s^2,\) and \(s^3\). Fredrick Eisele wrote in #1: I believe the morphisms form an equivalence group over the paths. They certainly form equivalence classes. They don't quite form a group, though the paths of length at least 2 have a \(\mathbb{Z}/2\mathbb{Z}\)-like structure (with composition corresponding to addition modulo 2, and the even numbers corresponding to the identity).
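These equivalence classes are easy to check mechanically: reduce each path length to a normal form under the relation \(s^4=s^2\), so any length \(n\ge 4\) collapses to 2 or 3 according to its parity. A small sketch (the function names are mine):

```python
def normal_form(n):
    """Reduce a path s^n in D to its normal form under the relation s^4 = s^2.

    Lengths 0..3 (id_z, s, s^2, s^3) are already normal forms; any longer
    path collapses to s^2 or s^3 depending on parity.
    """
    return n if n < 4 else 2 + (n % 2)

def compose(m, n):
    """Compose s^m with s^n and reduce the resulting path length."""
    return normal_form(m + n)

# Every path length reduces to one of the four morphisms id_z, s, s^2, s^3.
assert {normal_form(n) for n in range(50)} == {0, 1, 2, 3}
# For lengths >= 2, composition behaves like addition modulo 2:
assert compose(3, 3) == 2   # s^3 . s^3 = s^6 = s^2
assert compose(2, 3) == 3   # s^2 . s^3 = s^5 = s^3
```

The asserts also confirm the "not quite a group" remark: 2 acts as an identity for the long paths, but id_z itself is not reachable from them.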
Feasibility analysis of GNSS-based ionospheric observation on a fast-moving train platform (GIFT)
Shiwei Yu & Zhizhao Liu (ORCID: orcid.org/0000-0001-6822-9248)
The ionospheric effect plays a crucial role in radio communications. The Global Navigation Satellite System (GNSS) has been widely utilized for ionospheric observation and monitoring. The ionospheric condition can be characterized by the Total Electron Content (TEC) and the TEC Rate (TECR) calculated from GNSS measurements. Currently, GNSS-based ionospheric observation and monitoring largely depend on a global fiducial network of GNSS receivers such as the International GNSS Service (IGS) network. We propose a new approach to observing the ionosphere by deploying a GNSS receiver on a Hong Kong Mass Transit Railway (MTR) train. We assessed the TECR derived from the MTR-based GNSS receiver by comparing it with the TECR derived from a static GNSS receiver. The results show that the Root-Mean-Square (RMS) error of the TECR derived from the MTR-based GNSS receiver is consistently about 23% higher than that derived from the static GNSS receiver. Despite the increased error, the findings suggest that GNSS observation on a fast-moving platform is a feasible approach to observing the ionosphere over a large region in a rapid and cost-effective way.
The ionosphere has a significant impact on satellite-based navigation and positioning. Due to the abundance of charged particles in the ionosphere, radio signals experience phase fluctuations, amplitude fluctuations, group delay, absorption, scattering, and frequency shifts when they travel through the ionosphere (Bernhardt et al., 2006; Chen et al., 2008). As a result, Earth observation systems, such as the Global Navigation Satellite Systems (GNSS) and Doppler Orbitography and Radiopositioning Integrated by Satellite (DORIS), are adversely affected by the ionospheric situation (Aquino et al., 2005; Sreeja et al., 2011).
Fortunately, the ionospheric situation can be characterized by the Total Electron Content (TEC) (Davies, 1990), and GNSS measurements can be used to compute the ionospheric TEC (Arikan et al., 2003). Therefore, the in-situ ionospheric situation along the GNSS signal paths is manifested in the TEC variations. A considerable number of studies have analyzed ionospheric anomalies during powerful solar and geomagnetic space weather events based on the GNSS TEC (Aarons & Lin, 1999; Adeniyi et al., 2014; Chen et al., 2017; Tariku, 2015). Furthermore, ionospheric irregularities during tropical cyclones and earthquakes have been studied by analyzing the GNSS TEC (Jin et al., 2015; Yang & Liu, 2016a). The TEC Rate (TECR, or Rate of TEC (ROT)) was first introduced to study ionospheric irregularities (Wanninger, 1993). The TECR can be retrieved by differencing the TEC between two consecutive observations. Compared with the TEC, the TECR provides a more direct and informative description of the ionospheric situation in the time domain (Mendillo et al., 2000; Pi et al., 1997). Moreover, the computation of the TECR is more efficient, as it directly cancels the hardware delays of GNSS satellites and receivers (Cai et al., 2013; Liu, 2011). Benefiting from the GNSS TECR, the ionospheric situation can be well monitored. Recently, the GNSS TECR has become a research focus for studying the ionosphere (Kong et al., 2017; Yang & Liu, 2016a, 2016b). In addition, the study of the TECR can also contribute to GNSS data quality analysis and control, as the TECR information is critical in carrier phase cycle slip detection and repair (Liu, 2011).
To monitor the ionospheric condition on a global scale, the International GNSS Service (IGS) network stations have been utilized to produce the Global Ionosphere Maps (GIMs) (Feltens, 2007; Ghoddousi-Fard et al., 2011; Hernández-Pajares et al., 2009; Li et al., 2015; Mannucci et al., 1998; Orús et al., 2005; Schaer, 1999). The IGS GIMs usually model the global TEC over each grid point with a spatial resolution of \(5^\circ \times 2.5^\circ\) in longitude and latitude, and a temporal resolution of 15 min to 2 h (Hernández-Pajares et al., 2017; Orús et al., 2005; Roma-Dollase et al., 2018). Recently, many efforts have been made to improve the IGS GIM products, e.g. real-time products with a temporal resolution of 15 min (Li et al., 2020). However, the spatial resolution of the IGS GIMs is inadequate for observing small-scale ionospheric structures. Moreover, their performance is reduced in some regions due to the sparse distribution of GNSS stations. Therefore, various regional GNSS networks, such as the Wide Area Augmentation System (WAAS) in North America, the European Geostationary Navigation Overlay Service (EGNOS) in Europe, the Multi-functional Satellite Augmentation System (MSAS) in Japan, the Global Positioning System (GPS) Aided Geosynchronous Equatorial Orbit (GEO) Augmented Navigation (GAGAN) in India, and the Crustal Movement Observation Network of China (CMONOC), have been used to study the ionospheric condition more precisely and with a higher spatial resolution. Liu and Gao (2004) proposed methods of 2D grid-based and 3D tomography-based ionospheric modeling using regional GPS networks, such as the Swedish Network of Permanent GPS Reference Stations (SWEPOS) and the Southern California Integrated GPS Network (SCIGN). In addition, Opperman et al. (2007) developed a regional ionospheric TEC model with \(0.5^\circ \times 0.5^\circ\) spatial resolution based on GPS data in South Africa. Bergeot et al.
(2014) proposed a near real-time ionospheric monitoring method using the European Reference Frame (EUREF) Permanent Network (EPN), with a spatial resolution of \(0.5^\circ \times 0.5^\circ\) in longitude and latitude. Lastly, Yang et al. (2016) exploited 260 CMONOC GNSS stations to study the 3D ionospheric structure over China. However, all these ionospheric studies, whether on a global or a regional scale, depend on the availability of GNSS reference stations, and the density of GNSS stations is still not sufficient, particularly in many rural or remote areas. In areas with data holes or data insufficiency, the ionosphere is poorly characterized. Hence, we propose the idea of deploying GNSS receivers on trains in the nation-wide railway network as a new method to complement the traditional GNSS monitoring networks for TECR observations. We investigate the feasibility of TECR observation using a GNSS receiver deployed on a fast-moving train, namely a Hong Kong Mass Transit Railway (MTR) train. The MTR-based TECR observations are assessed by comparing them with those derived from a static GNSS receiver installed near the MTR railway line. The proposed approach has the advantage of cost-effective and dynamic TECR observation over a large area. It can complement ground-based GNSS networks, which usually have a very low spatial resolution in rural or remote areas (Grejner-Brzezinska et al., 2007; Li et al., 2014). In the following sections, we first describe the data and methodology used to retrieve the ionospheric TECR from an MTR-based GNSS receiver. Then, the properties of the MTR-based TECR are analyzed and assessed by comparing them with the TECR obtained from a static GNSS receiver. Finally, a comprehensive conclusion is provided.
Data and methodology
The GNSS data were collected from an experiment conducted in Hong Kong on 19 June 2017.
One GNSS receiver, serving as a Moving Station (MS), was set up on a Hong Kong MTR train, as shown in Fig. 1a. It should be pointed out that the window glass of the train affects the quality of the GNSS measurements (Liu et al., 2020). The other GNSS receiver, serving as a Static Station (SS), was deployed in an open space near the MTR railway line, as depicted in Fig. 1b.
a The moving station deployed beside the window in the MTR train and b the static station deployed beside the MTR train line. They were equipped with the same receiver type: the Trimble R10 receivers
Figure 2 depicts the spatial distribution of the MTR railway line and the SS receiver. The green circle with a cross denotes the position of the SS receiver, and the red line with train marks represents the MTR railway line segment of about 3.1 km. The MTR railway line and SS receiver were carefully selected so that GNSS satellite signals are minimally affected by surrounding buildings and mountains. The length of the selected MTR railway line, running between the Shek Mun Station and the Tai Shui Hang Station, is around 3.1 km. The SS receiver is in the middle of the selected railway line, about 50 m away from the railway tracks. The MS receiver on the MTR train was placed near the window inside the train compartment. When the MTR train traveled northbound from the Shek Mun Station to the Tai Shui Hang Station or southbound from the Tai Shui Hang Station to the Shek Mun Station, the MS receiver was always placed on the same side as the SS receiver (left side in Fig. 2). The data were collected for around 80 min, from 15:08 to 16:28 Hong Kong Time (HKT), on 19 June 2017. It took the MTR train approximately 5 min to run between the two stations, and one data file was generated for each journey. Ten sets of GNSS data were recorded by the MS receiver, but four of them were not used in this study due to their poor data quality.
The poor data quality may be caused by the complex observation environment, such as obstructions, window glass, and unpredictable artificial factors. Therefore, we cannot guarantee that all the data sets have the same quality. In addition, there is a break of about 4 min between two consecutive journeys due to the waiting time for the next train.
Spatial distribution of the SS receiver and the MTR railway line where the MS receiver was placed
Both the MS and SS receivers were Trimble R10 units with the same parameter configuration. The elevation cutoff angle was set to zero to track as many GNSS satellites as possible. The GPS, the GLObal Navigation Satellite System (GLONASS), the Galileo navigation satellite system (Galileo), and the BeiDou Navigation Satellite System (BDS) were used in this study. The sampling rate was set to 20 Hz; the spatial ionospheric gradient at this sampling rate is negligible (Vuković & Kos, 2016). The GNSS pseudorange and carrier phase observation equations can be written as follows (Leick et al., 2015): $$\begin{aligned} R_{i}^{s} = \rho ^{s} + c \cdot \left( \mathrm{d}t - \mathrm{d}T^{s} \right) + T + I_{i} - \left( b_{i,R} + B_{i}^{s,R} \right) + M_{i,R} + \varepsilon _{i,R} \end{aligned}$$ $$\begin{aligned} \lambda _{i} \Phi _{i}^{s} = \rho ^{s} + c \cdot \left( \mathrm{d}t - \mathrm{d}T^{s} \right) + T - I_{i} + \lambda _{i} N_{i}^{s} - \left( b_{i,\Phi } + B_{i}^{s,\Phi } \right) - \left( w_{i,\Phi } + W_{i}^{s,\Phi } \right) + M_{i,\Phi } + \varepsilon _{i,\Phi } \end{aligned}$$ where \(R\) and \(\Phi \) denote the pseudorange and carrier phase measurements, in meters and cycles, respectively; \(\lambda _{i}\) is the wavelength of the carrier phase measurement in meters; the superscript \(s\) denotes the GNSS satellite; and the subscript \(i\) is the frequency number, e.g. \(i=1\) for the GPS L1 signal and \(i=2\) for the GPS L2 signal.
\(\rho\) is the geometrical distance between the receiver and the satellite in meter. \(\mathrm{d}t\) and \(\mathrm{d}{T}^{s}\) are the clock errors of the receiver and satellite in second, respectively. \(c\) is the speed of light in vacuum in m/s. \(I\) is the ionospheric range delay in meter, \(T\) is the tropospheric range delay in meter. \(N\) is the integer ambiguity. \({b}_{i,R}\) and \({b}_{i,\Phi }\) are the hardware delay of the receiver on the pseudorange and carrier phase measurements, respectively, in meter. \({B}^{s,R}\) and \({B}^{s,\Phi }\) in meter are the hardware delay of the satellite on the pseudorange and carrier phase measurements, respectively. \({w}_{i,\Phi }\) and \({W}_{i}^{s,\Phi }\) in meter are phase windup terms at the receiver and satellite, respectively. The windup terms in the phase polarized signals due to the moving platform are neglected in this study. \({M}_{i,R}\) and \({M}_{i,\Phi }\) are the multipath error on the pseudorange and carrier phase measurements, respectively. \({\varepsilon }_{i,R}\) and \({\varepsilon }_{i,\Phi }\) signify the noise on the pseudorange and carrier phase measurements, respectively. Assuming the multipath errors and observation noise are of equal values on two frequencies, the ionospheric TEC from dual-frequency pseudorange measurements can be retrieved as the following: $$TEC_{R} = \frac{{f_{1}^{2} \left[ {R_{1}^{s} - R_{2}^{s} + \left( {b_{1,R} - b_{2,R} } \right) + \left( {B_{1}^{s,R} - B_{2}^{s,R} } \right)} \right]}}{{40.3 \times 10^{16} \left( {1 - \gamma } \right)}}$$ where the term \(\gamma\) is calculated as: $$\gamma = \frac{{f_{1}^{2} }}{{f_{2}^{2} }}$$ TECR can be calculated by differentiating the TEC values retrieved from two consecutive epochs. 
The TECR equation can be written as follows: $$\begin{aligned} TECR_{R} = \frac{TEC_{R} \left( k \right) - TEC_{R} \left( k - 1 \right) }{\Delta t} \end{aligned}$$ where \(TEC_{R} \left( \cdot \right) \) is the TEC value derived from the pseudorange measurements at a given epoch, and \(\Delta t\) is the data sampling interval. Assuming the hardware delays of the receiver and satellite are constant over a short duration, the TECR equation on the pseudorange measurements can be simplified as: $$\begin{aligned} TECR_{R} \left( k \right) = \frac{f_{1}^{2} }{40.3 \times 10^{16} \left( 1 - \gamma \right) \Delta t}\left[ R_{1}^{s} \left( k \right) - R_{2}^{s} \left( k \right) - R_{1}^{s} \left( k - 1 \right) + R_{2}^{s} \left( k - 1 \right) \right] \end{aligned}$$ Similarly, the TECR on the carrier phase measurements can be derived as: $$\begin{aligned} TECR_{\Phi } \left( k \right) = \frac{f_{1}^{2} }{40.3 \times 10^{16} \left( \gamma - 1 \right) \Delta t}\left\{ \lambda _{1} \left[ \Phi _{1}^{s} \left( k \right) - \Phi _{1}^{s} \left( k - 1 \right) \right] - \lambda _{2} \left[ \Phi _{2}^{s} \left( k \right) - \Phi _{2}^{s} \left( k - 1 \right) \right] \right\} \end{aligned}$$ It should be noted that the TECR on carrier phase measurements is calculated under the condition of no cycle slips between the two consecutive epochs. In other words, cycle slip detection and repair should be performed prior to the calculation of the TECR on carrier phase measurements. The well-established cycle slip detection and repair algorithm used in this study was proposed by Cai et al. (2013). In addition, if Doppler measurements are available, Doppler-aided cycle slip detection and repair can also be utilized, particularly for such high-rate GNSS observations (Zhao et al., 2020).
IPP spatial distance due to moving platform
The GNSS TECR calculated from the SS receiver contains the spatial gradient information due to the motion of GNSS satellites.
Regarding the MS receiver, the spatial gradient is a function of the motions of both the GNSS satellites and the MTR train. The geometric relationship between a ground motion point and the corresponding Ionospheric Pierce Point (IPP), which is the intersection between the GNSS signal path and the ionospheric single layer (Schaer, 1999), is shown in Fig. 3.
Geometric relationship between the ground motion points and IPPs
The geometric relationship can be written as: $$\begin{aligned} \frac{D}{d} = \frac{H - h}{H} \end{aligned}$$ where \(d\) is the ground distance between the two MTR railway stations, which is about 3.1 km; \(D\) is the distance between the two pierce points on the ionospheric single layer corresponding to the two stations on the ground; \(H\) denotes the orbit height of the satellite, i.e. 20,100 km for GPS satellites; \(h\) denotes the height of the ionospheric single layer, i.e. 350 km; and \(\mathrm{O}\) and \(R\) represent the Earth's center of mass and radius, respectively. Therefore, the spatial distance on the ionospheric single layer due to the train's movement can be calculated. The distance between the two subway stations, after mapping the ground distance of about 3.1 km onto the ionospheric layer, is about 3.0 km. In addition, taking into consideration the average speed of the train (around 38 km/h, provided by the MTR company), the IPP moves about 10.4 m per second on the ionospheric layer under the assumption of a stationary satellite. Conversely, the distance between IPPs in one second for a moving satellite (with a speed of around 4 km/s) is approximately 70.0 m with respect to a stationary station on the ground. Therefore, the IPP displacement caused by the movement of the train (about 38 km/h) is of the same order of magnitude as that caused by the movement of the satellites.
Accuracy analysis of TECR observations
Currently, most GNSS receivers can produce high-quality measurements of both pseudorange and carrier phase.
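Before turning to measurement accuracy, the IPP displacement figures quoted above (about 3.0 km, 10.4 m per second, and 70 m per second) can be reproduced with a short back-of-the-envelope script. The \(h/H\) scaling used below for the satellite-driven IPP motion is a rough approximation of our own that matches the text's numbers:

```python
# Back-of-the-envelope check of the IPP displacement numbers quoted above.
H = 20100.0e3        # GPS orbit height used in the text, m
h = 350.0e3          # single-layer ionosphere height, m
d = 3.1e3            # ground distance between the two MTR stations, m

# Eq. (8): ground baseline mapped onto the ionospheric single layer.
D = d * (H - h) / H
print(round(D / 1e3, 2))            # ~3.05 km, i.e. "about 3.0 km"

# IPP speed driven by the train (satellite assumed stationary).
v_train = 38.0 / 3.6                # 38 km/h in m/s
ipp_train = v_train * (H - h) / H
print(round(ipp_train, 1))          # ~10.4 m/s

# IPP speed driven by satellite motion (receiver stationary): the IPP sits
# at roughly the fraction h/H of the way up, so it moves at about v_sat*h/H.
v_sat = 4.0e3
ipp_sat = v_sat * h / H
print(round(ipp_sat, 1))            # ~69.7 m/s, i.e. "approximately 70 m"
```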
In detail, carrier phase can be measured with the accuracy of around 1.0 mm or even higher, and pseudorange measurements have the accuracy of around 30.0 cm or even higher (Czerniak & Reilly, 1998). Table 1 summarizes the accuracy of pseudorange and carrier phase measurements for six typical models of receivers. It is clear that the accuracy of pseudorange measurements for these receivers is around a few decimeters, and the accuracy of carrier phase measurements is higher than one millimeter. Table 1 Typical accuracy of pseudorange and carrier phase measurements of typical receiver models from major GNSS receiver manufacturers We assume that all the pseudorange measurements have equal accuracy, and their variances are denoted as \({\delta }_{R}^{2}\). In addition, the measurements at epochs \(\left(k\right)\) and \((k-1)\) are independent. According to the error propagation law, the accuracy of the TECR values derived from the pseudorange measurements can be estimated with the following equation: $$m_{{TECR_{R} }} = \pm \sqrt {\left( {\frac{\partial f}{{\partial R_{1}^{S} \left( k \right)}}} \right)^{2} \delta_{R}^{2} + \left( {\frac{\partial f}{{\partial R_{2}^{S} \left( k \right)}}} \right)^{2} \delta_{R}^{2} + \left( {\frac{\partial f}{{\partial R_{1}^{S} \left( {k - 1} \right)}}} \right)^{2} \delta_{R}^{2} + \left( {\frac{\partial f}{{\partial R_{2}^{S} \left( {k - 1} \right)}}} \right)^{2} \delta_{R}^{2} }$$ where \(m_{{TECR_{R} }}\) is the accuracy of the TECR derived from pseudorange measurements; \(f\) denotes the TECR function. 
The accuracy of the TECR derived from pseudorange and carrier phase measurements can be estimated as: $$\begin{aligned} m_{TECR_{R} } = \pm \frac{2 f_{1}^{2} }{40.3 \times 10^{16} \left( 1 - \gamma \right) \Delta t}\delta _{R} \end{aligned}$$ $$\begin{aligned} m_{TECR_{\Phi } } = \pm \sqrt{2} \left( \frac{f_{1}^{2} }{40.3 \times 10^{16} \left( \gamma - 1 \right) \Delta t} \right) \sqrt{\lambda _{1}^{2} + \lambda _{2}^{2} } \, \delta _{\Phi } \end{aligned}$$ The accuracy of the GNSS TECR values is shown in Table 2. It should be noted that the examples for GLONASS in Table 2, i.e. rows 2 and 6, are calculated for the GLONASS carrier base frequency, considering its Frequency Division Multiple Access (FDMA) mode.
Table 2 The accuracy of the TECR obtained from the pseudorange and carrier phase measurements of different GNSS systems.
The GNSS satellites above the SS and MS receivers during the test period share the same spatial distribution because of the short distance between the two stations. The satellite sky views for GPS, GLONASS, Galileo, and BDS at the SS receiver are shown in Fig. 4a–d. The gray mask represents the signal obstruction area for the MS receiver onboard the MTR train, which means the satellites in this area are not visible to the MS receiver. The mask has an azimuth angle of about 45°, which is consistent with the MTR railway orientation in Fig. 2. It should be noted that only single-frequency data were observed from the GLONASS R12 satellite. Therefore, this satellite is excluded from the following analysis.
Satellite distribution in sky view for different GNSS systems, i.e., GPS in (a), GLONASS in (b), Galileo in (c), and BDS in (d). The time duration is from 15:08 to 16:28 HKT on 19 June 2017. The gray mask indicates that satellites in the mask are not visible for the moving station
TECR analysis
The TECR values retrieved from the pseudorange and carrier phase measurements are evaluated in this section.
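As a rough numerical illustration of the two accuracy expressions above, the following sketch evaluates them for GPS L1/L2 at the 20 Hz rate used in this experiment. The assumed noise levels (0.3 m for pseudorange and 0.005 cycles, about 1 mm, for carrier phase) are our placeholders, not values taken from Table 1:

```python
import math

# Theoretical TECR accuracy for GPS L1/L2 at 20 Hz, following the two
# expressions above. The noise levels below are assumed placeholders.
C = 299792458.0
F1, F2 = 1575.42e6, 1227.60e6
LAM1, LAM2 = C / F1, C / F2
GAMMA = F1**2 / F2**2
DT = 0.05                                  # s (20 Hz)
coef = F1**2 / (40.3e16 * (GAMMA - 1.0) * DT)

sigma_R = 0.3                              # pseudorange noise, m (assumed)
sigma_phi = 0.005                          # carrier noise, cycles (~1 mm, assumed)

m_tecr_R = 2.0 * coef * sigma_R            # magnitude of the +/- bound
m_tecr_phi = math.sqrt(2.0) * coef * math.hypot(LAM1, LAM2) * sigma_phi

print(round(m_tecr_R, 1))    # ~114 TECU/s: order 100 TECU/s, as in Table 2
print(round(m_tecr_phi, 2))  # ~0.42 TECU/s: matches the "around 0.4" level
```

The roughly 270-fold gap between the two numbers is why the pseudorange-derived TECR is so much noisier than the carrier-phase-derived TECR in the figures that follow.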
Figure 5 demonstrates the overall TECR values derived from the SS receiver. For display purposes, the TECR values in Fig. 5 have been downsampled by a factor of 20 to a rate of 1 Hz.
TECR derived from pseudorange (a) and carrier phase measurements (b) from GNSS satellites (GPS, GLONASS, Galileo, and BDS). The data were collected at the SS receiver beside the MTR railway line from 15:08 to 16:28 HKT on 19 June 2017
In Fig. 5a, the observed TECR noise on the pseudoranges, about 250 TECU/s, is clearly larger than the theoretical accuracy of about 100 TECU/s presented in Table 2, while the TECR noise on the carrier phase, around 0.4 TECU/s, is consistent with the theoretical value in Table 2. The excess pseudorange TECR noise is due to the large thermal noise and multipath effects on the pseudorange measurements. In addition, some anomalies are present in the carrier-phase-derived TECR from GPS satellites in Fig. 5b. They are mainly caused by measurement noise, as the geomagnetic and solar conditions were quiet. The geomagnetic and solar conditions are represented by the Kp, Dst, and F10.7 indices in Fig. 6, where Kp is less than 3 (Mungufeni et al., 2016), Dst is larger than −20 nT (Gulyaeva & Arikan, 2017), and F10.7 is less than 80 sfu (solar flux units) (Tapping, 2013). All the indices imply quiet geomagnetic and solar conditions on the experiment day.
The geomagnetic indices, i.e. Kp (a), Dst (b), and solar index F10.7 (c) in June 2017. The red vertical dashed line denotes the experiment time at 16:00 HKT on 19 June 2017
The GNSS signals tracked by the MS receiver must pass through the windows of the MTR train. Therefore, the observation conditions are much worse than in an open-sky environment. The TECR values derived from the MS receiver are displayed in Fig. 7. Figure 5 shows that the TECR results from pseudorange measurements have a significantly lower accuracy than those of carrier phase measurements.
We therefore focus only on carrier-phase-derived TECR for the moving receiver.

TECR derived from six test datasets on carrier phase measurements. They were collected on the moving MTR train and are shown in (a–f). The label MTRS170x indicates that the TECR was observed on the MTR train and derived from the x-th session on the 170th day of the year

The results show that the TECR values at the beginning of every journey have a small variation because of the low moving speed. An obvious gap can be observed for every journey because GNSS satellite signals are blocked by station buildings when the train passes through an MTR station. Most of the TECR values for GPS, GLONASS, Galileo, and BDS fluctuate within \(\pm\) 1 TECU/sec. However, there are a considerable number of TECR outliers in every journey. The treatment of these anomalies is discussed in the following section.

Outlier analysis

Figure 8 shows the TECR values of six satellites randomly selected from the six journey datasets. The results show that the TECR values derived from the six satellites are basically stable within a consecutive tracking arc. The TECR anomalies always appear in the re-tracking stage due to blockage by buildings, poles, and trees. According to previous studies, the receiver architecture design is responsible for these TECR anomalies (Xie & Petovello, 2010). When a receiver loses track of a satellite, the acquisition process must be performed again to reacquire the satellite (Gardner, 2005). An initial error, namely the phase error, is introduced into the carrier phase measurements in the tracking loop. As a result, the TECR values are severely contaminated by the phase error at the beginning of every new tracking arc.

TECR derived from six different satellites with the corresponding satellite elevations. The results are presented in (a–f).
The time format is HH:MM (or HH:MM:SS because of different time durations)

Furthermore, the carrier-to-noise density ratio (C/N0) and multipath effects on the GNSS signals are presented in Figs. 9 and 10, respectively. The results show that the C/N0 of the GNSS signals collected at the moving receiver is always lower than that at the static receiver, especially when the satellite elevation angles are higher than 20°. Specifically, the C/N0 value at the moving receiver is smaller than that at the static receiver by 9 dB·Hz on average for GPS satellites, 7 dB·Hz for GLONASS, 12 dB·Hz for Galileo, and 5 dB·Hz for BDS. In terms of multipath effects, there is no distinct difference between the stationary and moving receivers. Both have similar mean and standard deviation values, i.e., 0.00 m and 0.35 m, respectively. This result confirms that the complicated observation environment in the train leads to the decay or even loss of tracked signals, which results in the anomalies in the TECR values.

Averaged carrier-to-noise density ratio (C/N0) with the standard deviation at different elevation angles for GPS in (a), GLONASS in (b), Galileo in (c), and BDS in (d) for the stationary receiver (blue line) and the moving receiver (red line). The error bars around the data points represent the ± 1σ standard deviation

Averaged multipath effects with standard deviation at different elevation angles for GPS in (a), GLONASS in (b), Galileo in (c), and BDS in (d) for the stationary receiver (blue line) and the moving receiver (red line). The error bars around the data points represent the ± 1σ standard deviation

Outlier elimination using the generalized extreme studentized deviate test

In Fig. 8, a considerable number of TECR anomalies are present in the TECR results of each satellite.
In order to eliminate these TECR anomalies, the generalized Extreme Studentized Deviate (ESD) test was introduced because of its well-proven capability of detecting one or more anomalies in a univariate data set that approximately follows a normal distribution (Rosner, 1983). The generalized ESD test is defined for the hypotheses: \({\mathrm{H}}_{0}:\mathrm{There\ are\ no\ outliers\ in\ the\ data\ set}.\) \({\mathrm{H}}_{1}:\mathrm{There\ are\ up\ to\ r\ outliers\ in\ the\ data\ set}.\) The test statistic is calculated by: $$R_{i} = \frac{{max_{i} \left| {x_{i} - \overline{x}} \right|}}{s}$$ where \(\overline{x}\) and \(s\) denote the sample mean and sample standard deviation, respectively; after each iteration the most deviant observation is removed and the statistic is recomputed on the remaining data. Under the assumption of an approximately normal distribution, the test statistic \(R_{i} \left( {i = 1, 2, \ldots , r } \right)\) follows a t-distribution. Given a significance level \(\alpha\), the critical values \(\lambda_{i}\) can be computed as: $$\lambda_{i} = \frac{{\left( {n - i} \right)t_{p,n - i - 1} }}{{\sqrt {\left( {n - i - 1 + t_{p,n - i - 1}^{2} } \right)\left( {n - i + 1} \right)} }} i = 1,2, \ldots ,r$$ where \({t}_{p,n-i-1}\) is the \(100p\) percentage point of the t-distribution with \(\left(n-i-1\right)\) degrees of freedom and \(p=1-\frac{\alpha }{2(n-i+1)}\). The number of outliers is determined by finding the largest \(i\) that satisfies \({R}_{i}>{\lambda }_{i}\). In the generalized ESD test, the maximum number of outliers \(r\) is determined according to the dataset. Before the test, we set the maximum value of \(r\) to 40; the analysis shows that the actual number of outliers was always smaller than this predefined value. Assume that the carrier phase measurements follow a normal distribution with mean \(\mu\) and variance \({\sigma }^{2}\), and that every measurement is independent.
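The procedure above can be sketched in a few lines. This is a minimal implementation of Rosner's generalized ESD test, assuming SciPy is available for the t-distribution percentage points; the parameter names (`max_outliers`, `alpha`) are illustrative and not taken from the paper.

```python
import numpy as np
from scipy import stats

def generalized_esd(x, max_outliers=40, alpha=0.05):
    """Generalized ESD test (Rosner, 1983): return indices of detected outliers."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    work, work_idx = x.copy(), np.arange(n)
    removed, r_vals, lambdas = [], [], []
    for i in range(1, max_outliers + 1):
        s = work.std(ddof=1)
        if s == 0:
            break
        dev = np.abs(work - work.mean())
        j = int(np.argmax(dev))
        r_vals.append(dev[j] / s)                    # test statistic R_i
        p = 1 - alpha / (2 * (n - i + 1))
        t = stats.t.ppf(p, n - i - 1)                # t_{p, n-i-1}
        lambdas.append((n - i) * t /
                       np.sqrt((n - i - 1 + t**2) * (n - i + 1)))  # lambda_i
        removed.append(int(work_idx[j]))             # drop the most deviant point
        work = np.delete(work, j)
        work_idx = np.delete(work_idx, j)
    # number of outliers = largest i with R_i > lambda_i
    n_out = max((i for i, (r, lam) in enumerate(zip(r_vals, lambdas), 1)
                 if r > lam), default=0)
    return removed[:n_out]
```

On a synthetic 20 Hz-like TECR stream with a couple of injected spikes, the test returns the spike indices while leaving the normally distributed samples untouched, which is the behaviour exploited here to clean the TECR series.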
Then, the TECR values also follow a normal distribution with mean 0 and variance \(2{\left(\frac{{f}_{1}^{2}}{40.3\times {10}^{16}\left(\gamma -1\right)\Delta t}\right)}^{2}({\lambda }_{1}^{2}+{\lambda }_{2}^{2}){\sigma }^{2}\). Figure 11 illustrates the TECR results after applying the generalized ESD test to the original TECR values. It is clear that most TECR anomalies have been removed after the implementation of the generalized ESD test.

TECR derived from six randomly selected satellites after applying the generalized ESD test with \(\alpha =0.05, r=40\). The results are presented in (a–f)

TECR statistical characteristics

Figure 12 shows the probability distribution of the TECR derived from the SS receiver on the carrier phase measurements. It should be noted that the data have not been downsampled to 1 s; the original data with the sampling rate of 20 Hz are used. The TECR values derived from the four GNSS systems have almost the same mean value of 0.000 TECU/sec, and their standard deviations are in the range of 0.102 to 0.123 TECU/sec. The TECR values derived from the Galileo satellites have the smallest standard deviation of 0.102 TECU/sec, followed by the BDS-based TECR with 0.114 TECU/sec. The TECR values derived from the GPS and GLONASS satellites have the same standard deviation of 0.123 TECU/sec. Figure 13 illustrates the probability density functions of the TECR derived from the six datasets collected at the MS receiver. The detailed statistical results are shown in Table 3.

Probability density distribution of the TECR derived from carrier phase measurements of GPS in (a), GLONASS in (b), Galileo in (c), and BDS in (d) collected at the static station

Probability density functions of the TECR derived from the moving receiver for six journeys (a–f). They are calculated from the carrier phase measurements collected at the moving station on the MTR train.
The probability density function for BDS in MTRS1709 is not displayed because of inadequate observations

Table 3 Statistics of the mean and standard deviation of the TECR values derived from carrier phase observations collected from different GNSS systems at both the static station and the moving MTR train

Their means range from −0.014 to 0.009 TECU/sec, and their standard deviations are within 0.118 to 0.194 TECU/sec. The standard deviation for the Galileo-based TECR in the MTRS1707 and MTRS1708 datasets is slightly larger than that in the other datasets. One possible reason for this result is that the ionospheric situation was changing with time. Another possible reason is that the observation conditions inside the MTR train were different in each journey. The TECR derived from BDS has the smallest standard deviation of 0.127 TECU/sec. The TECR derived from GPS is a little higher, with a standard deviation of 0.152 TECU/sec. The standard deviations of the TECR from GLONASS and Galileo are almost the same, being 0.139 TECU/sec and 0.140 TECU/sec, respectively. The mean TECR from the moving receiver is larger than that from the stationary receiver and varies in each journey. In terms of standard deviation, the result from the moving receiver increases by 23%, 13%, 37%, and 11% for GPS, GLONASS, Galileo, and BDS, respectively, with respect to that from the stationary receiver.

Assessment of the TECR derived from the moving MTR train

The properties of the TECR derived from the SS and MS receivers have been discussed. To assess the TECR calculated from the MS receiver, the assessment equation is introduced as follows: $$Ratio=\frac{{RMS}_{ms}}{{RMS}_{ss}}$$ where \({RMS}_{ms}\) and \({RMS}_{ss}\) are the RMS of the TECR derived from the MS and SS receivers, respectively.
The RMS can be calculated from the following equation: $$RMS=\sqrt{\frac{\sum {({TECR}_{i})}^{2}}{N}}$$ where \(N\) is the total number of observations within 1 s and \(i\) is the index of the observations \((i=\mathrm{1,2},\ldots ,20)\).

Figure 14 shows the ratio for the six datasets at an interval of 1 s. The maximum ratio, reached by satellite G22 in MTRS1708, is around 8.0; for the other journeys, the maximum ratios are about 6.0. However, the average ratio over the six journeys is about 1.23, the individual values being 1.225, 1.235, 1.236, 1.178, 1.286 and 1.227. The ratios calculated from the six journeys suggest that the train-based kinematic TECR results are consistently higher than the ground-based static TECR results by around 23%, which is acceptable for ionospheric monitoring. Yang and Liu (2016a) showed a rapid TECR change during the 2012 tropical cyclone Tembin passing Hong Kong, with a maximum TECR of up to 16 TECU/sec. Pi et al. (1997) showed that a fluctuation of 2 TECU/min could be observed in a low-latitude region, and that a GPS scintillation event can produce up to 20 TECU/min of fluctuation. Therefore, it is possible to measure those TECR variations using the train-based kinematic observations. Furthermore, they can be utilized to monitor the ionospheric situation to complement the ground-based TECR observations.

Ratio between the RMS derived from the moving station and the static station for each satellite. The results of the six journeys are presented in (a–f). The ratio value is calculated at the interval of 1 s, which means N is equal to 20. The red dashed line denotes the average value of the ratios of all the satellites

Considering the insufficient and uneven distribution of GNSS ionospheric monitoring networks, we propose a novel approach to monitor the ionosphere. This new method, based on a fast-moving train platform, can complement the traditional ground-based TECR monitoring method.
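The 1-s RMS ratio defined above can be computed directly from the 20 Hz TECR streams by grouping samples into windows of N = 20. The sketch below is a minimal implementation; the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def windowed_rms_ratio(tecr_ms, tecr_ss, n=20):
    """RMS(moving) / RMS(static) per 1-s window of n samples (n = 20 at 20 Hz).

    Both inputs must have a length that is a multiple of n, with windows
    aligned in time between the moving-station and static-station streams.
    """
    ms = np.asarray(tecr_ms, dtype=float).reshape(-1, n)
    ss = np.asarray(tecr_ss, dtype=float).reshape(-1, n)
    rms_ms = np.sqrt(np.mean(ms**2, axis=1))   # RMS_ms for each 1-s window
    rms_ss = np.sqrt(np.mean(ss**2, axis=1))   # RMS_ss for each 1-s window
    return rms_ms / rms_ss
```

For example, two streams whose sample magnitudes differ by a constant factor of 1.23 yield a ratio of 1.23 in every window, mirroring the average ratio reported for the six journeys.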
We studied the train-based dynamic TECR monitoring from three aspects: (1) analysis of the TECR derived from pseudorange and carrier phase measurements collected at both the SS and MS receivers; (2) the statistical properties of the TECR derived from the data of the four GNSS systems; (3) assessment of the TECR derived from the MS receiver with respect to the TECR from the SS receiver.

We found that the error of the TECR on the pseudorange measurements collected at the SS receiver is approximately 2.5 times larger than the theoretical value, while the accuracy of the TECR on the carrier phase measurements is similar to the theoretical one. This can probably be attributed to the multipath effect on the pseudorange measurements in the observation environment. Furthermore, the characteristics of the TECR values from the four GNSS systems were analyzed. For the SS receiver, the TECR values from the different GNSS systems are almost the same: all of them had a mean value of 0.000 TECU/sec, and the standard deviations were within the range of 0.102 to 0.123 TECU/sec. For the MS receiver, the TECR values from the six journeys were analyzed. The TECR mean was close to 0.000 TECU/sec. However, the standard deviations were larger than those calculated from the SS receiver by 23%, 13%, 37% and 11% for GPS, GLONASS, Galileo, and BDS, respectively. Lastly, the RMS error of the TECR data collected at the MS receiver is approximately 23% larger than that calculated from the SS receiver. This can probably be attributed to the decrease of the carrier-to-noise density ratio (C/N0) of the GNSS signals caused by the window glass of the train, as well as the larger multipath effects. The slightly different observation conditions could also contribute to the larger RMS error.

The datasets used and analyzed during the current study are available from the corresponding author on request.

Aarons, J., & Lin, B. (1999).
Development of high latitude phase fluctuations during the January 10, April 10–11, and May 15, 1997 magnetic storms. Journal of Atmospheric and Solar-Terrestrial Physics, 61(3–4), 309–327. https://doi.org/10.1016/S1364-6826(98)00131-X Adeniyi, J. O., Doherty, P. H., Oladipo, O. A., & Bolaji, O. (2014). Magnetic storm effects on the variation of TEC over Ilorin an equatorial station. Radio Science, 49(12), 1245–1253. https://doi.org/10.1002/2014RS005404 Aquino, M., Moore, T., Dodson, A., Waugh, S., Souter, J., & Rodrigues, F. S. (2005). Implications of ionospheric scintillation for GNSS users in Northern Europe. Journal of Navigation, 58(2), 241–256. https://doi.org/10.1017/s0373463305003218 Arikan, F., Erol, C. B., & Arikan, O. (2003). Regularized estimation of vertical total electron content from Global Positioning System data. Journal of Geophysics Research-Space Physics, 108(A12), 12. https://doi.org/10.1029/2002ja009605 Bergeot, N., Chevalier, J.-M., Bruyninx, C., Pottiaux, E., Aerts, W., Baire, Q., Legrand, J., Defraigne, P., & Huang, W. (2014). Near real-time ionospheric monitoring over Europe at the Royal Observatory of Belgium using GNSS data. Journal of Space Weather Space Clim. https://doi.org/10.1051/swsc/2014028 Bernhardt, P. A., Siefring, C. L., Galysh, I. J., Rodilosso, T. F., Koch, D. E., MacDonald, T. L., Wilkens, M. R., & Landis, G. P. (2006). Ionospheric applications of the scintillation and tomography receiver in space (CITRIS) mission when used with the DORIS radio beacon network. Journal of Geodesy, 80(8–11), 473–485. https://doi.org/10.1007/s00190-006-0064-6 Cai, C., Liu, Z., Xia, P., & Dai, W. (2013). Cycle slip detection and repair for undifferenced GPS observations under high ionospheric activity. GPS Solution, 17(2), 247–260. https://doi.org/10.1007/s10291-012-0275-7 Chen, W., Gao, S., Hu, C., Chen, Y., & Ding, X. (2008). Effects of ionospheric disturbances on GPS observation in low latitude area. GPS Solution, 12(1), 33–41. 
https://doi.org/10.1007/s10291-007-0062-z Chen, W., Lee, C., Chu, F., & Teh, W. (2017). GPS TEC fluctuations over Tromso, Norway, in the solar minimum. Terrestrial Atmospheric and Ocean Science, 28(6), 993–1008. https://doi.org/10.3319/TAO.2017.04.24.01 Czerniak, R. J., & Reilly, J. P. (1998). Applications of GPS for surveying and other positioning needs in departments of transportation. National Academy Press. Davies, K. (1990). Ionospheric radio. Peter Peregrinus. Feltens, J. (2007). Development of a new three-dimensional mathematical ionosphere model at European Space Agency/European Space Operations Centre. Space Weather. https://doi.org/10.1029/2006SW000294 Gardner, F. M. (2005). Phaselock Techniques. Wiley. Ghoddousi-Fard, R., Héroux, P., Danskin, D., & Boteler, D. (2011). Developing a GPS TEC mapping service over Canada. Space Weather. https://doi.org/10.1029/2010SW000621 Grejner-Brzezinska, D. A., Kashani, I., Wielgosz, P., Smith, D. A., Spencer, P. S. J., Robertson, D. S., & Mader, G. L. (2007). Efficiency and Reliability of Ambiguity Resolution in Network-Based Real-Time Kinematic GPS. Journal of Surveying Engineering, 133(2), 56–65. https://doi.org/10.1061/(ASCE)0733-9453(2007)133:2(56) Gulyaeva, T., & Arikan, F. (2017). Statistical discrimination of global post-seismic ionosphere effects under geomagnetic quiet and storm conditions. Geomat Nat Hazards Risk, 8(2), 509–524. https://doi.org/10.1080/19475705.2016.1246483 Hernández-Pajares, M., Juan, J. M., Sanz, J., Orus, R., Garcia-Rigo, A., Feltens, J., Komjathy, A., Schaer, S. C., & Krankowski, A. (2009). The IGS VTEC maps: A reliable source of ionospheric information since 1998. Journal of Geodesy, 83(3), 263–275. https://doi.org/10.1007/s00190-008-0266-1 Hernández-Pajares, M., Roma-Dollase, D., Krankowski, A., García-Rigo, A., & Orús-Pérez, R. (2017). Methodology and consistency of slant and vertical assessments for ionospheric electron content models. Journal of Geodesy, 91(12), 1405–1414. 
https://doi.org/10.1007/s00190-017-1032-z Jin, S., Occhipinti, G., & Jin, R. (2015). GNSS ionospheric seismology: Recent observation evidences and characteristics. Earth Science Reviews, 147, 54–64. https://doi.org/10.1016/j.earscirev.2015.05.003 Kong, J., Yao, Y., Xu, Y., Kuo, C., Zhang, L., Liu, L., & Zhai, C. (2017). A clear link connecting the troposphere and ionosphere: Ionospheric reponses to the 2015 Typhoon Dujuan. Journal of Geodesy, 91(9), 1087–1097. https://doi.org/10.1007/s00190-017-1011-4 Leick, A., Rapoport, L., & Tatarnikov, D. (2015). GPS Satellite Surveying (3rd ed.). Wiley. Li, B., Shen, Y., Feng, Y., Gao, W., & Yang, L. (2014). GNSS ambiguity resolution with controllable failure rate for long baseline network RTK. Journal of Geodesy, 88(2), 99–112. https://doi.org/10.1007/s00190-013-0670-z Li, Z., Wang, N., Hernández-Pajares, M., Yuan, Y., Krankowski, A., Liu, A., Zha, J., García-Rigo, A., Roma-Dollase, D., Yang, H., Laurichesse, D., & Blot, A. (2020). IGS real-time service for global ionospheric total electron content modeling. Journal of Geodesy, 94(3), 32. https://doi.org/10.1007/s00190-020-01360-0 Li, Z., Yuan, Y., Wang, N., Hernandez-Pajares, M., & Huo, X. (2015). SHPTS: Towards a new method for generating precise global ionospheric TEC map based on spherical harmonic and generalized trigonometric series functions. Journal of Geodesy, 89(4), 331–345. https://doi.org/10.1007/s00190-014-0778-9 Liu, Z. (2011). A new automated cycle slip detection and repair method for a single dual-frequency GPS receiver. Journal of Geodesy, 85(3), 171–183. https://doi.org/10.1007/s00190-010-0426-y Liu, Z., & Gao, Y. (2004). Ionospheric TEC predictions over a local area GPS reference network. GPS Solut, 8(1), 23–29. https://doi.org/10.1007/s10291-004-0082-x Liu, Z., Gong, Y., & Zhou, L. (2020). Impact of China's high speed train window glass on GNSS signals and positioning performance. Satellite Navigation, 1(1), 14. 
https://doi.org/10.1186/s43020-020-00013-z Mannucci, A. J., Wilson, B. D., Yuan, D. N., Ho, C. H., Lindqwister, U. J., & Runge, T. F. (1998). A global mapping technique for GPS-derived ionospheric total electron content measurements. Radio Science, 33, 565–582. Mendillo, M., Lin, B., & Aarons, J. (2000). The application of GPS observations to equatorial aeronomy. Radio Science, 35(3), 885–904. https://doi.org/10.1029/1999RS002208 Mungufeni, P., Habarulema, J. B., & Jurua, E. (2016). Trends of ionospheric irregularities over African low latitude region during quiet geomagnetic conditions. J Atmospheric Sol-Terr Phys, 138–139, 261–267. https://doi.org/10.1016/j.jastp.2016.01.015 Opperman, B. D. L., Cilliers, P. J., McKinnell, L. A., & Haggard, R. (2007). Development of a regional GPS-based ionospheric TEC model for South Africa. Advances in Space Research, 39(5), 808–815. https://doi.org/10.1016/j.asr.2007.02.026 Orús, R., Hernández-Pajares, M., Juan, J. M., & Sanz, J. (2005). Improvement of global ionospheric VTEC maps by using kriging interpolation technique. Journal of Atmospheric Solar-Terrestrial Physics, 67(16), 1598–1609. https://doi.org/10.1016/j.jastp.2005.07.017 Pi, X., Mannucci, A. J., Lindqwister, U. J., & Ho, C. M. (1997). Monitoring of global ionospheric irregularities using the Worldwide GPS Network. Geophysical Research Letters, 24(18), 2283–2286. https://doi.org/10.1029/97GL02273 Roma-Dollase, D., Hernández-Pajares, M., Krankowski, A., Kotulak, K., Ghoddousi-Fard, R., Yuan, Y., Li, Z., Zhang, H., Shi, C., Wang, C., Feltens, J., Vergados, P., Komjathy, A., Schaer, S., García-Rigo, A., & Gómez-Cama, J. M. (2018). Consistency of seven different GNSS global ionospheric mapping techniques during one solar cycle. Journal of Geodesy, 92(6), 691–706. https://doi.org/10.1007/s00190-017-1088-9 Rosner, B. (1983). Percentage points for a generalized ESD many-outlier procedure. Technometrics, 25(2), 165–172. 
https://doi.org/10.1080/00401706.1983.10487848 Schaer S (1999) Mapping and predicting the Earth's ionosphere using the Global Positioning System, Ph.D. Dissertation Astronomical Institute, University of Berne, Berne, Switzerland, 25 March Sreeja, V., Aquino, M., & Elmas, Z. G. (2011). Impact of ionospheric scintillation on GNSS receiver tracking performance over Latin America: Introducing the concept of tracking jitter variance maps. Space Weather International Journal of Research Applications, 9(10), 1–6. https://doi.org/10.1029/2011sw000707 Tapping, K. F. (2013). The 10.7 cm solar radio flux (F10. 7). Space Weather, 11(7), 394–406. https://doi.org/10.1002/swe.20064 Tariku, Y. A. (2015). Patterns of GPS-TEC variation over low-latitude regions (African sector) during the deep solar minimum (2008 to 2009) and solar maximum (2012 to 2013) phases. Earth, Planets and Space, 67(1), 35. https://doi.org/10.1186/s40623-015-0206-2 Vuković J, Kos T (2016) Ionospheric spatial and temporal gradients for disturbance characterization. In: Proc. IEEE/ENC 2016, European Navigation Conference, Helsinki, Finland, May 30-June 2, 1–4. https://doi.org/10.1109/EURONAV.2016.7530564 Wanninger L (1993) Ionospheric monitoring using IGS data. In: Proceedings of the 1993 IGS Workshop. Astronomical Institute, University of Berne, Berne, Switzerland, March 25–26 Xie P, Petovello MG (2010) Improving carrier phase reacquisition using advanced receiver architectures. In: Proc. IEEE/ION PLANS 2010, Institute of Navigation, Indian Wells, CA, USA, May 4–6, 728–736. https://doi.org/10.1109/PLANS.2010.5507258 Yang, Z., & Liu, Z. (2016a). Observational study of ionospheric irregularities and GPS scintillations associated with the 2012 tropical cyclone Tembin passing Hong Kong. Journal of Geophysics Research Space Physics, 121(5), 4705–4717. https://doi.org/10.1002/2016JA022398 Yang, Z., & Liu, Z. (2016b). 
Correlation between ROTI and Ionospheric Scintillation Indices using Hong Kong low-latitude GPS data. GPS Solut, 20(4), 815–824. https://doi.org/10.1007/s10291-015-0492-y Yang, Z., Song, S., Jiao, E., Chen, G., Xue, J., Zhou, W., & Zhu, W. (2016). Ionospheric tomography based on GNSS observations of the CMONOC: performance in the topside ionosphere. GPS Solut, 21(2), 363–375. https://doi.org/10.1007/s10291-016-0526-0 Zhao, J., Hernández-Pajares, M., Li, Z., Wang, L., & Yuan, H. (2020). High-rate Doppler-aided cycle slip detection and repair method for low-cost single-frequency receivers. GPS Solut, 24(3), 80. https://doi.org/10.1007/s10291-020-00993-0 The authors would like to acknowledge the support from the Key Program of the National Natural Science Foundation of China (NSFC) and the Hong Kong Research Grants Council (RGC). The grant support to Zhizhao Liu from the Key Program of the National Natural Science Foundation of China (NSFC) project (No.: 41730109) is acknowledged. The grant supports to Zhizhao Liu from the Hong Kong Research Grants Council (RGC) project (B-Q61L PolyU 152222/17E) are thanked. The Emerging Frontier Area (EFA) Scheme of Research Institute for Sustainable Urban Development (RISUD) of the Hong Kong Polytechnic University under Grant 1-BBWJ is also acknowledged. Department of Land Surveying and Geo-Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong, People's Republic of China Shiwei Yu & Zhizhao Liu Shiwei Yu Zhizhao Liu All authors contributed to the study conception and design. Material preparation, data collection and analysis were performed by SY. The first draft of the manuscript was written by SY. The critical review and revision were completed by ZL. ZL incubated the idea of this work, supervised the project, and provided project funding support to this work. All authors commented on previous versions of the manuscript. All authors read and approved the final manuscript. Correspondence to Zhizhao Liu. 
Yu, S., Liu, Z. Feasibility analysis of GNSS-based ionospheric observation on a fast-moving train platform (GIFT). Satell Navig 2, 20 (2021). https://doi.org/10.1186/s43020-021-00051-1 Total electron contents (TEC) TEC rate (TECR) Fast-moving train platform GNSS Ionospheric Monitoring and Modelling
Pau Rausell-Köster ORCID: orcid.org/0000-0003-2274-74231, Sendy Ghirardi ORCID: orcid.org/0000-0002-2769-28661, Jordi Sanjuán ORCID: orcid.org/0000-0002-7075-193X1, Francesco Molinari ORCID: orcid.org/0000-0001-9141-39751 & Borja Abril ORCID: orcid.org/0000-0002-1070-96842 This article defines "cultural experience" and places it in a holistic conceptual model, "the cultural city", where it plays a relevant role in improving the performance of cities. The conceptual model combines the basic elements of the heritage city, the smart city and the creative city. The city is interpreted from a threefold perspective: as a repository of resources, as a connective interface, and as the setting for citizens' life and social and professional experiences. In this context, each of these perspectives incorporates culture in a different way, enabling different models of value creation and different processes of production and reproduction of this value. In each of the urban models described above, production processes combine symbolic, physical, financial, social, human and cultural capital in different ways, and urban strategies are implemented to provide cultural experiences that ignite transformative effects through several spillovers. That means that culture, in its different dimensions, regains the role of a raw material and becomes the point of origin to activate development processes and improve urban performance. The integration of the dimensions of the heritage city, the creative city and the smart city through an enabling context is the core proposal of the "cultural city". In alignment with the New European Agenda for Culture, we deepen the analysis in the specific spillovers on wellbeing and quality of life, citizen engagement and urban renewal as the backbone of a set of external effects of cultural experiences. In the final part of this article, we test the plausibility of this speculative proposal through some empirical evidence.
We develop an OLS model with proxy indicators, which could be considered transitional indicators, for the three different potential strategies (heritage, smart, creative). The findings support the assertion that it is conceivable that the supply of cultural experiences through a variety of tactics (heritage city, smart city and creative city) can account in part for the growth of European cities in the years after the 2008 financial crisis. These strategies have contributed to the good performance of the urban device in a way that is positive, not negligible (accounting for around 50% of the variance in productivity) and statistically significant. The provision of a context that increases the cultural experiences for citizens has clearly improved the performance of European cities, and we develop some conceptual and empirical mechanisms to explain and measure the socioeconomic impacts of these processes. There is no doubt that the city, as a device for human interaction and a mechanism for generating wealth, has been remarkably successful. The key to the city's success and persistence lies in the fact that it satisfies human needs with high efficiency levels, and when it does not, mechanisms appear to generate the necessary changes to transform itself. As Jane Jacobs stated more than 50 years ago, "Cities contain the seeds of their own regeneration" (Jacobs 1986). The very economies of agglomeration make inefficiencies and insufficiencies easy for citizens to express and for policymakers or market agents to visualise and receive. The city appears as a "formula" with undoubted success in a long historical perspective (Sorribes 2012), and with a favourable outlook, as shown by the historical evolution of the urbanisation rate and the estimation that in the mid-twenty-first century nearly 70% of the world's population will live in cities.
As the Shanghai Manual (United Nations, 2012) makes clear, people gravitate towards cities not only for economic opportunities, but also in search of better education and an uninterrupted flow of ideas, information, and culture. Marxist literature claims that the city rescues people from the "idiocy of rural life" (Manifesto of the Communist Party), and it was also in the industrial cities that the dream of a new social order was forged and reinforced. The city has always been the neural centre of freedom, culture, and political and institutional innovation in its broadest sense. The exchange of ideas and experiences, the cultural "mix" that is consubstantial to cities, has meant an enormous positive externality for society as a whole, to the point of Jane Jacobs' affirmation "The city, the wealth of nations", which perfectly summarises this powerful idea. In this article, we are going to test whether "cultural experiences" have effects on individuals and communities, influencing their perceptions of the city itself and, more importantly, their values, their feelings about their own identity and belonging, their behaviours, and their relationships with others, as well as the effect of these changes on urban performance. Our initial intuition is that urban cultural engineering, defined as the technique for the production of cultural experiences in the urban context, which manipulates symbolic (arts and culture, senses and meanings), material (cultural infrastructures) and technological contexts, could become a very powerful tool for social transformation, influencing the general model of urban performance, including its economic framework.

Cultural experience at the centre of the analysis

However, if we are interested in delving into the impacts of culture beyond its economic classification, we will have to look at the impact generation process, considering the concept in all its complexity.
What we seek with this approach is an operational definition of the basic process that activates and generates these processes of transformation and change, in order to try to articulate plausible sequences of causality of these impacts. In his early work, Matarasso (1997) spoke of the social impacts of participation in the arts in a broad sense. He did not use the term 'participation' as a euphemism for community arts, but interpreted it broadly and did not provide a precise definition. According to UNESCO (2012), cultural participation can be defined as "participation in any activity that, for individuals, represents a way of increasing their own cultural and informational capacity and capital, which helps define their identity, and/or allows for personal expression". Such activities may take many forms—both active, such as creating art or even volunteering for a cultural organisation, and passive, such as watching a movie—and may occur through a variety of formal or informal channels, including the internet. The notion of the prosumer—a term coined by Alvin Toffler in the 1980s to describe the increasing integration of consumers into the process of cultural production (Hesmondhalgh 2010)—and the lesser univocity between generator and consumer of culture recommend that the analysis should not focus on the concept of cultural participation, but on that of cultural experience. Access to cultural content loses its traditional passive, appreciative character and becomes a form of creative appropriation by the user (Valtysson 2010). A "cultural experience" can be defined as the generation, emission or reception of information flows with symbolic content, usually expressed through artistic grammars, that have the explicit and more or less deliberate intention of having some kind of resonance on our cognitive, emotional or aesthetic dimension or our perception of our location in a social body.
A cultural experience is a concrete act of cognitive, sensory and emotional appropriation of the world around us, the intensity and quality of which depend on material, psychological and social issues, as well as on our own cognitive and cultural capital. In this context, we apply the concept of resonance from the German philosopher and sociologist H. Rosa, who states (Bialakowsky 2018) that resonance is the opposite of alienation and has four crucial characteristics: (a) one is in resonance with something when one feels affected by it; (b) the subject reacts to it—the psychological concept of self-efficacy; (c) the experience has a transformative capacity on individuals, of greater or lesser intensity and of greater or lesser duration in temporal terms; and (d) resonance is not controllable and cannot be approached in a purely instrumental way; it is elusive, meaning that you cannot anticipate that it will actually happen even if you fully control the context, and its obtainability cannot be taken for granted (Susen 2020). According to Rosa, resonance can be defined as "a form of world-relation, in which subject and world meet and transform each other". The emergence of resonance is possible only 'through af ← fection and e → motion [sic], intrinsic interest and expectation of self-efficacy', entailing the construction of a meaningful, dynamic, and transformative rapport between actors and their environment (Susen 2020). It is important to note here that the transformative effects of resonance are beyond the control of the subject: when something really touches us, we can never know or predict in advance what we will become as a result of it.

The cultural experience in an integral vision of the city

The non-disposability and moment-like character of resonance does not mean that it is completely random and contingent.
There are structured ways to generate resonances through artistic action and participation, and ultimately cultural projects and policies are production functions of cultural experiences. However, these resonances have a very unstable chemistry, as their transformative effects do not follow a stable causal logic over time and space. Urban cultural policies constitute a more or less coherent approach based on instrumental rationality, and the city, as a human engine, is a contextual space where the probabilities of concrete projects becoming real cultural experiences, with their associated resonances, are multiplied. A recent UNESCO document underlines that the spatial, economic and social benefits of culture on the city are achieved through six sets of transition variables (UNESCO and World Bank 2021) that act as enabling tools for impacts. The six categories of creative city enablers that have proven critical to translating culture into spatial, economic and social benefits are: urban infrastructure and liveability; contexts that improve skills and produce innovation; social and financial networks and technical support; inclusive institutions and friendly regulations; some sense of uniqueness conveyed through attractive storytelling; and a digital environment. Further analysis of these transition indicators could provide us with better social control and improve the effectiveness and efficiency of cultural policies and projects. With the same intention, we have taken another explanatory route.
The city can benefit from the processes of ignition of cultural experiences through three basic mechanisms (Sorribes 2012): (1) the city as a repository of heritage elements accumulated and superimposed throughout history, or as a way to generate and broadcast stories that generate resonance; (2) the city as an engine for the exchange of ideas, which multiplies the possibilities of interactions that require cultural experiences (Pareja-Eastaway 2020); and (3) the city as the vital scenario where most people carry out their personal, family and professional activities and are exposed to cultural experiences (Mellander et al. 2012). These dimensions of the city are intertwined and articulate its social, political, symbolic, and economic fabric, as shown in the conceptual map below. According to our hypothesis, the activation of cultural experiences through any of these mechanisms will generate social, cultural and economic value, consequently improving, to a greater or lesser extent, the efficiency of the city as a "social artefact". In the final part of this article, we make an instrumental simplification and assume that the greater efficiency of the "urban engine" can be approximated through an indicator such as the variation in productivity. The view we defend is that since the 2008 crisis, European cities have used some of these strategies, with varying degrees of instrumental rationality and intuition, to improve urban efficiency (Fig. 1).

Fig. 1 Conceptual map: an integral view of the relations between culture and the city. Source: own elaboration

The three faces of the cultural city

The "cultural city" as a space and support for the cultural experiences of individuals becomes a relevant variable to explain the success of cities. There is extensive literature from different disciplinary fields on the location of culture and creativity in urban complexes.
From the late 1980s until the onset of the crisis, different theories successfully pointed, specifically and in a renewed way, to the cultural dimension of cities (Zukin 1995) as an opportunity element to be addressed by local development strategies (Evans 2001; Florida 2002; Landry and Bianchini 1995). Culture—understood as the set of cultural experiences that are activated in a given territory over a period of time—is interlinked and generates value in different ways, which are described in the following paragraphs.

The city as a repository of resources: The Heritage City

The first perspective from which the concept of a city can be approached is as a geographical space where a large number of resources are concentrated (Scott 2001). This large storehouse of resources can be used to fulfil various functions. There is no doubt that one of the most important factors in the success of some cities is the dense accumulation of resources: a stock of accumulated wealth and historical capital gains deposited over time and materialised in urban assets. From the perspective of the accumulation of cultural resources, heritage cities are urban spaces that have managed to identify and recognise the value of material and symbolic resources from the cultural field and that, through a regulatory and normative process, maintain certain levels of protection and conservation. It is a type of urban organisation in which economy and culture have fused together, in such a way that economic outputs are subject to ever-increasing injections of aesthetic and semiotic meaning, while the culture that is consumed is increasingly produced by profit-seeking firms in the commodity form (Scott 2014). It should be noted that urban assets, although they refer especially to the material dimension of the city, are not limited to the physical artefacts that make up cities, such as the grid of streets, buildings, gardens, monuments or public and private facilities.
To these should be added the value of the iconic elements and the stories or meanings associated with the material elements. In this sense, the city can be seen as a container for the meanings attached to its material contents, where the capacity to generate value is often much more related to the discourses than to the physical elements. In post-industrial cities (Scott 2014), value is increasingly generated through discourses, narratives, and information flows, rather than through the production of material goods. Therefore, cultural experiences happen when the physical elements of the city interact with its symbolic heritage elements and their meanings. The narratives of an urban space constitute more than a brand, as they contain a set of physical and socio-psychological attributes and beliefs that can be considered as inputs to social, cultural, and economic processes. These resources have the same or even greater capacity than material resources to generate collective value and shape the sense of place. Moreover, such discourses are a constituent part of the cultural and cognitive capital of the people who inhabit, use or visit the sites and consequently condition their behaviours and ways of relating to each other and to the space (Table 1).

Table 1 The city as a repository of resources. Heritage city

The heritage city enhances the ability to attract or develop new and higher-order functions, increase internal efficiency (Camagni et al. 2015) and achieve economies of scale through the resignification of its material attributes. For this resignification (through new narratives, the construction of new heritage, the valorisation of existing heritage, or the creation and/or revitalisation of icons) to improve the average productivity of the city, the rate of return on the invested capital must exceed the average urban productivity.
This is possible through the reuse of heritage resources for higher value-added activities, including but not limited to tourism.

The city as an interface for exchange and communication: The Smart City

The second dimension in which we place the processes of value generation is the concept of the city as an interface that enables the concentration of resources and interaction. The concentration of resources in a limited geographical space is the necessary condition for the activation of certain processes, without the concurrence of which the success of an urban space would not be possible (Concilio et al. 2019). While the concentration of producers, workforce and consumers in a physical or virtual space is necessary for the articulation of a market, it also poses logistical, economic and social challenges related to organisation, regulation and service provision, without which it would collapse (Florida et al. 2017). In other words, the concentration of material resources forces the search for technological, organisational, social, economic or spatial solutions to overcome this propensity to collapse. Productivity improvements in this dimension are achieved through the concept of the Smart City. The Smart City can be understood as a set of innovation processes that improve urban life in terms of living conditions, economy, mobility and governance, primarily—although not necessarily—through information and communication technologies (ICT) (Anthopoulos and Reddick 2016). The Smart City response has been the use of technological innovations and data analytics applied to the city as a connective interface, driving away congestion costs and improving the efficiency of processes and the effectiveness of urban service delivery.
In the realm of interaction spaces, the city articulates both spaces of conflict (competition), where competing interests and alternative use of resources and patterns of appropriation of public and private spaces are settled, and spaces of communication (collaboration). Density is both an agitator of conflict and a fertiliser of communication. "We find ourselves immersed in an epoch of problematic transition, in which culture and the city are alternatively defined as spaces of conflict or spaces of hope" (Segovia and Hervé 2022). The first of these two approaches defines the political arena of the city and shapes certain power relations that are channelled into a concrete institutional architecture and shape a concrete symbolic representation (Concilio et al. 2019). The material shaping of the city itself is a more or less subtle representation of power relations and hierarchies (political, religious, economic and cultural), with its town halls, churches and banks in the centres (Monnet and Jérôme 2011). One of the key elements in this context is that the city enables the concentration of human capital, which as we know from the Romer-Lucas models (Romer 1986) is the central element of economic growth theories. To explain why cities attract human capital, three theories can be identified (Storper and Scott 2009): (a) Florida's "creative class" theory, (b) research by Glaeser and others that identifies a broad set of amenities—educational or cultural—and weather conditions, and (c) Clark's notion of the city as an entertainment machine that offers parks, museums, art galleries, orchestras and landmark buildings. However, dynamic cities are also great attractors of people because of their ability to offer well-paid jobs, as they have higher levels of productivity derived from agglomeration economies. 
Therefore, the smart city locates cultural experience in the dynamics of agglomeration and the mechanics of density, in the exchange of ideas, in people-to-people communication and interaction, and in the generation of opportunities for connections that would otherwise have been improbable. The smart city as a facilitator of the generation of cultural experiences is based on its ability to take advantage of the concentration of niche demands, cross-fertilisation and serendipity (Table 2).

Table 2 City as connective interface

Agglomeration economies are the result of both economies of scale and the network economies that develop when firms and people are located close to each other. They are therefore related to spatial proximity and, as Glaeser (2011) states, can be formulated as a reduction of transport costs in a broad sense, i.e. transport costs related to goods, but also to people and ideas. Today, in contrast to the industrial cities of the nineteenth century, cities have a productivity advantage for reasons related to the circulation of ideas and people rather than to costs. In this sense, digitalisation, urban mobility and commuting speed become relevant elements to approach the efficiency of physical and virtual interaction processes.

The city as a stage for the life trajectories of individuals and communities: The Creative City

The third dimension to which we wish to refer is the concept of the city as the setting for the vital, personal, professional and social trajectories of the people who inhabit it. With urbanisation levels expected to reach 70% by 2050 (United Nations 2019), the city is becoming the setting where most of the planet's inhabitants' life events take place and, consequently, the main determinant of our individual levels of wellbeing, utility and/or happiness.
Although economic factors have a strong impact on subjective wellbeing in low-income territories, there are evolving cultural changes in territories with higher levels of development, with people attaching greater importance to self-expression and freedom of choice (Inglehart and Welzel 2005). Other authors suggest that pleasure, engagement and meaning are the three main components of life satisfaction (Peterson et al. 2005). These factors are closely linked to the satisfaction of individuals' cultural rights. The ability of cities to satisfy the symbolic needs of their residents defines their success, a notion that is thereby rediscovering its original meaning. As a result, this capacity is becoming increasingly dependent on, and connected to, the cultural ecosystem. The city as a space for creation and experimentation generates value by activating sufficient stimuli to enable people's integral development through the exercise of creativity, the pursuit of pleasure and the enjoyment of rich and multiple experiences. The key lies not so much in the functionality and efficiency of the economic device as in the potential of the social fabric and the space for the development of personal and social relations—in short, in the liveability of the urban environment (McArthur and Robin 2019). The richness and density of this network is conditioned by its capacity to stimulate a sense of identity, commitment to the community and belonging, and to promote participation and trust in others (Table 3).

Table 3 The city as a living and working environment

If we want to maximise the utility of our life trajectories, we are no longer guided by purely instrumental rationality, but also by the expressive values of exchange and mutual benefit. This is what creates the tension between the physical or constructed city (la ville) and the lived city (la cité) (Sennett 2018).
The ethics and values linked to the increasing centrality of the human condition in the urban setting extend spatially, socially and economically and enable the emergence of new activities, some of which have economic value but also drive technological innovation and community development. Sustainable development, creativity, transparency, participation, accountability, technology, and engagement are the pillars of new social activities and new productive sectors (Rausell-Köster et al. 2012). Citizens who are aware, well informed and in control of their freedoms wish to develop their professional and life trajectories through activities such as social innovation, creative activities, proximity economy, collaborative economy, circular economy, care activities, green economy and the economy of the common good because they allow them to find a sense of commitment, pleasure and meaning in their daily actions. The determinants of behaviour in the new emerging activities respond to a new hierarchy of values associated with cultural practices: pleasure, the desire for innovation, relational (versus transactional) consumption and free exchange, critical thinking, personal development, solidarity, cooperation, networking, the value of diversity and beauty, the sense of justice, participation and the importance of the recreational and vital dimension beyond purely economic benefit (Boix-Domènech and Rausell-Köster 2018). The urban concept that captures this vision is the Creative City as formulated by Landry and Bianchini (1995), who tried to identify what could improve people's lived experience of cities. Today, we know that the concentration of cultural and creative activities in a given territory changes the logic and functioning of its economic dynamics in a deeper and more complex way than we had previously assumed and affects the potential range of personal experiences available to citizens in a determining way. 
We also know that the centrality of creativity and innovation is changing the role of economic organisations and human resource management models, and that a liquid labour market is taking shape around this fact, combining liberating trends for human work that enable enriching personal development experiences with realities that tend towards extreme precariousness and self-exploitation. The Creative City refers to the attractiveness and competitiveness of the urban environment based on cognitive and symbolic elements, whose main mechanism for generating added value is turning creativity into market, aesthetic or social innovation. Scott introduced the notion of "cognitive-cultural capitalism" (Scott 2014) to argue that we are entering a period marked by a distinctive third wave of urbanisation based on cognitive skills and cultural assets. The economic value of urban activities is subject to increasing injections of aesthetic and semiotic meaning, while the culture that is consumed is increasingly produced by for-profit companies in the form of commodities (Scott 2014). Professional opportunities in the creative sector become a good indicator to identify the Creative City. Combining Jacobs' ideas about cities with Schumpeter's ideas about innovation, it is argued that innovation and risk appetite not only take place in cities, but require cities in order to occur (Florida et al. 2017). However, one of the potential pitfalls is that innovation and equity are not two spontaneously cooperating issues (Pileri 2015). The risks of the Creative City are identified in the possible slide towards the society of the spectacle, the trivialisation of the symbolic dimension or the growing pressures associated with the commodification of all cultural experiences, including those that fulfil an important social function.
Recent critiques also refer to phenomena of social polarisation that are seen to be caused by the occupation of certain urban spaces by the creative class, such as social segmentation in cities, gentrification, segregation and the exclusion of middle-class families from urban centres—the new urban crisis (Florida 2017). But with all its possible distortions and problems, the creative city is the desired setting for a population that is increasingly educated and demanding in all its expressive, social and professional experiences.

The conceptual model of the Cultural City

Cultural experience is associated with several types of positive effects, ranging from achieving innovation and lifelong learning objectives to fostering social cohesion and health and wellbeing (Sacco et al. 2018). The New European Agenda for Culture, whose strategic objective is to harness "the power of culture and cultural diversity for social cohesion and wellbeing", focuses on a structural model based on the dimensions of health and wellbeing, urban and territorial renovation and people's engagement and participation (European Commission 2018), which are also addressed by the MESOC project (2021). MESOC adapts and further develops a method of "transition based" impact assessment derived from a previous UNESCO Chair publication, building a structural model of the Societal Dimension of Culture, as defined by one of the strategic objectives of the European Agenda. Through cultural experience in facilitative contexts, individuals learn and reconfigure the codes that underlie cultural meaning. Cultural experiences bring about changes in individuals (Soren 2009), impacting on knowledge, skills, attitudes, values, emotions, beliefs, relationships, and states of mind.
From the perspective of cultural experiences, participation in cultural experiences within a community generates impacts that ensure wellbeing and progress in the era of the post-industrial economy, in areas that go beyond traditional spillovers (Sacco et al. 2013). Each of these paradigms shows, through a certain dynamic perspective, the relationship between culture and the city. In each of the urban models described above, production processes combine symbolic, physical, financial, social, human, and cultural capital in different ways, and urban strategies are implemented to provide cultural experiences that ignite transformative effects through several spillovers. This means that culture, in its different dimensions, regains the role of raw material and becomes the starting point for the activation of development processes and the improvement of urban performance. The integration of the dimensions of the Heritage City, the Creative City, and the Smart City in an enabling context is the core proposal of the Cultural City (Fig. 2).

Fig. 2 The "Cultural City" model

The New European Agenda for Culture (European Commission 2018), whose strategic objective is to harness "the power of culture and cultural diversity for social cohesion and wellbeing", focuses on an impact generation model that directly connects an individual or collective experience with arts and culture to three main societal impact domains: health and wellbeing, urban renovation and social cohesion.

Quality of life, health and wellbeing

It is acknowledged that culture influences people's behaviour, their self-esteem and, ultimately, their health and wellbeing. Aspects related to health and wellbeing can be directly connected to the concept of the city as the setting in which our life trajectories develop, with the cultural and creative dimension being the ingredient that facilitates or hinders a "good life".
Our perceptions of health and wellbeing are directly influenced by our lifestyle, how stimulating and creative our work is, the quality and density of our social and family relationships, the intensity of our cultural practices and the meaning of our actions. All these aspects are related to culture and creativity. In this sense, perceptions around health and wellbeing become indicators of whether or not that "good life" materialises. The World Health Organization's Health Evidence Network synthesis report 67 synthesises the findings of over 3500 studies on the role of the arts in the prevention of illness, the promotion of health and the management and treatment of illness across people's lifespan (Fancourt and Finn 2019). The report highlights how the components of the cultural experience, i.e. aesthetic engagement, involvement of the imagination, sensory activation, evocation of emotion, cognitive stimulation, social interaction, and physical activity, can trigger psychological, physiological, social, and behavioural responses that are themselves causally linked with health and wellbeing outcomes. Certain studies note that cultural participation is the second most important determinant of a person's psychological wellbeing, preceded only by the absence of disease, with a significantly stronger impact than variables such as income, place of residence, age, gender or occupation (Grossi et al. 2012). Moreover, the studies reveal that the impact of culture on subjective wellbeing is far more relevant in contexts of high cultural supply and cultural engagement than in circumstances of low endowment and low participation (Tavano Blessi et al. 2016). As a result, two factors appear to be critical in terms of culture as an urban planning tool for individual and collective wellbeing: cultural vibrancy, in terms of policy initiatives and the use of facilities and activities, and an individual and social propensity to experience cultural activities and goods.
In 2010, cultural participation and cultural heritage density and policies became part of the Measures of Equitable and Sustainable Wellbeing Index of the Italian National Institute of Statistics, which attempts to go beyond GDP (Cicerchia 2018). Over the last few decades, many governments disillusioned with the traditional use of GDP or income as a measure of their citizens' welfare have started focusing on wellbeing. Governments from all over the world have been introducing new indices of progress in which the concept of culture appears as a wellbeing determinant to guide their policymaking (Hall et al. 2010). Culture-led development strategies can therefore be defined as sets of actions operating on a broad variety of urban cultural assets (from cultural heritage to the visual arts, from museums to theatres, etc.), whose final objective is the maximisation of residents' wellbeing (Perucca 2019).

Urban and territorial renovation

The interplay between urban and territorial renovation, culture and cultural initiatives, and urban governance modes (Degen and García 2012) is widely recognised as a developmental key for cities to offer a high quality of life at both the spatial and social levels (Evans 2005). It all started in Europe in the mid-1980s, when post-industrial cities sought to revive former industrial, contaminated and waterfront sites and their city centres as they aimed to establish themselves in the new arena of the global market, and cities started looking at cultural planning and programming as strategies to enable economic development and promote spatial and social regeneration. Urban renewal, although not exclusively, is about the perception of the city as a repository of physical and symbolic elements. The impact of culture lies in the capacity to regenerate and re-signify spaces with culture and creativity, either by developing new cultural functions in existing spaces or by improving the functionalities and uses of culturally significant spaces.
As stated in the 2018 Davos Declaration on high-quality Baukultur for Europe, "we urgently need a new, adaptive approach to shaping our built environment; one that is rooted in culture, actively builds social cohesion, ensures environmental sustainability, and contributes to the health and wellbeing of all" (European Ministers of Culture 2018). The Urban Agenda Partnership for Culture and Cultural Heritage, created in November 2018 under the Urban Agenda of the EU, has the objective of defining actions to improve the regulation, financial capacity and data/knowledge exchange of EU urban authorities that share the common goal of improving the management of their historical built environment and preserving the quality of urban landscapes and cultural heritage. An Orientation Paper (Partnership on Cultural and Cultural Heritage 2019) was published in November 2019. The revised Leipzig Charter, published more than 20 years after the signature of the original one (which promoted the adoption of integrated urban development policies and set out the key principles behind them for the first time in a single EU document), reaffirms the notion that culture is at the core of any sustainable urban development, including the preservation and development of the built and non-built cultural heritage. Cities have used "built culture" for urban regeneration through reactive models focused on providing a response to the decline of the industrial city or on the possibility of making better use of the opportunities available, trying to attract global tourism, investment or fluxes of creative citizens in the framework of the redefinition of their position in the global hierarchy or in circumstantial and adaptive planning (Boix et al. 2017).

People's engagement and participation

It goes without saying that a city with high levels of citizen participation and engagement in both political and cultural life is a city with a good performance.
The absence of engagement and participation might be interpreted as a lack of freedom of choice, which jeopardises the pursuit of positive freedom (Sen 1999). It is now recognised that cultural experience has an impact on empowerment, providing people with the social tools they need to comprehend the behaviours and motivations of others, as well as the confidence they need to act socially. There is ample evidence of the impact of cultural experiences on citizen engagement and participation and, more generally, on social cohesion. Studies focused on the impacts of participation in the field of culture have been carried out by renowned authors like Matarasso (1997), Stanley (2006) and Brown and Novak-Leonard (2013), among others. Indeed, there have even been examples where culture has been used as a political tool for conflict resolution and the activation of pro-social behaviour (Cala Buendía 2010). The indivisibility between life and work, the way in which new technologies are altering our ways of communicating and relating, or the tensions derived from the local and global demands that converge in the city are some of the bridges between people's engagement and the city as a stage (Segovia et al. 2015). However, when discussing cultural participation and urban policies, it is important to address not only impacts but also the strategies for accessing culture. Ever since the introduction of contemporary cultural policies, participation in culture has been a primary goal (Tomka 2013). For example, the theme of participatory governance applied to cultural heritage is a topic of great interest in the European context (Sani 2015). The issue of access to culture and social inclusion has been analysed by scholars like Laaksonen (2005), who stressed the importance of adopting a cultural rights approach. Brown et al. (2011) studied the modalities of participation, identifying five main typologies according to the degree of involvement.
In the last few years, we have witnessed a "participative turn" that is changing the panorama and dynamics of cultural policy (Bonet and Négrier 2018). Therefore, the implications of people's participation for governmental cultural policies are becoming relevant in the current debate (Jancovich and Bianchini 2013). With the new societal trend of "prosumerism", an increasing number of people feel that they have the right to have their voice heard, and they exercise that right to the best of their ability under their specific circumstances. This paradigm shift from passive to active is affecting different aspects of society and is reflected in each of the three urban models presented in this paper.

Some evidence of the plausibility of the Cultural City model

In the following paragraphs we will try to show, without further empirical pretensions, that the model of the cultural city, as a combined proposal for the functionality of culture in the creative, heritage and smart city, can be a useful explanatory model. The logic is as follows: if it can be empirically shown that the performance of a set of cities can be explained by combinations of different creative, heritage and smart city strategies, then we have some clues that the concept of the "cultural city" is comprehensive and complete enough to explain the dynamics of cities from the perspective of culture. In order to lend plausibility to the analytical proposal developed in the previous paragraphs, we make a gross simplification. Our hypothesis is that between 2008 and 2018, European cities improved their performance by using, either deliberately or intuitively, some combination of strategies that use culture, and more specifically "cultural experiences", as a central element of the value generation processes in one of the models described; that is, through the Heritage City, the Smart City or the Creative City.
The second step in this simplification is approaching "improved urban performance" through the proxy of a variation in labour productivity. The third step is approximating the use of each of these strategies through very simple synthetic indicators. The Heritage City strategy is approximated with the indicator of the number of museum visitors. The Smart City strategies are approximated with the variables of the number of ICT graduates (digitalisation) and the agility in commuting (interaction). Finally, the Creative City strategy is proxied by new jobs in the creative sectors (career opportunities) and risk-proneness (a proxy for innovation). However, the purpose is not to elaborate a complete econometric model capable of fully explaining the economic growth of cities, but to determine whether these elements have a significant impact and, if so, whether it is a positive one. This is a first empirical approach to confirm or disprove our hypothesis. We are aware that the academic literature already contains numerous, highly accurate studies on productivity growth that incorporate variables such as the capital stock, the rate of capital depreciation, the rate of growth of technology, etc. These variables, however, are hardly obtainable in a reliable way at the local level. While all these additional variables, among others, would be necessary for a rigorous, sophisticated model attempting to explain productivity growth accurately and robustly, this is not the purpose of this article. We would like to stress that the added value of this work does not lie in the robustness of its empirical evidence but in the consistency and plausibility of its theoretical proposal. The core idea of the model is to test whether the three components of the cities we have conceptualised (Heritage City, Creative City, and Smart City) contribute to their economic growth and development and, if so, to what extent.
As a proxy for the concept of economic growth, we will use the cumulative change in productivity between 2008 and 2018. An ordinary least squares (OLS) regression is applied, with the following equation:

$${\Delta Productivity}_{i}={\beta }_{0}+{\beta }_{1}\cdot {heritage}_{i}+{\beta }_{2}\cdot {creative}_{i}+{\beta }_{3}\cdot {smart}_{i}+{\varepsilon }_{i}$$

where, for a given city i, the estimated increase in productivity depends on the sum of its indicators in the areas of heritage, creativity and smartness as explanatory variables, multiplied by their coefficients (\(\beta\)), plus the intercept (\({\beta }_{0}\)) and a random error (\({\varepsilon }_{i}\)) that captures variables not observed in the model. The indicators defining the heritage, creative, and smart components are explained in the following section. Obtaining indicators and comparable data at the local level is always a major challenge, as they are not always accessible and sometimes do not even exist. To address this problem, the database has been built by combining different sources. However, a limitation of this method is that the city coverage of the different indicator panels does not always coincide. The Cultural and Creative Cities Monitor (Montalto et al. 2019), the European Digital City Index (Bannerjee et al. 2016) and the OECD all provide different indicators. The first covers a sample of 190 cities; the second, 60; and the third, another 60. The intersection of the samples from these three sources results in 50 cities from 23 European countries, which make up the sample for this analysis. Regarding the dependent variable, productivity at the local level is taken from the OECD, which defines it as GDP per worker in USD at constant prices and constant purchasing power parity (PPP). Based on these data, the indicator used in the model corresponds to the cumulative change, in percentage points, between 2008 and 2018.
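The estimation step in Eq. (1) can be sketched numerically. The data below are synthetic, generated purely for illustration (they are not the authors' 50-city dataset, and the "true" coefficients are invented); only the structure of the regression mirrors the article.

```python
# Illustrative sketch of Eq. (1): OLS on synthetic data for 50 "cities".
# All scores and coefficients below are invented for demonstration.
import numpy as np

rng = np.random.default_rng(42)
n = 50  # the article's sample size

# Hypothetical normalised component scores in [0, 1]
heritage = rng.uniform(0, 1, n)
creative = rng.uniform(0, 1, n)
smart = rng.uniform(0, 1, n)

# Simulated cumulative productivity change (percentage points)
d_productivity = (2.0 + 3.0 * heritage + 5.0 * creative
                  + 9.0 * smart + rng.normal(0, 0.5, n))

# Design matrix with an intercept column; solve by least squares
X = np.column_stack([np.ones(n), heritage, creative, smart])
beta, *_ = np.linalg.lstsq(X, d_productivity, rcond=None)
print(beta)  # [beta0, beta1 (heritage), beta2 (creative), beta3 (smart)]
```

With real data, the fitted coefficients and their significance tests would come from a statistics package; the point here is only that each city contributes one row of component scores and one observed productivity change.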
There are three explanatory variables, which we have called Heritage, Creative and Smart.

Heritage is composed of a single indicator: museum visits per 1,000 inhabitants, obtained from the Cultural and Creative Cities Monitor. The original source is Eurostat (Urban Audit) and the data refer to the period 2011–2017.

Creative is composed of two indicators:
- New jobs in creative sectors per 100,000 inhabitants. Derived from the Cultural and Creative Cities Monitor, and originally collected from Eurostat (Regional Statistics). It includes three equally weighted sub-indicators: new jobs in arts, culture and entertainment enterprises; in media & communication; and in other creative sectors. The data correspond to the period 2010–2016 and to the NUTS 3 region in which each city is inserted.
- Willingness to take on risk, defined as the percentage of people who disagreed with the statement "One should not start a business if there is a risk it might fail". It is taken from the European Digital City Index, which in turn takes this indicator from the 2013 Eurobarometer. The data correspond to the NUTS 2 region in which each city is inserted.

Smart is also composed of two indicators:
- Commute. This variable is also derived from the European Digital City Index. It is a score calculated from Numbeo data that considers the average distance and travel time from home to work. Higher values represent better scores, i.e., shorter times and distances, showing a better performance of the city as an interface device.
- Annual graduates in ICT per 100,000 inhabitants. Derived from the Cultural and Creative Cities Monitor, and originally collected from the ETER project. The data correspond to the period 2013–2015 for tertiary education.

Given the difficulties of obtaining standardised data at the local level, these indicators have been chosen because they reasonably capture some of the defining features of the conceptualised city typologies.
We select visits to museums not only because museums are the most representative repositories of the heritage stock of cities in its many different forms, but also because the number of visitors is a good indicator of the enhancement of this heritage and the involvement of citizens in it. Moreover, both the emergence of new creative jobs (which measures the capacity and opportunities to exploit creativity through the local productive structure) and the willingness and open-mindedness of the population to take risks and undertake uncertain projects, ideas and initiatives (as a prerequisite for developing creativity and innovation) are variables that allow us to quantify the capacity of cities to activate creative processes. Finally, a highly digitalised environment with widespread access to and use of technological tools (measured through ICT graduates that provide the required human capital for its development) and efficient transport infrastructures that minimise the time and distance between places and allow cities to become accessible spaces of interpersonal connection are two defining features of Smart Cities, as explained above. All the raw indicators used to construct the explanatory variables, both those from the Cultural and Creative Cities Monitor and from the European Digital City Index, are first standardised according to population. Subsequently, they have been subjected to a winsorization process in case they contained outliers. That is, if the distribution of a variable has a kurtosis greater than 3.5 and an absolute skewness greater than 2, upper-end outliers are substituted with the next highest value and lower-end outliers with the next lowest value. This process is repeated iteratively until a distribution that meets the kurtosis and skewness requirements is obtained. This process of winsorization is followed by a min–max normalisation process, so that all indicators fall within an interval of 0 to 1. 
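The iterative winsorization described above can be sketched as follows. This is a minimal reading of the procedure: we assume the 3.5 threshold refers to Pearson kurtosis (for which the normal distribution scores 3), and the function name is ours.

```python
# Sketch of the iterative winsorization described in the text:
# while the distribution has kurtosis > 3.5 and |skewness| > 2,
# replace the top value(s) with the next highest value and the
# bottom value(s) with the next lowest, then re-check.
import numpy as np

def pearson_kurtosis(x):
    m, s = x.mean(), x.std()
    return ((x - m) ** 4).mean() / s ** 4  # normal distribution -> 3

def skewness(x):
    m, s = x.mean(), x.std()
    return ((x - m) ** 3).mean() / s ** 3

def winsorize_iterative(x, kurt_max=3.5, skew_max=2.0):
    x = np.asarray(x, dtype=float).copy()
    while pearson_kurtosis(x) > kurt_max and abs(skewness(x)) > skew_max:
        vals = np.unique(x)
        if vals.size < 3:  # nothing left to clip against
            break
        x[x == vals[-1]] = vals[-2]  # upper-end outliers -> next highest
        x[x == vals[0]] = vals[1]    # lower-end outliers -> next lowest
    return x
```

For example, a sample of the values 1 to 20 plus a single extreme value of 1000 is heavily skewed and leptokurtic; one pass clips the outlier to the next highest observed value and the distribution then satisfies both thresholds.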
This, in addition to allowing a direct comparison of the coefficients of the three components considered in the regression (heritage, creative and smart), is necessary to aggregate variables with different magnitudes within the same score. The same applies to the new creative jobs variable, which, as mentioned above, in turn considers three different indicators: new jobs in arts, culture and entertainment enterprises; in media & communication; and in other creative sectors. In order to weight these three areas equally, so that their different dimensions do not introduce biases, the min–max score is obtained first and then averaged. The scores from the Cultural and Creative Cities Monitor and the European Digital City Index have not been used directly but have been recalculated for the sample of cities used in our analysis. The equation used for the min–max normalisation is as follows:

$${z}_{i}=\frac{{x}_{i}-\mathrm{min}(x)}{\mathrm{max}\left(x\right)-\mathrm{min}(x)}$$

where \({z}_{i}\) is the normalised score for city i, \({x}_{i}\) is the original value for city i, and min(x) and max(x) are the minimum and the maximum values in the sample for variable x.

Table 4 summarises the descriptive statistics of the different variables incorporated in the model, with the original winsorized data. The lower part of the table also shows the scores that were finally used in the model after the normalisation process within the interval from 0 to 1.

Table 4 Descriptive statistics

The results of the OLS regression applied following Eq. (1) are shown in Table 5. All three components, as well as the intercept, are found to be statistically significant. All three also have coefficients with a positive sign, i.e., they are positively related to productivity growth. The magnitudes, however, differ. The largest effect corresponds to the Smart component, followed by the Creative and the Heritage components.
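The normalisation and equal-weighting steps can be sketched as below; the function names are ours, introduced only for illustration.

```python
# Min-max normalisation and equal weighting of sub-indicators.
import numpy as np

def min_max(x):
    """z_i = (x_i - min(x)) / (max(x) - min(x)); maps the sample onto [0, 1]."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def equal_weight_score(sub_indicators):
    """Normalise each sub-indicator first, then average, so that
    differing magnitudes introduce no bias (as with new creative jobs)."""
    return np.mean([min_max(s) for s in sub_indicators], axis=0)
```

Normalising before averaging is what guarantees the equal weighting: averaging raw sub-indicators would let the one with the largest numeric range dominate the composite score.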
Comparing the magnitudes of the coefficients, it can be noted that the Smart component score is responsible for 55% of the growth explained by the model, compared to 28% for the Creative component and 17% for the Heritage one. However, we should not underestimate the effects of heritage on issues that go beyond economic growth, such as sense of belonging, community building or psychological wellbeing.

Table 5 Results of the OLS regression

The OLS model has an adjusted R² of 0.5034. This means that the model explains only about half of the variability in the productivity growth of cities. The limited explanatory power of the model must therefore be borne in mind. However, this should not come as a surprise. It is worth remembering that the aim is not to determine all the factors that contribute to productivity growth in cities from a holistic perspective, but only to test the effect of some of them, i.e. those conceptualised in this article. Naturally, the model leaves out a multitude of explanatory factors, ranging from the national and regional economic context to the productive structure, the embedded capital or the provision of key infrastructures. Nevertheless, given the enormous complexity of the phenomenon under study and the multitude of factors that are not considered, the model is still a reasonable fit. Figure 3 shows a graphical representation of the model estimation, considering the three modelled parameters, compared to the actual values of local productivity growth.

Fig. 3 Matching model-estimated productivity growth with real growth

The validity of the model is tested by applying a Shapiro–Wilk normality test (Shapiro and Wilk 1965). The result (W = 0.9841, p-value = 0.7322) indicates that the residuals are normally distributed, supporting the adequacy of the model. Variance inflation factors (VIF) are also checked to verify that there are no multicollinearity problems among the independent variables (Dormann et al. 2013).
The values are, in fact, very low (heritage = 1.027, creative = 1.059, smart = 1.069), so the presence of multicollinearity can be ruled out. In sum, the model provides a first empirical confirmation of our initial hypothesis. If these three components act as drivers of growth in cities, this growth may be partly due to different combinations of the components in each case. Hence, different specialisation models can be defined depending on which of the components predominates. We would therefore be talking of Heritage Cities, Creative Cities or Smart Cities. From our database and model estimates, we can rank the cities according to which component explains the most productivity growth in each. This results in 5 Heritage Cities and 16 Creative Cities, while the remaining 29 stand out for their Smart City component (Figs. 4, 5 and 6). The most representative of the Heritage Cities in the sample would be Rome (55% of the Heritage component). Hamburg would be the most prototypical case of a Creative City (68% of this component) and Karlsruhe would be the most prominent Smart City (72%).

Fig. 4 Sample cities classified as 'Creative Cities' ranked by share of contribution to productivity growth of creative component

Fig. 5 Sample cities classified as 'Heritage Cities' ranked by share of contribution to productivity growth of heritage component

Fig. 6 Sample cities classified as 'Smart Cities' ranked by share of contribution to productivity growth of smart component

We have sufficient evidence that culture and creativity have played a relevant role in the recovery of European cities in the aftermath of the 2008 crisis. This effect has been articulated in different ways. We have been able to define a conceptual structure that includes the three main strategies (Heritage City, Smart City, and Creative City) and have also found an indicative way to measure their effects and test their plausibility.
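The two diagnostics described above (a Shapiro–Wilk test on the residuals and VIFs on the regressors) can be reproduced with standard tools. The data here are synthetic and the `vif` helper is ours, computing each factor directly from an auxiliary regression; only the procedure mirrors the article.

```python
# Model diagnostics sketch: Shapiro-Wilk normality test on residuals
# and variance inflation factors (VIF) for the regressors.
import numpy as np
from scipy.stats import shapiro

def vif(X):
    """VIF_j = 1 / (1 - R^2_j), where R^2_j comes from regressing
    column j of X on the remaining columns (plus an intercept)."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (50, 3))        # stand-ins for heritage/creative/smart scores
residuals = rng.normal(0, 1, 50)      # stand-in for the OLS residuals
w_stat, p_value = shapiro(residuals)  # p > 0.05: normality not rejected
print(vif(X), w_stat, p_value)
```

With independent regressors, as here, all VIFs stay close to 1, matching the order of magnitude the article reports; values above roughly 5–10 would instead signal problematic multicollinearity.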
The conceptualisation of the "Cultural City", which integrates all three approaches, opens up new avenues for research and comparison in other geographical spaces, at other scales and in other periods. From the point of view of policy recommendations, the first is that increasing the provision of cultural experiences is a strategy that improves the performance of cities. The second is that the social values generated through the Heritage City can be enhanced, and formulas beyond tourism should be sought. The third is that digitalisation and the improvement of urban switching speed, both of which depend largely on local authorities, have a considerable impact that is likely to become even greater as a result of the Covid-19 pandemic. It seems that we can accept that the analytical approach we set out in the first part of this article is plausible. It is plausible that part of the growth of European cities in the post-2008 crisis period can be explained by the provision of cultural experiences through different strategies (Heritage City, Smart City and Creative City). These strategies have contributed positively and in a statistically significant way to the good performance of the urban device, accounting for around 50% of the variance in productivity growth. The interpretative framework that we have called "the Cultural City" represents a more or less balanced combination of the Heritage City, the Smart City and the Creative City. We are of course aware that this is not a complete and definitive test that validates this new framework. Rather, we are approximating the plausibility of the proposal through partial and circumstantial evidence that so far fits. In a way, we are identifying some transitional indicators that make it possible to connect cities' cultural experiences and performance improvement processes.
It must be understood that although our dependent variable is the variation in the productivity of the labour factor in the cities, this variable approximates the good performance of the cities and includes impacts on dimensions beyond the strictly economic (for example, healthier citizens, or those with a better perception of their wellbeing, are also more productive agents in the economic sphere and more efficient in processes of participation or collective action and reflection). Thus, variables such as risk propensity, the number of visitors to museums or the number of ICT graduates anticipate the fluidity of the transmission processes between cultural experiences and their impacts on good urban performance. These transitional indicators are not limited to those that fit statistically into the model (or are available to us) but point to a wider family of variables that enable the transformation processes that take place through cultural experiences. These transitional indicators need to be investigated more intensively, as they define the transmission mechanisms between the policies and projects that produce cultural experiences and their final impacts on the economy, culture or society. Although the ways of generating value are very diverse, the main ones on which we should focus in future research are those that materialise through the improvement of citizens' health and wellbeing; those that are generated by a greater commitment to the community (enhancing understanding and capacity for action; creating and retaining identity; modifying values and preferences for collective choice; building social cohesion; contributing to community development and fostering civic participation); and those that materialise in processes of urban regeneration with social impacts, through "placemaking" processes and economic value generation, placing high added-value activities in newly refurbished urban spaces or through real estate or tourism impacts.
Another possible area of research improvement would be to connect individual preferences with cultural experiences by testing their effects on socio-economic impacts. This research will be further developed in the future with the AU Culture application that tries to measure individual impacts of cultural participation (see the Resources of the MESOC project). If we look at levels, it seems that the Smart dimension is the one that has contributed the most to growth, with an impact that is twice that of the Creative dimension and almost three times that of the Heritage dimension. These differences probably have to do with the times and circumstances we are living in. Since the 2000s, cities have invested in technology to enhance their competitiveness. One of the thematic objectives of EU Cohesion Policy during the 2014–2020 period was to enhance access to, and the use and quality of information and communications technology, including developing products and services and strengthening applications. The EU eGovernment Action Plan (2016–2020) currently sets out concrete actions to accelerate the implementation of existing legislation and the related uptake of online services. The digital transition is reshaping public services, and it is clear that its impact is very significant. Nine out of ten cities report that their services have improved as a result of digitalisation (ESPON 2017). Changes in urban mobility have also taken an important leap forward in this period. The uptake of digital solutions and changes in mobility shortens the time and lowers the cost of obtaining information, contacting other people, accessing cultural experiences and carrying out administrative procedures. Two in three cities have seen an increase in the uptake of specific services, including culture, as a result of digitalisation and two in five have even reported a substantial increase (ESPON 2017). 
Our database includes some medium-sized cities where the Smart strategy is clearly central, such as Karlsruhe, Toulouse, Edinburgh, Bordeaux, or Lille. This strategy clearly predominates in more cities and is probably the one where the relationship with culture and creativity is more diffuse and the transformation more systemic. Within the scope of the Creative Cities strategy, we can identify large European capitals that are also major centres of creativity and culture, such as Paris, London, Madrid, Amsterdam or Copenhagen. Finally, the Heritage strategy is more prevalent in cities with significant historical and artistic heritage, such as Rome, Lisbon or Ljubljana. The reason why the Heritage City strategy has a lower impact in the model is probably that its impacts are capitalised either through tourism, an activity with low average productivity, or through increases in real estate value, an activity that went through a crisis during the period considered. In conclusion, the provision of contexts that increase citizens' cultural experiences has clearly improved the performance of European cities, and this study suggests a series of conceptual and empirical mechanisms that can help explain and measure the socioeconomic impacts of these processes.

The datasets used and analysed during the current study are available from the corresponding authors upon reasonable request.

The Shanghai Manual was established in 2011, following the 2010 World Expo in Shanghai, China. The initial purpose of the Manual was to serve as a tool to support mayors and urban managers in achieving sustainable urban development in cities.
MESOC is a Research and Innovation Action designed to propose, test and validate an innovative and original approach to measuring the societal value and impacts of culture and cultural policies and practices, related to three crossover themes of the new European Agenda for Culture: 1) Health and Wellbeing, 2) Urban and Territorial Renovation, and 3) People's Engagement and Participation. The global aim is to respond to the challenge posed by the H2020 Call ("To develop new perspectives and improved methodologies for capturing the wider societal value of culture, including but also beyond its economic impact").

References

Anthopoulos LG, Reddick CG (2016) Smart city and smart government. In: Proceedings of the 25th International Conference Companion on World Wide Web - WWW '16 Companion. ACM Press, New York, New York, USA, pp 351–355. https://doi.org/10.1145/2872518.2888615
Bannerjee S, Bone J, Finger Y, Haley C (2016) European Digital City Index - Methodology Report. In: Nesta website. Nesta. https://www.nesta.org.uk/documents/2422/2016_EDCi_Construction_Methodology_FINAL.pdf. Accessed 22 Oct 2021
Bialakowsky A (2018) Alienación, aceleración, resonancia y buena vida - Entrevista a Hartmut Rosa. Revista Colombiana De Sociología 41(2):249–259. https://doi.org/10.15446/rcs.v41n2.75164
Bína V, Chantepie P, Deroin V, Frank G, Kommel K, Kotýnek J, Robin P (2012) ESSnet-CULTURE European statistical system network on culture: Final report. In: ESSnet programme, Collaboration in Research and Methodology for Official Statistics. European Commission, Eurostat & ESSnet-Culture Project. https://ec.europa.eu/eurostat/cros/system/files/ESSnet%20Culture%20Final%20report.pdf. Accessed 22 Oct 2021
Boix-Domènech R, Rausell-Köster P (2018) The economic impact of the creative industry in the European Union. In: Santamarina-Campos V, Segarra-Oña M (eds) Drones and the creative industry. Springer International Publishing, Cham, pp 19–36. https://doi.org/10.1007/978-3-319-95261-1
Boix R, Rausell P, Abeledo R (2017) The Calatrava model: reflections on resilience and urban plasticity. Eur Plan Stud 25(1):1–19. https://doi.org/10.1080/09654313.2016.1257570
Bonet L, Négrier E (2018) The participative turn in cultural policy: paradigms, models, contexts. Poetics 66:64–73. https://doi.org/10.1016/j.poetic.2018.02.006
Brown AS, Novak-Leonard JL (2013) Measuring the intrinsic impacts of arts attendance. Cultural Trends 22(3–4):223–233. https://doi.org/10.1080/09548963.2013.817654
Brown AS, Novak-Leonard JL, Gilbride S (2011) Getting in on the act: How arts groups are creating opportunities for active participation. In: Arts engagement focus, An Irvine research series. The James Irvine Foundation. http://www.irvine.org/wp-content/uploads/GettingInOntheAct2014_DEC3.pdf. Accessed 22 Oct 2021
Cala Buendía F (2010) More carrots than sticks: Antanas Mockus's civic culture policy in Bogotá. New Dir Youth Dev 2010(125):19–32. https://doi.org/10.1002/yd.335
Camagni R, Capello R, Caragliu A (2015) The rise of second-rank cities: what role for agglomeration economies? Eur Plan Stud 23(6):1069–1089. https://doi.org/10.1080/09654313.2014.904999
Camagni R, Capello R, Caragliu A (2017) Static vs. dynamic agglomeration economies: spatial context and structural evolution behind urban growth. In: Capello R (ed) Seminal studies in regional and urban economics. Springer, Milano, pp 227–259. https://doi.org/10.1007/978-3-319-57807-1_12
Cicerchia A (2018) Cultural heritage and landscape as determinants of well-being. Economia Della Cultura XXVIII(4/2018):451–464. https://doi.org/10.1446/92241
De Voldere I, Romainville JF, Knotter S, Durinck E, Engin E, Le Gall A, Kern P, Airaghi E, Pletosu T, Ranaivoson H, Hoelck K (2017) Mapping the creative value chains - A study on the economy of culture in the digital age: Final report. In: Publications Office of the EU. European Commission, Directorate-General for Education and Culture. https://op.europa.eu/en/publication-detail/-/publication/4737f41d-45ac-11e7-aea8-01aa75ed71a1#. Accessed 22 Oct 2021
Concilio G, Li C, Rausell P, Tosoni I (2019) Cities as enablers of innovation. In: Tosoni I (ed) Innovation capacity and the city. Springer briefs in applied sciences and technology. Springer, Cham. https://doi.org/10.1007/978-3-030-00123-0_3
Degen M, García M (2012) The transformation of the "Barcelona Model": an analysis of culture, urban regeneration and governance. Int J Urban Reg Res 36(5):1022–1038. https://doi.org/10.1111/j.1468-2427.2012.01152.x
Dormann CF, Elith J, Bacher S, Buchmann C, Carl G, Carré G, Marquéz JRG, Gruber B, Lafourcade B, Leitão PJ, Münkemüller T, McClean C, Osborne PE, Reineking B, Schröder B, Skidmore AK, Zurell D, Lautenbach S (2013) Collinearity: a review of methods to deal with it and a simulation study evaluating their performance. Ecography 36(1):27–46. https://doi.org/10.1111/J.1600-0587.2012.07348.X
ESPON (2017) The territorial and urban dimensions of the digital transition of public services. In: ESPON Policy Briefs. European Observation Network for Territorial Development and Cohesion. https://www.espon.eu/sites/default/files/attachments/ESPON%20Policy%20Brief%20on%20Digital%20Transition.pdf. Accessed 22 Oct 2021
European Commission (2018) A new European Agenda for Culture. In: Culture and Creativity Document library. European Commission. https://ec.europa.eu/culture/sites/default/files/2020-08/swd-2018-167-new-european-agenda-for-culture_en.pdf. Accessed 22 Oct 2021
European Ministers of Culture (2018) Davos Declaration - Towards a high-quality Baukultur for Europe. In: Davos Declaration 2018. Office fédéral de la culture, Section Patrimoine culturel et monuments historiques. https://davosdeclaration2018.ch/media/Brochure_Declaration-de-Davos-2018_WEB_2.pdf. Accessed 22 Oct 2021
Evans G (2001) Cultural planning: an urban renaissance? Routledge, London
Evans G (2005) Measure for measure: Evaluating the evidence of culture's contribution to regeneration. Urban Studies 42(5/6):959–983. https://doi.org/10.1080/00420980500107102
Fancourt D, Finn S (2019) Health Evidence Network synthesis report 67 - What is the evidence on the role of the arts in improving health and well-being? A scoping review. In: World Health Organization Publications. World Health Organization, Regional Office for Europe. https://apps.who.int/iris/bitstream/handle/10665/329834/9789289054553-eng.pdf. Accessed 22 Oct 2021
Florida R (2002) The rise of the creative class: And how it's transforming work, leisure, community and everyday life. Basic Books, New York
Florida R (2017) The new urban crisis: How our cities are increasing inequality, deepening segregation, and failing the middle class-and what we can do about it. Basic Books, New York
Florida R, Adler P, Mellander C (2017) The city as innovation machine. Reg Stud 51(1):86–96. https://doi.org/10.1080/00343404.2016.1255324
Glaeser E (2011) Triumph of the city: How urban spaces make us human. Pan Macmillan, London
Grossi E, Tavano Blessi G, Sacco PL, Buscema M (2012) The interaction between culture, health and psychological well-being: data mining from the Italian Culture and Well-Being Project. J Happiness Stud 13(1):129–148. https://doi.org/10.1007/s10902-011-9254-x
Hall J, Giovannini E, Morrone A, Ranuzzi G (2010) A framework to measure the progress of societies. In: OECD iLibrary, Statistics Working Papers Series. OECD, Statistics and Data Directorate. https://www.oecd-ilibrary.org/a-framework-to-measure-the-progress-of-societies_5km4k7mnrkzw.pdf?itemId=%2Fcontent%2Fpaper%2F5km4k7mnrkzw-en&mimeType=pdf. Accessed 22 Oct 2021
Hesmondhalgh D (2010) User-generated content, free labour and the cultural industries. Ephemera 10(3/4):267–284
Inglehart R, Welzel C (2005) Modernization, cultural change, and democracy - The human development sequence. Cambridge University Press, New York
Jacobs J (1986) Las ciudades y la riqueza de las naciones: principios de la vida económica. Ariel, Barcelona
Jancovich L, Bianchini F (2013) Problematising participation. Cultural Trends 22(2):63–66. https://doi.org/10.1080/09548963.2013.783158
Jin X, Long Y, Sun W, Lu Y, Yang X, Tang J (2017) Evaluating cities' vitality and identifying ghost cities in China with emerging geographical data. Cities 63:98–109. https://doi.org/10.1016/j.cities.2017.01.002
Laaksonen A (2005) Measuring cultural exclusion through participation in cultural life. Paper presented at the Third Global Forum on Human Development: Defining and Measuring Cultural Exclusion, Interarts Foundation, Paris, 17–19 January, 2005. http://www.culturalrights.net/descargas/drets_culturals67.pdf. Accessed 22 Oct 2021
Landry C, Bianchini F (1995) The creative city. Demos, London
Matarasso F (1997) Use or ornament? The social impact of participation in the arts. Comedia, Bournes Green Stroud. https://www.artshealthresources.org.uk/wp-content/uploads/2017/01/1997-Matarasso-Use-or-Ornament-The-Social-Impact-of-Participation-in-the-Arts-1.pdf
McArthur J, Robin E (2019) Victims of their own (definition of) success: Urban discourse and expert knowledge production in the Liveable City. Urban Studies 56(9):1711–1728. https://doi.org/10.1177/0042098018804759
Mellander C, Florida R, Rentfrow J (2012) The creative class, post-industrialism and the happiness of nations. Camb J Reg Econ Soc 5(1):31–43. https://doi.org/10.1093/cjres/rsr006
MESOC (2021) MESOC Project. https://www.mesoc-project.eu/about/project. Accessed 22 Oct 2021
Monnet J (2011) The symbolism of place: a geography of relationships between space, power and identity. Cybergeo. https://doi.org/10.4000/cybergeo.24747
Montalto V, Tacao Moura CJ, Alberti V, Panella F, Saisana M (2019) The cultural and creative cities monitor - 2019 Edition. In: Joint Research Centre's publications. European Commission, Publications Office of the European Union. https://publications.jrc.ec.europa.eu/repository/bitstream/JRC117336/citiesmonitor_2019.pdf. Accessed 22 Oct 2021
Nogueira JP (2017) From failed states to fragile cities: Redefining spaces of humanitarian practice. Third World Q 38(7):1437–1453. https://doi.org/10.1080/01436597.2017.1282814
Pareja-Eastaway M (2020) The generation of value in urban spaces. Equilibri. https://doi.org/10.1406/98103
Partnership on Culture and Cultural Heritage (2019) Culture and cultural heritage orientation paper. In: Urban Agenda for the EU's Culture/Cultural Heritage Library. European Commission's Futurium. https://ec.europa.eu/futurium/en/system/files/ged/cch_orientation_paper_-_final-public_version.pdf. Accessed 22 Oct 2021
Perucca G (2019) Residents' satisfaction with cultural city life: Evidence from EU cities. Appl Res Qual Life 14(2):461–478. https://doi.org/10.1007/S11482-018-9623-2
Peterson C, Park N, Seligman MEP (2005) Orientations to happiness and life satisfaction: the full life versus the empty life. J Happiness Stud 6(1):25–41. https://doi.org/10.1007/s10902-004-1278-z
Pileri P (2015) Technological city and cultural criticism: challenges, limits, politics. City Territ Architect 2(1):12. https://doi.org/10.1186/S40410-015-0028-3
Rausell-Köster P, Abeledo Sanchís R, Blanco Sierra O, Boix Doménech R, De Miguel Molina B, Hervás Oliver JL, Marco-Serrano F, Pérez-Bustamante Yábar D, Vila Lladosa L (2012) Culture as a factor for economic and social innovation. Universitat de València, Sostenuto project. https://www.uv.es/soste/pdfs/Sostenuto_Volume1_CAST.pdf. Accessed 14 Nov 2022
Romer PM (1986) Increasing returns and long-run growth. J Polit Econ 94(5):1002–1037. https://doi.org/10.1086/261420
Sacco PL, Ferilli G, Blessi GT, Nuccio M (2013) Culture as an engine of local development processes: System-wide cultural districts I: Theory. Growth Chang 44(4):555–570. https://doi.org/10.1111/grow.12020
Sacco PL, Ferilli G, Blessi GT (2018) From Culture 1.0 to Culture 3.0: three socio-technical regimes of social and economic value creation through culture, and their impact on European cohesion policies. Sustainability 10(11):3923. https://doi.org/10.3390/su10113923
Sani M (2015) Participatory governance of cultural heritage (Ad hoc question 27). In: European Expert Network on Culture and Audiovisual (EENCA) Publications. European Expert Network on Culture (EENC) under request of The Directorate-General for Education, Youth, Sport and Culture (DG EAC). https://eenca.com/eenca/assets/File/EENC%20publications/interarts2538.pdf. Accessed 22 Oct 2021
Scott AJ (2001) Globalization and the rise of city-regions. Eur Plan Stud 9(7):813–826. https://doi.org/10.1080/09654310120079788
Scott AJ (2014) Beyond the creative city: Cognitive-cultural capitalism and the new urbanism. Reg Stud 48(4):567–578. https://doi.org/10.1080/00343404.2014.891010
Segovia C, Hervé J (2022) The creative city approach: origins, construction and prospects in a scenario of transition. City Territ Archit 9(1):1–15. https://doi.org/10.1186/s40410-022-00178-x
Segovia C, Marrades R, Rausell P, Abeledo R (2015) ESPACIOS - Para la innovación, la creatividad y la cultura. Publicacions de la Universitat de València, València
Sen A (1999) Development as freedom. Anchor Books, New York
Sennett R (2018) Building and dwelling - Ethics for the city. Farrar, Straus & Giroux - Macmillan, New York
Shapiro SS, Wilk MB (1965) An analysis of variance test for normality (complete samples). Biometrika 52(3–4):591–611. https://doi.org/10.2307/2333709
Soren BJ (2009) Museum experiences that change visitors. Museum Manage Curatorship 24(3):233–251. https://doi.org/10.1080/09647770903073060
Sorribes J (2012) La ciudad: economía, espacio, sociedad y medio ambiente. Tirant lo Blanch, Valencia
Stanley D (2006) The social effects of culture. Can J Commun 31(1):7–15. https://doi.org/10.22230/cjc.2006v31n1a1744
Storper M, Scott AJ (2009) Rethinking human capital, creativity and urban growth. J Econ Geogr 9(2):147–167. https://doi.org/10.1093/jeg/lbn052
Susen S (2020) The resonance of resonance: critical theory as a sociology of world-relations? Int J Polit Cult Soc 33:309–344. https://doi.org/10.1007/s10767-019-9313-6
Tavano Blessi G, Grossi E, Sacco PL, Pieretti G, Ferilli G (2016) The contribution of cultural participation to urban well-being. A comparative study in Bolzano/Bozen and Siracusa, Italy. Cities 50:216–226. https://doi.org/10.1016/j.cities.2015.10.009
Tomka G (2013) Reconceptualizing cultural participation in Europe: grey literature review. Cult Trends 22(3–4):259–264. https://doi.org/10.1080/09548963.2013.819657
UNESCO (2009) The 2009 UNESCO Framework for cultural statistics (FCS). In: Documents. UNESCO Institute for Statistics (UIS), Montreal. http://uis.unesco.org/sites/default/files/documents/unesco-framework-for-cultural-statistics-2009-en_0.pdf. Accessed 22 Oct 2021
UNESCO (2012) Measuring cultural participation - 2009 UNESCO Framework for cultural statistics handbook no. 2. In: Documents. UNESCO Institute for Statistics (UIS), Montreal. http://uis.unesco.org/sites/default/files/documents/measuring-cultural-participation-2009-unesco-framework-for-cultural-statistics-handbook-2-2012-en.pdf. Accessed 22 Oct 2021
UNESCO, The World Bank (2021) Cities, culture, creativity: Leveraging culture and creativity for sustainable urban development and inclusive growth. In: Digital Library. UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000377427. Accessed 22 Oct 2021
United Nations, Bureau International des Exhibitions, Shanghai 2010 World Exposition Executive Committee, Municipal Government of Shanghai (2012) Shanghai Manual: A guide for sustainable urban development in the 21st century. In: Sustainable Development Publications.
United Nations Department of Economic and Social Affairs (UNDESA) & Eastern Publishing Centre. https://sdgs.un.org/sites/default/files/publications/shanghaimanual.pdf. Accessed 22 Oct 2021. United Nations, Department of Economic and Social Affairs, Population Division (2019). World Urbanization Prospects: The 2018 Revision (ST/ESA/SER.A/420). New York: United Nations. https://population.un.org/wup/publications/Files/WUP2018-Report.pdf. Accessed 14 Nov 2022- Valtysson B (2010) Access culture: Web 2.0 and cultural participation. Int J Cult Policy 16(2):200–214. https://doi.org/10.1080/10286630902902954 Zukin S (1995) The cultures of cities. Blackwell Publishers, Oxford This work was supported by Horizon 2020 framework Programme MESOC PROJECT [Grant Number 870935] and by the Spanish Ministry of Universities through the "Formación de Profesorado Universitario" Programme [FPU19/00182]. Research Unit in Economics of Culture and Tourism, Department of Applied Economics, University of València, Av. Tarongers S/N, 46022, Valencia, Spain Pau Rausell-Köster, Sendy Ghirardi, Jordi Sanjuán & Francesco Molinari Department of Methodology of the Behavioural Sciences, University of Valencia, Av. Blasco Ibañez, 21, 46010, Valencia, Spain Borja Abril Pau Rausell-Köster Sendy Ghirardi Jordi Sanjuán Francesco Molinari Conceptualisation, PR, SG and FM; empirical analysis and data curation, JS; writing—original draft preparation, PR; writing—review and editing, BA. All authors read and approved the final manuscript. Correspondence to Pau Rausell-Köster. Rausell-Köster, P., Ghirardi, S., Sanjuán, J. et al. Cultural experiences in the framework of "cultural cities": measuring the socioeconomic impact of culture in urban performance. City Territ Archit 9, 40 (2022). https://doi.org/10.1186/s40410-022-00189-8 Cultural impact
Is there a way to draw Sudoku (and other) grids? We now have MathJax. Can we use it to draw grids, like those needed for Sudoku and other puzzles? The following MathJax code leads to the grid below it. \begin{array}{|c|c|c|} \hline \\ 1 & \; & \fbox{3} \\ 4 & 5 & 6 \\ 7 & \; & \tiny{\begin{array}{ccc} & 2 & \\ & \fbox{5} & \\ & 8 & 9 \end{array}} \\ \hline \end{array} $$ \begin{array}{|c|c|c|} \hline \\ 1 & \; & \fbox{3} \\ \hline \\ 4 & 5 & 6 \\ \hline \\ 7 & \; & \tiny{\begin{array}{ccc} & 2 & \\ & \fbox{5} & \\ & 8 & 9 \end{array}} \\ \hline \end{array} $$ Here we see a single Sudoku block, with several cells filled in. There is one given, 3; the other values are a partial solution. There are two empty cells and there's one cell with a number of hints, one of which is highlighted. Of course, this is far from perfect and only barely usable. It needs equally spaced cells, minimal padding (especially in the cell with hints, where there's way too much vertical padding above it), another way to separate givens from parts of the solution (or another way to highlight hints), thicker horizontal and vertical lines to define the blocks, and probably much more. Also, the syntax isn't easy. Ideally, we should be able to do something like \begin{grid{sudoku}}{1,,(3), ...},{4,5,6, ...},{7,,{,2,,,[5],,,8,9}, ...}, ...}\end{grid} and have the grid drawn. According to TeX.SE, there are several packages that can do that, but they are not part of MathJax, which is what we have here. What can we do to draw better grids and how can we make drawing grids easier? feature-request mathjax formatting SQB $\begingroup$ Solutions might include a MathJax macro, defined once for this stack, or perhaps another plugin altogether. $\endgroup$ – SQB May 22 '14 at 9:52 $\begingroup$ It wouldn't be terribly hard to write a MathJax macro because MathML makes equal cells relatively simple. Once there is a macro, the community could probably convince stackexchange to add it to the site. 
We'd be happy to help anyone who wants to give it a try. $\endgroup$ – Peter Krautzberger May 30 '14 at 15:04 Here is a slightly better version, but I think it needs work before being made into a macro for the site - it has unnecessary spacing, both within cells and between blocks, so it spills over the sidebar (at least on my browser), maybe someone else can work out how to avoid that while still keeping everything aligned? First a sudoku with relevant pencil marks (the small numbers showing what values an empty cell could take given only the direct information of what values are already taken in the three units they occupy [row, column, block]: $ \def\ph#1#2|{\tiny{\bbox[border:2px solid #2]{#1}}} \def\pn#1{\tiny{\bbox[border:2px]{#1}}} \def\pe{\tiny{\bbox[border:2px]{\phantom{0}}}} \def\vn#1{\begin{array}{}\pe&\pe&\pe\\&\rlap{\bbox[border:2px]{#1}}&\\\pe&\pe&\pe\end{array}} \def\vh#1#2|{\begin{array}{}\pe&\pe&\pe\\&\rlap{\bbox[border:2px solid #2]{#1}}&\\\pe&\pe&\pe\end{array}} \begin{array}{} \bbox[border:3px solid]{ \begin{array}{c|c|c} \vn1 & \begin{array}{}\pe&\pe&\pe\\\pn4&\pn5&\pe\\\pn7&\pe&\pe\end{array} & \begin{array}{}\pe&\pe&\pe\\\pn4&\pe&\pe\\\pn7&\pn8&\pn9\end{array} \\ \hline \begin{array}{}\pe&\pn2&\pn3\\\pe&\pe&\pe\\\pn7&\pe&\pn9\end{array} & \begin{array}{}\pe&\pn2&\pn3\\\pe&\pe&\pe\\\pn7&\pe&\pe\end{array} & \begin{array}{}\pe&\pe&\pn3\\\pe&\pe&\pe\\\pn7&\pn8&\pn9\end{array} \\ \hline \begin{array}{}\pe&\pn2&\pe\\\pe&\pn5&\pe\\\pn7&\pe&\pn9\end{array} & \vn6 & \begin{array}{}\pe&\pe&\pe\\\pn4&\pe&\pe\\\pn7&\pn8&\pn9\end{array} \\ \end{array} } & \bbox[border:3px solid]{ \begin{array}{c|c|c} \begin{array}{}\pe&\pe&\pe\\\pe&\pn5&\pn6\\\pn7&\pn8&\pe\end{array} & \begin{array}{}\pe&\pe&\pe\\\pe&\pn5&\pe\\\pn7&\pn8&\pn9\end{array} & \vn2 \\ \hline \vn1 & \vn4 & \begin{array}{}\pe&\pe&\pe\\\pe&\pe&\pn6\\\pe&\pn8&\pn9\end{array} \\ \hline \vn3 & \begin{array}{}\pe&\pe&\pe\\\pe&\pn5&\pe\\\pn7&\pn8&\pn9\end{array} & 
\begin{array}{}\pe&\pe&\pe\\\pe&\pn5&\pe\\\pe&\pn8&\pn9\end{array} \\ \end{array} } & \bbox[border:3px solid]{ \begin{array}{c|c|c} \begin{array}{}\pe&\pe&\pe\\\pn4&\pe&\pn6\\\pn7&\pn8&\pn9\end{array} & \begin{array}{}\pe&\pe&\pe\\\pn4&\pe&\pn6\\\pe&\pn8&\pn9\end{array} & \vn3 \\ \hline \begin{array}{}\pe&\pn2&\pe\\\pe&\pe&\pn6\\\pn7&\pn8&\pn9\end{array} & \vn5 & \begin{array}{}\pe&\pn2&\pe\\\pe&\pe&\pn6\\\pn7&\pe&\pn9\end{array} \\ \hline \vn1 & \begin{array}{}\pe&\pn2&\pe\\\pn4&\pe&\pe\\\pe&\pn8&\pn9\end{array} & \begin{array}{}\pe&\pn2&\pe\\\pe&\pe&\pe\\\pn7&\pe&\pn9\end{array} \\ \end{array} } \\ \bbox[border:3px solid]{ \begin{array}{c|c|c} \begin{array}{}\pe&\pn2&\pn3\\\pe&\pe&\pn6\\\pe&\pe&\pe\end{array} & \begin{array}{}\pn1&\pn2&\pn3\\\pn4&\pe&\pe\\\pe&\pe&\pe\end{array} & \vn5 \\ \hline \vn8 & \begin{array}{}\pn1&\pn2&\pn3\\\pe&\pe&\pe\\\pn7&\pe&\pe\end{array} & \begin{array}{}\pn1&\pe&\pn3\\\pe&\pe&\pe\\\pn7&\pe&\pe\end{array} \\ \hline \begin{array}{}\pe&\pn2&\pe\\\pe&\pe&\pn6\\\pn7&\pe&\pe\end{array} & \vn9 & \begin{array}{}\pn1&\pe&\pe\\\pn4&\pe&\pn6\\\pn7&\pe&\pe\end{array} \\ \end{array} } & \bbox[border:3px solid]{ \begin{array}{c|c|c} \begin{array}{}\pe&\pn2&\pe\\\pn4&\pe&\pe\\\pe&\pn8&\pe\end{array} & \begin{array}{}\pn1&\pn2&\pe\\\pe&\pe&\pe\\\pe&\pn8&\pn9\end{array} & \begin{array}{}\pn1&\pe&\pn3\\\pe&\pe&\pe\\\pe&\pn8&\pn9\end{array} \\ \hline \begin{array}{}\pe&\pn2&\pe\\\pe&\pn5&\pe\\\pn7&\pe&\pe\end{array} & \vn6 & \begin{array}{}\pn1&\pe&\pn3\\\pe&\pn5&\pe\\\pe&\pe&\pn9\end{array} \\ \hline \begin{array}{}\pe&\pn2&\pe\\\pn4&\pn5&\pe\\\pn7&\pn8&\pe\end{array} & \begin{array}{}\pn1&\pn2&\pe\\\pe&\pn5&\pe\\\pn7&\pn8&\pe\end{array} & \begin{array}{}\pn1&\pe&\pe\\\pe&\pn5&\pe\\\pe&\pn8&\pe\end{array} \\ \end{array} } & \bbox[border:3px solid]{ \begin{array}{c|c|c} \begin{array}{}\pe&\pn2&\pe\\\pe&\pe&\pn6\\\pe&\pn8&\pn9\end{array} & \vn7 & \begin{array}{}\pn1&\pn2&\pe\\\pe&\pe&\pn6\\\pe&\pe&\pn9\end{array} \\ \hline 
\begin{array}{}\pe&\pn2&\pe\\\pe&\pn5&\pe\\\pe&\pe&\pn9\end{array} & \begin{array}{}\pe&\pn2&\pe\\\pe&\pe&\pe\\\pe&\pe&\pn9\end{array} & \vn4 \\ \hline \vn3 & \begin{array}{}\pe&\pn2&\pe\\\pe&\pe&\pn6\\\pe&\pn8&\pe\end{array} & \begin{array}{}\pn1&\pn2&\pe\\\pe&\pn5&\pn6\\\pe&\pe&\pe\end{array} \\ \end{array} } \\ \bbox[border:3px solid]{ \begin{array}{c|c|c} \begin{array}{}\pe&\pe&\pe\\\pe&\pn5&\pn6\\\pn7&\pe&\pn9\end{array} & \begin{array}{}\pn1&\pe&\pe\\\pe&\pn5&\pe\\\pn7&\pe&\pe\end{array} & \vn2 \\ \hline \begin{array}{}\pe&\pe&\pe\\\pe&\pn5&\pn6\\\pe&\pe&\pn9\end{array} & \vn8 & \begin{array}{}\pn1&\pe&\pe\\\pe&\pe&\pn6\\\pe&\pe&\pn9\end{array} \\ \hline \vn4 & \begin{array}{}\pe&\pe&\pn3\\\pe&\pn5&\pe\\\pn7&\pe&\pe\end{array} & \begin{array}{}\pe&\pe&\pn3\\\pe&\pe&\pn6\\\pn7&\pe&\pe\end{array} \\ \end{array} } & \bbox[border:3px solid]{ \begin{array}{c|c|c} \begin{array}{}\pe&\pe&\pe\\\pe&\pn5&\pn6\\\pe&\pn8&\pe\end{array} & \begin{array}{}\pn1&\pe&\pe\\\pe&\pn5&\pe\\\pe&\pn8&\pe\end{array} & \vn4 \\ \hline \begin{array}{}\pe&\pn2&\pe\\\pe&\pn5&\pn6\\\pe&\pe&\pe\end{array} & \vn3 & \vn7 \\ \hline \vn9 & \begin{array}{}\pe&\pn2&\pe\\\pe&\pn5&\pe\\\pe&\pe&\pe\end{array} & \begin{array}{}\pe&\pe&\pe\\\pe&\pn5&\pn6\\\pe&\pe&\pe\end{array} \\ \end{array} } & \bbox[border:3px solid]{ \begin{array}{c|c|c} \begin{array}{}\pe&\pe&\pe\\\pe&\pn5&\pn6\\\pn7&\pe&\pn9\end{array} & \vn3 & \begin{array}{}\pe&\pe&\pe\\\pe&\pn5&\pn6\\\pn7&\pe&\pn9\end{array} \\ \hline \begin{array}{}\pe&\pn2&\pe\\\pn4&\pn5&\pn6\\\pe&\pe&\pn9\end{array} & \begin{array}{}\pe&\pn2&\pe\\\pn4&\pe&\pn6\\\pe&\pe&\pn9\end{array} & \begin{array}{}\pe&\pn2&\pe\\\pe&\pn5&\pn6\\\pe&\pe&\pn9\end{array} \\ \hline \begin{array}{}\pe&\pn2&\pe\\\pe&\pn5&\pn6\\\pn7&\pe&\pe\end{array} & \vn1 & \vn8 \\ \end{array} } \end{array} $ Now we can demonstrate some of the features of the code... 
Below we can highlight: $3$ hidden singles and the pencil marks they remove - the large blue-boxed $1$s and red-boxed $8$ are the only ones in some unit that may be the respective value - the pencil marks removed are also shown with boxes around them; and a pointing pair - one of the green-boxed pencil marked $9$s must be a $9$ since they are the only ones left in that block - so the cyan-boxed pencil marked $9$s may be removed $ \def\ph#1#2|{\tiny{\bbox[border:2px solid #2]{#1}}} \def\pn#1{\tiny{\bbox[border:2px]{#1}}} \def\pe{\tiny{\bbox[border:2px]{\phantom{0}}}} \def\vn#1{\begin{array}{}\pe&\pe&\pe\\&\rlap{\bbox[border:2px]{#1}}&\\\pe&\pe&\pe\end{array}} \def\vh#1#2|{\begin{array}{}\pe&\pe&\pe\\&\rlap{\bbox[border:2px solid #2]{#1}}&\\\pe&\pe&\pe\end{array}} \begin{array}{} \bbox[border:3px solid]{ \begin{array}{c|c|c} \vn1 & \begin{array}{}\pe&\pe&\pe\\\pn4&\pn5&\pe\\\pn7&\pe&\pe\end{array} & \begin{array}{}\pe&\pe&\pe\\\pn4&\pe&\pe\\\pn7&\pn8&\pn9\end{array} \\ \hline \begin{array}{}\pe&\pn2&\pn3\\\pe&\pe&\pe\\\pn7&\pe&\ph9cyan|\end{array} & \begin{array}{}\pe&\pn2&\pn3\\\pe&\pe&\pe\\\pn7&\pe&\pe\end{array} & \begin{array}{}\pe&\pe&\pn3\\\pe&\pe&\pe\\\pn7&\pn8&\pn9\end{array} \\ \hline \begin{array}{}\pe&\pn2&\pe\\\pe&\pn5&\pe\\\pn7&\pe&\ph9cyan|\end{array} & \vn6 & \begin{array}{}\pe&\pe&\pe\\\pn4&\pe&\pe\\\pn7&\pn8&\pn9\end{array} \\ \end{array} } & \bbox[border:3px solid]{ \begin{array}{c|c|c} \begin{array}{}\pe&\pe&\pe\\\pe&\pn5&\pn6\\\pn7&\ph8red|&\pe\end{array} & \begin{array}{}\pe&\pe&\pe\\\pe&\pn5&\pe\\\pn7&\pn8&\pn9\end{array} & \vn2 \\ \hline \vn1 & \vn4 & \begin{array}{}\pe&\pe&\pe\\\pe&\pe&\pn6\\\pe&\pn8&\pn9\end{array} \\ \hline \vn3 & \begin{array}{}\pe&\pe&\pe\\\pe&\pn5&\pe\\\pn7&\pn8&\pn9\end{array} & \begin{array}{}\pe&\pe&\pe\\\pe&\pn5&\pe\\\pe&\pn8&\pn9\end{array} \\ \end{array} } & \bbox[border:3px solid]{ \begin{array}{c|c|c} \begin{array}{}\pe&\pe&\pe\\\pn4&\pe&\pn6\\\pn7&\pn8&\pn9\end{array} & 
\begin{array}{}\pe&\pe&\pe\\\pn4&\pe&\pn6\\\pe&\pn8&\pn9\end{array} & \vn3 \\ \hline \begin{array}{}\pe&\pn2&\pe\\\pe&\pe&\pn6\\\pn7&\pn8&\pn9\end{array} & \vn5 & \begin{array}{}\pe&\pn2&\pe\\\pe&\pe&\pn6\\\pn7&\pe&\pn9\end{array} \\ \hline \vn1 & \begin{array}{}\pe&\pn2&\pe\\\pn4&\pe&\pe\\\pe&\pn8&\pn9\end{array} & \begin{array}{}\pe&\pn2&\pe\\\pe&\pe&\pe\\\pn7&\pe&\pn9\end{array} \\ \end{array} } \\ \bbox[border:3px solid]{ \begin{array}{c|c|c} \begin{array}{}\pe&\pn2&\pn3\\\pe&\pe&\pn6\\\pe&\pe&\pe\end{array} & \begin{array}{}\pn1&\pn2&\pn3\\\pn4&\pe&\pe\\\pe&\pe&\pe\end{array} & \vn5 \\ \hline \vn8 & \begin{array}{}\pn1&\pn2&\pn3\\\pe&\pe&\pe\\\pn7&\pe&\pe\end{array} & \begin{array}{}\ph1blue|&\pe&\pn3\\\pe&\pe&\pe\\\pn7&\pe&\pe\end{array} \\ \hline \begin{array}{}\pe&\pn2&\pe\\\pe&\pe&\pn6\\\pn7&\pe&\pe\end{array} & \vn9 & \begin{array}{}\ph1blue|&\pe&\pe\\\pn4&\pe&\pn6\\\pn7&\pe&\pe\end{array} \\ \end{array} } & \bbox[border:3px solid]{ \begin{array}{c|c|c} \begin{array}{}\pe&\pn2&\pe\\\pn4&\pe&\pe\\\pe&\ph8red|&\pe\end{array} & \begin{array}{}\ph1blue|&\pn2&\pe\\\pe&\pe&\pe\\\pe&\pn8&\pn9\end{array} & \begin{array}{}\pn1&\pe&\pn3\\\pe&\pe&\pe\\\pe&\pn8&\pn9\end{array} \\ \hline \begin{array}{}\pe&\pn2&\pe\\\pe&\pn5&\pe\\\pn7&\pe&\pe\end{array} & \vn6 & \begin{array}{}\pn1&\pe&\pn3\\\pe&\pn5&\pe\\\pe&\pe&\pn9\end{array} \\ \hline \begin{array}{}\pe&\pn2&\pe\\\pn4&\pn5&\pe\\\pn7&\ph8red|&\pe\end{array} & \begin{array}{}\ph1blue|&\pn2&\pe\\\pe&\pn5&\pe\\\pn7&\pn8&\pe\end{array} & \begin{array}{}\pn1&\pe&\pe\\\pe&\pn5&\pe\\\pe&\pn8&\pe\end{array} \\ \end{array} } & \bbox[border:3px solid]{ \begin{array}{c|c|c} \begin{array}{}\pe&\pn2&\pe\\\pe&\pe&\pn6\\\pe&\pn8&\pn9\end{array} & \vn7 & \begin{array}{}\pn1&\pn2&\pe\\\pe&\pe&\pn6\\\pe&\pe&\pn9\end{array} \\ \hline \begin{array}{}\pe&\pn2&\pe\\\pe&\pn5&\pe\\\pe&\pe&\pn9\end{array} & \begin{array}{}\pe&\pn2&\pe\\\pe&\pe&\pe\\\pe&\pe&\pn9\end{array} & \vn4 \\ \hline \vn3 & 
\begin{array}{}\pe&\pn2&\pe\\\pe&\pe&\pn6\\\pe&\pn8&\pe\end{array} & \begin{array}{}\pn1&\pn2&\pe\\\pe&\pn5&\pn6\\\pe&\pe&\pe\end{array} \\ \end{array} } \\ \bbox[border:3px solid]{ \begin{array}{c|c|c} \begin{array}{}\pe&\pe&\pe\\\pe&\pn5&\pn6\\\pn7&\pe&\ph9green|\end{array} & \begin{array}{}\ph1blue|&\pe&\pe\\\pe&\pn5&\pe\\\pn7&\pe&\pe\end{array} & \vn2 \\ \hline \begin{array}{}\pe&\pe&\pe\\\pe&\pn5&\pn6\\\pe&\pe&\ph9green|\end{array} & \vn8 & \vh1blue| \\ \hline \vn4 & \begin{array}{}\pe&\pe&\pn3\\\pe&\pn5&\pe\\\pn7&\pe&\pe\end{array} & \begin{array}{}\pe&\pe&\pn3\\\pe&\pe&\pn6\\\pn7&\pe&\pe\end{array} \\ \end{array} } & \bbox[border:3px solid]{ \begin{array}{c|c|c} \vh8red| & \vh1blue| & \vn4 \\ \hline \begin{array}{}\pe&\pn2&\pe\\\pe&\pn5&\pn6\\\pe&\pe&\pe\end{array} & \vn3 & \vn7 \\ \hline \vn9 & \begin{array}{}\pe&\pn2&\pe\\\pe&\pn5&\pe\\\pe&\pe&\pe\end{array} & \begin{array}{}\pe&\pe&\pe\\\pe&\pn5&\pn6\\\pe&\pe&\pe\end{array} \\ \end{array} } & \bbox[border:3px solid]{ \begin{array}{c|c|c} \begin{array}{}\pe&\pe&\pe\\\pe&\pn5&\pn6\\\pn7&\pe&\pn9\end{array} & \vn3 & \begin{array}{}\pe&\pe&\pe\\\pe&\pn5&\pn6\\\pn7&\pe&\pn9\end{array} \\ \hline \begin{array}{}\pe&\pn2&\pe\\\pn4&\pn5&\pn6\\\pe&\pe&\pn9\end{array} & \begin{array}{}\pe&\pn2&\pe\\\pn4&\pe&\pn6\\\pe&\pe&\pn9\end{array} & \begin{array}{}\pe&\pn2&\pe\\\pe&\pn5&\pn6\\\pe&\pe&\pn9\end{array} \\ \hline \begin{array}{}\pe&\pn2&\pe\\\pe&\pn5&\pn6\\\pn7&\pe&\pe\end{array} & \vn1 & \vn8 \\ \end{array} } \end{array} $ Jonathan AllanJonathan Allan As an interim workaround, I use a screenshot tool to grab images quickly (there are a lot out there, but I use LightShot) off a Sudoku website. I use this site to generate nice-looking Sudoku grids. This also allows for fast reduction of Sudoku possibilities and clues, which can also definitely be useful in an answer. It's not as pretty as a custom module would be, but it's good enough for the time being until we get an official response. 
Aza $\begingroup$ The link in your answer appears broken to me (the host is not resolvable). $\endgroup$ – Xynariz May 23 '14 at 20:19 $\begingroup$ @Xynariz Link's been fixed, sorry about that! $\endgroup$ – Aza May 23 '14 at 20:53 $\begingroup$ Sorry; link is still broken. $\endgroup$ – Peregrine Rook Apr 23 '16 at 22:08 $\begingroup$ The site's URL has changed a bit i think: sudokuwiki.org/sudoku.htm $\endgroup$ – manshu Apr 24 '16 at 9:26 I used screenshots of an Excel worksheet here. It was a bit of work, but possibly no more than the MathJax approach would be. Peregrine Rook
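Pending a proper macro, the verbose markup can also be generated rather than typed by hand. The sketch below is a hypothetical helper (the function name and the compact nine-character input format are made up for illustration); it expands a block description, with `.` for an empty cell, into the kind of array markup used in the question:

```python
def sudoku_block(cells):
    """Render a 3x3 Sudoku block as MathJax array markup.

    `cells` is a 9-character string read row by row; '.' marks an
    empty cell, which is rendered as the spacing command \\;.
    """
    rows = [cells[i:i + 3] for i in range(0, 9, 3)]
    # Join cells with '&' and rows with a line break plus a rule.
    body = r" \\ \hline ".join(
        " & ".join(c if c != "." else r"\;" for c in row) for row in rows
    )
    return r"\begin{array}{|c|c|c|} \hline " + body + r" \\ \hline \end{array}"

# The example block from the question, without the hint sub-grid:
print(sudoku_block("1.34567.."))
```

The same idea extends to a full 9×9 grid by nesting blocks, which keeps the hand-written source short even though the emitted MathJax is long.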
2013, 3(1): 95-108. doi: 10.3934/naco.2013.3.95 Linear quadratic differential games with mixed leadership: The open-loop solution Alain Bensoussan, Shaokuan Chen and Suresh P. Sethi, Naveen Jindal School of Management, The University of Texas at Dallas, Richardson, TX 75080-3021, United States Received October 2011 Revised November 2012 Published January 2013 This paper is concerned with open-loop Stackelberg equilibria of two-player linear-quadratic differential games with mixed leadership. We prove that, under some appropriate assumptions on the coefficients, there exists a unique Stackelberg solution to such a differential game. Moreover, by means of the close interrelationship between the Riccati equations and the set of equations satisfied by the optimal open-loop control, we provide sufficient conditions to guarantee the existence and uniqueness of solutions to the associated Riccati equations with mixed-boundary conditions. As a result, the players' open-loop strategies can be represented in terms of the system state. Keywords: Stackelberg equilibria, differential games, mixed leadership, Riccati equations, two-point boundary value problems. Mathematics Subject Classification: Primary: 91A10, 91A23; Secondary: 34K1. Citation: Alain Bensoussan, Shaokuan Chen, Suresh P. Sethi. Linear quadratic differential games with mixed leadership: The open-loop solution. Numerical Algebra, Control & Optimization, 2013, 3 (1) : 95-108. doi: 10.3934/naco.2013.3.95
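For readers unfamiliar with the setting, a two-player linear-quadratic differential game couples linear state dynamics with a quadratic cost for each player. The notation below is the generic textbook form and is not taken from this paper:

```latex
\begin{aligned}
\dot{x}(t) &= A\,x(t) + B_1 u_1(t) + B_2 u_2(t), \qquad x(0) = x_0,\\
J_i(u_1,u_2) &= \tfrac{1}{2}\, x(T)^{\top} Q_{iT}\, x(T)
  + \tfrac{1}{2}\int_0^T \Bigl( x(t)^{\top} Q_i\, x(t)
  + u_i(t)^{\top} R_i\, u_i(t) \Bigr)\, dt, \qquad i = 1,2.
\end{aligned}
```

Mixed leadership refers, roughly, to each player acting as leader with respect to some of the decision variables and as follower with respect to the others, rather than one player leading in all variables.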
arXiv:2110.07338v6 [physics.class-ph] (cross-list: math.CA). Physics > Classical Physics. Title: Lorentz-equivariant flow with four delays of neutral type. Authors: Jayme De Luca. (Submitted on 14 Oct 2021 (v1), revised 28 Nov 2021 (this version, v6), latest version 19 Jan 2022 (v11).) Abstract: In order to allow integration by the method of steps, we generalize the two-body problem of variational electrodynamics by adding a second time-reversible interaction along lightcones. The time-reversible equations of motion define a semiflow on $C^2(\mathbb{R})$ with four state-dependent delays and nonlinear gyroscopic terms. If velocity discontinuities are present, their propagation requires additional constraints, i.e., the two energetic Weierstrass-Erdmann conditions defining the boundary layer neighborhood where velocity denominators become small. Furthermore, the flow includes an inversion layer where attraction turns into repulsion. In order to display the boundary layers, we discuss the motion restricted to a straight line by the initial condition. Comments: 32 pages, 3 figures. The caption of Fig. 3 was edited, and section 7C was added to the appendix to include an external field in the equations of motion. A one-dimensional version of the no-semiflow theorem was included to motivate the general case of theorem 3.3, and, last, the abstract was shortened from v5. Subjects: Classical Physics (physics.class-ph); Classical Analysis and ODEs (math.CA). Cite as: arXiv:2110.07338 [physics.class-ph] (or arXiv:2110.07338v6 for this version).
Supervised temporal link prediction in large-scale real-world networks. Part of a collection: Complex Networks 2020. Gerrit Jan de Bruin (ORCID: orcid.org/0000-0003-4936-879X), Cor J. Veenman, H. Jaap van den Herik & Frank W. Takes. Social Network Analysis and Mining, volume 11, Article number: 80 (2021). Link prediction is a well-studied technique for inferring the missing edges between two nodes in some static representation of a network. In modern day social networks, the timestamps associated with each link can be used to predict future links between so-far unconnected nodes. In these so-called temporal networks, we speak of temporal link prediction. This paper presents a systematic investigation of supervised temporal link prediction on 26 temporal, structurally diverse, real-world networks ranging from thousands to a million nodes and links. We analyse the relation between global structural properties of each network and the obtained temporal link prediction performance, employing a set of well-established topological features commonly used in the link prediction literature. We report on four contributions. First, using temporal information, an improvement of prediction performance is observed. Second, our experiments show that degree disassortative networks perform better in temporal link prediction than assortative networks. Third, we present a new approach to investigate the distinction between networks modelling discrete events and networks modelling persistent relations. Unlike earlier work, our approach utilises information on all past events in a systematic way, resulting in substantially higher link prediction performance. Fourth, we report on the influence of the temporal activity of the node or the edge on the link prediction performance, and show that the performance differs depending on the considered network type. In the studied information networks, temporal information on the node appears most important.
The findings in this paper demonstrate how link prediction can effectively be improved in temporal networks, explicitly taking into account the type of connectivity modelled by the temporal edge. More generally, the findings contribute to a better understanding of the mechanisms behind the evolution of networks. Introduction and problem statement Link prediction is a frequently employed method within the broader field of social network analysis (Barabási 2016). Many important real-world applications exist in a variety of domains. Two examples are the prediction of (1) missing links between pages of Wikipedia and (2) which users are likely to be friends on an online social network (Kumar et al. 2020). Link prediction is often defined as the task of predicting missing links based on the currently observable links in a network (Linyuan and Zhou 2011). Many real-world networks have temporal information on the times when the edges were created (Divakaran and Mohan 2020). Such temporal networks are also called dynamic or evolving networks. They open up the possibility of doing temporal link prediction. This means inferring future edges between two nodes, as opposed to predicting only missing links (Liben-Nowell and Kleinberg 2007). For instance, in friendship networks, temporal link prediction might (1) facilitate friend recommendations and (2) predict which people will form new friendships in the future. Existing work on temporal link prediction is typically performed on one or a handful of specific networks, making it difficult to assess the generalisability of the approaches used (Marjan et al. 2018). This paper presents the first large-scale empirical study of temporal link prediction on 26 different large-scale and structurally diverse temporal networks originating from various domains. In doing so, we provide a systematic investigation of how temporal information is best used in a temporal link prediction model.
This can be illustrated briefly by the social networks used in this study. They have a higher density than the other networks. This might improve performance in temporal link prediction, because the pairs of nodes in the networks used in our study are likely to have more common neighbours, providing more information to the supervised link prediction model. Thus, it is important to have an understanding of the relation between structural characteristics of the network and the performance of topological features. A common approach in temporal link prediction is to employ a supervised machine learning model that utilises multiple features to classify which links are missing or, in case of temporal link prediction, will appear in the future (de Bruin et al. 2021). Features are typically computed for every pair of nodes that is not (yet) connected, based on the topology of the network (Kumar et al. 2020). These topological features essentially calculate a similarity score for a node pair, where a higher similarity signals a higher likelihood that this pair of nodes should be connected. Commonly used topological features, used in both supervised and unsupervised learning, include Common Neighbours, Adamic-Adar, Jaccard Coefficient and Preferential Attachment (Sect. 4.1.1). These features clearly relate to the structural position of the nodes in the network. Previous work has suggested a straightforward approach to take the temporal evolution into account in these topological features (Tylenda et al. 2009; Bütün et al. 2018). We describe the process of obtaining the set of temporal topological features in Sect. 4.1.2. The benefits of using this set of features are that they are well-established and interpretable. Moreover, recent work has shown that in a supervised classifier these topological features perform as well as other types of features that are less interpretable and more complex (Ghasemian et al. 2020).
A further comparison with other types of features is provided in Sect. 2. In our work, we extend the set of state-of-the-art temporal topological features by considering that two types of temporal networks can be distinguished: networks with persistent relationships and networks with discrete events (O'Madadhain et al. 2005). The aforementioned example of friendship networks contains edges marking persistent relationships, which occur at most once between related persons. In the case of communication networks, an edge usually marks a discrete event at an associated timestamp, representing a message sent from one person to another. In contrast to networks with persistent relationships, multiple edges can occur between two persons in discrete event networks. Previous studies have ignored the fact that not all links are of the same type. In our approach, we address this gap in the literature by means of what we coin past event aggregation. This allows us to take both types of temporal links into account, where all information of two-faceted past interactions (i.e. persistent and discrete) is incorporated into the temporal topological features. Last but not least, the temporal topological features implicitly assume so-called edge-centred temporal behaviour, suggesting that phenomena at the level of links determine the evolution of the network. Here, we challenge this usual assumption: the temporal aspect may merely be caused by the activity of nodes, being the decision-making entities in the network, operating somewhat independently of the structure of the remainder of the network (Hiraoka et al. 2020). To investigate whether this assumption holds, we present a comparison between (1) temporal topological features and (2) features consisting of static topological features along with features capturing temporal node activity.
By testing this distinction on the 26 different temporal networks, we can better understand whether the temporal aspect is best captured by considering edge-centred or node-centred temporal information. To sum up, the four contributions of this work are as follows. First, to the best of our knowledge, we are the first to present a large-scale empirical study of temporal link prediction on a variety of networks. In total, we assess the performance of a temporal link prediction model on 26 structurally diverse networks, varying in size from a few hundred to over a million nodes and edges. Second, we analyse possible relations between structural network properties and the observed performance in temporal link prediction. We find that networks with degree disassortativity, signalling frequent connections between nodes with different degrees, show better performance in temporal link prediction. Third, we propose to account for all past interactions in discrete event networks. Fourth, in an attempt to understand the relation between node-centred and edge-centred temporal behaviour, we find that the information networks used in this study stand out, as they appear to have more node-centred temporal behaviour. This work is structured as follows. In Sect. 2, we further elaborate on related work. Section 3 provides the notation used in this work, leading up to the definition of temporal link prediction. After that, we continue with the approach in Sect. 4. This will be followed by a description of the temporal networks in Sect. 5. In Sect. 6 the four results of the experiments are presented and discussed. In Sect. 7, the conclusion is presented, together with suggestions for future work. Although there is much literature available on link prediction, we found that attention for temporal networks and temporal link prediction is relatively limited. Some reviews have been published. 
They point out the various approaches that exist towards temporal link prediction (Dhote et al. 2013; Divakaran and Mohan 2020). Consequently, we will start with an exploration of four types of approaches presented therein. First, probabilistic models require (1) additional node or edge attributes to obtain sufficient performance (which hinders a generic approach to all networks) or (2) techniques that do not scale to larger networks (Kumar et al. 2020) (rendering them unusable for the larger networks used in the study). Second, approaches such as matrix factorisation, spectral clustering (Romero et al. 2020), and deep learning approaches, like DeepWalk (Perozzi et al. 2014) and Node2Vec (Grover and Leskovec 2016), all try to find a lower dimensional representation of the temporal network and use the obtained representation as a basis for link prediction. These approaches all learn a representation of the network without requiring explicit engineering of features. However, the downside is that the obtained features are hard to interpret, thereby making it difficult to explain why a certain link is predicted to appear. In applied scenarios, under some jurisdictions, this explanation can be required by law when an employed machine learning model affects people, which is often the case in, for example, the health and law enforcement domains (Holzinger et al. 2017). As an example, in previous work we examined driving patterns of trucks in a so-called truck co-driving network, where trucks are connected when they frequently drive together (de Bruin et al. 2020). If an inspection agency were to use gathered network information to predict which trucks should be inspected for possible misconduct, truck drivers may legally have the right to know why they were selected. Since we aim to provide an approach towards temporal link prediction that is applicable to any scientific domain, we disregard approaches that learn a lower dimensional representation.
Third, in the time series forecasting approach, the temporal network is divided into multiple snapshots (Potgieter et al. 2007; Da Silva Soares and Prudencio 2012; Öczan and Öğüdücü 2015; Güneş et al. 2016; Öczan A, Öğüdücü 2017). For each of these snapshots, static topological features are learned. By using time series forecasting, the topological features of a future snapshot are learned, enabling link prediction. This approach does scale well to larger networks and is interpretable. However, it is unclear into how many snapshots the temporal network should be divided and whether the number of snapshots should remain constant across all networks used. Again, hindering a truly generic approach. Finally, we focus on temporal topological features in this work (Tylenda et al. 2009; Bütün et al. 2016). Recent work has suggested that the use of topological features in supervised learning can outperform more complex features learned from a lower dimensional representation of the temporal network (Ghasemian et al. 2020). Section 4 provides further details on this concept. These topological features are provided to a supervised link prediction classifier. Many different classification algorithms are known to work well in link prediction. Commonly used classifiers include logistic regression (Potgieter et al. 2007; O'Madadhain et al. 2005), support vector machines (Al Hasan et al. 2006; Öczan A, Öğüdücü 2017), k-nearest neighbours (Al Hasan et al. 2006; Bütün et al. 2018, 2016), and random forests (Öczan A, Öğüdücü 2017; Bütün et al. 2016, 2018; Ghasemian et al. 2020; de Bruin et al. 2021, 2020). We report performances using the logistic regression classifier. 
This classifier provides the following benefits: (1) it allows intuitive explanation of how each instance is classified (Bishop 2013), (2) the classifier is relatively simple and hence interpretable (Molnar 2020), (3) the classifier scales well to larger networks, and (4) good results are achieved without any parameter optimisation (O'Madadhain et al. 2005). To sum up, in contrast to earlier works on temporal link prediction, which have been applied to only a handful of networks (Bliss et al. 2014; Öczan and Öğüdücü 2015; Potgieter et al. 2007; Bütün et al. 2016, 2018; Da Silva Soares and Prudencio 2012; Güneş et al. 2016; Öczan A, Öğüdücü 2017; Tylenda et al. 2009; O'Madadhain et al. 2005; Soares and Prudêncio 2013; Muniz et al. 2018; Romero et al. 2020), we apply link prediction on a structurally diverse set of 26 large-scale, real-world networks. We aim to do so using a generic, scalable and interpretable approach. This section starts by describing the notation used throughout this paper in Sect. 3.1. In Sect. 3.2, we explain the various network properties and measures used in this work. Finally, in Sect. 3.3 we formally describe the temporal link prediction problem. An undirected, temporal network \(H_{[t', t'']}(V, E_H)\) consists of a set of nodes V and edges (or, equivalently, links) \(E_H = \left\{ (u,v,t) \mid u, v \in V \wedge t' \le t \le t'' \right\} \) that occur between timestamps \(t'\) and \(t''\). Networks with discrete events, where multiple events can occur between two nodes, can be seen as a multigraph, where multi-edges exist: links between the same two nodes, but with different timestamps (Gross et al. 2013). In this work, removal of edges is not considered, since this information is not available for most temporal networks. A static representation of the underlying network is needed for the comparison between static and temporal features (see Sect. 4).
This static, simple graph \(G_{[t', t'']}(V, E_G)\) with edges \((u,v) \in E_G\) is obtained from the temporal network \(H_{[t', t'']}\) by considering all edges that occur between \(t'\) and \(t''\), collapsing the multi-edges into a single edge. The number of nodes (also called the size) of the graph is \(n = |V|\) and the number of edges is \(m = |E_G|\). For convenience in later definitions, \(\varGamma (u)\) is the set of all neighbours of node \(u \in V\). The size of this set, i.e. \(|\varGamma (u)|\), is the degree of node u. Real-world network properties and their measures Several properties exist that characterise the global structure of a network (Barabási 2016). These properties guide us in the exploration of how structure relates to the performance of a temporal link prediction algorithm. Below we discuss four of the main properties. Each of the properties is defined on the static underlying graph \(G_{[t', t'']}\). Density The density of a network is calculated by dividing the number of edges by the total number of possible edges, i.e. \({2m} / {n(n-1)}\). For networks of the same size, higher density means that the average degree of nodes is higher, which has implications for the overall structural information available to the link prediction classifier. Diameter The diameter, sometimes called the maximum distance, is the largest distance observed between any pair of nodes. This property, together with density, captures how well-connected a network is. Average clustering coefficient The average clustering coefficient is the overall probability that two neighbours of a randomly selected node are linked to each other. The average clustering coefficient is given by $$\begin{aligned} C = \frac{1}{n} \sum _{u \in G} \frac{2L_u}{|\varGamma (u)|\big (|\varGamma (u)|-1\big )}\text {,} \end{aligned}$$ where \(L_u\) represents the number of edges between the neighbours of node u. 
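The measures above can be computed directly from an edge list. The sketch below uses a small hypothetical network (not one of the 26 datasets) and plain Python; NetworkX offers equivalents such as nx.density and nx.average_clustering.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical temporal edge list (u, v, t) with one repeated event.
temporal_edges = [(0, 1, 1), (0, 1, 5), (1, 2, 2), (0, 2, 3), (2, 3, 4)]

# Collapse multi-edges into the static simple graph G of Sect. 3.1.
adj = defaultdict(set)
for u, v, _ in temporal_edges:
    adj[u].add(v)
    adj[v].add(u)

n = len(adj)                                        # number of nodes
m = sum(len(nbrs) for nbrs in adj.values()) // 2    # number of edges
density = 2 * m / (n * (n - 1))

def clustering(u):
    """Local clustering coefficient 2*L_u / (k_u * (k_u - 1))."""
    k = len(adj[u])
    if k < 2:
        return 0.0
    links = sum(1 for a, b in combinations(adj[u], 2) if b in adj[a])
    return 2 * links / (k * (k - 1))

C = sum(clustering(u) for u in adj) / n             # average clustering
```

For this toy graph, the two events between nodes 0 and 1 collapse into a single edge, so n = 4 and m = 4.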
In real-world networks, and in particular social networks, often highly clustered networks are observed. Degree assortativity It is often observed that nodes do not connect to random other nodes, but instead connect to nodes that are similar in some way. For instance, in social networks degree assortativity is observed, meaning that nodes often connect to other nodes with a similar degree. We can measure the degree assortativity of a network by calculating the Pearson correlation coefficient, \(\rho \), between the degree of nodes at both ends of all edges (Newman 2002). In case low degree nodes more frequently connect with high degree nodes, the obtained value is negative. The goal of a supervised link prediction model The goal of a supervised link prediction model is to predict for unconnected pairs of nodes in the temporal network \(H_{[t_{q=0}, t_{q=s}]}\) whether they will connect in an evolved interval \([t_{q=s}, t_{q=1}]\) where q marks the q-th quantile of observed timestamps in the network and \(0< s < 1\). Hence, timestamps \(t_{q=0}\) and \(t_{q=1}\) mark the time associated with the first and last edge in the network, respectively. Moreover, timestamp \(t_{q=s}\) marks the time used to split the network into two intervals. The examples provided to the supervised link prediction model are pairs of nodes that are not connected in \([t_{q=0}, t_{q=s}]\). For each example (u, v) in the dataset, a feature vector \(x_{(u,v)}\) and binary label \(y_{(u,v)}\) is provided to the supervised link prediction model. The label for each pair of nodes (u, v) is \(y_{(u,v)}=1\) when it will connect in \([t_{q=s}, t_{q=1}]\) and \(y_{(u,v)}=0\) otherwise. Because parameter s determines the number of considered nodes, it affects the class imbalance encountered in the supervised link prediction; values close to 1 result in a larger number of node pairs to consider, while limiting the number of positives.
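The split and labelling just described can be sketched as follows. The edge list is hypothetical, and taking \(t_{q=s}\) as the nearest-rank s-quantile of the sorted timestamps is an assumption for illustration; the paper does not prescribe a quantile convention.

```python
from itertools import combinations

# Hypothetical temporal edge list (u, v, t); not one of the 26 datasets.
temporal_edges = [(0, 1, 1), (1, 2, 2), (0, 2, 3), (3, 4, 3),
                  (2, 3, 4), (0, 3, 5), (1, 3, 6)]
s = 2 / 3

# Timestamp at the s-quantile of the observed timestamps.
timestamps = sorted(t for _, _, t in temporal_edges)
t_split = timestamps[round(s * (len(timestamps) - 1))]

feature_edges = [(u, v, t) for u, v, t in temporal_edges if t <= t_split]
future_edges = {frozenset((u, v)) for u, v, t in temporal_edges if t > t_split}

# Label 1 for pairs unconnected before t_split that connect afterwards.
connected = {frozenset((u, v)) for u, v, _ in feature_edges}
nodes = sorted({x for u, v, _ in feature_edges for x in (u, v)})
labels = {(u, v): int(frozenset((u, v)) in future_edges)
          for u, v in combinations(nodes, 2)
          if frozenset((u, v)) not in connected}
```

With s = 2/3 this toy network splits at t = 4, and the pairs (0, 3) and (1, 3) become the positive examples.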
The features used in the supervised link prediction model are only allowed to use information of network \(H_{[t_{q=0}, t_{q=s}]}\), preventing any leakage from nodes that will connect in the evolved time interval \([t_{q=s}, t_{q=1}]\). Note that the temporal information contained in the network is used for two purposes: (1) it allows us to split the network into two temporal intervals and (2) it is used in feature engineering to model temporal evolution. This section explains the features used towards supervised temporal link prediction. We start with an explanation of the different sets of features used in this study in Sect. 4.1. In particular, in step 2 of Sect. 4.1.2, we present a novel and intuitive approach to incorporate information on past interactions in the case of discrete event networks. In Sect. 4.2, we discuss the supervised link prediction model. In this subsection, we explain three types of features used. First, in Sect. 4.1.1 static topological features are provided. Second, temporal topological features are given in Sect. 4.1.2. Third, the node activity features are specified in Sect. 4.1.3. Static topological features We use four common static topological features, which together form the feature vector for each candidate pair of nodes (u, v). These features are computed on the static graph G underlying the temporal network considered, as defined in Sect. 3.1. Below we define each of them. Common Neighbours (CN) The CN feature is equal to the number of common neighbours of two nodes. $$\begin{aligned} { CN }_{{\mathrm {static}}}(u,v) = | \varGamma (u) \cap \varGamma (v) | \end{aligned}$$ Adamic-Adar (AA) The AA feature considers all common neighbours, favouring nodes with low degrees (Adamic and Adar 2003).
$$\begin{aligned} { AA }_{{\mathrm {static}}}(u,v) = \sum _{z\in \varGamma (u)\cap \varGamma (v)} \frac{1}{\log \big | \varGamma (z) \big |} \end{aligned}$$ Jaccard Coefficient (JC) The JC feature is similar to the CN feature, but normalises for the number of unique neighbours of the two nodes. $$\begin{aligned} { JC }_{{\mathrm {static}}}(u,v) = \frac{| \varGamma (u) \cap \varGamma (v) |}{| \varGamma (u) \cup \varGamma (v) |} \end{aligned}$$ Preferential Attachment (PA) The PA feature takes into account the observation that nodes with a high degree are more likely to make new links than nodes with a lower degree. $$\begin{aligned} { PA }_{{\mathrm {static}}}(u,v) = |\varGamma (u)| \cdot |\varGamma (v)| \end{aligned}$$ Temporal topological features Straightforward temporal extensions to topological features have been proposed in the literature (Tylenda et al. 2009; Bütün et al. 2018). Our method extends these approaches to also capture past interactions in case of aforementioned discrete event networks. The construction of these features then requires three steps, namely: Temporal weighting. The proposed approach of past event aggregation. Computation of weighted topological features. The resulting feature vector for a given pair of nodes, after applying the three steps, consists of all possible combinations of 3 different temporal weighting functions (linear, exponential, square root), 8 different past event aggregations (see below under B) and 4 different weighted topological features (CN, AA, JC, PA). Thus, for discrete event networks the feature vector is of length \(3 \cdot 8 \cdot 4 = 96\) and for networks with persistent relationships it is of length \(3 \cdot 4 = 12\). Temporal weighting The topological features need weighted edges (step C), while the networks used in this study have edges with an associated timestamp. In the temporal weighting step, we obtain these weights in a procedure described by Tylenda et al. (2009). 
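Before turning to their temporal variants, the four static topological features above can be sketched as follows, for a hypothetical adjacency mapping from each node to its set of neighbours:

```python
from math import log

# Hypothetical adjacency mapping each node to its set of neighbours.
adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1}, 3: {1}}

def cn(u, v):
    """Common Neighbours: size of the shared neighbourhood."""
    return len(adj[u] & adj[v])

def aa(u, v):
    """Adamic-Adar; assumes every common neighbour has degree > 1."""
    return sum(1 / log(len(adj[z])) for z in adj[u] & adj[v])

def jc(u, v):
    """Jaccard Coefficient: common neighbours over unique neighbours."""
    return len(adj[u] & adj[v]) / len(adj[u] | adj[v])

def pa(u, v):
    """Preferential Attachment: product of the two degrees."""
    return len(adj[u]) * len(adj[v])
```

For the pair (0, 3), the only common neighbour is node 1 (degree 3), so CN = 1, JC = 1/2, PA = 2, and AA = 1/log 3.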
The definitions of the temporal weighting functions are provided in Eqs. (5)–(7). In these definitions, a numeric timestamp t is converted to a weight w. Note that \(t_{\min }\) and \(t_{\max }\) denote the earliest and latest observed timestamp over all edges of the considered network. $$\begin{aligned} w_{\mathrm {linear}}&= l+(1-l)\cdot \frac{t-t_{\min }}{t_{\max }-t_{\min }} \end{aligned}$$ $$\begin{aligned} w_{\mathrm {exponential}}&= l+(1-l)\cdot \frac{ \exp \left( 3\frac{t-t_{\min }}{t_{\max }-t_{\min }}\right) -1 }{\mathrm {e}^3-1} \end{aligned}$$ $$\begin{aligned} w_{\mathrm {square\, root}}&= l+(1-l)\cdot \sqrt{\frac{t-t_{\min }}{t_{\max }-t_{\min }}} \end{aligned}$$ In Fig. 1 the behaviour of the different weighting functions is shown when applied to the DBLP network (Ley 2002). It is further described in Sect. 5. The exponential weighting function (Eq. 6) assigns a higher weight to more recent edges than the linear (Eq. 5) and square root (Eq. 7) functions. In contrast, the square root function assigns higher weights to older edges in comparison to the linear and exponential functions. When weights of older edges become close to zero, these edges are discarded by the weighted topological features. To prevent edges far in the past from being discarded completely, we bound the output of each weighting function between a positive value l and 1.0 (l stands for lower bound). Past event aggregation In the case of networks with discrete events, each multi-edge has an associated weight after the previous temporal weighting step. To allow the weighted topological features to be computed, we need to obtain a single weight for each node pair, capturing their past activity. Here we propose to aggregate all past events using eight different aggregation functions. All eight functions use as input a set containing all the weights of past events.
The following functions are used: (1) the zeroth, (2) first, (3) second, (4) third, (5) fourth quantile and the (6) sum, (7) mean, and (8) variance of all past weights. By means of these summary statistics, we aim to capture the fact that depending on which network is considered, it may matter whether interaction took place very often, far away in the past, or very recently. These aggregation functions aim to capture different temporal behaviours. The quantile functions bin the set of weights, which is a common step in feature engineering. Taking the sum, mean, and variance of the set of weights allows the model to also capture more complex trends. An example of these complex trends is the so-called bursty behaviour, which is often observed in real-world data (Barabási 2005). Weighted topological features In Eqs. (8)–(11), the weighted topological features are presented, which are taken from Bütün et al. (2018). In these equations, \( wtf \!\!\left( u, v\right) \) denotes the weight obtained for a given pair of nodes (u, v) after edges have been temporally weighted and, in the case of networks with discrete events, events have been aggregated.
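The temporal weighting (step A) and past event aggregation (step B) can be sketched as follows. The value l = 0.2 follows the experimental setup later in the paper; the exact quantile convention of the aggregations is an assumption for illustration.

```python
from math import exp, sqrt
from statistics import mean, quantiles, variance

l = 0.2  # lower bound l on the edge weights

def temporal_weight(t, t_min, t_max, kind="linear"):
    """Map a timestamp to a weight in [l, 1], following Eqs. (5)-(7)."""
    x = (t - t_min) / (t_max - t_min)
    if kind == "linear":
        return l + (1 - l) * x
    if kind == "exponential":
        return l + (1 - l) * (exp(3 * x) - 1) / (exp(3) - 1)
    if kind == "square root":
        return l + (1 - l) * sqrt(x)
    raise ValueError(kind)

def aggregate(weights):
    """Eight aggregations of the past event weights of one node pair:
    zeroth-fourth quantile, sum, mean, and variance (needs >= 2 events)."""
    q = [min(weights)] + quantiles(weights, n=4) + [max(weights)]
    return q + [sum(weights), mean(weights), variance(weights)]
```

The eight aggregated values then act as the single pair weight fed into the weighted topological features of step C, one feature column per aggregation.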
$$\begin{aligned} { AA }_{{\mathrm {temporal}}}(u,v)= \sum _{z\in \varGamma (u)\cap \varGamma (v)} \frac{ wtf \!\!\left( u, z\right) + wtf \!\!\left( v, z\right) }{\log \left( 1 + \sum \nolimits _{x\in \varGamma (z)} wtf \!\!\left( z, x\right) \right) }\end{aligned}$$ $$\begin{aligned} { CN }_{{\mathrm {temporal}}}(u,v)&= \sum _{z\in \varGamma (u)\cap \varGamma (v)} wtf \!\!\left( u, z\right) + wtf \!\!\left( v, z\right) \end{aligned}$$ $$\begin{aligned} { JC }_{{\mathrm {temporal}}}(u,v)= \sum _{z\in \varGamma (u)\cap \varGamma (v)} \frac{ wtf \!\!\left( u, z\right) + wtf \!\!\left( v, z\right) }{ \sum \nolimits _{x\in \varGamma \left( u\right) } wtf \!\!\left( u, x\right) + \sum \nolimits _{y\in \varGamma \left( v\right) } wtf \!\!\left( v, y\right) } \end{aligned}$$ $$\begin{aligned} { PA }_{{\mathrm {temporal}}}(u,v)&= \sum _{x\in \varGamma (u)} wtf \!(u, x) \cdot \sum _{y\in \varGamma (v)} wtf \!(v, y) \end{aligned}$$ Fig. 1: The mapping of the three different weighting functions for the entire DBLP network. Node activity features The goal of the node activity features is to capture node-centred temporal activity. To this end, we create the node activity features in the following three steps: (1) temporal weighting, (2) aggregation of node activity, and (3) combining node activity. These steps are explained below. The feature vector for a given pair of nodes consists of all combinations of three different temporal weighting functions, seven different aggregation functions applied to the node activity, and four different combinations of the node activity. This results in a feature vector of length \(3 \cdot 7 \cdot 4 = 84\). Temporal weighting The temporal weighting procedure is the same as used in feature engineering of the temporal weighted topological features (see Sect. 4.1.2). Aggregation of node activity For each node, the set of weights from all edges adjacent to the node under investigation is collected.
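As an illustration of the weighted topological features of step C, a minimal sketch of temporal Common Neighbours, assuming a precomputed table wtf of node-pair weights produced by steps A and B; the adjacency and weights here are hypothetical:

```python
# Hypothetical adjacency and precomputed pair weights wtf (after temporal
# weighting and, for discrete events, past event aggregation).
adj = {0: {2, 3}, 1: {2, 3}, 2: {0, 1}, 3: {0, 1}}
wtf = {frozenset(p): w for p, w in
       [((0, 2), 0.9), ((0, 3), 0.4), ((1, 2), 0.6), ((1, 3), 0.8)]}

def cn_temporal(u, v):
    """Weighted Common Neighbours: sum the pair weights towards each
    common neighbour z of u and v."""
    return sum(wtf[frozenset((u, z))] + wtf[frozenset((v, z))]
               for z in adj[u] & adj[v])
```

For the pair (0, 1), the common neighbours 2 and 3 contribute (0.9 + 0.6) and (0.4 + 0.8), respectively; the other weighted features aggregate the same wtf table with different normalisations.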
To obtain a fixed feature vector for each node, the set of weights is aggregated using the following functions: (1) the zeroth, (2) first, (3) second, (4) third, (5) fourth quantile and the (6) sum and (7) mean of the node activity vector (the variance of the node weights is omitted here). Similar to the engineering of the temporal topological features, these aggregations are used to capture different kinds of activity that a node may exhibit. In particular, nodes are known to show bursty activity patterns in some networks (Hiraoka et al. 2020). Combining node activity To take into account the activity of both nodes under consideration, as obtained in the previous two steps, we use four different combination functions. These four functions are the (1) sum, (2) absolute difference, (3) minimum, and (4) maximum. By doing this, we obtain the node activity feature vector. Supervised link prediction The features discussed in Sect. 4.1 serve as input for a supervised machine learning model that predicts whether or not a pair of currently disconnected nodes will connect in the future (see Sect. 3.3). Here we use the logistic regression classifier. It was chosen because of its simplicity, overall good performance on this type of task and its explainability (see Sect. 2). We did not consider optimisation of any set of parameters, because this is outside the scope of the current paper. In theory, a number of node pairs quadratic in the number of nodes could be selected as input for the classifier, with positive instances being node pairs that connect in the future, resulting in a significant class imbalance. To counter this problem and at the same time limit the computation time needed to train the model, we reduce the number of node pairs given as input to the classifier by the following two steps. First, a well-known step is to select only pairs of nodes that are at a specific distance from each other (Lichtenwalter and Chawla 2012).
Given the large sizes of networks used in this study, we limit the selection to only include pairs of nodes that are at distance two. Second, we sample with replacement from the remaining pairs of nodes a total of 10,000 that will connect (positives) and 10,000 that will not connect in the future (negatives). By doing this, we obtain a balanced set of examples which does not require any further post-processing, and can be used directly by the classifier. The training set for the logistic regression classifier is obtained by means of stratified sampling, taking 75% of all examples. The remaining instances are used as a test set. Because we do not optimise any parameters of the logistic regression classifier, no validation set is used. Analogously to previous work (Divakaran and Mohan 2020), we measure the performance of the classifier on the test set by means of the Area Under the Receiver Operating Characteristic Curve (AUC). The AUC only considers the ranking of each score obtained for each pair of nodes provided to the logistic regression classifier. The AUC does not consider the absolute value of the score. This makes the measured performance robust to cases where the applied threshold on the scores is chosen poorly. An AUC of 0.5 signals random behaviour, i.e. no classifier performance at all. A perfect performance is obtained when the AUC is equal to 1, which is highly unlikely in practical settings. In this work, we use a large and structurally diverse collection of in total 26 temporal networks. The networks can be categorised into three different domains, namely social, information, and technological networks. The distinction of networks in these three domains is taken from other network repositories. In Table 1 some common structural properties of these datasets are presented (see Sect. 3.2 for definitions). It is apparent from Fig.
2, which shows the relation between the number of nodes and edges for each of the 26 datasets, that the selected networks span a broad range in terms of size. Also, for each network it is indicated whether its edges mark persistent relationships or discrete events. In the latter case, the network has a multigraph structure, which requires preprocessing as discussed in Sect. 4.1.2. We observe seventeen networks showing degree disassortative behaviour, meaning that high degree nodes tend to connect to low degree nodes more frequently. The other nine networks show the opposite behaviour. We do not observe any significant relation between the domain of a network and its degree assortativity, or any other global property of the network.
The number of nodes and edges of the networks used in this paper. The horizontal and vertical axes have logarithmic scaling
Table 1 Networks used in this work
A total of 21 networks were obtained from the Konect repository (Kunegis 2013), four networks from SNAP (Leskovec and Krevl 2014) and one from AMiner (Zhuang et al. 2013). The last column in Table 1 provides a reference to the work where each network is first introduced. Any directed network is converted to an undirected network by ignoring the directionality. In originally signed networks, we use only positive edges. In Sect. 6.1 we start with the experimental setup. Then, the structure of this section follows the four contributions of this work. In Sect. 6.2 the performance of temporal link prediction on 26 networks is assessed. Section 6.3 continues with the analysis of the relation between structural network properties and the performance in temporal link prediction. In Sect. 6.4, we show the results of our methodological contribution to temporal link prediction in networks with discrete events. We finish in Sect. 6.5 with a comparison between node-centred and edge-centred temporal behaviour. Experimental setup In Sect.
3.3, the procedure to obtain examples and labels that serve as input for the classifier has been explained. In this procedure, we need to determine the value s for each network. Commonly, around two thirds of the edges are used for extraction of features (Lichtenwalter et al. 2010; Bütün et al. 2016, 2018; Al Hasan et al. 2006) and hence we choose \(s=\frac{2}{3}\). In the feature engineering of the temporal topological and node activity features, the first step is to temporally weight each edge. In Sect. 4.1.2, step 1, parameter l is introduced to prevent the discarding of old edges in the temporal weighting procedure. Based on earlier work (Tylenda et al. 2009), we set \(l=0.2\), giving a minimal weight to links far away in the past, while still sufficiently discounting these older links. In this work, we use four sets of features. These feature sets, indicated by capital Roman numerals, are as follows.
I. Static topological (as defined in Sect. 4.1.1).
II-A. Temporal topological (as defined in Sect. 4.1.2).
II-B. Temporal topological without past event aggregation (as defined in Sect. 4.1.2, skipping step 2 and using only the last occurring event).
III. Static topological + node activity (Sect. 4.1.1 + Sect. 4.1.3).
It is common practice to standardise features by subtracting the mean and scaling to unit variance. The logistic regression classifier provided in the Python scikit-learn package (Pedregosa et al. 2011) is used. Although the goal of this paper is not to extensively compare machine learning classifiers, in Appendix A2 results on the performance in terms of AUC obtained using two other commonly used classifiers, random forests (Pedregosa et al. 2011) and XGBoost (Chen and Guestrin 2016), are presented. For almost all datasets, similar relative performance is observed. The code used in this research is available at http://github.com/gerritjandebruin/snam2021-code. It uses the Python language and the packages NetworkX (Hagberg et al.
2008) for network analysis, scikit-learn (Pedregosa et al. 2011) for the machine learning pipeline, and the Scipy ecosystem (Virtanen et al. 2020) for some of the feature engineering and statistical tests. The C++ library teexGraph (Takes and Kosters 2011) was used to determine the diameter of each network. The package versions, as well as all dependencies, can be found in the aforementioned repository. Improvement of prediction performance with temporal information We examine whether temporal information improves the overall prediction performance. Baseline performance is obtained by ignoring temporal information, using only static topological features (feature set I). In contrast, temporal topological features (feature set II-A) are used to obtain the performance of link prediction utilising temporal information. The results of this comparison are presented in Table 2 and in Fig. 3. They clearly indicate that using temporal information improves the prediction performance of new links, i.e. performance reported in column 'II-A' is always higher than that in 'I'. So, every single network shows better performance when temporal topological features are used. The average improvement in performance is \(0.07\pm 0.04\) (± standard deviation). For some networks, performance improves considerably more when temporal information is used in prediction. For example, the loans network has a mediocre baseline performance of 0.79, but a high performance of 0.95 is observed when temporal information is employed. This improvement in performance can be related to the structure of the network. Hence, in the next section the relation between the structural properties of networks and the performance in temporal link prediction is explored. 
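The evaluation pipeline used throughout these experiments (balanced positive and negative samples, a 75% stratified train split, standardised features, a default logistic regression classifier, and AUC as metric) can be sketched with scikit-learn as follows. The feature matrix here is random stand-in data, not the paper's engineered features:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))   # stand-in features for sampled node pairs
y = np.repeat([0, 1], 1000)       # balanced labels: will / will not connect

# 75% stratified training split; the rest is the test set. No validation
# set is needed, since no hyperparameters are tuned.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, train_size=0.75, stratify=y, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_tr, y_tr)

# AUC depends only on the ranking of the scores, not their absolute values.
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
```

With random stand-in features the AUC hovers around 0.5, illustrating the "no classifier performance at all" baseline mentioned above.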
Table 2 Performance obtained in (temporal) link prediction, using the following sets of features; static topological (I), temporal topological with (II-A) and without (II-B) past event aggregation, and static topological + node activity (III)
Performance of the link prediction classifier for the 26 different networks
Structural network properties and link prediction performance In this section, we examine which structural properties are associated with high link prediction performance. In Fig. 4, the Pearson correlations between the performance in (temporal) link prediction and various structural network properties (see Sect. 3.2) are presented. While most properties show at best modest correlation with the link prediction performance, we observe a significant negative correlation between the degree assortativity of a network and the prediction performance of new links using static topological features (\(p=3\cdot 10^{-6}\)) and temporal topological features (\(p=5\cdot 10^{-7}\)). This means that networks with strong disassortative behaviour, in which nodes of low degree are more likely to connect with nodes of high degree, show better performance in link prediction. The relation between degree assortativity and the link prediction performance is shown in more detail in Fig. 5. The observed negative correlation might be explained as follows. In real-world networks, low degree nodes typically largely outnumber the high degree nodes. However, nodes with a degree that by far exceeds the average degree, so-called hubs, are also relatively often observed in real-world networks (Barabási 2016). In degree disassortative networks, the numerous low degree nodes by definition connect more frequently with hubs than with other low degree nodes. For these low degree nodes, the preferential attachment feature will provide higher scores for candidate nodes having a high degree.
Therefore, the supervised model can use this information in a straightforward manner to obtain a better performance. To confirm the relation between the degree assortativity and temporal link prediction performance of a network, we conducted additional experiments. By performing assortative and disassortative degree-preserving rewiring, we further substantiate the claim that disassortative networks indeed show higher link prediction performance. Detailed results can be found in Appendix A1. In Fig. 5 we observe that the temporal topological features show an even stronger correlation (\(\rho = -0.82\)) than the static topological features (\(\rho =-0.78\)). A possible explanation is that the temporal features are able to determine with higher accuracy which nodes will grow into active hubs, linking to many low degree nodes, whereas this information would be lost in a static network representation. This observation provides additional evidence that the temporal topological features are likely capturing relevant temporal behaviour.
Correlations between network properties and the performance in a supervised classifier learned only with static topological features (feature set I) and with temporal topological features (feature set II-A)
Degree assortativity and performance in a supervised classifier for each network, learned only with static topological features (feature set I) and temporal topological features (feature set II-A). The lines indicate the relation between the degree assortativity of the network and the performance of the classifier
Enhancement of performance with past event aggregation To assess how networks with discrete events should be dealt with in temporal link prediction, we use two different sets of features. The first set of features (II-A) is constructed with past event aggregation, which makes full use of the information contained in all discrete events.
The second set of features (II-B) considers only the last occurring edge between two nodes, thereby ignoring any past events. For networks with persistent edges, the two sets of features yield the same results, because the networks do not contain past events. The performance obtained with these two different sets of features is reported in Table 2. In Fig. 6, we show the difference between the two performances of the networks with discrete events in more detail. From this figure, we learn that these networks all show better performance when past events are aggregated using the various aggregation functions. This result is interesting more broadly for link prediction research, as the derived feature modification steps can be inserted into any topological network feature aiming to capture the similarity of nodes in an attempt to predict their future connectivity. Interestingly, when looking in more detail at the performance improvement by past event aggregation for each discrete event network, we observe large differences. On the one hand, we observe networks with only minor improvement when past events are aggregated. For example, the Condensed matter (scientific) collaboration network (Condm.) shows only a minor improvement in AUC from 0.706 to 0.760. A possible explanation is that temporal information of discrete events has only limited use, since it takes time to come to a successful collaboration. On the other hand, the UC Irvine message network (UC) shows a major improvement in AUC from 0.744 to 0.893. This might be caused by the more variable nature of messages, which take only a short time to establish. In that case, the feature set with past event aggregation might provide higher scores to pairs of nodes that are both actively messaging.
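The difference between the two feature sets can be illustrated as follows. This is a sketch: the function name and the particular aggregation statistics shown are ours, standing in for the paper's own set of aggregation functions:

```python
import numpy as np

def pair_feature(event_weights, aggregate=True):
    """Turn the temporally weighted past events between a node pair into
    feature value(s).

    aggregate=True  -> like feature set II-A: aggregate over *all* events.
    aggregate=False -> like feature set II-B: keep only the last event.
    """
    w = np.asarray(event_weights, dtype=float)
    if not aggregate:
        return np.array([w[-1]])   # last occurring event only
    # Illustrative aggregation statistics over the full event history.
    return np.array([w.min(), w.max(), w.sum(), w.mean()])

# A pair with many recent interactions is distinguishable from a pair with
# a single one under II-A, but not necessarily under II-B.
busy  = pair_feature([0.8, 0.9, 1.0])
quiet = pair_feature([1.0])
```

Under II-B both example pairs reduce to the single weight of their most recent event, whereas under II-A the sum statistic alone already separates them.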
Performance obtained in temporal link prediction, using temporal topological features without (feature set II-B, x-axis) and with (feature set II-A, y-axis) past event aggregation
Comparison of node- and edge-centred temporal link prediction In the experiments performed so far, we used temporal topological features to assess temporal link prediction performance. These features assume edge-centred temporal behaviour. In the experiments below, we compare the performance of the edge-centred features (feature set II-A) with features that assume node-centred temporal behaviour (feature set III). The results of both feature sets are presented in Table 2 and in more detail in Fig. 7. We observe a strong correlation (\(\rho =0.92\), \(p=0.009\)) between the obtained performances using the two sets of features on all 26 networks. This finding suggests that the temporal aspect of most networks can be modelled by using either node-centred or edge-centred temporal features. However, for the four information networks the performance of the node-centred features seems to be higher than that of the edge-centred features. This finding hints that in information networks temporal behaviour may be node-centred. Given the low number of information networks available in this study, further research should be conducted on a larger set of information networks to verify this finding. A note of caution is due here, since we analyse the temporal link prediction performance only on pairs of nodes at a distance of two; it remains to be seen whether the findings still hold when more global features of node similarity are used. Notwithstanding this limitation, the study shows that both node- and edge-centred features in supervised temporal link prediction are able to achieve a high performance.
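The reported correlation between the per-network AUC scores of the two feature sets is a plain Pearson correlation, computable with SciPy as below. The AUC values here are illustrative stand-ins, not the actual values from Table 2:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-network AUC scores for the two feature sets.
auc_edge_centred = np.array([0.81, 0.90, 0.74, 0.95, 0.88, 0.79])
auc_node_centred = np.array([0.80, 0.92, 0.75, 0.93, 0.90, 0.78])

# pearsonr returns the correlation coefficient and a two-sided p-value
# for the null hypothesis of no correlation.
rho, p_value = pearsonr(auc_edge_centred, auc_node_centred)
```

A coefficient close to 1 with a small p-value, as observed in the paper, indicates that networks on which one feature set performs well tend to be the same networks on which the other performs well.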
Performance obtained in temporal link prediction, using node-centred features (feature set III, x-axis) and edge-centred features (feature set II-A, y-axis)
Conclusion and outlook In this paper, the aim was to perform a large-scale empirical study of temporal link prediction, using a wide variety of structurally diverse networks. Moreover, we aimed to demonstrate the benefit of past event aggregation, which allows the rich interaction history of nodes to be taken into account in predicting their future linking activity. This study resulted in four findings. First, performance in supervised temporal link prediction is consistently higher when temporal information is taken into account. Second, the performance in temporal link prediction appears related to the global structure of the network. Most notably, degree disassortative networks perform better than degree assortative networks. Third, the newly proposed method of past event aggregation is able to better model link formation in networks with discrete events. It substantially increases the performance of temporal link prediction. The derived feature modification steps can be inserted into any topological feature, potentially improving the performance of any supervised (temporal) link prediction endeavour. Fourth, we showed that in four information networks, features capturing node activity, together with static topological features, outperform features that consider edge-centred temporal information, suggesting that the temporal mechanisms in these networks reside with the nodes. A natural next step of this work is to analyse even bigger temporal networks, or networks originating from different domains. It appears that publicly available networks from other domains, such as biological, economic and transportation networks, typically do not contain temporal information (Ghasemian et al. 2020). However, it would be interesting to investigate whether findings presented in this paper also hold for these types of networks.
In addition, it is evident that there is an advantage to taking temporal information into account when performing supervised link prediction on temporal networks. It could be interesting to see whether such temporal information also benefits prediction performance in other machine learning tasks on networks, such as node classification (Hamilton et al. 2017).
Adamic LA, Adar E (2003) Friends and neighbors on the Web. Soc Netw 25(3):211–230. https://doi.org/10.1016/S0378-8733(03)00009-1
Al Hasan M, Chaoji V, Salem S, Zaki M (2006) Link prediction using supervised learning. In: SDM06: workshop on link analysis, counter-terrorism and security, vol 30, pp 798–805
Barabási AL (2005) The origin of bursts and heavy tails in human dynamics. Nature 435(7039):207–211. https://doi.org/10.1038/nature03459
Barabási AL (2016) Network science. Cambridge University Press, Cambridge
Bishop CM (2013) Pattern recognition and machine learning. Springer, New York. https://doi.org/10.1117/1.2819119
Bliss CA, Frank MR, Danforth CM, Dodds PS (2014) An evolutionary algorithm approach to link prediction in dynamic social networks. J Comput Sci 5(5):750–764. https://doi.org/10.1016/j.jocs.2014.01.003
Brandes U, Kenis P, Lerner J, Van Raaij D (2009) Network analysis of collaboration structure in Wikipedia. In: Proceedings of the 18th international world wide web conference. Association for Computing Machinery, New York, pp 731–740. https://doi.org/10.1145/1526709.1526808
Bütün E, Kaya M, Alhajj R (2016) A new topological metric for link prediction in directed, weighted and temporal networks. In: Proceedings of the 2016 IEEE/ACM international conference on advances in social networks analysis and mining. IEEE, Los Alamitos, pp 954–959.
https://doi.org/10.1109/ASONAM.2016.7752355
Bütün E, Kaya M, Alhajj R (2018) Extension of neighbor-based link prediction methods for directed, weighted and temporal social networks. Inf Sci 463–464:152–165. https://doi.org/10.1016/j.ins.2018.06.051
Chen T, Guestrin C (2016) Xgboost: a scalable tree boosting system. In: Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pp 785–794
Da Silva Soares PR, Prudencio RBC (2012) Time series based link prediction. In: Proceedings of the international joint conference on neural networks. IEEE, Brisbane, pp 1–7. https://doi.org/10.1109/IJCNN.2012.6252471
de Bruin GJ, Veenman CJ, van den Herik HJ, Takes FW (2020) Understanding dynamics of truck co-driving networks. Stud Comput Intell 882 SCI:140–151. https://doi.org/10.1007/978-3-030-36683-4_12
de Bruin GJ, Veenman CJ, van den Herik HJ, Takes FW (2021) Experimental evaluation of train and test split strategies in link prediction. In: Benito RM, Cherifi C, Cherifi H, Moro E, Rocha LM, Sales-Pardo M (eds) Complex networks & their applications IX. Springer, Cham, pp 79–91. https://doi.org/10.1007/978-3-030-65351-4_7
De Choudhury M, Sundaram H, John A, Seligmann DD (2009) Social synchrony: predicting mimicry of user actions in online social media. In: 2009 International conference on computational science and engineering, vol 4. IEEE, Vancouver, pp 151–158. https://doi.org/10.1109/CSE.2009.439
Dhote Y, Mishra N, Sharma S (2013) Survey and analysis of temporal link prediction in online social networks. In: Proceedings of the 2013 international conference on advances in computing, communications and informatics. IEEE, Mysore, pp 1178–1183. https://doi.org/10.1109/ICACCI.2013.6637344
Divakaran A, Mohan A (2020) Temporal link prediction: a survey. N Gener Comput 38(1):213–258.
https://doi.org/10.1007/s00354-019-00065-z
Ghasemian A, Hosseinmardi H, Galstyan A, Airoldi EM, Clauset A (2020) Stacking models for nearly optimal link prediction in complex networks. Proc Natl Acad Sci 117(38):23393–23400. https://doi.org/10.1073/pnas.1914950117
Gross JL, Yellen J, Zhang P (2013) Handbook of graph theory, 2nd edn. Chapman Hall/CRC, London
Grover A, Leskovec J (2016) Node2vec: scalable feature learning for networks. Association for Computing Machinery, New York, pp 855–864. https://doi.org/10.1145/2939672.2939754
Güneş İ, Gündüz-Öğüdücü Ş, Çataltepe Z (2016) Link prediction using time series of neighborhood-based node similarity scores. Data Min Knowl Disc 30(1):147–180. https://doi.org/10.1007/s10618-015-0407-0
Hagberg A, Schult D, Swart P (2008) Exploring network structure, dynamics, and function using NetworkX. Tech. rep., Los Alamos National Lab. Los Alamos, NM, USA
Hamilton WL, Ying R, Leskovec J (2017) Representation learning on graphs: methods and applications. arXiv:1709.05584
Hiraoka T, Masuda N, Li A, Jo HH (2020) Modeling temporal networks with bursty activity patterns of nodes and links. Phys Rev Res 2(2):023073. https://doi.org/10.1103/PhysRevResearch.2.023073
Hogg T, Lerman K (2012) Social dynamics of Digg. EPJ Data Sci 1(1):5. https://doi.org/10.1140/epjds5
Holzinger A, Biemann C, Pattichis CS, Kell DB (2017) What do we need to build explainable AI systems for the medical domain? arXiv:1712.09923
Klimt B, Yang Y (2004) The enron corpus: a new dataset for email classification research. In: Boulicaut JF, Esposito F, Giannotti F, Pedreschi D (eds) Machine learning: ECML. Springer, Berlin, pp 217–226. https://doi.org/10.1007/978-3-540-30115-8_22
Kumar S, Spezzano F, Subrahmanian VS, Faloutsos C (2017) Edge weight prediction in weighted signed networks. In: Proceedings—IEEE international conference on data mining. IEEE, Barcelona, pp 221–230.
https://doi.org/10.1109/ICDM.2016.175
Kumar S, Hamilton WL, Leskovec J, Jurafsky D (2018) Community interaction and conflict on the web. In: Proceedings of the 2018 world wide web conference. International World Wide Web Conferences Steering Committee, Geneva, Switzerland, pp 933–943. https://doi.org/10.1145/3178876.3186141
Kumar A, Singh SS, Singh K, Biswas B (2020) Link prediction techniques, applications, and performance: a survey. Physica A 553:124289. https://doi.org/10.1016/j.physa.2020.124289
Kunegis J (2013) KONECT: the Koblenz network collection. In: Proceedings of the 22nd international conference on world wide web. Association for Computing Machinery, New York, pp 1343–1350. https://doi.org/10.1145/2487788.2488173
Leskovec J, Krevl A (2014) SNAP datasets: Stanford large network dataset collection. http://snap.stanford.edu/data
Leskovec J, Kleinberg J, Faloutsos C (2007) Graph evolution: densification and shrinking diameters. ACM Trans Knowl Discov Data 1(1):2–43. https://doi.org/10.1145/1217299.1217301
Ley M (2002) The DBLP computer science bibliography: evolution, research issues, perspectives. In: Laender AHF, Oliveira A (eds) String processing and information retrieval, vol 2476. Springer, Berlin, pp 1–10. https://doi.org/10.1007/3-540-45735-6_1
Liben-Nowell D, Kleinberg J (2007) The link-prediction problem for social networks. J Am Soc Inform Sci Technol 58(7):1019–1031. https://doi.org/10.1002/asi.20591
Lichtenwalter R, Chawla NV (2012) Link prediction: fair and effective evaluation. In: Proceedings of the 2012 IEEE/ACM international conference on advances in social networks analysis and mining, pp 376–383. https://doi.org/10.1109/ASONAM.2012.68
Lichtenwalter RN, Lussier JT, Chawla NV (2010) New perspectives and methods in link prediction. In: Proceedings of the 16th ACM SIGKDD international conference on knowledge discovery and data mining. Association for Computing Machinery, New York, pp 243–252.
https://doi.org/10.1145/1835804.1835837
Lü L, Zhou T (2011) Link prediction in complex networks: a survey. Physica A 390(6):1150–1170. https://doi.org/10.1016/j.physa.2010.11.027
Marjan M, Zaki N, Mohamed EA (2018) Link prediction in dynamic social networks: a literature review. In: 5th International congress on information science and technology. IEEE, Marrakech, pp 200–207. https://doi.org/10.1109/CIST.2018.8596511
Michalski R, Palus S, Kazienko P (2011) Matching organizational structure and social network extracted from email communication. In: Abramowicz W (ed) Business information systems, vol 87. Springer, Berlin. https://doi.org/10.1007/978-3-642-21863-7_17
Molnar C (2020) Interpretable machine learning. Lulu.com
Muniz CP, Goldschmidt R, Choren R (2018) Combining contextual, temporal and topological information for unsupervised link prediction in social networks. Knowl Based Syst 156:129–137. https://doi.org/10.1016/j.knosys.2018.05.027
Newman MEJ (2002) Assortative mixing in networks. Phys Rev Lett 89(20):208701. https://doi.org/10.1103/PhysRevLett.89.208701
Öczan A, Öğüdücü ŞG (2015) Multivariate temporal link prediction in evolving social networks. In: 2015 IEEE/ACIS 14th international conference on computer and information science. IEEE, Las Vegas, pp 185–190. https://doi.org/10.1109/ICIS.2015.7166591
Öczan A, Öğüdücü ŞG (2017) Supervised temporal link prediction using time series of similarity measures. In: 2017 Ninth international conference on ubiquitous and future networks. IEEE, Milan, pp 519–521. https://doi.org/10.1109/ICUFN.2017.7993838
O'Madadhain J, Hutchins J, Smyth P (2005) Prediction and ranking algorithms for event-based network data. ACM SIGKDD Explorations Newsletter 7(2):23–30. https://doi.org/10.1145/1117454.1117458
Opsahl T (2013) Triadic closure in two-mode networks: redefining the global and local clustering coefficients. Soc Netw 35(2):159–167.
https://doi.org/10.1016/j.socnet.2011.07.001
Paranjape A, Benson AR, Leskovec J (2017) Motifs in temporal networks. In: Proceedings of the 10th ACM international conference on web search and data mining, pp 601–610. https://doi.org/10.1145/3018661.3018731
Pedregosa F, Varoquaux G, Gramfort A, Michel V, Thirion B, Grisel O, Blondel M, Prettenhofer P, Weiss R, Dubourg V, Vanderplas J, Passos A, Cournapeau D, Brucher M, Perrot M, Duchesnay E (2011) Scikit-learn: machine learning in python. J Mach Learn Res 12:2825–2830
Perozzi B, Al-Rfou R, Skiena S (2014) DeepWalk: online learning of social representations. In: Proceedings of the ACM SIGKDD international conference on knowledge discovery and data mining. Association for Computing Machinery, New York, pp 701–710. https://doi.org/10.1145/2623330.2623732
Potgieter A, April KA, Cooke RJE, Osunmakinde IO (2007) Temporality in link prediction: understanding social complexity
Redmond U, Cunningham P (2013) A temporal network analysis reveals the unprofitability of arbitrage in the Prosper Marketplace. Expert Syst Appl 40(9):3715–3721. https://doi.org/10.1016/j.eswa.2012.12.077
Richardson M, Agrawal R, Pedro D (2003) Trust management for the semantic web. In: Fensel D, Sycara K, Mylopoulos J (eds) The semantic web—ISWC. Springer, Berlin, pp 351–368. https://doi.org/10.1109/ICCEE.2009.241
Romero M, Finke J, Rocha C, Tobón L (2020) Spectral evolution with approximated eigenvalue trajectories for link prediction. Soc Netw Anal Min 10(1):60. https://doi.org/10.1007/s13278-020-00674-3
Soares PR, Prudêncio RB (2013) Proximity measures for link prediction based on temporal events. Expert Syst Appl 40(16):6652–6660. https://doi.org/10.1016/j.eswa.2013.06.016
Takes FW, Kosters WA (2011) Determining the diameter of small world networks. In: Proceedings of the 20th ACM international conference on information and knowledge management. Association for Computing Machinery, New York, pp 1191–1196.
https://doi.org/10.1145/2063576.2063748
Tylenda T, Angelova R, Bedathur S (2009) Towards time-aware link prediction in evolving social networks. In: Proceedings of the 3rd workshop on social network mining and analysis, vol 9. Association for Computing Machinery, New York, pp 1–10. https://doi.org/10.1145/1731011.1731020
Van Mieghem P, Wang H, Ge X, Tang S, Kuipers FA (2010) Influence of assortativity and degree-preserving rewiring on the spectra of networks. Eur Phys J B 76(4):643–652. https://doi.org/10.1140/epjb/e2010-00219-x
Virtanen P, Gommers R, Oliphant TE, Haberland M, Reddy T, Cournapeau D, Burovski E, Peterson P, Weckesser W, Bright J, van der Walt SJ, Brett M, Wilson J, Millman KJ, Mayorov N, Nelson ARJ, Jones E, Kern R, Larson E, Carey CJ, Polat I, Feng Y, Moore EW, VanderPlas J, Laxalde D, Perktold J, Cimrman R, Henriksen I, Quintero EA, Harris CR, Archibald AM, Ribeiro AH, Pedregosa F, van Mulbregt P, SciPy 1.0 Contributors (2020) SciPy 1.0: fundamental algorithms for scientific computing in Python. Nat Methods 17:261–272. https://doi.org/10.1038/s41592-019-0686-2
Viswanath B, Mislove A, Cha M, Gummadi KP (2009) On the evolution of user interaction in Facebook. In: Proceedings of the 2nd ACM workshop on online social networks. Association for Computing Machinery, New York, pp 37–42. https://doi.org/10.1145/1592665.1592675
Wikileaks (2016) US Democratic National Committee leak. https://www.wikileaks.org/dnc-emails/
Yin H, Benson AR, Leskovec J, Gleich DF (2017) Local higher-order graph clustering. In: Proceedings of the ACM SIGKDD international conference on knowledge discovery and data mining, pp 555–564. https://doi.org/10.1145/3097983.3098069
Zhuang H, Sun Y, Tang J, Zhang J, Sun X (2013) Influence maximization in dynamic social networks. In: 13th International conference on data mining. IEEE, Dallas, pp 1313–1318.
https://doi.org/10.1109/ICDM.2013.145
Gerrit Jan de Bruin was supported by funding of the Dutch Ministry of Infrastructure and Water Management. We thank Jasper van Vliet for providing comments.
Leiden Institute of Advanced Computer Science, Leiden University, Leiden, The Netherlands: Gerrit Jan de Bruin, Cor J. Veenman & Frank W. Takes
Data Science Department, TNO, The Hague, The Netherlands: Cor J. Veenman
Leiden Centre of Data Science, Leiden University, Leiden, The Netherlands: H. Jaap van den Herik
Correspondence to Gerrit Jan de Bruin.
Appendix A1: Relation between degree assortativity and temporal link prediction performance To further assess the relation between degree assortativity and temporal link prediction performance, as derived from the empirical results in Sect. 6.3, we conducted additional experiments. By means of simulation, we modified a number of network datasets from Table 1 using assortative and disassortative degree-preserving rewiring, following an approach similar to the one proposed in Van Mieghem et al. (2010). In particular, we aim to retain the local clustering properties by not selecting two edges at random, but rather selecting two edges that are close to each other, ensuring that not too many triangles, and therewith clustering, are destroyed, as clustering is a determining feature in link prediction. The procedure, which we repeat for a certain number of times (explained below), consists of the following five steps. First, an edge (u, v) is randomly selected. Second, we randomly select a node x from the neighbourhood of u. Third, we sample a node y that is connected to x, but not to u or v. At this time, the pairs of nodes (u, v) and (x, y) are connected while the link (v, y) is absent. The fourth step is to determine which of the node pairs (u, v), (v, y) and (x, y) has the maximum difference in degree. Step five is the actual rewiring of edges.
There can be three outcomes from step 4: (a) node pair (v, y) has the maximum difference in degree and there is no gain in assortativity by rewiring any edges, (b) node pair (u, v) has the maximum difference in degree and by moving all edges (recall, there can be multiple links between two nodes) from (u, v) to (v, y) the assortativity is increased, and (c) node pair (x, y) has the maximum difference in degree and by moving all edges from (x, y) to (v, y) the assortativity is increased. In case we want to perform disassortative degree-preserving rewiring, we consider in steps four and five the node pair with the lowest difference in degree. The five steps are repeated, with increments of 0.2m from the original network up to m of the edges that get a chance to rewire. The degree assortativity values of the rewired networks can be found in Table 3. We observe that for degree disassortative rewiring, a higher performance is attained than for assortative rewiring, strengthening our result from Sect. 6.3. This finding is further explored in Table 4, in which we list the percentage increase in performance for both disassortatively and assortatively rewired datasets. In all cases, we observe higher performance for disassortatively rewired networks.
Table 3 Assortativity of all networks after rewiring, for disassortative rewiring (up to − 100%) and assortative rewiring (up to 100%) for each of the network datasets in Table 1
Table 4 Performance (in AUC) for the rewired networks as reported on in Table 3
Appendix A2: Choice of classifier As described in Sect. 2, many classifiers are known to work well in link prediction. We used the logistic regression classifier in this work, for reasons of interpretability and explainability, as further discussed in Sect. 2. In Table 5, we provide, for each of the datasets as introduced in Table 1, the performance in terms of AUC obtained using two other commonly used classifiers, being random forests (Pedregosa et al.
2011) and XGBoost (Chen and Guestrin 2016), with default parameters. For almost all datasets, similar relative performance is observed. Table 5 Performance obtained with the II-A feature set (see Sect. 6.1) de Bruin, G.J., Veenman, C.J., van den Herik, H.J. et al. Supervised temporal link prediction in large-scale real-world networks. Soc. Netw. Anal. Min. 11, 80 (2021). https://doi.org/10.1007/s13278-021-00787-3 Revised: 21 June 2021 Temporal link prediction Supervised learning Temporal networks Network evolution Multigraphs
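Read literally, the five-step rewiring procedure from Appendix A1 might be sketched as follows. This is a hypothetical Python sketch of our reading of the steps on a dict-of-Counter multigraph, not the authors' code:

```python
import random
from collections import Counter, defaultdict

def rewire_once(adj, assortative=True, rng=random):
    """One attempt of the five-step procedure. `adj` maps each node to a
    Counter of its neighbours (a symmetric multigraph). Returns True if
    edges were moved, False otherwise."""
    deg = {n: sum(c.values()) for n, c in adj.items()}
    # Step 1: select a random edge (u, v).
    u = rng.choice([n for n in adj if adj[n]])
    v = rng.choice(list(adj[u].elements()))
    # Step 2: a random node x from the neighbourhood of u.
    x = rng.choice(list(adj[u].elements()))
    # Step 3: a node y connected to x but not to u or v, so that the
    # link (v, y) is absent.
    candidates = [y for y in adj[x]
                  if y not in adj[u] and y not in adj[v] and y not in (u, v)]
    if not candidates:
        return False
    y = rng.choice(candidates)
    # Step 4: which node pair has the extreme difference in degree?
    pairs = {(u, v): abs(deg[u] - deg[v]),
             (v, y): abs(deg[v] - deg[y]),
             (x, y): abs(deg[x] - deg[y])}
    extreme = max if assortative else min
    pick = extreme(pairs, key=pairs.get)
    # Step 5: rewire. Outcome (a): (v, y) itself is extreme, nothing gained.
    if pick == (v, y):
        return False
    a, b = pick  # outcomes (b)/(c): move all multi-edges (a, b) onto (v, y)
    k = adj[a].pop(b)
    adj[b].pop(a)
    adj[v][y] += k
    adj[y][v] += k
    return True
```

Note that each successful move preserves the total number of edges m, since k multi-edges are removed from one pair and added to another.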
Is there a reversible transform that reduces the number of distinct values? The idea is to take a sequence like $\langle 1,2,3,4 \rangle$ and reduce it to something like $\langle 1,2,2,3 \rangle$, or something similar. The key point is that there are fewer distinct values: $\{1,2,3\}$ vs $\{1,2,3,4\}$. In the case that the number of distinct values cannot be reduced, e.g. $\langle 0,0,0,0 \rangle$, it would output the same sequence $\langle 0,0,0,0 \rangle$. An algorithm that doesn't always decrease the number of distinct values is ok, as long as it doesn't increase the number of distinct values, or as long as it does so only rarely. The output sequence length can be less than or equal to the input sequence length, and the input sequence is bounded by $[0,255]$, but the output sequence need not be, though its values should be integers in the range $(-\infty,+\infty)$. Another restriction on the input, if it helps at all, is that it is a restricted growth string. A property of restricted growth strings that might be useful is that the first $0$ in the sequence always comes before the first $1$, the first $1$ before the first $2$, and so on. So a valid sequence might look something like $\langle 0,0,1,0,2,1,1,0,2 \rangle$. As an example, consider this simple algorithm. First, construct a "dictionary" from the input sequence consisting of all distinct values in the input. So, for the input sequence $\langle 1,2,2,3 \rangle$, the dictionary would be $\langle 1,2,3 \rangle$. The algorithm then iterates over the input, doing two things: replacing each input value with its position in the dictionary, and rotating the dictionary left by one. For most inputs, this will do little to reduce the number of distinct values, but for the input sequence $\langle 1,2,3,4 \rangle$, it will output $\langle 0,0,0,0 \rangle$. This can be inverted by applying a similar algorithm. 
Starting with the same dictionary, iterate over the output, successively replacing the values in the output with the value in the dictionary at that output value's index (that is, if $D$ is our dictionary and $i$ is a value in our output sequence, replace that value with $D_i$) and then rotate the dictionary to the left by one. Jordy Dickinson $\begingroup$ One interesting thing about your examples is that they are all in sorted order. If that's the case, encoding differences is a common technique. In fact, looking into inverted indexes might be worth a look. $\endgroup$ – Pseudonym Aug 4 '16 at 0:07 $\begingroup$ Are you trying to ask about compressibility? $\endgroup$ – user541686 Aug 4 '16 at 0:39 $\begingroup$ @Merhdad, in a way, yes. But I'm not interested in actually reducing the size of the input, I just want to reduce the number of distinct values. $\endgroup$ – Jordy Dickinson Aug 4 '16 at 20:01 $\begingroup$ @Pseudonym, although my examples are in order, the input sequence need not be. However, I've added an additional restriction on the input sequence that may be relevant. $\endgroup$ – Jordy Dickinson Aug 4 '16 at 20:50 What do you mean by "reduces the number of distinct values"? If you mean an invertible function whose input is, for example, $k$ things and whose output is fewer than $k$ things, the answer is yes. Yuval's answer gives two examples. Here's another example, which will make you laugh/cry/want to slap me/other (delete as applicable): the input is $k$ integers; the output is a list of those integers. There's just one list! If you mean an invertible function $f$ whose input ranges over some set $I$ and whose output ranges over some strictly smaller set $O$, then the answer is no. By the pigeonhole principle, there must be at least two distinct values $a,b\in I$ such that $f(a)=f(b)$, which means that the function cannot be invertible. If you mean something else, I'm gonna have to edit or delete this answer. 
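For concreteness, the dictionary-rotation encode/decode pair described in the question might be sketched like this. This is hypothetical Python illustrating one reading of the algorithm, not code from the thread; it assumes the dictionary is the sorted distinct values of the input and is shared with the decoder:

```python
def encode(seq):
    # Dictionary: the distinct values of the input, here in sorted order.
    d = sorted(set(seq))
    out = []
    for v in seq:
        out.append(d.index(v))  # replace the value by its dictionary position
        d = d[1:] + d[:1]       # rotate the dictionary left by one
    return out

def decode(out, dictionary):
    # Inverse: start from the same dictionary and mirror the rotation.
    d = list(dictionary)
    seq = []
    for i in out:
        seq.append(d[i])        # look up the value at that index
        d = d[1:] + d[:1]       # rotate left, exactly as the encoder did
    return seq
```

On the input $\langle 1,2,3,4 \rangle$ this yields $[0,0,0,0]$, matching the claim in the question.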
David Richerby There are many reversible transformations that take a list of (let's say) non-negative integers and output a single integer. Here are two examples: $$ 2^{a_1} \cdot 3^{a_2} \cdot 5^{a_3} \cdot 7^{a_4} \cdot 11^{a_5} \cdots \\ \langle a_1, \langle a_2, \langle a_3, \ldots, \langle a_{n-1}, a_n \rangle \cdots \rangle\rangle\rangle, \quad \langle a,b \rangle = \binom{a+b+1}{2} + a. $$ $\begingroup$ Interesting idea, but that cannot work for any sequence given, just special cases. Otherwise, why would we struggle with compression? $\endgroup$ – antipattern Aug 4 '16 at 0:27 $\begingroup$ Actually it does work for any given sequence, though in the second example you need to know its length (this can be fixed). There's no connection to compression, though. $\endgroup$ – Yuval Filmus Aug 4 '16 at 4:59 $\begingroup$ @antipattern The reason this is not compression is that the output number is at least as big as all the input numbers. $\endgroup$ – user253751 Aug 4 '16 at 5:56 You're looking for a reversible function that takes a (long) list of integers in the range $[0,255]$ and produces a very short list of integers in the range $(-\infty,\infty)$? Easy: add in base 256.

    func(list)
        int result = 0
        for i = 0 => list.length
            result = (result * 256) + list[i]

... except that that loses leading 0s, so tweak the base and count inputs:

            result = (result * 256) + list[i] + 1

I mean, it's cheating, but func (a) will never produce more values than it was fed, and (b) can easily give you back the original list from the result. However, func will also (c) not work in the real world for any but the shortest of input lists. 
If you want an actual list as the output, look for three or more duplicate values in a row; when you find one, output $-1$ times the count of identical values, then a single copy of the value; e.g.: { 1, 2, 2, 3, 3, 3, 3, 4 } -> { 1, 2, 2, -4, 3, 4 } (you can do "two or more" instead of "three or more", but you don't save anything on pairs) minnmass $\begingroup$ I don't really need the sequence of integers to be short, I just need the output to contain fewer distinct values. $\endgroup$ – Jordy Dickinson Aug 3 '16 at 22:43 $\begingroup$ Ah. Sorting the input will help, but there are cases in which there will be more values. { 1, 1, 1 }, for instance. Hrm... $\endgroup$ – minnmass Aug 3 '16 at 22:53 $\begingroup$ Sorting isn't valid, because this requires a reversible transform, and sorting is an operation that can't be undone (e.g., [1,2] and [2,1] will map to the same thing). $\endgroup$ – D.W.♦ Aug 4 '16 at 5:38 $\begingroup$ @JordyDickinson, this solution meets all of the requirements you listed. With this solution, every input maps to an output that has only one distinct integer -- a great reduction in the number of distinct integers! So it's a perfectly valid solution according to the requirements you've listed in your question. If it additionally happens to produce a list that is shorter than the input, why is that a bad thing? If you have some objection to this answer then that suggests you have some additional requirement you haven't listed in the question. $\endgroup$ – D.W.♦ Aug 4 '16 at 5:39 $\begingroup$ @D.W., you're right, it is perfectly valid. It won't work to solve my particular problem, but I'm having difficulty explaining why. In any case, I like the answers I've gotten so far, so I'd say my question as it is isn't too far off from what I'm really trying to ask. That said, I'll put some thought into how I could revise my question. 
$\endgroup$ – Jordy Dickinson Aug 4 '16 at 20:31 If I understand the problem correctly, your aim is to reduce the entropy of the source sequence by reducing the number of possible symbols. Then the closest thing you could do is apply a compression algorithm of your choice (LZMA, ZIP et al.). Modern compression leaves you with the minimal number of symbols necessary to represent the information stored in the sequence. If "reversible" means reversible with losses, see below. There are methods of approximation which might yield even greater savings if you allow for lossy compression. What it will do is actually reduce the count of distinct patterns/numbers to the minimum possible amount, while maintaining the information that these numbers carry. If you discard this information (as I understand you did when transforming <1, 2, 3, 4> to <1, 2, 2, 3>), you will never ever be capable of reversing your transformation, unless you "cheat" by having this information hardcoded into your algorithm. One example would be entropy encoding, specifically MQ coding, which I fathom would be adaptable, with some modifications, to work at integer granularity. A word of warning if you plan to use this in a commercial setting: there are patents pending on this method. If a loss of information is acceptable, then you might use simple binning, but that is not reversible. For certain special cases, you might be able to fit a mathematical function to your values, so you do not even need to maintain any list at all, just a number of parameters to that function, but as said, this only works in special cases (such as {1, 2, 3, 4}, a linear function). A possible option for more complex functions would be an approximation via (multiple) Gaussian functions. When bounding the stored information to the list size, this will not be lossless, however. Keyword: Radial Basis Function. After reading your comment, I guess I'm way off the problem. But the way you ask for it is not possible. 
If each of your symbols has a different meaning, there is no reversible way in which you can reduce the number of symbols without discarding information. Any method you choose will have to rely on your sequence having a certain structure and will not be applicable to every possible sequence. Kudos to @minnmass. Why not invert his idea? If the list size does not matter, why aren't you just looking at individual bits? Then you only have two different values, 0 and 1. That does not really decrease entropy and would be reversible: just consume a bunch (8, 16, 32, whatever) of bits at once and you are back to your original representation. antipattern $\begingroup$ The last paragraph is nice. The rest of the answer seems off-base, though. Your next-to-last paragraph is wrong; it is possible. I'd suggest deleting all but the last paragraph of this answer -- I think that would make this a better answer. $\endgroup$ – D.W.♦ Aug 4 '16 at 5:41 $\begingroup$ @D.W. The number of symbols multiplied by the length is the information contained. If one tries to reduce either one without increasing the other, information is irretrievably lost, unless some domain-specific knowledge is brought in. I'd appreciate it if you substantiated that claim with a link. About that first part, it was written before OP overhauled his question, making it clearer what he was asking about. $\endgroup$ – antipattern Aug 15 '16 at 10:27 $\begingroup$ For substantiation: take a look at the other answers. You say it's impossible; but some other answers show it is possible (they show how to do it). $\endgroup$ – D.W.♦ Aug 15 '16 at 15:33 Here's an interesting approach, based on the fact that my input is always a restricted growth string. Given the sequence $\langle a_1, a_2, \ldots, a_n \rangle$, we know that $a_{i+1} \leq \max(a_1,a_2,\ldots,a_i)+1$. We can thus split our sequence into subsequences based on their greatest possible value. 
For example, $\langle 0,1,0,1,1,2,1,0 \rangle$ would be split into $\langle 0 \rangle$, $\langle 1,0,1,1 \rangle$, and $\langle 2,1,0 \rangle$. The first sequence has a greatest possible value of $0$, the second of $1$, and the third of $2$. Since we know that each sequence will start with the greatest possible value, and may or may not contain lesser values, we may be able to reduce the number of distinct values by mapping each sequence into the range $(-\infty,1]$: change the first value in the sequence to $1$ and subtract the max value in the sequence from the remaining values. The rationale for this is that if the range of a given subsequence is $[n,m]$ and $n$ is close to $m$, then there will be overlap between the values in this sequence and the previous sequence. For example, the three sequences $\langle 2,1,2,1 \rangle$, $\langle 3,2,3 \rangle$, and $\langle 4,3,3,4 \rangle$ would be mapped to $\langle 1,-1,0,-1 \rangle$, $\langle 1,-1,0 \rangle$, and $\langle 1,-1,-1,0 \rangle$, reducing the distinct values from $\{1,2,3,4\}$ to $\{-1,0,1\}$. The reason for always starting with $1$ is so that when we combine these subsequences back into one sequence we will be able to distinguish between subsequences, and thus be able to reconstruct the original sequence. In practice this doesn't reduce the number of distinct values by very much. Also, in the worst case scenario it will increase the number of distinct values slightly. $\begingroup$ @D.W. yes! I will edit to clarify. $\endgroup$ – Jordy Dickinson Aug 5 '16 at 0:42
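The split-and-remap step of this answer can be sketched as follows. This is hypothetical Python, not code from the thread; the inverse, which would use the leading 1s to recover segment boundaries, is omitted:

```python
def transform(rgs):
    # Split the restricted growth string into segments: a new segment
    # starts whenever a value exceeds every value seen so far.
    segments, cur, seen_max = [], [], -1
    for v in rgs:
        if v > seen_max:
            if cur:
                segments.append(cur)
            cur, seen_max = [], v
        cur.append(v)
    if cur:
        segments.append(cur)
    # Remap each segment: the first value becomes 1, and the segment's
    # maximum (its first value) is subtracted from the remaining values.
    out = []
    for seg in segments:
        out.append(1)
        out.extend(v - seg[0] for v in seg[1:])
    return out
```

On $\langle 0,1,0,1,1,2,1,0 \rangle$ this produces $[1, 1, -1, 0, 0, 1, -1, -2]$, illustrating the caveat above: the output here has four distinct values where the input had three.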
January 2020, 19(1): 527-539. doi: 10.3934/cpaa.2020026 Symmetry of singular solutions for a weighted Choquard equation involving the fractional $ p $-Laplacian Phuong Le, Division of Computational Mathematics and Engineering, Institute for Computational Science, Ton Duc Thang University, Ho Chi Minh City, Vietnam; Faculty of Mathematics and Statistics, Ton Duc Thang University, Ho Chi Minh City, Vietnam. Received: February 2019. Revised: April 2019. Published: July 2019. Fund Project: This work was done while the author was visiting the Vietnam Institute for Advanced Study in Mathematics (VIASM) in 2019. He wishes to thank the institute for their hospitality and support. Let $ u \in L_{sp} \cap C^{1, 1}_{\rm loc}(\mathbb{R}^n\setminus\{0\}) $ be a positive solution, which may blow up at zero, of the equation $$ (-\Delta)^s_p u = \left(\frac{1}{|x|^{n-\beta}} * \frac{u^q}{|x|^\alpha}\right) \frac{u^{q-1}}{|x|^\alpha} \quad\text{ in } \mathbb{R}^n \setminus \{0\}, $$ where $ 0 < s < 1 $, $ 0 < \beta < n $, $ p>2 $, $ q\ge 1 $ and $ \alpha>0 $. We prove that if $ u $ satisfies some suitable asymptotic properties, then $ u $ must be radially symmetric and monotone decreasing about the origin. Instead of using equivalent fractional systems, we exploit a direct method of moving planes for the weighted Choquard nonlinearity. This method allows us to cover the full range in our results. Keywords: Choquard equations, fractional p-Laplacian, symmetry of solutions, positive solutions, singular solutions. Mathematics Subject Classification: 35R11, 35J92, 35J75, 35B06. Citation: Phuong Le. Symmetry of singular solutions for a weighted Choquard equation involving the fractional $ p $-Laplacian. Communications on Pure & Applied Analysis, 2020, 19 (1): 527-539. doi: 10.3934/cpaa.2020026
Why do atoms tend towards electrical stability? If an oxygen atom has six electrons, then it has an unfilled orbital and the oxygen atom may share electrons from two hydrogen atoms (and form water) in order to become more stable. But why does oxygen, or any other atom for that matter, favor stability over instability? Why can't the oxygen atom just have six electrons and be done with it? Also, is an atom being stable identical to it being neutral? thermodynamics electrons atomic-physics equilibrium dotancohen Korvexius $\begingroup$ I've deleted an inappropriate comment and several responses to it. There was a possibly-salvageable discussion about whether it's appropriate to assign intent to inanimate objects; that discussion wants to happen in a chat room, rather than in these comments. $\endgroup$ – rob♦ Feb 10 at 2:46 $\begingroup$ Maybe you mean to ask why an oxygen atom can't be stable with 6 electrons. Asking why something favors stability is a different question that can be answered simply by looking at the definition of "stable", which is to be "firmly fixed" or "not likely to give way". Something unstable, by definition, is not fixed and likely to give way. $\endgroup$ – JoL Feb 10 at 5:45 $\begingroup$ Everyone please keep in mind that comments are meant for suggesting improvements or requesting clarifications on the question, and in particular, not for answering. $\endgroup$ – David Z♦ Feb 10 at 14:06 $\begingroup$ There's some confusion of terms here... the title refers to electrical stability, yet the body discusses the stability of completed shells and 8 electrons in the outermost shell (chemistry.stackexchange.com/q/1196). Furthermore, are you referring to stable in terms of "not-going-to-decay", or "will-try-to-gain-them-electrons"? 
$\endgroup$ – user191954 Feb 10 at 14:39 $\begingroup$ @Chair well the title I originally wrote didn't say "electrical" stability, however dotancohen appears to have thought this was a proper amendment to what I was looking for. By "stability" I am NOT referring to radioactive decay. $\endgroup$ – Korvexius Feb 10 at 17:26 An $\text{O}^{2+}$ ion, with 6 electrons, is stable in a vacuum. If you put one in the depths of interstellar space, far from any other matter, then it will sit in that state until the end of time. Similarly, you can have a high-elevation lake, and it can be absolutely stable if there is no path for water to drain out. So no, as shown by that example, being stable is not the same thing as being neutral. However, if you have a higher-elevation lake and another one whose water level is 10 meters lower, and they're connected by a stream bed, then water will drain out of the higher one until the two surfaces reach an equal level. The stable state is the one that has the minimum energy (the minimum total gravitational potential energy of all the water). This is analogous to what happens when oxygen and hydrogen are mixed. The electrons are like the water. Ben Crowell
For example, at pressure and temperature fixed, the stable state is the minimum of the Gibbs free energy and not the minimum of energy. $\endgroup$ – Vincent Fraticelli Feb 10 at 19:16 Wanting or favoring are not very good terms in physics. More scientific view on this would be that whenever an oxygen atom comes close to another atom, they interact and provided things are right (such as the number of electrons and their state), the atoms attract and can get closer, while losing part of their initial energy, in the form of radiation or lost electron or transfer some energy to other atom/molecule in a scattering event. After the binding energy is lost from the pair to the surrounding space, it becomes more probable it will stay together, until required energy for its breaking is supplied from outside the system. This can be EM radiation with the right spectral characteristics, or some other particle that moves nearby and cleaves the bond. For the air in troposphere, the available mechanisms to supply so much energy (collisions, cosmic particles) can only do so for very little fraction of molecules, so majority of oxygen atoms will exist in pairs, much lower number in triplets and so on. The situation is different in upper layes of atmosphere, where UV light and cosmic particle are more intense, so greater proportion of gas particles may be in exotic form such as unpaired oxygen atom. Ján LalinskýJán Lalinský $\begingroup$ Good or not, "favoring" is a common term in physics, at least in English, and, particularly in the context of energy equilibria. We say that a reaction favors the lower-energy state. A search on this very site will find many examples. Here are just two: 1 2. A search of abstracts on arXiv finds many more examples. 
$\endgroup$ – MJD Feb 10 at 1:31 $\begingroup$ @MJD common or not, I think it is not good to use it before explaining the science behind the phenomenon, especially to beginners, because it is likely to confuse them or induce bad thinking habits. For some other uses such as popularization or informal talk between scientists it may be fine. $\endgroup$ – Ján Lalinský Feb 10 at 1:58 No, not at all: many atoms are more stable after they've either absorbed (an) electron(s) or shed (an) electron(s). A good example is the formation of table salt, $\mathrm{NaCl}$, aka sodium chloride. This compound forms when sodium atoms lose an electron (the valence electron), as in: $$\mathrm{Na}\to \mathrm{Na^+}+ \mathrm{e^-}$$ Similarly the choride atoms can absorb an electron: $$\mathrm{Cl_2}+ 2\mathrm{e^-}\to 2\mathrm{Cl^-}+ 2\mathrm{e^-}$$ When those ions combine we get: $$2\mathrm{Na}+\mathrm{Cl_2}\to 2\mathrm{NaCl}+\Delta H$$ $\Delta H$ is the energy released in the process. The arrangement of these elements in the ionic lattice $\mathrm{NaCl}$ is more stable than the combination of the (unreacted) elements. By losing its 'lone' $\mathrm{3s^1}$ valence electron, sodium takes on the very stable electron configuration of neon. Similarly, by absorbing an electron into its $\mathrm{3s^23p^5}$ valence electrons, it assumes the very stable electron configuration of argon. Ry- GertGert $\begingroup$ "By losing its 'lone' $\mathrm{3s^1}$ valence electron, sodium takes on the very stable electron configuration of neon." – This sounds potentially misleading. Removing the electron from a neutral sodium atom requires an ionization energy of 5.14 eV; i.e. the process is endothermal and $\mathrm{Na^+}+ \mathrm{e^-}$ is less stable than $\mathrm{Na}$. $\endgroup$ – Loong Feb 10 at 15:53 $\begingroup$ @Loong: I made it quite clear that stability of NaCl comes from its ionic lattice. But $\mathrm{Na^+}$ cations are very stable in other contexts to, like solvated ions. 
$\endgroup$ – Gert Feb 10 at 17:06 $\begingroup$ You can't get there from sodium alone. You need something sufficiently "happy" to take the electron to pay the energy cost of taking it away from the sodium. It's just that the cost is rather low, so a lot of substances are willing to do that, including some substances that aren't known for being strong oxidizers. But with nothing to take the electron, it would rather stay with the sodium. $\endgroup$ – Ian Feb 10 at 22:49 $\begingroup$ @Ian: correct, but why single out sodium (and other metals)? In most cases where changes in orbital structure occur, either a donor or an acceptor is required. $\endgroup$ – Gert Feb 10 at 23:41 $\begingroup$ @Gert The point is that it's not just that $\mathrm{Na}^+$ is particularly stable, it's that it consumes less energy to produce it than is released by converting $\mathrm{Cl}$ to $\mathrm{Cl}^-$. Thus your statements about the nice electron structures of $\mathrm{Na}^+$ and $\mathrm{Cl}^-$ in isolation don't really mean anything; it's all relative. $\endgroup$ – Ian Feb 10 at 23:42 When oxygen atoms get close to each other, they will certainly interact. When they interact, there is a high probability that electrons are shared between the atoms. When they share, they lose energy. They will not break apart unless energy comes again from somewhere and allows them to stay separated. And no, being stable is not the same as being neutral. Take the oxygen atom, for example: it is more stable in ionic form than as a neutral atom. NightcoRohak I think the bottom line here is that reasoning about atoms, ionization and chemical bonds is often done in a rather sloppy way, and the reason for that may be that people have not been taught that entropy often plays a role in these questions. The direction of physical processes is such that entropy increases overall.
This is why a given system will tend to move towards a state of lower internal energy if one is available: the energy thus liberated can be passed on to something else, such as emitted light or vibrations, with the result of a net increase in the entropy of the system's environment, with little change in the entropy of the system itself. Chemical bonds form mostly because the bonded configuration has a lower energy, and this lower energy is adopted mostly because the liberated energy goes to the surroundings, or to vibrations of the material, or something like that, which carry more entropy. Whenever the concept of free energy is being used, these entropy arguments are in play. People often assume that systems will adopt whichever state has the least energy. This is OK as a general informal guide, but if you want to know why or how it happens, then you need to notice that such an assumption requires a way for the system to get rid of any extra energy it may have, and that usually means emission of energy into the system's surroundings. answered Feb 10 at 16:47 Andrew Steane It is important to use careful language to understand what is happening. "Why does an oxygen atom favour stability over instability?" is a bad way to ask the question, and not just because it anthropomorphises the factors causing chemical reactions. The right way to think about the problem is to ask which configurations of atoms and molecules in a dynamic system will have the lowest energy. Except in a high vacuum, atoms and molecules are constantly interacting with other atoms and molecules. When they interact, things can happen. Sometimes the atoms and molecules just bounce off each other; sometimes a chemical reaction occurs; sometimes energy is exchanged but no net reaction results. When reactions occur, some of them are reversible and some are not. Since there are many possible reactions occurring in any mixture, how do we know what the net result is?
The factors that matter are the energy levels of the possible products in the mixture (I'm simplifying a bit by ignoring entropy) and the degree of reversibility of all the possible reactions occurring. In the case of a mixture of isolated hydrogen and oxygen atoms, a lot of different things can happen (by the way, it is very, very hard to create such a mixture). One is that isolated oxygens can meet and combine to give oxygen molecules (this releases energy). Or oxygen can meet hydrogen and combine to give an OH radical, which can further combine to give a water molecule (releasing a lot of energy). Lots of other reactions can occur. But when the mixture has a lot of water or oxygen in it, it requires a large amount of energy to reverse the reaction and regenerate an oxygen radical. If the mixture doesn't have enough thermal energy to break up a water molecule in some collisions, this reaction is irreversible. It doesn't require any energy input for most of the isolated atoms to react to these products irreversibly. In a slightly different case, at low enough temperatures a mixture of hydrogen molecules and oxygen molecules will be stable. The molecules banging into each other at room temperature don't usually have enough energy to react or to release the oxygen radical that propagates the reaction leading to water. But it doesn't require a lot of energy input to cause them to react explosively to give water (just enough to break apart a few oxygens or hydrogens to kick-start a reaction that will self-sustain because it releases a lot of energy when water is generated). In a system as dynamic as a mixture of gases, the possible collisions between the components will explore many possible outcomes. Some of the molecules that result are resistant to further reaction because they sit in an energy well. We call them stable.
Isolated atoms don't seek stability; they just happen to react with other things very easily, and many of those reaction paths lead to stable molecules. The molecules don't react easily because there isn't enough energy in the system to cause them to break apart (assuming the temperature is not high enough: hydrogen-oxygen mixtures are perfectly stable at room temperature unless you are stupid enough to light a cigarette near the mixture, a mistake you will not make twice). The situation when things are charged is different. The electromagnetic force is strong and causes large forces to exist between oppositely charged ions. These will actively attract each other, so reactions will happen faster than by the normal process of just bumping into each other. Otherwise the same rules apply. But what matters here is that ions might be individually stable as ions, yet there are strong forces pulling them together with oppositely charged ions into neutral assemblies. What matters is that you can't have, for example, a large clump of chloride ions by themselves. It isn't that the ions are individually unstable; it is just that strong charges produce strong forces attracting opposite charges. In summary, when we say that something like an oxygen atom isn't stable, we mean that in the normal course of events oxygen atoms can combine irreversibly and very easily into molecules that are stable (which in turn means they have lower energy and sit in a potential well that is hard to escape from). No molecule or atom in a gas mixture seeks anything, but the statistics of molecular collisions will explore every possible potential well in the space of possible products, and the deep wells will be full because the isolated atoms fall into them very easily. matt_black
Overselling overall map accuracy misinforms about research reliability Guofan Shao ORCID: orcid.org/0000-0002-8572-95671, Lina Tang2 & Jiangfu Liao3 Landscape Ecology volume 34, pages 2487–2492 (2019) Image classification is routine in a variety of disciplines, and analysts rely on accuracy metrics to evaluate the resulting maps. The most frequently used accuracy metric in Earth resource remote sensing is overall accuracy. However, the inherent properties of this accuracy metric make it inappropriate as the single metric for map assessment, particularly when a map contains imbalanced categories. We discuss four noteworthy problems with overall accuracy. Under circumstances frequently encountered, overall accuracy is misleading or misinterpreted. Literature review, hypothetical examples, and mathematical equations are used to prove that overall accuracy is a poor general indicator of map quality. Any research that involves classification techniques or a map product that is evaluated only with overall accuracy may be unreliable. It is necessary for map providers to publish the error matrix and its development procedure so that map users can compute whatever metrics they wish. Digital classification is commonly used to convert image data into map products by assigning each pixel into one of two or more categories. This approach is routine in many different fields, ranging from medical imaging to satellite remote sensing (Celeb et al. 2019; Phiri and Morgenroth 2017; Xiao et al. 2019). The resulting maps are frequently used for decision making and scientific research, so it is vital to verify map quality. One standard way to evaluate maps is to use error matrices (also known as confusion matrices) to compute accuracy metrics or indices, including map-level and category-level accuracies.
In the geospatial field, the map-level accuracy metric is called the overall accuracy (O) and the category-level accuracy metrics are called the producer's (P) and user's (U) accuracies (Table 1) (Story and Congalton 1986; Congalton 1991; Foody 2002). The overall accuracy is the percentage of cases (e.g., pixels or sampling units) correctly classified. If only a single accuracy metric is reported for a classification map, it is likely to be the overall accuracy, as shown in an overwhelming preponderance of published research in the field of earth-surface detection and mapping (Fig. 1). We refer to this phenomenon as "overselling overall accuracy," meaning that map handlers overemphasize the importance of map-level accuracy. Table 1 A population error matrix and selected accuracy metrics Histogram comparing the number of publications reporting three types of accuracy based on a topic search from the Web of Science Core Collection. The data cover the top ten journal groups, including remote sensing, imaging science photographic technology, environmental sciences, geoscience multidisciplinary, geography physical, ecology, forestry, water resources, agriculture interdisciplinary, and geography during 1978–2018. The topic keywords include remote sensing or remotely sensed, classification or map* in addition to overall accuracy, producer's accuracy, and user's accuracy in titles, keywords, and abstracts Overselling overall accuracy should be discouraged because the O metric has inherent properties that lead to misleading perceptions about the soundness of classification methods and map-based research. 
Specifically, the inherent properties of the O metric contribute to at least four application problems: (1) The O metric cannot explicitly explain the accuracy or uncertainties of map derivatives, (2) the O values are not comparable between maps with different numbers of categories, (3) the O values are not comparable between maps with different category proportions, and (4) a greater O value may not necessarily represent a more realistic map. The first O-metric problem arises because the O metric is computed using only the diagonal elements in an error matrix and thus does not reflect error distributions among categories. A high O value for a map does not assure the reliability of map derivatives. These derivatives include category area, the most common map derivative for remote-sensing applications (Olofsson et al. 2014). For example, an overall accuracy of 90% cannot ensure that the area estimate of a category is accurate to within ± 10%. Consider a case where a 100 ha forest has a carbon density of 100 tons/ha at time 1 and 110 tons/ha at time 2. The forest will sequester 1000 tons of carbon in total between the two times. Given ± 10% uncertainties for the estimates of area and carbon density, the maximum possible range of carbon sequestration between time 1 and time 2 is from − 3190 to 5210 tons. This example illustrates that a relatively high O value does not guarantee that the uncertainties of map derivatives are small enough for reliable map applications. By contrast, the producer's and user's accuracies are more informative than O for area estimates. Despite a reasonable value for O, errors or uncertainties can be surprisingly high in landscape quantification (Arbia et al. 1998; Shao and Wu 2008). That is because the O metric does not reflect the spatial distribution of errors.
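The carbon example above can be checked with plain arithmetic. The sketch below (Python, using the values from the text and assuming the time-1 and time-2 stock estimates can vary independently) reproduces the stated range of − 3190 to 5210 tons.

```python
# Worked check of the text's example: a 100 ha forest with carbon
# density 100 t/ha at time 1 and 110 t/ha at time 2, each estimate
# uncertain by +/-10% (the bounds below follow from that assumption).
area_lo, area_hi = 90.0, 110.0   # 100 ha +/- 10%
d1_lo, d1_hi = 90.0, 110.0       # 100 t/ha +/- 10%
d2_lo, d2_hi = 99.0, 121.0       # 110 t/ha +/- 10%

# Sequestration = stock at time 2 - stock at time 1; the extremes occur
# when the two stock estimates are pushed in opposite directions.
seq_max = area_hi * d2_hi - area_lo * d1_lo   # 13310 - 8100
seq_min = area_lo * d2_lo - area_hi * d1_hi   # 8910 - 12100
print(seq_min, seq_max)   # -3190.0 5210.0
```

The nominal estimate is 1000 tons, so a ± 10% input uncertainty inflates into an output range several times that size.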
The second problem with the O metric appears because an increase in the number of mapping categories leads to an increase in the number of spectrally similar pixels and in the number of edge pixels. When a pixel's probability of assignment is nearly equal for each of two or more categories, the chances increase for the pixel to be wrongly classified. Because edge pixels are located at the border of adjacent components, they tend to contain a mixture of two categories, which complicates image classification (Sweeney and Evans 2012; Heydari and Mountrakis 2018). Thus, an increase in mapping categories usually lowers O. The balance between the number of categories and classification accuracy is a practical consideration in classification design and implementation. On the application side, if a single category is the focus, such as deforestation or urban sprawl, a low-O map with multiple categories may be aggregated into a high-O map with only two categories. Such an aggregation approach does not, however, increase the accuracy of the target category. To understand the third problem with the O metric, we include the range of pjj in its formula: $$O = \sum_{j = 1}^{J} p_{jj} \quad \left( 0 < p_{jj} \le p_{+j} \right)$$ A large category means that p+j is large (Olofsson et al. 2014) and that pjj covers a broad range. Thus, a larger category can have a greater role than a smaller category in regulating the value of O. For example, if a dominant category accounts for 90% of the total mapping area (i.e., p+j = 90%) and its pjj value is also 90%, O will be at least 90%. If its pjj value is reduced to 70%, the O value cannot exceed 80% because the other categories have only a 10% share in total, and their pjj values cannot exceed 10%. In contrast, a single small category does not exert such a dominant effect.
This dominant control of O by a large category has already attracted attention in predictive analytics and machine learning (Fielding and Bell 1997; He and Garcia 2009) and remote sensing (Stehman and Foody 2019). When a map contains two unevenly sized or imbalanced categories (e.g., category 1 is the majority category, and category 2 is the minority category), and the classification accuracies between P and U are balanced for each category (i.e., P1 = U1, P2 = U2, and p12 = p21), we obtain $$\frac{P_{1}}{P_{2}} = \frac{p_{11}/p_{+1}}{p_{22}/p_{+2}} = \frac{1 - p_{21}/p_{+1}}{1 - p_{21}/p_{+2}}$$ Because p+1 > p+2, the accuracy of a large category exceeds that of a small category. In other words, the relative error for the minority category always exceeds that of the majority category. Equation (2) represents a simple case of image-data classification and map assessment. Although multiple unevenly sized categories are more complicated than Eq. (2), they show similar trends. For example, a positive correlation between category areas and accuracies was found in the 1-km-resolution global land-cover data sets (Scepan 1999). Equation (2) also implies that requiring imbalanced categories to be equally accurate may be unrealistic. Thus, O cannot represent the accuracy of every category. Under the circumstances of Eq. (2), the minimum value of p11 for the majority category cannot be less than the difference between the majority and minority populations (i.e., their reference totals). Thus, Eq. (1) can be expressed as $$O = p_{11} + p_{22} \quad \left( p_{+1} - p_{+2} < p_{11} \le p_{+1},\ 0 < p_{22} \le p_{+2} \right)$$ This means that as category 1 becomes more dominant, the values of p11 and P1 increase. The same reasoning holds for O: a map with a heavily dominant category is assured of having a relatively high O.
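Equation (2) can be explored numerically. The helper below is an illustrative Python sketch (the function name and the example proportions are my own, not from the paper): it builds the two producer's accuracies for a two-category map with balanced errors ($p_{12} = p_{21}$) and shows that the majority category always comes out more accurate.

```python
def category_accuracies(p_plus1, p21):
    """Producer's accuracies P1, P2 for two categories with balanced
    errors (p12 = p21).  Columns are reference categories, so the
    diagonals are fixed by the column totals: p11 = p+1 - p21 and
    p22 = p+2 - p12 = p+2 - p21."""
    p_plus2 = 1.0 - p_plus1
    p11 = p_plus1 - p21
    p22 = p_plus2 - p21
    return p11 / p_plus1, p22 / p_plus2

# Hypothetical proportions: a 90% majority class, 5% confusion each way.
P1, P2 = category_accuracies(p_plus1=0.90, p21=0.05)
print(P1 > P2)   # True: the same absolute error p21 is a far larger
                 # relative error for the minority category (P2 = 0.5)
```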
The fourth problem with the O metric is an extension of the second and third problems. The metric O can be increased by maximizing the producer's accuracy for the majority category or by expanding the extent of the mapping area. We demonstrate this effect using three classification maps with different O values (Fig. 2b–d). The O values in Fig. 2c are increased as the result of effortless modifications of the initial classification (Fig. 2b). This raises the question of whether a map with a higher O value (Fig. 2c) is better in practice than the map with the lowest O value (Fig. 2b). Increasing O by enlarging the mapping area can lead to false expectations regarding the performance of the classification technique used and/or the quality of the resulting map, which is referred to as an optimistic bias (Hammond and Verbyla 1996). In an extreme case, O will improve if every pixel is classified as the dominant category (Fig. 2d). Obtaining a high value for O in this way is clearly not an indication of a preferable map in the real world, however, which shows that the O metric can be a poor measure of accuracy. In the field of predictive analytics, this phenomenon (i.e., when high-accuracy models do not have greater predictive power than lower-accuracy models) is called the "accuracy paradox" (Thomas 2013; Kim et al. 2017). It is worth mentioning that the kappa coefficient for Fig. 2d is 0 and so does not produce an accuracy paradox in this case. Although the kappa coefficient is more conservative than overall accuracy, it has its own problems, and its overuse should also be avoided. Comparison of thematic maps under different classification strategies. The mapping system consists of two categories denoted 1 and 2 (a). Wrongly classified pixels are located along the shared edges of the two categories. The gray pixels belong to category 2 (e.g., shrubs) and the white area is for category 1 (e.g., a grassland).
Three classification strategies: b the mapping area is defined by the 7 × 7 box and O = 33/49 = 67%, c the mapping area is expanded to the 9 × 9 box and O = 65/81 = 80%, and d every pixel is classified as category 1, resulting in O = 40/49 = 82% Note that these problems exist even though the O metric was computed correctly from an error matrix that represents an estimated population. The inappropriate use of reference sampling data may worsen the problem. For example, O can be unrealistically high if only pure pixels are used to quantify accuracy. Biased pixel sampling can amplify the problems inherent to O. Overselling overall accuracy has misleading effects on research reliability because overall accuracy alone cannot explain the robustness of a classification model on the method-research side or the quality of classification output on the application-research side. Because artificial intelligence is becoming increasingly popular for image recognition, classification, and mapping in various disciplines (e.g., Grekousis 2019), consistent standards of accuracy quantification and analysis are needed to assure users that the technique is adequately dependable. Although overall accuracy has problems, it cannot be ignored. Instead, overall accuracy should be used together with multiple accuracy metrics to promote a comprehensive approach to quantify map accuracy (Lasko et al. 2005; Liu et al. 2007). Many researchers urge map producers to provide complete information on map development and assessment (Congalton et al. 2014; Congalton and Green 2019; Stehman and Foody 2019). This type of transparency is vital because it would allow map stakeholders to compute any accuracy metrics rather than relying solely on those given by the map providers. Arbia G, Griffith D, Haining R (1998) Error propagation modelling in raster GIS: overlay operations. Int J Geogr Inf Sci 12:145–167 Celeb ME, Codella N, Halpern A (2019) Dermoscopy image analysis: overview and future directions. 
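The strategies in Fig. 2 can also be checked numerically. The sketch below (plain Python; the helper name is mine, and only strategy (d) is shown) computes overall accuracy and Cohen's kappa from a count error matrix, illustrating the accuracy paradox and the kappa value of 0 mentioned in the text.

```python
def overall_and_kappa(m):
    """Overall accuracy and Cohen's kappa from a square count error
    matrix m[i][j] (rows: mapped categories, columns: reference)."""
    k = len(m)
    n = sum(sum(row) for row in m)
    po = sum(m[i][i] for i in range(k)) / n             # observed agreement = O
    row_tot = [sum(row) for row in m]
    col_tot = [sum(m[i][j] for i in range(k)) for j in range(k)]
    pe = sum(r * c for r, c in zip(row_tot, col_tot)) / n ** 2  # chance agreement
    return po, (po - pe) / (1.0 - pe)

# Strategy (d) of Fig. 2: every pixel in the 7 x 7 box is labelled as the
# majority category 1; the reference holds 40 category-1 and 9 category-2
# pixels, so O = 40/49 even though category 2 is never mapped at all.
d = [[40, 9],   # everything mapped as category 1
     [0, 0]]    # category 2 is never predicted
o, kappa = overall_and_kappa(d)
print(round(o, 2), kappa)   # 0.82 0.0
```

A high O with a kappa of zero is exactly the "high accuracy, no skill" situation the text warns about.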
IEEE J Biomed Health Inf 23:474–478 Congalton RG (1991) A review of assessing the accuracy of classifications of remotely sensed data. Remote Sens Environ 37:35–46 Congalton RG, Green G (2019) Assessing the Accuracy of Remotely Sensed Data: Principles and Practices, 3rd edn. CRC Press, Boca Raton Congalton RG, Gu J, Yadav K, Thenkabail P, Ozdogan M (2014) Global land cover mapping: a review and uncertainty analysis. Remote Sensing 6:12070–12093 Fielding AH, Bell JF (1997) A review of methods for the assessment of prediction errors in conservation presence/absence models. Environ Conserv 24:38–49 Foody GM (2002) Status of land cover classification accuracy assessment. Remote Sens Environ 80:185–201 Grekousis G (2019) Artificial neural networks and deep learning in urban geography: a systematic review and meta-analysis. Comput Environ Urban Syst 74:244–256 Hammond TO, Verbyla DL (1996) Optimistic bias in classification accuracy assessment. Int J Remote Sens 17:1261–1266 He H, Garcia EA (2009) Learning from Imbalanced Data. IEEE Trans Knowl Data Eng 21:1263–1284 Heydari SS, Mountrakis G (2018) Effect of classifier selection, reference sample size, reference class distribution and scene heterogeneity in per-pixel classification accuracy using 26 Landsat sites. Remote Sens Environ 204:648–658 Kim JK, Han YS, Lee JS (2017) Particle swarm optimization-deep belief network-based rare class prediction model for highly class imbalance problem. Concurr Comput 29:e4128 Lasko TA, Bhagwat JG, Zou KH, Ohno-Machado L (2005) The use of receiver operating characteristic curves in biomedical informatics. J Biomed Inform 38:404–415 Liu C, Frazier P, Kumar L (2007) Comparative assessment of the measures of thematic classification accuracy. Remote Sens Environ 107:606–616 Olofsson P, Foody GM, Herold M, Stehman SV, Woodcock CE, Wulder MA (2014) Good practices for estimating area and assessing accuracy of land change. 
Remote Sens Environ 148:42–57 Phiri D, Morgenroth J (2017) Developments in Landsat land cover classification methods: a review. Remote Sens 9:967 Scepan J (1999) Thematic validation of high-resolution global land-cover data sets. Photogramm Eng Remote Sens 65:1051–1060 Shao GF, Wu JG (2008) On the accuracy of landscape pattern analysis using remote sensing data. Landscape Ecol 23:505–511 Stehman SV, Foody GM (2019) Key issues in rigorous accuracy assessment of land cover products. Remote Sens Environ 231:111199 Story M, Congalton R (1986) Accuracy assessment: a user's perspective. Photogramm Eng Remote Sens 52:397–399 Sweeney SP, Evans TP (2012) An edge-oriented approach to thematic map error assessment. Geocarto Int 27:31–56 Thomas C (2013) Improving intrusion detection for imbalanced network traffic. Secur Commun Netw 6:309–324 Xiao FY, Gao GY, Shen Q, Wang XF, Ma Y, Lu YH, Fu BJ (2019) Spatio-temporal characteristics and driving forces of landscape structure changes in the middle reach of the Heihe River Basin from 1990 to 2015. Landscape Ecol 34:755–770 This research was supported by the USDA National Institute of Food and Agriculture McIntire Stennis Project (IND011523MS), the National Natural Science Foundation of China (41471137), the National Key R&D Program of China (2016YFC0502902), and the Natural Science Foundation of Fujian Province (2017J01468). The authors thank Drs. Keith E. Woeste and Russell G. Congalton, and anonymous reviewers for their constructive comments and suggestions that have helped improve the manuscript. Department of Forestry and Natural Resources, Purdue University, West Lafayette, IN, 47907, USA Guofan Shao Key Laboratory of Urban Environment and Health, Institute of Urban Environment, Chinese Academy of Sciences, Xiamen, 361021, China Lina Tang College of Computer Engineering, Jimei University, 185 Yinjiang Road, Xiamen, 361021, China Jiangfu Liao Correspondence to Guofan Shao or Lina Tang. Shao, G., Tang, L. & Liao, J.
Overselling overall map accuracy misinforms about research reliability. Landscape Ecol 34, 2487–2492 (2019). https://doi.org/10.1007/s10980-019-00916-6 Issue Date: November 2019 Error propagation Imbalanced classes Map accuracy Comprehensive assessment
Average - Aptitude MCQ Questions and Solutions with Explanations Ajit has a certain average for 9 innings. In the tenth innings, he scores 100 runs, thereby increasing his average by 8 runs. His new average is: Let Ajit's average be x for 9 innings. So, Ajit scored 9x runs in 9 innings. In the 10th innings, he scored 100 runs and his average became (x + 8), so he scored (x + 8) × 10 runs in 10 innings. $$\eqalign{ & \Rightarrow 9x + 100 = 10(x + 8) \cr & \text{or},\ 9x + 100 = 10x + 80 \cr & \text{or},\ x = 100 - 80 = 20 \cr & \text{New average} = x + 8 = 28\ \text{runs} \cr}$$ The average temperature for Wednesday, Thursday and Friday was 40°C. The average for Thursday, Friday and Saturday was 41°C. If the temperature on Saturday was 42°C, what was the temperature on Wednesday? A. 39°C B. 44°C C. 38°C D. 41°C Average temperature for Wednesday, Thursday and Friday = 40°C, so total temperature = 3 × 40 = 120°C. Average temperature for Thursday, Friday and Saturday = 41°C, so total temperature = 41 × 3 = 123°C. Temperature on Saturday = 42°C. (Thursday + Friday + Saturday) - (Wednesday + Thursday + Friday) = 123 - 120, so Saturday - Wednesday = 3 and Wednesday = 42 - 3 = 39°C. The average of the first five multiples of 9 is: $$\text{Required average} = \frac{9 + 18 + 27 + 36 + 45}{5} = 27$$ Note that the average of 9 and 45 is also 27, and the average of 18 and 36 is also 27. The speed of the train going from Nagpur to Allahabad is 100 km/h, while coming back from Allahabad to Nagpur its speed is 150 km/h. Find the average speed for the whole journey. A. 125 km/hr B. 75 km/hr C. 135 km/hr D.
120 km/hr $$\text{Average speed} = \frac{2xy}{x + y} = \frac{2 \times 100 \times 150}{100 + 150} = \frac{200 \times 150}{250} = 120\ \text{km/hr}$$ Find the average of the first 97 natural numbers. E. 49.5 The average of the first n natural numbers is $$\frac{n(n + 1)/2}{n} = \frac{n + 1}{2}$$ so the average of the first 97 natural numbers is $$\frac{97 + 1}{2} = 49$$ These numbers form an AP, so the average is also half the sum of corresponding terms from the two ends: $$\frac{1 + 97}{2} = 49,\quad \frac{2 + 96}{2} = 49,\quad \frac{3 + 95}{2} = 49,\ \text{and so on.}$$
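The three worked answers above can be double-checked with a few lines of plain Python (a quick sanity check, not part of the original solutions):

```python
# Ajit: 9 innings at average x, then 100 runs raises the average by 8,
# so 9x + 100 = 10(x + 8), giving x = 20 and a new average of 28.
x = 100 - 80
assert 9 * x + 100 == 10 * (x + 8)
new_average = x + 8                          # 28 runs

# Round trip at 100 km/h out and 150 km/h back: harmonic mean of speeds.
avg_speed = 2 * 100 * 150 / (100 + 150)      # 120.0 km/h

# Average of the first n natural numbers is (n + 1) / 2.
avg_naturals = (97 + 1) / 2                  # 49.0
print(new_average, avg_speed, avg_naturals)  # 28 120.0 49.0
```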
MSC Classifications MSC 2010: Quantum Theory 81Txx 29 results in 81Txx A NEW HIGHER ORDER YANG–MILLS–HIGGS FLOW ON RIEMANNIAN $4$ -MANIFOLDS Calculus on manifolds; nonlinear operators Variational problems in infinite-dimensional spaces Quantum field theory; related classical field theories HEMANTH SARATCHANDRAN, JIAOGEN ZHANG, PAN ZHANG Journal: Bulletin of the Australian Mathematical Society , First View Published online by Cambridge University Press: 29 November 2022, pp. 1-10 Let $(M,g)$ be a closed Riemannian $4$ -manifold and let E be a vector bundle over M with structure group G, where G is a compact Lie group. We consider a new higher order Yang–Mills–Higgs functional, in which the Higgs field is a section of $\Omega ^0(\text {ad}E)$ . We show that, under suitable conditions, solutions to the gradient flow do not hit any finite time singularities. In the case that E is a line bundle, we are able to use a different blow-up procedure and obtain an improvement of the long-time result of Zhang ['Gradient flows of higher order Yang–Mills–Higgs functionals', J. Aust. Math. Soc. 113 (2022), 257–287]. The proof relies on properties of the Green function, which is very different from the previous techniques.
The mod-p homology of the classifying spaces of certain gauge groups Fiber spaces and bundles Daisuke Kishimoto, Stephen Theriault Journal: Proceedings of the Royal Society of Edinburgh Section A: Mathematics , First View Published online by Cambridge University Press: 19 September 2022, pp. 1-13 Let $G$ be a compact connected simple Lie group of type $(n_{1},\,\ldots,\,n_{l})$ , where $n_{1}<\cdots < n_{l}$ . Let $\mathcal {G}_k$ be the gauge group of the principal $G$ -bundle over $S^{4}$ corresponding to $k\in \pi _3(G)\cong \mathbb {Z}$ . We calculate the mod-$p$ homology of the classifying space $B\mathcal {G}_k$ provided that $n_{l}< p-1$ . SOLUTIONS OF THE tt*-TODA EQUATIONS AND QUANTUM COHOMOLOGY OF MINUSCULE FLAG MANIFOLDS Symplectic geometry, contact geometry YOSHIKI KANEKO Journal: Nagoya Mathematical Journal / Volume 248 / December 2022 Published online by Cambridge University Press: 10 June 2022, pp. 990-1004 Print publication: December 2022 We relate the quantum cohomology of minuscule flag manifolds to the tt*-Toda equations, a special case of the topological–antitopological fusion equations which were introduced by Cecotti and Vafa in their study of supersymmetric quantum field theories. To do this, we combine the Lie-theoretic treatment of the tt*-Toda equations of Guest–Ho with the Lie-theoretic description of the quantum cohomology of minuscule flag manifolds from Chaput–Manivel–Perrin and Golyshev–Manivel. TAMARKIN–TSYGAN CALCULUS AND CHIRAL POISSON COHOMOLOGY Lie algebras and Lie superalgebras Algebraic geometry: Foundations EMILE BOUAZIZ Published online by Cambridge University Press: 28 February 2022, pp. 751-765 We construct and study some vertex theoretic invariants associated with Poisson varieties, specializing in the conformal weight $0$ case to the familiar package of Poisson homology and cohomology. In order to do this conceptually, we sketch a version of the calculus, in the sense of [12], adapted to the context of vertex algebras. 
We obtain the standard theorems of Poisson (co)homology in this chiral context. This is part of a larger project related to promoting noncommutative geometric structures to chiral versions of such. Combinatorics of the geometry of Wilson loop diagrams II: Grassmann necklaces, dimensions, and denominators Susama Agarwala, Siân Fryer, Karen Yeats Journal: Canadian Journal of Mathematics / Volume 74 / Issue 6 / December 2022 Published online by Cambridge University Press: 23 July 2021, pp. 1625-1672 Wilson loop diagrams are an important tool in studying scattering amplitudes of SYM $N=4$ theory and are known by previous work to be associated to positroids. In this paper, we study the structure of the associated positroids, as well as the structure of the denominator of the integrand defined by each diagram. We give an algorithm to derive the Grassmann necklace of the associated positroid directly from the Wilson loop diagram, and a recursive proof that the dimension of these cells is thrice the number of propagators in the diagram. We also show that the ideal generated by the denominator in the integrand is the radical of the ideal generated by the product of Grassmann necklace minors. Large N behaviour of the two-dimensional Yang–Mills partition function Abstract harmonic analysis Thibaut Lemoine Journal: Combinatorics, Probability and Computing / Volume 31 / Issue 1 / January 2022 Published online by Cambridge University Press: 30 June 2021, pp. 144-165 We compute the large N limit of the partition function of the Euclidean Yang–Mills measure on orientable compact surfaces with genus $g\geqslant 1$ and non-orientable compact surfaces with genus $g\geqslant 2$ , with structure group the unitary group ${\mathrm U}(N)$ or special unitary group ${\mathrm{SU}}(N)$ . 
Our proofs are based on asymptotic representation theory: more specifically, we control the dimension and Casimir number of irreducible representations of ${\mathrm U}(N)$ and ${\mathrm{SU}}(N)$ when N tends to infinity. Our main technical tool, involving 'almost flat' Young diagram, makes rigorous the arguments used by Gross and Taylor (1993, Nuclear Phys. B400(1–3) 181–208) in the setting of QCD, and in some cases, we recover formulae given by Douglas (1995, Quantum Field Theory and String Theory (Cargèse, 1993), Vol. 328 of NATO Advanced Science Institutes Series B: Physics, Plenum, New York, pp. 119–135) and Rusakov (1993, Phys. Lett. B303(1) 95–98). GRADIENT FLOWS OF HIGHER ORDER YANG–MILLS–HIGGS FUNCTIONALS PAN ZHANG Journal: Journal of the Australian Mathematical Society / Volume 113 / Issue 2 / October 2022 Published online by Cambridge University Press: 31 May 2021, pp. 257-287 Print publication: October 2022 In this paper, we define a family of functionals generalizing the Yang–Mills–Higgs functionals on a closed Riemannian manifold. Then we prove the short-time existence of the corresponding gradient flow by a gauge-fixing technique. The lack of a maximum principle for the higher order operator brings us a lot of inconvenience during the estimates for the Higgs field. We observe that the $L^2$ -bound of the Higgs field is enough for energy estimates in four dimensions and we show that, provided the order of derivatives appearing in the higher order Yang–Mills–Higgs functionals is strictly greater than one, solutions to the gradient flow do not hit any finite-time singularities. As for the Yang–Mills–Higgs k-functional with Higgs self-interaction, we show that, provided $\dim (M)<2(k+1)$ , for every smooth initial data the associated gradient flow admits long-time existence. The proof depends on local $L^2$ -derivative estimates, energy estimates and blow-up analysis. 
Combinatorics of the geometry of Wilson loop diagrams I: equivalence classes via matroids and polytopes Journal: Canadian Journal of Mathematics / Volume 74 / Issue 4 / August 2022 Published online by Cambridge University Press: 26 February 2021, pp. 1177-1208 Print publication: August 2022 Wilson loop diagrams are an important tool in studying scattering amplitudes of SYM $N=4$ theory and are known by previous work to be associated to positroids. We characterize the conditions under which two Wilson loop diagrams give the same positroid, prove that an important subclass of subdiagrams (exact subdiagrams) corresponds to uniform matroids, and enumerate the number of different Wilson loop diagrams that correspond to each positroid cell. We also give a correspondence between those positroids which can arise from Wilson loop diagrams and directions in associahedra. Twisted Donaldson invariants Infinite-dimensional manifolds Higher algebraic $K$-theory TSUYOSHI KATO, HIROFUMI SASAHIRA, HANG WANG Journal: Mathematical Proceedings of the Cambridge Philosophical Society / Volume 171 / Issue 3 / November 2021 Print publication: November 2021 The fundamental group of a manifold has a deep effect on its underlying smooth structure. In this paper we introduce a new variant of the Donaldson invariant in Yang–Mills gauge theory from twisting by the Picard group of a 4-manifold in the case when the fundamental group is free abelian. We then generalise it to the general case of fundamental groups by use of the framework of noncommutative geometry. We also verify that our invariant distinguishes smooth structures of some homeomorphic 4-manifolds.
A CLASS OF NONHOLOMORPHIC MODULAR FORMS II: EQUIVARIANT ITERATED EISENSTEIN INTEGRALS Zeta and $L$-functions: analytic theory Discontinuous groups and automorphic forms FRANCIS BROWN Journal: Forum of Mathematics, Sigma / Volume 8 / 2020 Published online by Cambridge University Press: 28 May 2020, e31 We introduce a new family of real-analytic modular forms on the upper-half plane. They are arguably the simplest class of 'mixed' versions of modular forms of level one and are constructed out of real and imaginary parts of iterated integrals of holomorphic Eisenstein series. They form an algebra of functions satisfying many properties analogous to classical holomorphic modular forms. In particular, they admit expansions in $q,\overline{q}$ and $\log |q|$ involving only rational numbers and single-valued multiple zeta values. The first nontrivial functions in this class are real-analytic Eisenstein series. Simple Formulas for Constellations and Bipartite Maps with Prescribed Degrees Geometric probability and stochastic geometry Baptiste Louf Journal: Canadian Journal of Mathematics / Volume 73 / Issue 1 / February 2021 Published online by Cambridge University Press: 12 November 2019, pp. 160-176 Print publication: February 2021 We obtain simple quadratic recurrence formulas counting bipartite maps on surfaces with prescribed degrees (in particular, $2k$-angulations) and constellations. These formulas are the fastest known way of computing these numbers. Our work is a natural extension of previous works on integrable hierarchies (2-Toda and KP), namely, the Pandharipande recursion for Hurwitz numbers (proved by Okounkov and simplified by Dubrovin–Yang–Zagier), as well as formulas for several models of maps (Goulden–Jackson, Carrell–Chapuy, Kazarian–Zograf). As for those formulas, a bijective interpretation is still to be found. We also include a formula for monotone simple Hurwitz numbers derived in the same fashion. 
These formulas also play a key role in subsequent work of the author with T. Budzinski establishing the hyperbolic local limit of random bipartite maps of large genus. ON BORWEIN'S CONJECTURES FOR PLANAR UNIFORM RANDOM WALKS YAJUN ZHOU Journal: Journal of the Australian Mathematical Society / Volume 107 / Issue 3 / December 2019 Published online by Cambridge University Press: 09 October 2019, pp. 392-411 Let $p_{n}(x)=\int _{0}^{\infty }J_{0}(xt)[J_{0}(t)]^{n}xt\,dt$ be Kluyver's probability density for $n$-step uniform random walks in the Euclidean plane. Through connection to a similar problem in two-dimensional quantum field theory, we evaluate the third-order derivative $p_{5}^{\prime \prime \prime }(0^{+})$ in closed form, thereby giving a new proof for a conjecture of J. M. Borwein. By further analogies to Feynman diagrams in quantum field theory, we demonstrate that $p_{n}(x),0\leq x\leq 1$ admits a uniformly convergent Maclaurin expansion for all odd integers $n\geq 5$, thus settling another conjecture of Borwein. The Batalin–Vilkovisky Algebra in the String Topology of Classifying Spaces Homotopy theory Katsuhiko Kuribayashi, Luc Menichi For almost any compact connected Lie group $G$ and any field $\mathbb{F}_{p}$, we compute the Batalin–Vilkovisky algebra $H^{\star +\text{dim}\,G}(\text{LBG};\mathbb{F}_{p})$ on the loop cohomology of the classifying space introduced by Chataur and the second author. In particular, if $p$ is odd or $p=0$, this Batalin–Vilkovisky algebra is isomorphic to the Hochschild cohomology $HH^{\star }(H_{\star }(G),H_{\star }(G))$. Over $\mathbb{F}_{2}$ , such an isomorphism of Batalin–Vilkovisky algebras does not hold when $G=\text{SO}(3)$ or $G=G_{2}$. Our elaborate considerations on the signs in string topology of the classifying spaces give rise to a general theorem on graded homological conformal field theory. 
A NOTE ON GUNNINGHAM'S FORMULA Projective and enumerative geometry JUNHO LEE Journal: Bulletin of the Australian Mathematical Society / Volume 98 / Issue 3 / December 2018 Gunningham ['Spin Hurwitz numbers and topological quantum field theory', Geom. Topol.20(4) (2016), 1859–1907] constructed an extended topological quantum field theory (TQFT) to obtain a closed formula for all spin Hurwitz numbers. In this note, we use a gluing theorem for spin Hurwitz numbers to re-prove Gunningham's formula. We also describe a TQFT formalism naturally induced by the gluing theorem. CLE PERCOLATIONS Equilibrium statistical mechanics Markov processes JASON MILLER, SCOTT SHEFFIELD, WENDELIN WERNER Journal: Forum of Mathematics, Pi / Volume 5 / 2017 Published online by Cambridge University Press: 03 October 2017, e4 Conformal loop ensembles (CLEs) are random collections of loops in a simply connected domain, whose laws are characterized by a natural conformal invariance property. The set of points not surrounded by any loop is a canonical random connected fractal set — a random and conformally invariant analog of the Sierpinski carpet or gasket. In the present paper, we derive a direct relationship between the CLEs with simple loops ( $\text{CLE}_{\unicode[STIX]{x1D705}}$ for $\unicode[STIX]{x1D705}\in (8/3,4)$, whose loops are Schramm's $\text{SLE}_{\unicode[STIX]{x1D705}}$-type curves) and the corresponding CLEs with nonsimple loops ( $\text{CLE}_{\unicode[STIX]{x1D705}^{\prime }}$ with $\unicode[STIX]{x1D705}^{\prime }:=16/\unicode[STIX]{x1D705}\in (4,6)$, whose loops are $\text{SLE}_{\unicode[STIX]{x1D705}^{\prime }}$-type curves). This correspondence is the continuum analog of the Edwards–Sokal coupling between the $q$-state Potts model and the associated FK random cluster model, and its generalization to noninteger $q$. Like its discrete analog, our continuum correspondence has two directions. 
First, we show that for each $\unicode[STIX]{x1D705}\in (8/3,4)$, one can construct a variant of $\text{CLE}_{\unicode[STIX]{x1D705}}$ as follows: start with an instance of $\text{CLE}_{\unicode[STIX]{x1D705}^{\prime }}$, then use a biased coin to independently color each $\text{CLE}_{\unicode[STIX]{x1D705}^{\prime }}$ loop in one of two colors, and then consider the outer boundaries of the clusters of loops of a given color. Second, we show how to interpret $\text{CLE}_{\unicode[STIX]{x1D705}^{\prime }}$ loops as interfaces of a continuum analog of critical Bernoulli percolation within $\text{CLE}_{\unicode[STIX]{x1D705}}$ carpets — this is the first construction of continuum percolation on a fractal planar domain. It extends and generalizes the continuum percolation on open domains defined by $\text{SLE}_{6}$ and $\text{CLE}_{6}$. These constructions allow us to prove several conjectures made by the second author and provide new and perhaps surprising interpretations of the relationship between CLEs and the Gaussian free field. Along the way, we obtain new results about generalized $\text{SLE}_{\unicode[STIX]{x1D705}}(\unicode[STIX]{x1D70C})$ curves for $\unicode[STIX]{x1D70C}<-2$, such as their decomposition into collections of $\text{SLE}_{\unicode[STIX]{x1D705}}$-type 'loops' hanging off of $\text{SLE}_{\unicode[STIX]{x1D705}^{\prime }}$-type 'trunks', and vice versa (exchanging $\unicode[STIX]{x1D705}$ and $\unicode[STIX]{x1D705}^{\prime }$). We also define a continuous family of natural $\text{CLE}$ variants called boundary conformal loop ensembles (BCLEs) that share some (but not all) of the conformal symmetries that characterize $\text{CLE}$s, and that should be scaling limits of critical models with special boundary conditions. 
We extend the $\text{CLE}_{\unicode[STIX]{x1D705}}$/ $\text{CLE}_{\unicode[STIX]{x1D705}^{\prime }}$ correspondence to a $\text{BCLE}_{\unicode[STIX]{x1D705}}$/ $\text{BCLE}_{\unicode[STIX]{x1D705}^{\prime }}$ correspondence that makes sense for the wider range $\unicode[STIX]{x1D705}\in (2,4]$ and $\unicode[STIX]{x1D705}^{\prime }\in [4,8)$. PyCFTBoot: A Flexible Interface for the Conformal Bootstrap Algorithms - Computer Science Connor Behan Journal: Communications in Computational Physics / Volume 22 / Issue 1 / July 2017 Published online by Cambridge University Press: 03 May 2017, pp. 1-38 Print publication: July 2017 We introduce PyCFTBoot, a wrapper designed to reduce the barrier to entry in conformal bootstrap calculations that require semidefinite programming. Symengine and SDPB are used for the most intensive symbolic and numerical steps respectively. After reviewing the built-in algorithms for conformal blocks, we explain how to use the code through a number of examples that verify past results. As an application, we show that the multi-correlator bootstrap still appears to single out the Wilson-Fisher fixed points as special theories in dimensions between 3 and 4 despite the recent proof that they violate unitarity. DUBROVIN'S SUPERPOTENTIAL AS A GLOBAL SPECTRAL CURVE Deformations of analytic structures P. Dunin-Barkowski, P. Norbury, N. Orantin, A. Popolitov, S. Shadrin Journal: Journal of the Institute of Mathematics of Jussieu / Volume 18 / Issue 3 / May 2019 Published online by Cambridge University Press: 17 April 2017, pp. 449-497 Print publication: May 2019 We apply the spectral curve topological recursion to Dubrovin's universal Landau–Ginzburg superpotential associated to a semi-simple point of any conformal Frobenius manifold. We show that under some conditions the expansion of the correlation differentials reproduces the cohomological field theory associated with the same point of the initial Frobenius manifold. 
Lattice Boltzmann Simulation of Steady Flow in a Semi-Elliptical Cavity Basic methods in fluid mechanics Incompressible viscous fluids Junjie Ren, Ping Guo Journal: Communications in Computational Physics / Volume 21 / Issue 3 / March 2017 Print publication: March 2017 The lattice Boltzmann method is employed to simulate the steady flow in a two-dimensional lid-driven semi-elliptical cavity. Reynolds number (Re) and vertical-to-horizontal semi-axis ratio (D) are in the ranges of 500-5000 and 0.1-4, respectively. The effects of Re and D on the vortex structure and pressure field are investigated, and the evolutionary features of the vortex structure with Re and D are analyzed in detail. Simulation results show that the vortex structure and its evolutionary features significantly depend on Re and D. The steady flow is characterized by one vortex in the semi-elliptical cavity when both Re and D are small. As Re increases, the appearance of the vortex structure becomes more complex. When D is less than 1, increasing D makes the large vortexes more round, and the evolution of the vortexes with D becomes more complex with increasing Re. When D is greater than 1, the steady flow consists of a series of large vortexes which superimpose on each other. As Re and D increase, the number of the large vortexes increases. Additionally, a small vortex in the upper-left corner of the semi-elliptical cavity appears at a large Re and its size increases slowly as Re increases. The highest pressures appear in the upper-right corner and the pressure changes drastically in the upper-right region of the cavity. The total pressure differences in the semi-elliptical cavity with a fixed D decrease with increasing Re. In the region of the main vortex, the pressure contours nearly coincide with the streamlines, especially for the cavity flow with a large Re.
Homogeneity of cohomology classes associated with Koszul matrix factorizations Local theory Homological methods Alexander Polishchuk Journal: Compositio Mathematica / Volume 152 / Issue 10 / October 2016 In this work we prove the so-called dimension property for the cohomological field theory associated with a homogeneous polynomial $W$ with an isolated singularity, in the algebraic framework of [A. Polishchuk and A. Vaintrob, Matrix factorizations and cohomological field theories, J. Reine Angew. Math. 714 (2016), 1–122]. This amounts to showing that some cohomology classes on the Deligne–Mumford moduli spaces of stable curves, constructed using Fourier–Mukai-type functors associated with matrix factorizations, live in prescribed dimension. The proof is based on a homogeneity result established in [A. Polishchuk and A. Vaintrob, Algebraic construction of Witten's top Chern class, in Advances in algebraic geometry motivated by physics (Lowell, MA, 2000) (American Mathematical Society, Providence, RI, 2001), 229–249] for certain characteristic classes of Koszul matrix factorizations of $0$. To reduce to this result, we use the theory of Fourier–Mukai-type functors involving matrix factorizations and the natural rational lattices in the relevant Hochschild homology spaces, as well as a version of Hodge–Riemann bilinear relations for Hochschild homology of matrix factorizations. Our approach also gives a proof of the dimension property for the cohomological field theories associated with some quasihomogeneous polynomials with an isolated singularity. A Feynman integral via higher normal functions $K$-theory in number theory Surfaces and higher-dimensional varieties Families, fibrations Spencer Bloch, Matt Kerr, Pierre Vanhove Journal: Compositio Mathematica / Volume 151 / Issue 12 / December 2015 Published online by Cambridge University Press: 06 August 2015, pp. 
2329-2375 We study the Feynman integral for the three-banana graph defined as the scalar two-point self-energy at three-loop order. The Feynman integral is evaluated for all identical internal masses in two space-time dimensions. Two calculations are given for the Feynman integral: one based on an interpretation of the integral as an inhomogeneous solution of a classical Picard–Fuchs differential equation, and the other using arithmetic algebraic geometry, motivic cohomology, and Eisenstein series. Both methods use the rather special fact that the Feynman integral is a family of regulator periods associated to a family of $K3$ surfaces. We show that the integral is given by a sum of elliptic trilogarithms evaluated at sixth roots of unity. This elliptic trilogarithm value is related to the regulator of a class in the motivic cohomology of the $K3$ family. We prove a conjecture by David Broadhurst which states that at a special kinematical point the Feynman integral is given by a critical value of the Hasse–Weil $L$-function of the $K3$ surface. This result is shown to be a particular case of Deligne's conjectures relating values of $L$-functions inside the critical strip to periods.
scAI: an unsupervised approach for the integrative analysis of parallel single-cell transcriptomic and epigenomic profiles Suoqin Jin, Lihua Zhang & Qing Nie Genome Biology volume 21, Article number: 25 (2020) Simultaneous measurements of transcriptomic and epigenomic profiles in the same individual cells provide an unprecedented opportunity to understand cell fates. However, effective approaches for the integrative analysis of such data are lacking. Here, we present a single-cell aggregation and integration (scAI) method to deconvolute cellular heterogeneity from parallel transcriptomic and epigenomic profiles. Through iterative learning, scAI aggregates sparse epigenomic signals in similar cells learned in an unsupervised manner, allowing coherent fusion with transcriptomic measurements. Simulation studies and applications to three real datasets demonstrate its capability of dissecting cellular heterogeneity within both transcriptomic and epigenomic layers and understanding transcriptional regulatory mechanisms. The rapid development of single-cell technologies allows for dissecting cellular heterogeneity more comprehensively at an unprecedented resolution. Many protocols have been developed to quantify the transcriptome [1], such as CEL-seq2, Smart-seq2, Drop-seq, and 10X Chromium, and techniques that measure single-cell chromatin accessibility (scATAC-seq) and DNA methylation have also become available [2]. More recently, several single-cell multiomics technologies have emerged for measuring multiple types of molecules in the same individual cell, such as scM&T-seq [3], scNMT-seq [4], scTrio-seq [5], sci-CAR-seq [6], and scCAT-seq [7]. The resulting single-cell multiomics data have the potential to provide new insights regarding the multiple regulatory layers that control cellular heterogeneity [8, 9].
Gene expression is often regulated by transcription factors (TFs) via interaction with cis-regulatory genomic DNA sequences located in or around target genes [10, 11]. Epigenetic modifications, including changes in chromatin accessibility and DNA methylation, play crucial roles in the regulation of gene expression [12, 13]. Many tools have been developed for the integrative analysis of transcriptomic and epigenomic profiles in bulk samples [14,15,16]. For example, Zhang et al. integrated the analysis of bulk gene expression, DNA methylation, and microRNA expression using joint nonnegative matrix factorization [16]. Argelaguet et al. [17] presented MOFA, a generalization of principal component analysis (PCA) which is applicable to both bulk and single-cell datasets [18, 19]. Single-cell multiomics data are inherently heterogeneous and highly sparse [9]. Although many integration methods initially developed for bulk data might be applicable to such data, it has become increasingly clear that new and different computational strategies are required due to the unique characteristics of single-cell data [9]. In particular, scATAC-seq data are extremely sparse (e.g., over 99% zeros in sci-CAR-seq) and nearly binary [20], thus making it difficult to reliably identify accessible (or methylated) regions in a cell. A growing number of methods have been developed for scRNA-seq data integration [21,22,23]. However, only a few methods have been proposed for integrating multiomics profiles, and these methods were designed for data measured in different cells (i.e., not the same single cells) but sampled from the same cell population [22,23,24,25]. MATCHER used a Gaussian process latent variable model to compute the "pseudotime" for every cell in each omics layer and to predict the correlations between transcriptomic and epigenomic measurements from different cells of the same type [24].
A coupled nonnegative matrix factorization method performed clustering of single cells sequenced by scRNA-seq and scATAC-seq through constructing a "coupling matrix" for regulatory elements and gene associations [25]. Recently, Seurat (version 3) [22] and LIGER [23] were developed for integrating scRNA-seq and single-cell epigenomic data. Both of these methods first transform the epigenomic data into synthetic scRNA-seq data through estimating a "gene activity matrix," and then identify "anchors" between this synthetic data and scRNA-seq data through aligning them in a low-dimensional space. The gene activity matrix is created by simply summing all counts within the gene body +2 kb upstream. Such a strategy may introduce improper synthetic data due to the complex transcriptional regulatory mechanisms between gene expression and chromatin accessibility. The improper synthetic data may further lead to imperfect alignment when these methods are applied to parallel transcriptomic and epigenomic profiles, and is likely to affect downstream analysis. Moreover, the inference of interactions between transcriptomics and epigenetics often requires both measurements from the same single cell [8]. Here, we present a single-cell aggregation and integration (scAI) approach to integrate transcriptomic and epigenomic profiles (i.e., chromatin accessibility or DNA methylation) that are derived from the same cells. Unlike existing integration methods [16, 17, 22, 24,25,26], scAI takes into consideration the extremely sparse and near-binary nature of single-cell epigenomic data. Through iterative learning in an unsupervised manner, scAI aggregates epigenomic data in subgroups of cells that exhibit similar gene expression and epigenomic profiles. Those similar cells are computed through learning a cell-cell similarity matrix simultaneously from both transcriptomic and aggregated epigenomic data using a unified matrix factorization model.
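The "gene activity matrix" idea mentioned above, summing all counts within the gene body plus 2 kb upstream, can be sketched as follows. This is a minimal illustration with hypothetical data structures, not the actual Seurat or LIGER implementation; real packages use sorted interval overlap queries rather than this quadratic loop.

```python
import numpy as np

def gene_activity_matrix(peak_counts, peak_coords, genes, upstream=2000):
    """Sum peak counts overlapping each gene body + `upstream` bp upstream.

    peak_counts : (n_peaks, n_cells) array of scATAC-seq counts
    peak_coords : list of (chrom, start, end) tuples, one per peak
    genes       : dict mapping gene name -> (chrom, start, end, strand)
    Returns an (n_genes, n_cells) synthetic "expression" matrix.
    """
    n_cells = peak_counts.shape[1]
    activity = np.zeros((len(genes), n_cells))
    for gi, (gene, (chrom, gstart, gend, strand)) in enumerate(genes.items()):
        # extend the gene body upstream of the TSS, strand-aware
        lo = gstart - upstream if strand == "+" else gstart
        hi = gend if strand == "+" else gend + upstream
        for pi, (pchrom, pstart, pend) in enumerate(peak_coords):
            if pchrom == chrom and pstart < hi and pend > lo:  # interval overlap
                activity[gi] += peak_counts[pi]
    return activity
```

The output plays the role of the synthetic scRNA-seq data that Seurat and LIGER then align against the measured transcriptome.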
As such, scAI represents the transcriptomic and epigenomic profiles with biologically meaningful low-rank matrices, allowing identification of cell subpopulations; simultaneous visualization of cells, genes, and loci in a shared two-dimensional space; and inference of the transcriptional regulatory relationships. Through applications to eight simulated datasets and three published datasets, and comparisons with recent multiomics data integration methods, scAI is found to be an efficient approach to reveal cellular heterogeneity by dissecting multiple regulatory layers of single-cell data. Overview of scAI To deconvolute heterogeneous single cells from both transcriptomic and epigenomic profiles, we aggregate the sparse/binary epigenomic profile in an unsupervised manner to allow coherent fusion with transcriptomic profile while projecting cells into the same representation space using both the transcriptomic and epigenomic data. Using the normalized scRNA-seq data matrix X1 (p genes in n cells) and the single-cell chromatin accessibility or DNA methylation data matrix X2 (q loci in n cells) as an example, we infer the low-dimensional representations via the following matrix factorization model: $$ {\min}_{W_1,{W}_2,H,Z\ge 0}\alpha {\left\Vert {X}_1-{W}_1H\right\Vert}_F^2+{\left\Vert {X}_2\left(Z\circ R\right)-{W}_2H\right\Vert}_F^2+\lambda {\left\Vert Z-{H}^TH\right\Vert}_F^2+\gamma \sum \limits_j{\left\Vert {H}_{.j}\right\Vert}_1^2, $$ where W1 and W2 are the gene loading and locus loading matrices with sizes p × K and q × K (K is the rank), respectively. Each of the K columns is considered as a factor, which often corresponds to a known biological process/signal relating to a particular cell type. \( {W}_1^{ik} \) and \( {W}_2^{ik} \) are the loading values of gene i and locus i in factor k, and the loading values represent the contributions of gene i and locus i in factor k. 
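The objective in Eq. (1) can be evaluated directly from the low-rank factors. The NumPy sketch below mirrors the equation term by term; H, Z, and R are the cell loading, cell-cell similarity, and random sampling matrices defined in the surrounding text, and the function name is illustrative rather than taken from the scAI package.

```python
import numpy as np

def scai_objective(X1, X2, W1, W2, H, Z, R, alpha=1.0, lam=1.0, gamma=1.0):
    """Evaluate the scAI loss of Eq. (1).

    X1 : (p, n) normalized scRNA-seq matrix
    X2 : (q, n) binary scATAC-seq / DNA methylation matrix
    W1, W2 : (p, K) gene loading and (q, K) locus loading matrices
    H : (K, n) cell loading matrix; Z, R : (n, n) similarity / sampling masks
    """
    rna_term = alpha * np.linalg.norm(X1 - W1 @ H, "fro") ** 2
    epi_term = np.linalg.norm(X2 @ (Z * R) - W2 @ H, "fro") ** 2  # aggregated epigenome
    sim_term = lam * np.linalg.norm(Z - H.T @ H, "fro") ** 2
    sparse_term = gamma * np.sum(np.abs(H).sum(axis=0) ** 2)      # squared l1 norm per cell
    return rna_term + epi_term + sim_term + sparse_term
```

Minimizing this loss over nonnegative W1, W2, H, and Z (e.g., with multiplicative or alternating updates, not shown here) yields the factorization scAI builds on.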
H is the cell loading matrix with size K × n (H.j is the jth column of H), and the entry Hkj is the loading value of cell j when mapped onto factor k. Z is the cell-cell similarity matrix. R is a binary matrix generated by a binomial distribution with a probability s. α, λ, γ are regularization parameters, and the symbol ∘ denotes element-wise (Hadamard) multiplication. The model aims to address two major challenges simultaneously: (i) the extremely sparse and near-binary nature of single-cell epigenomic data and (ii) the integration of this binary epigenomic data with the scRNA-seq data, which are often continuous after being normalized. Aggregation of epigenomic profiles through iterative refinement in an unsupervised manner To address the extremely sparse and binary nature of the epigenomic data, we aggregate epigenomic data of similar cells based on the cell-cell similarity matrix Z, which is simultaneously learned from both transcriptomic and epigenomic data iteratively. Epigenomic data can be simply aggregated by X2Z. However, this strategy may lead to over-aggregation: for example, within one subpopulation, similar cells would exhibit almost the same aggregated epigenomic signals, improperly reducing the cellular heterogeneity. To reduce such over-aggregation, a binary matrix R, generated from a binomial distribution with probability s, is used to randomly sample similar cells. After normalizing H so that each row sums to 1 at each iteration, and Z ∘ R so that each column sums to 1, the aggregated epigenomic profiles are represented by X2(Z ∘ R). The ith column of X2(Z ∘ R) represents the weighted combination of epigenomic signals from some cells similar to the ith cell. These strategies not only enhance epigenomic signals, but also maintain cellular heterogeneity within and between different subpopulations.
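A hedged sketch of the aggregation step just described: draw the binary mask R from a binomial distribution with probability s, column-normalize Z ∘ R, and form X2(Z ∘ R). Details such as the exact normalization order may differ in the released scAI implementation.

```python
import numpy as np

def aggregate_epigenome(X2, Z, s=0.25, rng=None):
    """Aggregate sparse epigenomic signals over similar cells.

    X2 : (q, n) binary epigenomic matrix
    Z  : (n, n) nonnegative cell-cell similarity matrix
    s  : binomial sampling probability that limits over-aggregation
    Returns the (q, n) aggregated matrix X2 @ (Z o R), where Z o R is
    column-normalized so each column sums to 1.
    """
    rng = np.random.default_rng(rng)
    n = Z.shape[1]
    R = rng.binomial(1, s, size=(n, n))   # random subsampling mask
    ZR = Z * R                            # Hadamard product Z o R
    col_sums = ZR.sum(axis=0)
    col_sums[col_sums == 0] = 1.0         # guard against empty columns
    ZR = ZR / col_sums                    # column-stochastic weights
    return X2 @ ZR
```

Each output column is then a convex combination of epigenomic signals from cells similar to the corresponding cell, so the aggregated matrix is continuous and can be fused with the scRNA-seq data.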
Integration of binary and count-valued data via projection onto the same low-dimensional space Through aggregation, the extremely sparse and near-binary data matrix X2 is transformed into the signal-enhanced continuous matrix X2(Z ∘ R), allowing coherent fusion with transcriptomic measurements (Fig. 1a). These two matrices are projected onto a common coordinate system represented by the first two terms in the optimization model (Eq. (1)). In this way, cells are mapped onto a K-dimensional space with the cell loading matrix H, and the cell-cell similarity matrix Z is approximated by HᵀH, as represented by the third term in Eq. (1). The sparseness constraint on each column of H is imposed by the last term of Eq. (1). Overview of scAI. a scAI learns aggregated epigenomic profiles and low-dimensional representations from both transcriptomic and epigenomic data in an iterative manner. scAI uses parallel scRNA-seq and scATAC-seq/single-cell DNA methylation data as inputs. Each row represents one gene or one locus, and each column represents one cell. In the first step, the epigenomic profile is aggregated based on a cell-cell similarity matrix that is randomly initialized. In the second step, transcriptomic and aggregated epigenomic data are simultaneously decomposed into a set of low-rank matrices. Entries in each factor (column) of the gene loading matrix (gene space), locus loading matrix (epigenomic space), and cell loading matrix (cell space) represent the contributions of genes, loci, and cells for the factor, respectively. In the third step, a cell-cell similarity matrix is computed based on the cell loading matrix. These three steps are repeated iteratively until the stop criterion is satisfied. b scAI ranks genes and loci in each factor based on their loadings. For example, four genes and loci are labeled with the highest loadings in factor 3.
c Simultaneous visualization of cells, marker genes, marker loci, and factors in a 2D space by an integrative visualization method VscAI, which is constructed based on the four low-rank matrices learned by scAI. Small filled dots represent the individual cells, colored by true labels. Large red circles, black filled dots, and diamonds represent projected factors, marker genes, and marker loci, respectively. d The regulatory relationships are inferred via correlation analysis and nonnegative least square regression modeling of the identified marker genes and loci. An arch represents a regulatory link between one locus and the transcription start site (TSS) of each marker gene. The arch colors indicate the Pearson correlation coefficients for gene expression and loci accessibility. The red stem represents the TSS region of the gene, and the black stem represents each locus Downstream analysis using the inferred low-dimensional representations scAI simultaneously decomposes transcriptomic and epigenomic data into multiple biologically relevant factors, which are useful for a variety of downstream analyses (Fig. 1b–d). (1) The cell subpopulations can be identified from the cell loading matrix H using a Leiden community detection method (see the "Methods" section). (2) The genes and loci in the ith factor are ranked based on the loading values in the ith columns of W1 and W2 (see Fig. 1b and the "Methods" section). (3) To simultaneously analyze both gene and loci information associated with cell states, we introduce an integrative visualization method, VscAI. By combining these learned low-rank matrices (W1, W2, H, and Z) with the Sammon mapping [27] (see the "Methods" section), VscAI simultaneously projects genes and loci that separate the cell states into a two-dimensional space alongside the cells (Fig. 1c). 
(4) Finally, the regulatory relationships between the marker genes and the chromosome regions in each factor or cell subpopulation are inferred by combining correlation analysis and nonnegative least squares regression modeling of gene expression and chromatin accessibility (see Fig. 1d and the "Methods" section). Overall, these functionalities allow the deconvolution of cellular heterogeneity and reveal regulatory links across the transcriptomic and epigenomic layers.

Model validation and comparison using simulated data

To evaluate scAI, we simulated eight single-cell datasets with a sparse count data matrix X1 and a sparse binary data matrix X2 (i.e., paired scRNA-seq and scATAC-seq). To recapitulate the properties of single-cell multiomics data (e.g., a high abundance of zeros and binary epigenetic data), we generated bulk RNA-seq and DNase-seq profiles from the same sample with MOSim [28]. Then, we added dropout effects and binarized the data. A detailed description of the simulation approach and the simulated data is provided in Additional file 1: Supplementary methods (Simulation datasets) and Additional file 2: Table S1. These datasets encompass eight scenarios with different transcriptomic/epigenomic properties: different sparsity levels (dataset 1), different noise levels (dataset 2), missing clusters in the epigenomic profiles (i.e., clusters defined from gene expression do not reflect epigenetic distinctions) (dataset 3), missing clusters in the transcriptomic profiles (i.e., clusters defined from the epigenetic profile do not reflect gene expression distinctions) (dataset 4), discrete cell states (dataset 5), a continuous biological process (dataset 6), imbalanced cluster sizes with the same number of clusters defined from both transcriptomic and epigenomic profiles (dataset 7), and imbalanced cluster sizes with missing clusters in the epigenomic profiles (dataset 8).
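The regulatory-link inference in step (4) combines correlation screening with nonnegative least squares regression. A minimal sketch of that idea, regressing one gene's expression on the accessibility of candidate loci; the correlation threshold and the per-gene setup are illustrative assumptions, not the exact procedure from the "Methods" section:

```python
import numpy as np
from scipy.optimize import nnls

def link_loci_to_gene(gene_expr, loci_access, r_threshold=0.3):
    """Hedged sketch: keep loci whose accessibility correlates with the gene's
    expression across cells, then fit nonnegative least squares to weight them.
    The 0.3 threshold and this exact filtering rule are assumptions."""
    n_loci = loci_access.shape[0]
    # Pearson correlation of each locus with the gene across cells
    r = np.array([np.corrcoef(loci_access[i], gene_expr)[0, 1]
                  for i in range(n_loci)])
    keep = np.where(r > r_threshold)[0]
    # Nonnegative regression of expression on the retained loci
    coefs, _ = nnls(loci_access[keep].T, gene_expr)
    return keep, coefs
```

Positive coefficients then suggest candidate cis-regulatory links between the retained loci and the gene.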
First, we compared the visualization of cells using the scRNA-seq data, the scATAC-seq data, and the aggregated scATAC-seq data, respectively (Fig. 2a). Due to the inherent sparsity and noise in the data, the cells were not well separated in the scRNA-seq data and the scATAC-seq data using Uniform Manifold Approximation and Projection [29] (UMAP) (Fig. 2a) and t-SNE (Additional file 2: Figure S1), in particular for datasets 5 and 6. However, the cell subpopulations were clearly distinguishable in the low-dimensional space when using the aggregated scATAC-seq data generated by scAI for all eight scenarios (Fig. 2a). In addition, the cell subpopulations were well separated when visualized by VscAI, which embeds cells in two dimensions by leveraging the information from both the scRNA-seq and scATAC-seq data (Fig. 2b). For datasets 3 and 4, in which one cluster was missing in either the transcriptomic or the epigenomic data alone, scAI was able to reveal all the anticipated clusters. For example, in dataset 4, only four clusters were revealed in the scRNA-seq data, but five clusters were present in the scATAC-seq data (the fourth row of Fig. 2a). Without the addition of the scATAC-seq information, four clusters were detected (Additional file 2: Figure S2), whereas the integration of both the scRNA-seq and scATAC-seq data revealed five clusters. In the first five datasets, the cell states are discrete, whereas dataset 6 depicts a continuous transition process over five different time points. The continuous transitions across these five cell states were well characterized by scAI with the aggregated scATAC-seq data but could not be captured using only the sparse scATAC-seq data with UMAP (the sixth row of Fig. 2a) or t-SNE (Additional file 2: Figure S1). For datasets 7 and 8, with imbalanced cluster sizes, scAI accurately revealed all the expected clusters.
In particular, only three cell clusters were observed in the low-dimensional space of both the scATAC-seq and aggregated scATAC-seq data in dataset 8 (the eighth row of Fig. 2a). However, five cell clusters were well distinguished after integration with the scRNA-seq data, as shown in the VscAI space (the eighth row of Fig. 2b).

Performance of scAI and its comparison with MOFA using eight simulated datasets. a 2D visualization of cells by applying UMAP to the scRNA-seq, scATAC-seq, and aggregated scATAC-seq data obtained from scAI. Each row shows one example of each scenario from the simulated datasets. Cells are colored based on their true labels. b Cells are visualized by VscAI. c Accuracy of scAI (evaluated by AUC) in reconstructing the cell loading (blue), gene loading (orange), and locus loading (yellow) matrices, respectively. For each scenario, we generated a set of simulated data using five different parameters, which are indicated on the x-axis labels. The numbers outside and inside the brackets represent the parameters in the simulated scRNA-seq and scATAC-seq data, respectively. We applied scAI to each dataset 10 times with different seeds and then calculated the average AUCs with respect to the ground truth of the loading matrices. Datasets 5 and 6 were generated based on real datasets, which do not have ground truth for the gene/locus loading matrices. d Variance explained by each latent factor (LF) using MOFA. e Comparison of the accuracy (evaluated by normalized mutual information, NMI) of scAI and MOFA in identifying cell clusters

Next, we used the area under the receiver operating characteristic curve (AUC) to quantitatively evaluate the accuracy of scAI in reconstructing the cell loading matrix H, the gene loading matrix W1, and the locus loading matrix W2, which were used for identifying cell clusters, factor-specific genes, and factor-specific loci in the downstream analyses, respectively.
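The AUC itself can be computed with a standard rank-based formula (equivalent to the Mann-Whitney U statistic); a minimal version for scoring one reconstructed loading vector against a binary ground-truth membership vector:

```python
import numpy as np

def auc_score(truth, scores):
    """Rank-based AUC: the fraction of (positive, negative) pairs in which
    the positive entry receives the higher score; ties count as half."""
    pos = scores[truth == 1]
    neg = scores[truth == 0]
    gt = (pos[:, None] > neg[None, :]).sum()   # correctly ordered pairs
    eq = (pos[:, None] == neg[None, :]).sum()  # tied pairs
    return (gt + 0.5 * eq) / (len(pos) * len(neg))
```

An AUC of 1.0 means every true member of a cluster or factor is ranked above every non-member by the reconstructed loadings.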
scAI was found to perform robustly and accurately across different sparsity and noise levels (Fig. 2c). For example, even with the sparsity levels of X1 and X2 at 98% and 99.6% in dataset 1, and 79.4% and 97.5% in dataset 5, scAI was able to reconstruct these loading matrices with high accuracy (Fig. 2c). Moreover, to study whether stronger noise or initial data with less discriminative patterns affect the performance of scAI, we increased the noise and sparsity levels of simulation dataset 8 and also made the initial data less discriminative among clusters by increasing the parameter value coph. We found that the noise levels and the coph values had little effect on the reconstructed loading matrices. The sparsity level affects the performance only when it exceeds a threshold (e.g., when the sparsity of the scRNA-seq and scATAC-seq data is larger than 98.9% and 99.5%, respectively), as shown in Additional file 2: Figure S3. Finally, we applied MOFA [17], a multiomics data integration model designed for bulk and single-cell data, to the eight datasets (Fig. 2d, e). MOFA decomposes multiomics data matrices into several weight matrices and one factor matrix using a statistically generalized principal component analysis method. For all the datasets except dataset 7, the factors learned by MOFA only accounted for the variability of the scRNA-seq data and could not capture the variance in the scATAC-seq data (Fig. 2d). We also compared scAI with MOFA on cell clustering and found that MOFA did not perform as well as scAI on these simulated datasets (Fig. 2e). The analysis of the simulated data indicates scAI's potential in aggregating scATAC-seq data, identifying important genes and loci, and uncovering discrete and continuous cell states in single-cell transcriptomic and epigenomic data with inherently high sparsity and noise levels.
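For reference, the sparsity levels quoted above are simply the fraction of zero entries in a data matrix:

```python
import numpy as np

def sparsity(X):
    """Fraction of zero entries, matching how sparsity levels are reported
    (e.g., 98% for X1 and 99.6% for X2 in dataset 1)."""
    return 1.0 - np.count_nonzero(X) / X.size
```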
Identifying subpopulations with subtle transcriptomic differences but strong chromatin accessibility differences

To evaluate scAI's ability to capture cell subpopulations in complex tissues, we analyzed 8837 cells from mammalian kidney using paired chromatin accessibility and transcriptome data [6]. In a previous study, a semi-supervised clustering method was applied to the scRNA-seq data, and aggregated epigenomic profiles were then generated based on the identified cell clusters [6]. As such, the cellular heterogeneity induced by epigenetics could not be captured by this method. scAI identified 17 subpopulations with either distinct gene expression or distinct chromatin accessibility profiles with the default resolution parameter set to 1 (see the "Methods" section; Fig. 3a, b, d; Additional file 1). Compared to the original findings [6], our integrative analysis of the transcriptomic and chromatin accessibility profiles indicated that known cell types such as Collecting Duct Principal Cells (CDPC) were much more heterogeneous. We identified two subpopulations of CDPC (C9 and C12, Additional file 2: Figure S4a) that were captured by factor 2 and factor 8, respectively (Fig. 3c). Gene loading analysis of these two factors revealed that Fxyd4 and Frmpd4 are specific markers of C9, while Egfem1 and Calb1 are specific markers of C12 (Fig. 3c, and Additional file 2: Figure S4b and c). Importantly, while some identified subpopulations showed only subtle differences in their transcriptomic profiles, they exhibited distinct patterns in their epigenomic profiles (Fig. 3b, d). For example, C2 and C7 (subpopulations of proximal tubule S3 cells (type 1)) and C8 and C10 (subpopulations of proximal tubule S1/S2 cells) have similar gene expression profiles (Fig. 3b) but exhibit strongly differential accessibility patterns (Fig. 3e). The average signals of each locus across cells in each subpopulation are significantly different (Fig. 3e).
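The per-subpopulation comparison above reduces to averaging each locus across the cells of every cluster; a minimal sketch (function name and API are illustrative, not from the scAI package):

```python
import numpy as np

def cluster_average_accessibility(X, clusters):
    """Average signal of each locus (rows of X) across the cells (columns)
    in each cluster, as used to contrast subpopulations such as C2 vs C7
    that share similar expression but differ in accessibility."""
    labels = np.unique(clusters)
    avg = np.column_stack([X[:, clusters == c].mean(axis=1) for c in labels])
    return avg, labels
```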
Identifying new epigenomics-induced subpopulations by simultaneously analyzing transcriptomic and epigenomic profiles in mouse kidney. a UMAP visualization of cells, which are colored by the inferred subpopulations. b Heatmap of differentially expressed genes. For each cluster, the top 10 marker genes and their relative expression levels are shown. Selected genes for each cluster are color-coded and shown on the right. c UMAP plots show the cell cluster-specific patterns of the identified factors (left), and the ranking plots show the top marker genes in the corresponding factors (right). In the projected factor pattern plots, cells are colored based on their loading values in the factor from the inferred cell loading matrix. In the gene ranking plots, genes are ranked based on their gene scores in the factor from the gene loading matrix. Labeled genes are representative markers. d Heatmap showing the relative chromatin accessibility of cluster-specific loci. e Heatmap of the raw chromatin accessibility of individual cells (left) and the average chromatin accessibility of cell clusters (C2, C7, C8, and C10) (right) using differentially accessible loci among the cell clusters. f Regulatory information of eight identified cell clusters. The identities of these subpopulations are shown on the far left

To further characterize these differentially accessible loci and identify the specific transcriptional regulatory mechanisms of these epigenetics-induced subpopulations, we performed gene ontology enrichment and motif discovery analysis using GREAT and HOMER, respectively (Fig. 3f). Notably, for the two subpopulations C8 and C10 of proximal tubule S1/S2 cells, the C8-specific accessible loci were related to chromatin binding and the histone deacetylase complex, and were further enriched for binding motifs of MAFB and JUNB, both of which are known regulators of proximal tubule development [30].
The differentially accessible loci of C10 were enriched in the VEGFR signaling pathway, consistent with its role in the maintenance of tubulointerstitial integrity and the stimulation of proximal tubule cell proliferation [31]. Moreover, we applied chromVAR [32] to analyze the differentially accessible loci between C2 and C7, and between C8 and C10, respectively. chromVAR calculates bias-corrected deviations in accessibility: for each motif, a value is computed for each cell that measures how different the accessibility of loci containing that motif is from the expected accessibility based on the average across all cells. By performing hierarchical clustering on the calculated deviations of the top 30 most variable TFs, we found that these TFs were divided into two clusters, each specific to one particular cell subpopulation, consistent with the clustering by scAI (Additional file 2: Figure S5).

Revealing underlying transition dynamics by analyzing transcription and chromatin accessibility simultaneously

Next, we applied scAI to data from lung adenocarcinoma-derived A549 cells after 0, 1, and 3 h of 100 nM dexamethasone (DEX) treatment, including scRNA-seq and scATAC-seq data from 2641 co-assayed cells [6]. scAI revealed two factors, where factor 1 was enriched with cells from 0 h and factor 2 was enriched with cells from 3 h (Fig. 4a). Factor-specific genes and loci were identified by analyzing the gene and locus loading matrices (Fig. 4b). Among them, known markers of glucocorticoid receptor (GR) activation [33,34,35] (e.g., CKB and NFKBIA) were enriched in factor 2, and markers of early events after treatment [36] (e.g., ZSWIM6 and NR3C1) were enriched in factor 1. We collected the TFs of these known markers from the hTFtarget database (http://bioinfo.life.hust.edu.cn/hTFtarget/). Interestingly, TF motifs such as FOXA1 [37], CEBPB [38], CREB1, NR3C1, SP1, and GATA3 [39] also had high enrichment scores in the inferred factors (Fig.
4c), consistent with these motifs being key transcription factors of GR activation markers [40]. In particular, CEBPB binding has been shown to be positively associated with early GR binding [41], and GR binds near CREB1 binding sites, making the enhancer chromatin structure more accessible [42]. In the low-dimensional space visualized by VscAI, markers of early events, such as ZSWIM6 and NR3C1, were located near cells from 0 h, while markers of GR activation, such as CKB, NFKBIA, and ABHD12, were located near cells from 3 h (Fig. 4d), providing a direct and intuitive way to interpret the data.

Revealing cellular heterogeneity and regulatory links of dexamethasone-treated A549 cells. a Heatmap of the cell loading matrix H obtained by scAI. Cells are ordered and divided into early, transition, and late stages based on hierarchical clustering. The bar at the bottom indicates the collection time of each cell. b Genes are ranked in each factor based on gene scores calculated from the gene loading matrix, in which the known markers are indicated. c Loci are also ranked based on locus scores from the locus loading matrix, in which the motifs and the corresponding logos of some TFs of the known marker genes are indicated. The binding TFs of the known marker genes and the chromosome loci of these motifs were obtained from the hTFtarget database. d Visualization of cells by VscAI. Known marker genes (left panel) and motifs related to these marker genes (right panel) were projected onto the same low-dimensional space. Some motifs, such as SMAD3 and NR3C1, are shown in two opposite positions because they are enriched in different loci. These loci were located within 10 kb of the marker genes' regulatory regions, which were extracted from the database (http://bioinfo.life.hust.edu.cn/hTFtarget/) in lung tissue. Here, we visualized the motifs instead of individual loci for easier understanding.
e The fold enrichment (FE) values of the inferred regulatory links of the known markers, which were validated using the hTFtarget database. f Inferred regulatory links of the gene ABHD12 for each factor and the epigenome browser visualization of DNase-seq data and NR3C1 ChIP-seq data derived from chromatin regions near the TSS of ABHD12. The red stem represents the TSS region of the gene, and the black stem represents each locus. Five regulators corresponding to the inferred regulatory links are indicated. The gray region shows the distinct regulatory links (regulated by NR3C1) between 0 h and 1 and 3 h. g The pseudotemporal chromatin accessibility trajectory was inferred with the aggregated scATAC-seq data. Cells were visualized in the first two diffusion components (DCs). The gray line is the fitted principal curve. Bottom: the percentages of cells at the three time points along the inferred pseudotime, which was divided into 10 bins. h Inferred pseudotemporal dynamics of three key genes. The black line indicates the fitted expression levels using cubic splines. i Left: "rolling wave" plot showing the normalized smoothed accessibility data for the pseudotime-dependent accessible loci clustered into two groups. Middle: the normalized smoothed gene expression data for the pseudotime-dependent genes along the accessibility trajectory inferred using the aggregated scATAC-seq data. Loci and genes are ordered based on the onset of activation. Right: the corresponding gene dynamics along the cellular trajectory inferred using only the scRNA-seq data

To systematically assess the top-ranked genes and loci in the identified factors, we performed pathway enrichment analysis of the genes with MSigDB [43] and of the loci with GREAT [44]. As expected, several processes relevant to GR activation were uncovered, such as the "neurotrophin signaling pathway," a pathway previously reported to have a direct effect on GR function [45].
The "Fc epsilon RI signaling pathway" was enriched in factor 2 (Additional file 2: Figure S6a), in good agreement with the report that the reduction of Fc epsilon RI levels might be one of the favorable anti-allergic functions of glucocorticoids in mice [46]. Furthermore, processes such as "genes involved in glycogen breakdown (glycogenolysis)," "genes involved in glycerophospholipid biosynthesis," and "pentose and glucuronate interconversions" were enriched in the nearby genes of the factor-specific loci (Additional file 2: Figure S6b). While DEX treatment of A549 cells is known to increase both transcription and promoter accessibility for markers of GR activation [6], little is known about the underlying regulatory relationships. We inferred regulatory links between cis-regulatory elements and target marker genes using perturbation-based correlation analysis and further identified the bound TFs that regulate target marker genes using nonnegative least squares regression (see the "Methods" section). To assess the accuracy of the inference, we evaluated whether these regulatory relationships were enriched in an independent database of human TF-target relationships (hTFtarget, http://bioinfo.life.hust.edu.cn/hTFtarget/) (see the "Methods" section). Encouragingly, high enrichment of the inferred regulatory relationships for the key markers of GR activation was observed (Fig. 4e), and the inferred regulatory relationships could be validated using ChIP-seq and DNase-seq data from ENCODE (https://www.encodeproject.org/). For the GR activation marker ABHD12, which was highly enriched in factor 2, we identified distinct regulatory links between factor 1 (enriched with cells from 0 h) and factor 2 (enriched with cells from 3 h). Among its regulators, the glucocorticoid receptor NR3C1 was revealed in factor 2 (Fig. 4f).
Visualizing the chromatin signals of the NR3C1 ChIP-seq data and the DNase-seq data using the WashU Epigenome Browser (https://epigenomegateway.wustl.edu/browser), we found that most cis-regulatory elements are located in the open regions of the DNase-seq data, and that NR3C1 exhibits ChIP-seq signals within 50 kb of the transcription start site (TSS) of ABHD12 at 1 and 3 h but no signals at 0 h. This is consistent with our prediction that the regulation of ABHD12 by NR3C1 exists in factor 2 but not in factor 1. scAI provides an unsupervised way to aggregate sparse scATAC-seq data from similar cells through iterative refinement, which facilitates and enhances the direct analysis of scATAC-seq data. We next assessed the performance of the aggregated scATAC-seq data, in comparison with the raw scATAC-seq or scRNA-seq data, in terms of the identification of cell states, the low-dimensional visualization of cells, and the reconstruction of pseudotemporal dynamics. The previous study [6] identified two clusters comprising a group of untreated cells and a group of DEX-treated cells, in which the treated cells collected at 1 and 3 h formed one cluster. Our analysis recovered three cell states: an early state enriched with cells from 0 h, a transition state enriched with cells from 1 h, and a late state enriched with cells from 3 h (Fig. 4a). Due to the high sparsity (96.8% for scRNA-seq and 99.2% for scATAC-seq) and the near-binary nature of the scATAC-seq data, dimension reduction methods such as t-SNE failed to distinguish the different cell states (Additional file 2: Figure S6c). However, scAI uncovered distinct cell subpopulations in the low-dimensional space based on the aggregated data (Additional file 2: Figure S6c). We next studied the pseudotemporal dynamics of A549 cells using our previously developed method scEpath [47].
Compared to the trajectory inferred using only the scRNA-seq data, which lacked a well-characterized GR activation trend across the three measured time points (Additional file 2: Figure S6d), a clear and consistent trajectory was inferred when using the aggregated scATAC-seq data (Fig. 4g, h). We identified pseudotime-dependent genes and loci that changed significantly along the inferred trajectories. The pseudotemporal dynamics of these genes along the trajectory inferred using only the scRNA-seq data were discontinuous, whereas the aggregated scATAC-seq data obtained from scAI led to a continuous trajectory (Fig. 4i). Previously, we used the measure scEnergy to quantify developmental processes [47]. Here, we found no significant differences in the single-cell energies between time points when using only the scRNA-seq data. However, significantly decreased scEnergy values were seen during treatment according to the aggregated scATAC-seq data (Additional file 2: Figure S6e). Overall, the scATAC-seq data aggregated by scAI better characterize the dynamics of DEX treatment, and scAI suggests new mechanisms in the GR activation process of DEX-treated A549 cells, including a transition state and differential cis-regulatory relationships.

Uncovering coordinated changes in the transcriptome and DNA methylation along a differentiation trajectory

To study data from simultaneous single-cell methylome and transcriptome sequencing [3, 8, 48], we applied scAI to a dataset of 77 mouse embryonic stem cells (mESCs), including 13 cells cultured in "2i" media and 64 serum-grown cells, profiled by the parallel single-cell methylation and transcriptome sequencing technique scM&T-seq [3]. The DNA methylation levels were characterized in three different genomic contexts, including CpG islands, promoters, and enhancers, which are usually linked to transcriptional repression [49, 50].
Because DNA methylation data are sparse and binary, direct dimension reduction may fail to capture cell subpopulations (Fig. 5a). scAI was able to distinguish cell subpopulations after aggregation (Fig. 5a), revealing three subpopulations, C1, C2, and C3. Among them, C3 was captured by factor 1 and contained the cells cultured in "2i" media and a few serum-grown cells, while C1 and C2 were captured by factors 2 and 3, respectively, and contained the other serum-grown cells (Additional file 2: Figure S7).

Uncovering coordinated changes between the transcriptome and DNA methylation within an embryonic stem cell differentiation trajectory. a Comparisons of principal component analyses (PCA) of the scRNA-seq data, the single-cell DNA methylation data, and the aggregated single-cell DNA methylation data learned by scAI. Cells are colored based on the cell subpopulations identified using scAI. Marker shapes denote the culture conditions. b Heatmap of the expression levels and methylation levels of cluster-specific marker genes (left), loci (middle), and loci within 500 kb of the TSS of the marker genes (right) in the three cell clusters. c Genes and loci are ranked based on their enrichment scores in each factor. Labeled genes (top) are known pluripotency or differentiation markers. Labeled loci (bottom), including CpG sites, enhancers, and promoters, are located within 500 kb of the TSS of the marker genes in each factor. d VscAI visualization of cells and known pluripotency and differentiation markers derived from the transcriptome (top) and DNA methylation (bottom) data

Based on the top gene and locus loadings in each factor, we identified 688, 877, and 422 marker genes and 2164, 953, and 4461 differentially methylated loci in C1, C2, and C3, respectively, with distinct gene expression and methylation patterns among the three groups (Fig. 5b). Moreover, the methylation levels of loci near the marker genes also showed group-specific patterns (Fig. 5b).
Several known pluripotency markers (e.g., Esrrb, Tcl1, Tbx3, Fbxo15, and Zfp42) exhibited the highest gene enrichment scores in factor 1 but the lowest in factors 2 and 3. In contrast, differentiation markers, such as Krt8, Tagln, and Krt19, exhibited higher gene enrichment scores in factor 3 but lower scores in factors 1 and 2 (Fig. 5c). Factor 2 exhibited an intermediate state with relatively low expression of both pluripotency and differentiation markers. Interestingly, several new marker genes of this intermediate state were observed, such as Fgf5, an early differentiation marker involved in neural differentiation in human embryonic stem cells [51]. Factor-specific loci located in the CpG, promoter, and enhancer regions of the marker genes are also shown in Fig. 5c. The pluripotency markers Esrrb and Tcl1 had higher gene enrichment scores, and their corresponding CpG, promoter, and enhancer regions had higher locus enrichment scores, in factor 1. This relationship is consistent with the fact that DNA methylation in CpG, promoter, and enhancer regions often exhibits a negative relationship with the expression level of target genes. A continuous differentiation trajectory, characterized by the differentiation of naïve pluripotent cells (NPCs) into primed pluripotent cells and ultimately into differentiated cells (DCs), was observed using VscAI (Fig. 5d). Additionally, the embedded genes and factors showed how specific genes and factors contribute to the differentiation trajectory. For example, pluripotency markers, such as Zfp42, Tex19.1, Fbxo15, Morc1, Jam2, and Esrrb [52, 53], were visually close to factor 1, while differentiation markers, such as Krt19 and Krt8 [54], were close to factor 3 (Fig. 5d).
Interestingly, although neither pluripotency nor differentiation markers were highly expressed in the early differentiated state captured by factor 2, some methylated loci of these markers (e.g., the CpG regions of Zfp42 and Tex19.1, the enhancer regions of Jam2 and Tcl1, and the promoter region of Anxa3) were enriched in factor 2 (Fig. 5d). These observations might arise because other regions (CpG, enhancer, or promoter) of these genes are methylated, or because DNA methylation is not the main driving force of their transcriptional silencing. Overall, scAI reveals coordinated changes between the transcriptome and DNA methylation along the differentiation process.

Comparison with three multiomics data integration methods

We next compared scAI with three recent single-cell integration methods, MOFA [17], Seurat (version 3) [22], and LIGER [23], on the A549 and kidney datasets. Similar to the observations on the simulated datasets (Fig. 2d), MOFA could not capture the variation in the scATAC-seq data, as the variances explained by the learned factors in the scATAC-seq data were nearly zero (Additional file 1: Supplementary methods (Details of data analysis by MOFA) and Additional file 2: Figure S8a-e). While Seurat and LIGER were designed for connecting cells measured in different experiments, we applied them to the two co-assayed single-cell multiomics datasets to test whether they are able to link the co-assayed cells. We assessed the comparison using two metrics: (a) the entropy of batch mixing and (b) the silhouette coefficient. The entropy of batch mixing measures the uniformity of mixing of two samples in the aligned space [55]; here, the scRNA-seq and scATAC-seq profiles were treated as two batches, and a higher entropy value means better alignment.
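A minimal sketch of the entropy-of-batch-mixing metric as just described; the neighborhood size and number of sampled cells are illustrative choices, not the exact settings of [55]:

```python
import numpy as np
from collections import Counter

def batch_mixing_entropy(coords, batch, k=10, n_samples=50, seed=0):
    """Average Shannon entropy of batch labels among each sampled cell's
    k nearest neighbors in the aligned space; higher values mean the two
    batches (e.g., RNA and ATAC profiles) are better mixed."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(coords), size=min(n_samples, len(coords)), replace=False)
    ent = []
    for i in idx:
        d = np.linalg.norm(coords - coords[i], axis=1)
        nn = np.argsort(d)[1:k + 1]  # k nearest neighbors, self excluded
        counts = np.array(list(Counter(batch[j] for j in nn).values()), dtype=float)
        p = counts / counts.sum()
        ent.append(-(p * np.log(p)).sum())
    return float(np.mean(ent))
```

Two perfectly separated batches give an entropy of 0, while interleaved batches approach log 2.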
The silhouette coefficient quantifies the separation between cell groups using distance matrices calculated from a low-dimensional space [55]; here, the cell group labels were taken from the original study [6], and a higher silhouette coefficient indicates better preservation of the differences and structures between cell groups. The t-SNE analysis shows that the co-assayed cells were aligned better by LIGER than by Seurat when the two methods were applied to the A549 dataset (Fig. 6a). This observation was further confirmed by computing the entropy of batch mixing based on the aligned t-SNE space. We also computed the entropy of a perfect alignment (i.e., the t-SNE coordinates of each pair of co-assayed cells are identical) and found that LIGER showed a higher entropy value than Seurat, but a lower entropy than the perfect alignment (Fig. 6a). In addition, we explored the quality of time point-based grouping of cells in the t-SNE space. Cells from 1 and 3 h were mixed together in the t-SNE space generated by Seurat, while there was a gradual change of cells from 0 to 3 h in the t-SNE space generated by LIGER (Fig. 6b). We also performed t-SNE on the cell loading matrix inferred by scAI (Additional file 2: Figure S8f) and found that scAI was able to capture the gradual change of cells transitioning from 0 to 3 h. Quantitatively, scAI produced significantly higher silhouette coefficients than both Seurat and LIGER (Fig. 6b).

Comparisons with multiomics data integration methods. a t-SNE visualizations of scRNA-seq and scATAC-seq data from co-assayed A549 cells, colored by measurement (RNA vs. ATAC), after integration with Seurat (left) and LIGER (middle). Right panel: comparison of the alignment score (quantified by the entropy of batch mixing) from the perfect alignment (termed the gold standard) with that computed from the aligned t-SNE space using Seurat and LIGER. p values are from the Wilcoxon rank-sum test. b Cells are colored by the data collection times.
Right panel: comparison of the silhouette coefficient computed from the t-SNE coordinates of each cell generated by scAI with that computed from the aligned t-SNE space using Seurat and LIGER. c, d UMAP visualizations of scRNA-seq and scATAC-seq data from co-assayed mouse kidney cells colored by measurement (RNA vs. ATAC) (c) and published cell labels (d) after integration with Seurat and LIGER. The alignment scores and silhouette coefficients are also shown

In the kidney dataset, by computing the entropy of batch mixing based on the aligned UMAP space, we observed significantly lower entropy for Seurat and LIGER than for the perfect alignment (Fig. 6c). We then calculated the silhouette coefficient using the UMAP space for all three methods (Fig. 6d and Additional file 2: Figure S8f). Again, significantly higher silhouette coefficients were observed for scAI than for Seurat and LIGER (Fig. 6d). Together, these results suggest that integration methods designed for measurements in different cells (e.g., Seurat and LIGER) may not accurately identify correspondences between co-assayed cells, leading to errors in downstream analysis, and that the integration of parallel single-cell omics data needs specialized methods, such as scAI, that deal with the inherently high sparsity of the epigenomic data and better preserve the intrinsic differences between cell subpopulations.

Comparison with methods using single omics data

To evaluate the significance of parallel multiomics profiling over single omics data, we compared scAI with methods that use only transcriptomic data or only epigenomic data on both simulated and real datasets. Specifically, we compared scAI with two methods designed for scRNA-seq data alone, Seurat and SC3 [56], and two methods designed for scATAC-seq data alone, Signac (https://satijalab.org/signac/) and scABC [57].
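Cluster agreement in these comparisons is scored with normalized mutual information (NMI); a self-contained version using arithmetic-mean normalization (the normalization choice is an assumption):

```python
import numpy as np

def nmi(labels_a, labels_b):
    """Normalized mutual information between two clusterings: mutual
    information divided by the mean of the two label entropies. 1 means
    identical partitions (up to relabeling), 0 means independence."""
    a, b = np.asarray(labels_a), np.asarray(labels_b)
    ua, ub = np.unique(a), np.unique(b)
    mi = 0.0
    for x in ua:
        for y in ub:
            pxy = np.mean((a == x) & (b == y))  # joint probability
            if pxy > 0:
                mi += pxy * np.log(pxy / (np.mean(a == x) * np.mean(b == y)))
    entropy = lambda l, u: -sum(np.mean(l == v) * np.log(np.mean(l == v)) for v in u)
    return mi / ((entropy(a, ua) + entropy(b, ub)) / 2)
```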
On simulation datasets, we evaluated the performance of cell clustering using normalized mutual information (NMI). On real datasets, we compared the clustering produced by these four methods with the prior labels using UMAP. On simulation datasets, we observed comparable NMI values between scAI and SC3, but slightly lower values for Seurat (Additional file 2: Figure S9). For the clustering of scATAC-seq data, both Signac and scABC showed significantly lower NMI values compared to those of scAI using both scRNA-seq and scATAC-seq data. On the A549 dataset, by visualizing cells in UMAP, we found that both Seurat and SC3 were unable to detect the transition stage and distinguish cells from 1 and 3 h. Cell clusters identified by Signac and scABC using scATAC-seq data alone were inconsistent with the prior labels (Additional file 2: Figure S10a). On the kidney dataset, Seurat was unable to distinguish the DCTC cells from the CDPC cells, and Signac and scABC also produced clusters inconsistent with the prior labels (Additional file 2: Figure S10b). On the mESC dataset, while both Seurat and SC3 correctly identified the cell subpopulations, clusters identified by Signac and scABC were mixed together in UMAP (Additional file 2: Figure S10c). Overall, scAI is able to consistently identify the expected clusters, including clusters with subtle transcriptomic differences but strong chromatin accessibility differences (as shown in the kidney dataset), demonstrating the importance of integrating parallel single-cell multiomics data.

Discussion

A key challenge in analyzing single-cell multiomics data is to integrate and characterize multiple types of measurements coherently in a biologically meaningful manner. Often, different components in such multiomics measurements exhibit fundamentally different features; for example, some data are binary and inherently sparse, whereas others are more akin to a continuous distribution after normalization [9].
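The clustering comparisons above are scored with normalized mutual information (NMI). A minimal pure-Python sketch follows; the arithmetic-mean normalization is one common convention and an assumption here, since the text does not specify the variant.

```python
import math
from collections import Counter

def nmi(a, b):
    """Normalized mutual information between two clusterings a and b
    (label lists of equal length), normalized by the arithmetic mean
    of the two entropies."""
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    # mutual information from the joint and marginal label frequencies
    mi = sum(c / n * math.log((c / n) / (pa[x] / n * pb[y] / n))
             for (x, y), c in pab.items())
    ha = -sum(c / n * math.log(c / n) for c in pa.values())
    hb = -sum(c / n * math.log(c / n) for c in pb.values())
    return mi / ((ha + hb) / 2)
```

Identical (nontrivial) labelings score 1, and independent labelings score 0.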
We presented an unsupervised method, scAI, for integrating scRNA-seq data and single-cell chromatin accessibility or DNA methylation data obtained from the same single cells. scAI learns three sets of low-dimensional representations of the high-dimensional data: the gene, locus, and cell loading matrices, which describe the relative contributions of genes, loci, and cells to the inferred factors, as well as a cell-cell similarity matrix used for aggregating sparse epigenomic data. These learned low-rank matrices allow direct identification of cell subpopulations/states and the associated marker genes or loci that characterize each subpopulation, and provide a convenient visualization of cells, genes, and loci in the same low-dimensional space. Simultaneous analyses of the gene and locus loading matrices enable inference of the regulatory relationships between the transcriptome and the epigenome. Together, scAI provides an effective and biologically meaningful way to dissect heterogeneous single cells from both transcriptomic and epigenomic layers. The sparse and binary nature of single-cell ATAC-seq or DNA methylation data poses a computational challenge in analysis. Aggregation has been a primary method for analyzing such data [20]. For example, Cicero, an algorithm for predicting cis-regulatory DNA interactions from scATAC-seq data, aggregates similar cells using a k-nearest neighbors approach based on a reduced dimensional space (e.g., t-SNE and DDRTree) [58]. However, as shown in our simulated data and real co-assayed data, dimension reduction techniques often fail to capture cell similarity from chromatin accessibility or DNA methylation profiles. To deal with this difficulty, scAI combines sparse epigenomic profiles from subgroups of cells that exhibit similar gene expression and epigenomic profiles; these similar cells are identified by learning a cell-cell similarity matrix based on a matrix factorization model.
The differences between the learned similarity matrix and similarity matrices computed using only scRNA-seq or only aggregated scATAC-seq data were also investigated (Additional file 1 (Comparison of cell-cell similarity matrix) and Additional file 2: Figure S11 and Figure S12). Our iterative and unsupervised approach combines information from multiple omics layers by taking advantage of the strengths of optimization models. To investigate whether scAI might make epigenomic data seemingly more distinct than they actually are, we employed the following two strategies on simulation datasets. First, we compared the aggregated scATAC-seq data obtained from scAI with the raw ATAC-seq data prior to sparsification and binarization (termed bulk ATAC-seq data hereafter) in two ways: direct visualization of loci patterns using heatmaps and low-dimensional visualization of cells using UMAP. The bulk ATAC-seq data and the aggregated scATAC-seq data were found to exhibit the same loci patterns (Additional file 2: Figure S13a). Both bulk ATAC-seq and aggregated scATAC-seq data were found to be distinct across clusters (Additional file 2: Figure S13b). These observations were consistent across all eight simulation datasets. Second, we randomly permuted the scATAC-seq data across all cells before applying scAI to the scRNA-seq data and the permuted scATAC-seq data. We found that the aggregated permuted scATAC-seq data were still distinct across clusters in some cases in UMAP (Additional file 2: Figure S13c), partly because there were still differential accessibility patterns across these clusters after permutation (Additional file 2: Figure S13d). Next, we considered an extreme case in which all the values of the scATAC-seq data are equal and found that the aggregated scATAC-seq data did not produce any artificial clusters, partly due to our normalization strategy, in which scAI aggregates the scATAC-seq profile after normalizing Z ∘ R so that each column sums to 1.
On the other hand, scAI is able to identify cell clusters with high accuracy on all simulation datasets (Fig. 2e). Our analysis suggests that scAI robustly maintains cellular heterogeneity within and between different subpopulations while enhancing epigenomic signals. To investigate whether scAI introduces a high proportion of false positives during differential accessibility analysis using aggregated scATAC-seq data, we calculated the percentage of false positive differentially accessible loci based on the aggregated scATAC-seq data by comparing them to the differentially accessible loci identified using the bulk ATAC-seq data. Specifically, the percentage of false positives was defined as the percentage of differentially accessible loci that were not in the set of differentially accessible loci identified using the bulk ATAC-seq data. We adopted the Wilcoxon rank-sum test to compare the accessibility of cells in each subpopulation against the remaining cells. We found that the percentages of false positive differentially accessible loci were less than 7% on the simulation datasets (Additional file 2: Figure S14). Direct visualization of datasets 7 and 8, which have imbalanced cluster sizes, shows consistent loci patterns and highly overlapping differentially accessible loci between the aggregated scATAC-seq data and the bulk ATAC-seq data (Additional file 2: Figure S15). These results suggest that the aggregation strategy provides good control of false positives for differential accessibility analysis. Single-cell multiomics data are sparse and contain large amounts of missing values. The scRNA-seq data have two states: non-zero and zero values. The zero values might be either truly non-expressed or due to dropout events [59]. The single-cell methylation data have three states: methylated, unmethylated, and missing values.
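The false positive percentage defined above is simple set arithmetic; a minimal sketch (locus identifiers are illustrative):

```python
def false_positive_percentage(aggregated_loci, bulk_loci):
    """Percentage of differentially accessible loci called from the
    aggregated scATAC-seq data that are absent from the set called
    from the bulk ATAC-seq data (the definition used in the text)."""
    agg, bulk = set(aggregated_loci), set(bulk_loci)
    return 100.0 * len(agg - bulk) / len(agg)
```

For example, four aggregated calls of which three are confirmed by the bulk data give a 25% false positive rate.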
While replacing missing values with zeros and adopting a model that can potentially impute them, the strategy used in scAI, might improve downstream analysis because large portions of the missing values are true zeros, such an approach has several limitations. First, it might introduce false signals when missing values actually correspond to non-zero signals. Second, it cannot distinguish methylated and missing states in DNA methylation data. One way to address this difficulty is to discard the missing values, which is particularly useful for methylation data (e.g., scM&T-seq [3]) because it allows methylated and missing values to be distinguished. One powerful approach is to use probabilistic models, such as MOFA [17] and its successor MOFA+ [19], which exclude missing-value regions when computing the likelihood. In principle, we can discard the missing values in scAI by incorporating a binary matrix into the second term of our model (Eq. (1)), an approach similar to the incomplete nonnegative matrix factorization model [60]. Compared with recent methods, such as MOFA [17], Seurat [22], and LIGER [23], scAI is able to capture cell states with higher accuracy for multiomics data in which cell states may be discriminated by only gene expression or only chromatin accessibility, for example, uncovering novel cell subpopulations with distinct epigenomic profiles but similar transcriptomic profiles, as seen in the kidney dataset. Such a capability of identifying cell subpopulations that exhibit only distinct epigenetic profiles will facilitate further analysis of epigenetics in controlling cell fate decisions and may help to reveal important transcriptional regulatory mechanisms [61].
Similar to uncovering new cell subpopulations, scAI can uncover new cell transition states induced by epigenetics, as seen in the analysis of the dexamethasone-treated A549 cell dataset [6], and identify co-regulations coordinated between the transcriptome and DNA methylation, as seen in the mESC dataset. Methods designed for integrating single-cell data measured in different cells (e.g., Seurat and LIGER) can, in principle, be applied to parallel single-cell multiomics data. However, we found that these two methods yield deficient alignment between co-assayed cells, as seen in the A549 and kidney datasets. Such alignment errors might affect downstream analyses such as inferring regulatory links. Moreover, these two methods, unlike scAI, need to transform other types of features, such as chromatin accessibility or DNA methylation, into gene-level features, which limits resolution and cannot make full use of the epigenomic information. As parallel single-cell multiomics data become more widely available, methods like scAI will be essential to make sense of this new type of data. Parallel single-cell sequencing provides a great opportunity to infer the regulatory links between the transcriptome and the epigenome [9]. In this study, the regulatory links between chromatin regions and marker genes were inferred by combining correlation analysis and nonnegative least squares regression, as seen in the A549 dataset. Because many factors, such as chromatin regulators, histone modifications, and the microenvironment, can affect transcriptional regulation [62], more complex and accurate models are needed to improve the accuracy of regulatory relationship inference. While this remains to be done, scAI provides a computational tool for integrating parallel single-cell omics data, including visualization, clustering, differential expression/chromatin accessibility analysis, and regulatory relationship inference.
Here, we present scAI, one of the first computational methods for the integrative analysis of single-cell transcriptomic and epigenomic profiles measured in the same cells. scAI was shown to be an effective tool to characterize multiple types of measurements in a biologically meaningful manner, dissect cellular heterogeneity within both transcriptomic and epigenomic layers, and understand transcriptional regulatory mechanisms. With the rapid development of single-cell multiomics technologies, scAI will facilitate the integrative analysis of current and upcoming multiomics data profiled in the Human Cell Atlas as well as the Pediatric Cell Atlas [63].

Optimization algorithm for scAI

The optimization problem (Eq. (1)) is solved by a multiplicative update algorithm, which updates the variables W1, W2, H, and Z iteratively according to the following equations (Additional file 1: Supplementary methods (Details of scAI) and Additional file 2: Figure S16):

$$ W_1^{ij} \leftarrow W_1^{ij}\,\frac{\left(X_1 H^T\right)^{ij}}{\left(W_1 H H^T\right)^{ij}} $$

$$ W_2^{ij} \leftarrow W_2^{ij}\,\frac{\left(X_2 (Z \circ R) H^T\right)^{ij}}{\left(W_2 H H^T\right)^{ij}} $$

$$ H^{ij} \leftarrow H^{ij}\,\frac{\left(\alpha W_1^T X_1 + W_2^T X_2 (Z \circ R) + \lambda H (Z + Z^T)\right)^{ij}}{\left(\left(\alpha W_1^T W_1 + W_2^T W_2 + 2\lambda H H^T + \gamma e e^T\right) H\right)^{ij}} $$

$$ Z^{ij} \leftarrow Z^{ij}\,\frac{\left(\left(X_2^T W_2 H\right) \circ R + \lambda H^T H\right)^{ij}}{\left(\left(X_2^T X_2 (Z \circ R)\right) \circ R + \lambda Z\right)^{ij}}, $$

where \( W_I^{ij}, I = 1, 2, \) represents the entry in the ith row and jth column of W1 (p × K) or W2 (q × K), and \( H^{ij} \) and \( Z^{ij} \) represent the entries in the ith row and jth column of H (K × n) and Z (n × n), respectively. e (K × 1) is a vector with all elements equal to 1. In each iteration step, H is scaled so that each row sums to 1.
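The W1 rule above is the standard multiplicative update for the \( \|X_1 - W_1 H\|^2 \) term, applied entrywise. A minimal pure-Python sketch of that single rule (the small eps guarding against division by zero is an implementation detail not specified in the text; the other three updates follow the same pattern):

```python
def matmul(A, B):
    """Naive matrix product for small dense lists-of-lists."""
    Bt = list(zip(*B))
    return [[sum(a * b for a, b in zip(row, col)) for col in Bt] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def update_W1(W1, X1, H, eps=1e-12):
    """One multiplicative update: W1 <- W1 * (X1 H^T) / (W1 H H^T),
    entrywise; nonnegativity of W1 is preserved automatically."""
    Ht = transpose(H)
    num = matmul(X1, Ht)
    den = matmul(matmul(W1, H), Ht)
    return [[W1[i][j] * num[i][j] / (den[i][j] + eps)
             for j in range(len(W1[0]))] for i in range(len(W1))]
```

With H fixed, each such update does not increase the reconstruction error of the corresponding term, which is why the iteration converges in practice.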
In this algorithm, we initialize W1, W2, H, and Z using a uniform distribution on [0, 1] and generate a binary matrix R using a Bernoulli distribution with probability s. α and λ are parameters that balance each term, and γ is a parameter that controls the sparsity of each row of H. The default values are s = 0.25, α = 1, λ = 10,000, and γ = 1. The rank K is determined by a stability-based method [28] (Additional file 1: Supplementary methods (Rank selection) and Additional file 2: Figure S17 and Figure S18). Since H is scaled by row, each entry of H is less than 1; the magnitude of the third term is therefore small, and λ is usually large to ensure the importance of this term. The parameter α is set to be small because the magnitude of the corresponding term is usually relatively large, which does not mean that W1 and W2 are unimportant in the model. The parameters used for all datasets are summarized in Additional file 2: Table S2. Robustness analysis indicates that the overall performance of scAI is relatively robust to choices of parameter values within certain ranges (Additional file 1: Supplementary methods (Robustness analysis) and Additional file 2: Figure S19).

Identification of cell subpopulations

From the transcriptomic and epigenomic profiles, scAI projects cells into a cell loading matrix H, which is a low-dimensional representation of both profiles. Subpopulations are then identified by clustering on H using the Leiden community detection method [64]. Specifically, a shared nearest neighbor (SNN) graph is first constructed by calculating the k-nearest neighbors (20 by default) of each cell based on the matrix H. The fraction of shared nearest neighbors between a cell and its neighbors is then used as the weights of the SNN graph. Finally, we identify cell subpopulations by applying the Leiden algorithm [64] to the constructed SNN graph with a default resolution parameter of 1.
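The SNN construction step can be sketched as follows; the Leiden clustering itself relies on an external implementation and is omitted. This is a simplified sketch with a tiny k; scAI uses k = 20 by default, and the exact tie-breaking and weighting details are assumptions.

```python
def snn_graph(H, k=2):
    """Shared-nearest-neighbor weights from the cell loading matrix H
    (K factors x n cells): weight(i, j) = fraction of shared k-nearest
    neighbors in factor space."""
    cells = list(zip(*H))  # one K-dimensional coordinate tuple per cell
    n = len(cells)

    def knn(i):
        # neighbors ranked by squared Euclidean distance in factor space
        order = sorted((j for j in range(n) if j != i),
                       key=lambda j: sum((a - b) ** 2
                                         for a, b in zip(cells[i], cells[j])))
        return set(order[:k])

    neigh = [knn(i) for i in range(n)]
    return [[len(neigh[i] & neigh[j]) / k for j in range(n)] for i in range(n)]
```

Cells loading on the same factor end up sharing neighbors (positive edge weight), while cells from different factors share none, which is what the downstream community detection exploits.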
Identification of cell subpopulation-specific marker genes and epigenomic features

After determining the cell subpopulations, we adopt a likelihood-ratio test comparing the gene expression of cells in the kth cell subpopulation with that of cells not in the kth cell subpopulation. Genes are considered marker genes specific to the kth cell subpopulation if (i) the p values are less than 0.05, (ii) the log fold-changes are higher than 0.25, and (iii) the percentage of cells with expression in the kth cell subpopulation is higher than 25%. Cell subpopulation-specific epigenomic features are identified using a similar approach.

Visualization of cells, genes, and loci in a 2D space

scAI simultaneously decomposes the gene expression matrix and the accessibility or methylation matrix into a set of low-rank matrices, including the gene loading matrix W1, the locus loading matrix W2, the cell loading matrix H, and the cell-cell similarity matrix Z. Based on these inferred low-dimensional representations, we simultaneously visualize cells, genes, and loci in a single two-dimensional space using similarity-weighted nonnegative embedding [65]. Specifically, we first compute the coordinates of the inferred factors. H is smoothed by the similarity matrix Z using Hs = H × Z. Then, we compute the pairwise similarity matrix S between factors (rows of Hs) using cosine similarity. The similarity matrix S is converted into a distance matrix D according to \( D = \sqrt{2(1 - S)} \). The Sammon mapping method [27] is then used to project the distance matrix D onto a two-dimensional space (a matrix with K rows, where K is the number of factors, and 2 columns). The values in this two-dimensional matrix are scaled to range from zero to one, giving the factor coordinates C = (Ckx, Cky), where Ckx and Cky are the x and y coordinates of the kth factor.
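The similarity-to-distance conversion for the factors can be sketched as follows (the Sammon projection itself is handled by an external routine and is omitted):

```python
import math

def factor_distances(Hs):
    """Pairwise distances between factors (rows of the smoothed loading
    matrix Hs): cosine similarity S converted via D = sqrt(2(1 - S)).
    The resulting matrix is what Sammon mapping projects to 2-D."""
    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        return dot / (math.sqrt(sum(a * a for a in u))
                      * math.sqrt(sum(b * b for b in v)))
    K = len(Hs)
    # max(0, .) guards against tiny negative values from rounding
    return [[math.sqrt(max(0.0, 2.0 * (1.0 - cosine(Hs[i], Hs[j]))))
             for j in range(K)] for i in range(K)]
```

Parallel rows map to distance 0 and orthogonal rows to sqrt(2), the maximum for nonnegative loadings.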
Next, we compute the coordinates of cell j, E = (Ejx, Ejy), in the two-dimensional space according to

\( E_{jx} = \frac{\sum_k \left(H^{kj} C_{kx}\right)^{\alpha}}{\sum_k \left(H^{kj}\right)^{\alpha}}, \quad E_{jy} = \frac{\sum_k \left(H^{kj} C_{ky}\right)^{\alpha}}{\sum_k \left(H^{kj}\right)^{\alpha}}, \)

where the parameter α controls how tight the embedding is between the cells and the factors. Reasonable values range from 1 to 2; larger values move the cells closer to the factors, but values above 2 may distort the data. α = 1.9 is used by default. The cell coordinates E are further smoothed by the similarity matrix Z using Es = E × Z and then used for visualization. Finally, we embed the marker genes and loci into the same two-dimensional space according to W1 and W2 as follows:

\( F_{jx}^{I} = \frac{\sum_k \left(W_I^{jk} C_{kx}\right)^{\alpha}}{\sum_k \left(W_I^{jk}\right)^{\alpha}}, \quad F_{jy}^{I} = \frac{\sum_k \left(W_I^{jk} C_{ky}\right)^{\alpha}}{\sum_k \left(W_I^{jk}\right)^{\alpha}}, \)

where I = 1, 2 indicates the embedding of genes and loci, respectively. Using this integrative dimension-reduction approach, the marker genes and loci that separate cell states can be visualized alongside the cells, helping to interpret multiomics data in an intuitive way.

Identification of factor-specific marker genes and epigenomic features

Using scAI, we obtain the gene and locus loading matrices W1 and W2, and the values in each column of W1 and W2 are used to identify the genes and epigenomic features associated with each factor. To rank gene i in factor k, we define a gene score \( S_1^{ik} = W_1^{ik} / \sum_j W_1^{jk} \). Similarly, we rank the loci in each factor by defining a locus score based on W2. To identify factor-specific marker genes and epigenomic features, we divide the genes and loci into two groups for each factor.
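The cell-placement formula for E above can be sketched directly; the same function applied to columns of W1 or W2 gives the gene and locus coordinates. Smoothing by Z is omitted here for brevity.

```python
def embed_cells(H, C, alpha=1.9):
    """Weighted nonnegative embedding: cell j is placed at
    E_jx = sum_k (H[k][j]*C[k][0])^alpha / sum_k H[k][j]^alpha
    (and analogously for y), per the formulas in the text."""
    K, n = len(H), len(H[0])
    out = []
    for j in range(n):
        den = sum(H[k][j] ** alpha for k in range(K))
        x = sum((H[k][j] * C[k][0]) ** alpha for k in range(K)) / den
        y = sum((H[k][j] * C[k][1]) ** alpha for k in range(K)) / den
        out.append((x, y))
    return out
```

A cell loading entirely on one factor lands exactly on that factor's coordinates, and mixed loadings interpolate between factors, which is the intended behavior of the embedding.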
The z-score is computed for each entry in each column of W1 and W2: \( z_1^{ik} = \left(W_1^{ik} - \mu_1^k\right)/\sigma_1^k \) and \( z_2^{ik} = \left(W_2^{ik} - \mu_2^k\right)/\sigma_2^k \), where \( \mu_1^k, \mu_2^k \) are the average values of the kth columns of W1 and W2, respectively, and \( \sigma_1^k, \sigma_2^k \) are the corresponding standard deviations. AGk and ALk denote the sets of candidate genes and loci associated with the kth factor, consisting of the genes and loci whose \( z_1^{ik}, z_2^{ik} \) are greater than T (0.5 by default). A smaller T gives more features, which might contain redundant information, whereas a larger T might leave key features out. We also divide the cells into two groups for each factor using a similar method. In more detail, we compute the z-score for each entry in each row of the cell loading matrix H by zkj = (Hkj − μk)/σk. If zkj is greater than T, cell j is assigned to \( C_1^k \); otherwise, it is assigned to \( C_2^k \). Next, using a Wilcoxon rank-sum test on the candidate genes in AGk between cells in \( C_1^k \) and \( C_2^k \), we statistically test the differences of the candidate genes between the two cell groups. Candidate genes are considered factor-specific marker genes if (i) the p values are less than 0.05, (ii) the log fold-changes are higher than 0.25, and (iii) the percentage of cells with expression in \( C_1^k \) is greater than 25%. Factor-specific epigenomic features are identified using a similar approach.

Inference of factor-specific transcriptional regulatory relationships

Once the factor-specific marker genes and loci are determined, we next infer the regulatory links between them. For factor k, the two sets AGk and ALk consist of the identified factor-specific marker genes and loci, respectively.
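The candidate selection by column z-score can be sketched as follows (population standard deviation is an assumption; the text does not specify the estimator):

```python
def factor_candidates(W, k, T=0.5):
    """Indices of rows (genes or loci) whose loading on factor k has a
    column z-score above T, i.e., the candidate sets AG_k / AL_k."""
    col = [row[k] for row in W]
    mu = sum(col) / len(col)
    sigma = (sum((v - mu) ** 2 for v in col) / len(col)) ** 0.5
    return [i for i, v in enumerate(col) if (v - mu) / sigma > T]
```

The same function applied to rows of H (after transposing) yields the cell split into \( C_1^k \) and \( C_2^k \).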
For a gene gi in AGk, we select a locus set \( L_k^i \left(\subseteq AL_k\right) \), which includes loci within 500 kb of the transcription start site (TSS) of gi, as candidate regulatory regions for gi. To determine whether the expression level of gi is influenced by the accessibility status of the candidate regions in \( L_k^i \), we use a perturbation approach based on the correlations between expression level and accessibility. In this approach, first, we compute the Pearson correlation P1 between the gi expression level and the accessibility of each locus in \( L_k^i \) across all cells. Second, we perturb the gi expression levels by setting its expression in cells in cell group \( C_1^k \) to 0 and then compute the weighted correlation P2 between the perturbed gi expression level and the accessibility of \( L_k^i \) across all cells, with \( H_{k\cdot} \) (the kth row of H) as the weights. Third, we set the accessibility of \( L_k^i \) in cells in cell group \( C_1^k \) to 0 and then compute the weighted correlation P3 between the original gi expression level and the perturbed accessibility of \( L_k^i \) across all cells, again weighted by \( H_{k\cdot} \). Finally, we compute the differential correlations dP1 = ∣P1 − P2∣ and dP2 = ∣P1 − P3∣. Regulatory links between gene gi and loci \( l_k^{s_i} \subseteq L_k^i \) are indicated if the differential correlation dP1 or dP2 is greater than the average value of P1 and the original correlation P1 is greater than the average value of P1. For the identified regulatory links between genes and loci, to determine which transcription factors (TFs) regulate each gene gi, we first identify TF motifs enriched in the locus set \( l_k^{s_i} \) using chromVAR [32]. When running chromVAR with default parameters, the raw scATAC-seq data matrix of all loci was used as input.
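The weighted correlations P2 and P3 above require a weighted Pearson coefficient; a minimal sketch (the standard frequency-weighted definition is an assumption, as the text does not spell out the estimator):

```python
def weighted_pearson(x, y, w):
    """Weighted Pearson correlation between vectors x and y with
    nonnegative weights w (here, the kth row of H)."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    cov = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y)) / sw
    vx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x)) / sw
    vy = sum(wi * (yi - my) ** 2 for wi, yi in zip(w, y)) / sw
    return cov / (vx ** 0.5 * vy ** 0.5)
```

With uniform weights it reduces to the ordinary Pearson correlation, so P2 and P3 are directly comparable to P1.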
Then, we regressed the gene expression level \( E_{C_1^k}^i \) of each gene across cells in \( C_1^k \) against that of the identified TFs \( E_{C_1^k}^{i_{TF}} \) using nonnegative least squares regression, i.e., \( \hat{\beta}^i = \arg\min_{\beta^i} \left\Vert E_{C_1^k}^{i_{TF}} - E_{C_1^k}^i \beta^i \right\Vert_2^2, \ \mathrm{s.t.}\ \beta^i \ge 0 \). Regulatory relationships were inferred if the regression coefficients \( \hat{\beta}^i \) of the TFs were greater than zero.

Validation of the inferred regulatory relationships

To validate the inferred regulatory relationships in the A549 dataset, we collected all TFs that regulate the marker genes (ABHD12, BASP1, CDH16, CKB, NFKBIA, NR3C1, PER1, SCNN1A, and TXNRD1) from the hTFtarget database (http://bioinfo.life.hust.edu.cn/hTFtarget/), which curates comprehensive TF-target regulations from various ChIP-seq datasets of human TFs in the NCBI Sequence Read Archive (SRA) and ENCODE databases. We take ABHD12 as an example to compute the fold enrichment of the inferred regulatory relationships in this database. Among the 374 TFs collected in chromVAR, 92 are found to regulate ABHD12 in hTFtarget. Among the 12 TFs of ABHD12 that we identified using chromVAR, 7 are found to regulate ABHD12 in hTFtarget. Thus, the fold enrichment of our predicted regulations of ABHD12 is calculated as (7/12)/(92/374) = 2.37. A fold enrichment value greater than 1 indicates an over-representation of the inferred regulations in the database.

Datasets and preprocessing

The kidney and A549 datasets were downloaded from GSM3271044 and GSM3271045, and GSM3271040 and GSM3271041, respectively. The preprocessed mESC dataset was obtained from a previous study [17]. A detailed description of these datasets and their preprocessing is provided in Additional file 1: Supplementary methods (Details of datasets and preprocessing).

Feature selection

Two feature selection methods were used in this study.
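The fold enrichment computation above is a simple ratio of hit rates and can be sketched as:

```python
def fold_enrichment(pred_hits, pred_total, db_hits, db_total):
    """Fold enrichment of predicted TF regulators over the database
    background rate: (pred_hits/pred_total) / (db_hits/db_total)."""
    return (pred_hits / pred_total) / (db_hits / db_total)

# ABHD12 example from the text: 7 of our 12 predicted TFs are in
# hTFtarget, against a background of 92 of the 374 TFs in chromVAR.
```

Values above 1 indicate over-representation of the predictions in the database, as stated in the text.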
If the cell groups were known (e.g., at the time of data collection), the most informative genes were selected using a Wilcoxon rank-sum test with the same parameters as in the identification of factor-specific features. For example, for the scRNA-seq data in the A549 dataset, we identified the differentially expressed genes at different time points and used these genes as informative genes for the downstream analyses. For other datasets, the average expression and the Fano factor of each gene were first calculated. The Fano factor, defined as the variance divided by the mean, is a measure of dispersion. Next, the average expression of all genes was binned into 20 evenly sized groups, and the Fano factor within each bin was normalized using the z-score. Genes with normalized Fano factors larger than 0.5 and average expression larger than 0.01 were then selected. Moreover, we also selected genes with large Gini index values [66]; the GiniClust R package was run with default parameters. Briefly, genes whose normalized Gini index is significantly above zero (p value < 0.0001) are labeled high Gini genes and selected for further analysis. For the kidney dataset, we selected the informative genes using the second method, together with loci within 50 kb of the TSS of these informative genes.

Method comparisons on three datasets

We compared the performance of scAI with three other methods: MOFA [17], Seurat (version 3) [22], and LIGER [23]. MOFA takes normalized scRNA-seq and scATAC-seq data as inputs, infers latent factors using a generalized PCA, and assesses the proportion of variance explained by each factor in each type of data. Seurat derives a "gene activity matrix" from the peak matrix of the scATAC-seq data by summing all counts within the gene body plus 2 kb upstream, yielding a synthetic scRNA-seq dataset to leverage for integration.
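The Fano-factor selection described above can be sketched as follows. This is a simplified illustration: equal-count bins over the mean-expression ranking and population variance are assumptions about details the text leaves open.

```python
def select_by_fano(expr, n_bins=20, z_thresh=0.5, mean_thresh=0.01):
    """Dispersion-based gene selection: bin genes into equally sized
    groups by mean expression, z-score the Fano factor (variance/mean)
    within each bin, and keep genes with z > z_thresh and
    mean > mean_thresh. `expr` is a list of per-gene expression lists."""
    means = [sum(g) / len(g) for g in expr]
    fanos = [sum((v - m) ** 2 for v in g) / len(g) / m if m > 0 else 0.0
             for g, m in zip(expr, means)]
    order = sorted(range(len(expr)), key=lambda i: means[i])
    size = max(1, len(order) // n_bins)
    bins = [order[i:i + size] for i in range(0, len(order), size)]
    selected = []
    for b in bins:
        mu = sum(fanos[i] for i in b) / len(b)
        sd = (sum((fanos[i] - mu) ** 2 for i in b) / len(b)) ** 0.5
        for i in b:
            z = (fanos[i] - mu) / sd if sd > 0 else 0.0
            if z > z_thresh and means[i] > mean_thresh:
                selected.append(i)
    return sorted(selected)
```

Binning by mean expression compares each gene's dispersion only against genes of similar abundance, which removes the mean-variance dependence typical of count data.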
Seurat then co-embeds the scRNA-seq and scATAC-seq cells in the same low-dimensional space by identifying "anchors" between the ATAC-seq and RNA-seq datasets. Since LIGER does not provide specific functions for integrating scRNA-seq and scATAC-seq or DNA methylation data, we used scRNA-seq data and the inferred "gene activity matrix" from Seurat as inputs for integrative analysis. The detailed description of how these comparisons were performed is available in Additional file 1: Supplementary methods (Details of method comparisons on three datasets). Based on the first two dimensions of t-SNE or UMAP, we quantify the alignment score of the scRNA-seq and scATAC-seq cells using entropy of batch mixing, and assess the separation of the cell groups using silhouette coefficient. These two evaluation metrics were defined in [55]. The detailed description is available in Additional file 1: Supplementary methods (Details of method comparisons on three datasets). scAI is implemented as both MATLAB and R packages, which are freely available under the GPL-3 license. Source codes as well as the workflows of simulation and real datasets have been deposited at the GitHub repository (MATLAB package: https://github.com/amsszlh/scAI [67] and R package: https://github.com/sqjin/scAI) [68]. The datasets analyzed in this study are available from the Gene Expression Omnibus (GEO) repository under the following accession numbers: GSM3271044 and GSM3271045 [6], GSM3271040 and GSM3271041 [6], and GSE74535 [3]. Ziegenhain C, Vieth B, Parekh S, Reinius B, Guillaumet-Adkins A, Smets M, Leonhardt H, Heyn H, Hellmann I, Enard W. Comparative analysis of single-cell RNA sequencing methods. Mol Cell. 2017;65:631–43. Kelsey G, Stegle O, Reik W. Single-cell epigenomics: recording the past and predicting the future. Science. 2017;358:69–75. Angermueller C, Clark SJ, Lee HJ, Macaulay IC, Teng MJ, Hu TX, Krueger F, Smallwood S, Ponting CP, Voet T, et al. 
Parallel single-cell sequencing links transcriptional and epigenetic heterogeneity. Nat Methods. 2016;13:229–32. Clark SJ, Argelaguet R, Kapourani CA, Stubbs TM, Lee HJ, Alda-Catalinas C, Krueger F, Sanguinetti G, Kelsey G, Marioni JC, et al. scNMT-seq enables joint profiling of chromatin accessibility DNA methylation and transcription in single cells. Nat Commun. 2018;9:781. Bian S, Hou Y, Zhou X, Li X, Yong J, Wang Y, Wang W, Yan J, Hu B, Guo H, et al. Single-cell multiomics sequencing and analyses of human colorectal cancer. Science. 2018;362:1060–3. Cao J, Cusanovich D, Ramani V, Aghamirzaie D, Pliner H, Hill AJ, Daza R, McFaline-Figueroa J, Packer J, Christiansen L, et al. Joint profiling of chromatin accessibility and gene expression in thousands of single cells. Science. 2018;361:1380–5. Liu LQ, Liu CY, Quintero A, Wu L, Yuan Y, Wang MY, Cheng MN, Leng LZ, Xu LQ, Dong GY, et al. Deconvolution of single-cell multi-omics layers reveals regulatory heterogeneity. Nat Commun. 2019;10:470. Macaulay IC, Ponting CP, Voet T. Single-cell multiomics: multiple measurements from single cells. Trends Genet. 2017;33:155–68. Colomé-Tatché M, Theis FJ. Statistical single cell multi-omics integration. Curr Opin Syst Biol. 2018;7:54–9. Macneil LT, Walhout AJ. Gene regulatory networks and the role of robustness and stochasticity in the control of gene expression. Genome Res. 2011;21:645–57. He B, Tan K. Understanding transcriptional regulatory networks using computational models. Curr Opin Genet Dev. 2016;37:101–8. Berger SL. The complex language of chromatin regulation during transcription. Nature. 2007;447:407–12. Nicetto D, Donahue G, Jain T, Peng T, Sidoli S, Sheng LH, Montavon T, Becker JS, Grindheim JM, Blahnik K, et al. H3K9me3-heterochromatin loss at protein-coding genes enables developmental lineage specification. Science. 2019;363:294–7. Zhang L, Zhang S. A general joint matrix factorization framework for data integration and its systematic algorithmic exploration. 
IEEE Trans Fuzzy Syst. 2019. https://doi.org/10.1109/TFUZZ.2019.2928518. Rappoport N, Shamir R. Multi-omic and multi-view clustering algorithms: review and cancer benchmark. Nucleic Acids Res. 2018;46:10546–62. Zhang S, Liu CC, Li W, Shen H, Laird PW, Zhou XJ. Discovery of multi-dimensional modules by integrative analysis of cancer genomic data. Nucleic Acids Res. 2012;40:9379–91. Argelaguet R, Velten B, Arnol D, Dietrich S, Zenz T, Marioni JC, Buettner F, Huber W, Stegle O. Multi-omics factor analysis-a framework for unsupervised integration of multi-omics data sets. Mol Syst Biol. 2018;14:e8124. Argelaguet R, Clark SJ, Mohammed H, Stapel LC, Krueger C, Kapourani C-A, Imaz-Rosshandler I, Lohoff T, Xiang Y, Hanna CW, et al. Multi-omics profiling of mouse gastrulation at single-cell resolution. Nature. 2019:487–91. Argelaguet R, Arnol D, Bredikhin D, Deloro Y, Velten B, Marioni JC, Stegle O. MOFA+: a probabilistic framework for comprehensive integration of structured single-cell data. bioRxiv. 2019:837104. https://doi.org/10.1101/837104. Pott S, Lieb JD. Single-cell ATAC-seq: strength in numbers. Genome Biol. 2015;16:172. Hie B, Bryson B, Berger B. Efficient integration of heterogeneous single-cell transcriptomes using Scanorama. Nat Biotechnol. 2019;37:685–91. Stuart T, Butler A, Hoffman P, Hafemeister C, Papalexi E, Mauck WM 3rd, Hao Y, Stoeckius M, Smibert P, Satija R. Comprehensive integration of single-cell data. Cell. 2019;177:1888–902. Welch JD, Kozareva V, Ferreira A, Vanderburg C, Martin C, Macosko EZ. Single-cell multi-omic integration compares and contrasts features of brain cell identity. Cell. 2019;177:1873–87. Welch JD, Hartemink AJ, Prins JF. MATCHER: manifold alignment reveals correspondence between single cell transcriptome and epigenome dynamics. Genome Biol. 2017;18:138. Duren Z, Chen X, Zamanighomi M, Zeng W, Satpathy AT, Chang HY, Wang Y, Wong WH. Integrative analysis of single-cell genomics data by coupled nonnegative matrix factorizations.
Proc Natl Acad Sci U S A. 2018;115:7723–8. Shen RL, Olshen AB, Ladanyi M. Integrative clustering of multiple genomic data types using a joint latent variable model with application to breast and lung cancer subtype analysis. Bioinformatics. 2010;26:292–3. Sammon JW. A nonlinear mapping for data structure analysis. IEEE Trans Comput. 1969;C-18:401–9. Martínez-Mira C, Conesa A, Tarazona S. MOSim: multi-omics simulation in R. bioRxiv. 2018;421834. https://doi.org/10.1101/421834. Becht E, McInnes L, Healy J, Dutertre CA, Kwok IWH, Ng LG, Ginhoux F, Newell EW. Dimensionality reduction for visualizing single-cell data using UMAP. Nat Biotechnol. 2019;37:38–44. Morito N, Usui T, Takahashi S, Yamagata K. MAFB may play an important role in proximal tubules development. Nephrol Dial Transpl. 2019;34:gfz106.FP048. Zepeda-Orozco D, Wen HM, Hamilton BA, Raikwar NS, Thomas CP. EGF regulation of proximal tubule cell proliferation and VEGF-A secretion. Physiol Rep. 2017;5:e13453. Schep AN, Wu B, Buenrostro JD, Greenleaf WJ. chromVAR: inferring transcription-factor-associated accessibility from single-cell epigenomic data. Nat Methods. 2017;14:975–8. Reddy TE, Pauli F, Sprouse RO, Neff NF, Newberry KM, Garabedian MJ, Myers RM. Genomic determination of the glucocorticoid response reveals unexpected mechanisms of gene regulation. Genome Res. 2009;19:2163–71. Bittencourt D, Wu DY, Jeong KW, Gerke DS, Herviou L, Ianculescu I, Chodankar R, Siegmund KD, Stallcup MR. G9a functions as a molecular scaffold for assembly of transcriptional coactivators on a subset of glucocorticoid receptor target genes. Proc Natl Acad Sci U S A. 2012;109:19673–8. Reddy TE, Gertz J, Crawford GE, Garabedian MJ, Myers RM. The hypersensitive glucocorticoid response specifically regulates period 1 and expression of circadian genes. Mol Cell Biol. 2012;32:3756–67.
Lu NZ, Wardell SE, Burnstein KL, Defranco D, Fuller PJ, Giguere V, Hochberg RB, McKay L, Renoir JM, Weigel NL, et al. International Union of Pharmacology. LXV. The pharmacology and classification of the nuclear receptor superfamily: glucocorticoid, mineralocorticoid, progesterone, and androgen receptors. Pharmacol Rev. 2006;58:782–97. Starick SR, Ibn-Salem J, Jurk M, Hernandez C, Love MI, Chung HR, Vingron M, Thomas-Chollier M, Meijsing SH. ChIP-exo signal associated with DNA-binding motifs provides insight into the genomic binding of the glucocorticoid receptor and cooperating transcription factors. Genome Res. 2015;25:825–35. Steger DJ, Grant GR, Schupp M, Tomaru T, Lefterova MI, Schug J, Manduchi E, Stoeckert CJ, Lazar MA. Propagation of adipogenic signals through an epigenomic transition state. Genes Dev. 2010;24:1035–44. Liberman AC, Druker J, Refojo D, Holsboer F, Arzt E. Glucocorticoids inhibit GATA-3 phosphorylation and activity in T cells. FASEB J. 2009;23:1558–71. Lucibello FC, Slater EP, Jooss KU, Beato M, Muller R. Mutual transrepression of Fos and the glucocorticoid receptor - involvement of a functional domain in Fos which is absent in FosB. EMBO J. 1990;9:2827–34. McDowell IC, Barrera A, D'Ippolito AM, Vockley CM, Hong LK, Leichter SM, Bartelt LC, Majoros WH, Song L, Safi A, et al. Glucocorticoid receptor recruits to enhancers and drives activation by motif-directed binding. Genome Res. 2018;28:1272–84. Goldstein I, Baek S, Presman DM, Paakinaho V, Swinstead EE, Hager GL. Transcription factor assisted loading and enhancer dynamics dictate the hepatic fasting response. Genome Res. 2017;27:427–39. Liberzon A, Birger C, Thorvaldsdottir H, Ghandi M, Mesirov JP, Tamayo P. The Molecular Signatures Database (MSigDB) hallmark gene set collection. Cell Syst. 2015;1:417–25. McLean CY, Bristor D, Hiller M, Clarke SL, Schaar BT, Lowe CB, Wenger AM, Bejerano G. GREAT improves functional interpretation of cis-regulatory regions. Nat Biotechnol. 2010;28:495–501.
Lambert WM, Xu CF, Neubert TA, Chao MV, Garabedian MJ, Jeanneteau FD. Brain-derived neurotrophic factor signaling rewrites the glucocorticoid transcriptome via glucocorticoid receptor phosphorylation. Mol Cell Biol. 2013;33:3700–14. Yamaguchi M, Hirai K, Komiya A, Miyamasu M, Furumoto Y, Teshima R, Ohta K, Morita Y, Galli SJ, Ra C, Yamamoto K. Regulation of mouse mast cell surface Fc epsilon RI expression by dexamethasone. Int Immunol. 2001;13:843–51. Jin S, MacLean AL, Peng T, Nie Q. scEpath: energy landscape-based inference of transition probabilities and cellular trajectories from single-cell transcriptomic data. Bioinformatics. 2018;34:2077–86. Smallwood SA, Lee HJ, Angermueller C, Krueger F, Saadeh H, Peat J, Andrews SR, Stegle O, Reik W, Kelsey G. Single-cell genome-wide bisulfite sequencing for assessing epigenetic heterogeneity. Nat Methods. 2014;11:817–20. Zemach A, McDaniel IE, Silva P, Zilberman D. Genome-wide evolutionary analysis of eukaryotic DNA methylation. Science. 2010;328:916–9. Feng S, Cokus SJ, Zhang X, Chen PY, Bostick M, Goll MG, Hetzel J, Jain J, Strauss SH, Halpern ME, et al. Conservation and divergence of methylation patterning in plants and animals. Proc Natl Acad Sci U S A. 2010;107:8689–94. Noisa P, Ramasamy TS, Lamont FR, Yu JS, Sheldon MJ, Russell A, Jin X, Cui W. Identification and characterisation of the early differentiating cells in neural differentiation of human embryonic stem cells. PLoS One. 2012;7:e37129. Mohammed H, Hernando-Herraez I, Savino A, Scialdone A, Macaulay I, Mulas C, Chandra T, Voet T, Dean W, Nichols J, et al. Single-cell landscape of transcriptional heterogeneity and cell fate decisions during mouse early gastrulation. Cell Rep. 2017;20:1215–28. Kuntz S, Kieffer E, Bianchetti L, Lamoureux N, Fuhrmann G, Viville S. Tex19, a mammalian-specific protein with a restricted expression in pluripotent stem cells and germ line. Stem Cells. 2008;26:734–44. Davidson KC, Mason EA, Pera MF. 
The pluripotent state in mouse and human. Development. 2015;142:3090–9. Haghverdi L, Lun ATL, Morgan MD, Marioni JC. Batch effects in single-cell RNA-sequencing data are corrected by matching mutual nearest neighbors. Nat Biotechnol. 2018;36:421–7. Kiselev VY, Kirschner K, Schaub MT, Andrews T, Yiu A, Chandra T, Natarajan KN, Reik W, Barahona M, Green AR, Hemberg M. SC3: consensus clustering of single-cell RNA-seq data. Nat Methods. 2017;14:483–6. Zamanighomi M, Lin ZX, Daley T, Chen X, Duren Z, Schep A, Greenleaf WJ, Wong WH. Unsupervised clustering and epigenetic classification of single cells. Nat Commun. 2018;9:2410. Pliner HA, Packer JS, McFaline-Figueroa JL, Cusanovich DA, Daza RM, Aghamirzaie D, Srivatsan S, Qiu X, Jackson D, Minkina A, et al. Cicero predicts cis-regulatory DNA interactions from single-cell chromatin accessibility data. Mol Cell. 2018;71:858–71. Zhang L, Zhang S. Comparison of computational methods for imputing single-cell RNA-sequencing data. IEEE/ACM Trans Comput Biol Bioinform. 2018. https://doi.org/10.1109/TCBB.2018.2848633. Zhang L, Zhang S. PBLR: an accurate single cell RNA-seq data imputation tool considering cell heterogeneity and prior expression level of dropouts. bioRxiv. 2018;379883. https://doi.org/10.1101/379883. Klemm SL, Shipony Z, Greenleaf WJ. Chromatin accessibility and the regulatory epigenome. Nat Rev Genet. 2019;20:207–20. Duren Z, Chen X, Jiang R, Wang Y, Wong WH. Modeling gene regulation from paired expression and chromatin accessibility data. Proc Natl Acad Sci U S A. 2017;114:E4914–E23. Taylor DM, Aronow BJ, Tan K, Bernt K, Salomonis N, Greene CS, Frolova A, Henrickson SE, Wells A, Pei LM, et al. The Pediatric Cell Atlas: defining the growth phase of human development at single-cell resolution. Dev Cell. 2019;49:10–29. Traag VA, Waltman L, van Eck NJ. From Louvain to Leiden: guaranteeing well-connected communities. Sci Rep. 2019;9:5233. Wu Y, Tamayo P, Zhang K. 
Visualizing and interpreting single-cell gene expression datasets with similarity weighted nonnegative embedding. Cell Syst. 2018;7:656–66. Jiang L, Chen HD, Pinello L, Yuan GC. GiniClust: detecting rare cell types from single-cell gene expression data with Gini index. Genome Biol. 2016;17:144. Jin S, Zhang L, Nie Q. scAI: an unsupervised approach for the integrative analysis of parallel single-cell transcriptomic and epigenomic profiles. Github 2019;https://github.com/amsszlh/scAI. Jin S, Zhang L, Nie Q. scAI: an unsupervised approach for the integrative analysis of parallel single-cell transcriptomic and epigenomic profiles. Github 2019;https://github.com/sqjin/scAI. Review history The review history is available as Additional file 3. Barbara Cheifet was the primary editor on this article and handled its editorial process and peer review in collaboration with the rest of the editorial team. This work was supported by a NSF grant DMS1763272, a grant from the Simons Foundation (594598, QN), and NIH grants U01AR073159, R01GM123731, and P30AR07504. Suoqin Jin and Lihua Zhang contributed equally to this work. Department of Mathematics, University of California, Irvine, CA, 92697, USA Suoqin Jin, Lihua Zhang & Qing Nie The NSF-Simons Center for Multiscale Cell Fate Research, University of California, Irvine, CA, 92697, USA Lihua Zhang & Qing Nie Department of Developmental and Cell Biology, University of California, Irvine, CA, 92697, USA Qing Nie Suoqin Jin Lihua Zhang LZ, SJ, and QN conceived the project. LZ and SJ contributed equally to this work. LZ and SJ conducted the research. QN supervised the research. LZ, SJ, and QN contributed to the writing of the manuscript. All authors read and approved the final manuscript. Correspondence to Qing Nie. Supplementary Methods. Supplementary Figures and Tables. Review history. Jin, S., Zhang, L. & Nie, Q. 
scAI: an unsupervised approach for the integrative analysis of parallel single-cell transcriptomic and epigenomic profiles. Genome Biol 21, 25 (2020). https://doi.org/10.1186/s13059-020-1932-8 Keywords: Integrative analysis; Single-cell multiomics; Simultaneous measurements; Sparse epigenomic profile
What is a formal definition of "predicate logic"? I'm currently trying to get clear about some terms that are often used in computer science (I'm a computer science student) but were never formally introduced. In particular, I would like to know what "a predicate logic" is. Or is it "the predicate logic"? Definitions I think the following definitions are correct, but I'm not sure about it. This is what I came up with when I tried to answer my question: What is a formal definition of "predicate logic"? As I wrote this question, I came across many other terms that I could not formally define but that were used in my definition of "predicate logic". So maybe you can make this shorter. But please keep in mind that I'm not looking for examples, but for a formal definition. Propositional calculus is a formal system. It contains propositions that can be either true or false. Those propositions can be combined ($\land, \lor, \Rightarrow, \Leftrightarrow, \neg$ and more, but all the others can be represented by these logical connectives). What is the difference between Boolean algebra and propositional calculus? A predicate is a function $p:X \rightarrow \{true, false\}$ where $X$ is any set. What is the difference between a proposition and a predicate? A predicate logic is a formal system that uses variables and quantifiers ($\forall$, $\exists$, $\exists!$) to formulate propositions. Are there axioms for the / a predicate logic? logic definition predicate-logic Martin Thoma $\begingroup$ en.wikipedia.org/wiki/First-order_logic $\endgroup$ – Berci Feb 23 '14 at 15:43 $\begingroup$ Unfortunately, "first-order logic" is a term that has too many meanings. First-order logic can refer to a couple of different languages, each of which corresponds to a variety of theories, each of which corresponds to a whole smorgasboard of proof systems.
Furthermore, first-order logic can also refer to a set $\mathcal{L}$ of languages, such that, for example, first-order group theory is an element of $\mathcal{L}$, and so too is first-order ring theory. Each of these variants has both a semantics-free and a semantics-equipped variant. So you see, it's really a very hard question you ask. $\endgroup$ – goblin Mar 31 '14 at 13:31 I'm afraid the definition you suggested has some shortcomings. To begin with, your definition essentially makes use of the notion of a formal system, a notion that has not been formally defined. Furthermore, classical first-order logic satisfies a certain isomorphism condition: first-order sentences cannot be distinguished in isomorphic models. But your definition does not seem to capture this fact. For a better start, let a logic, $\mathcal{L}$, consist of a function $L$ and a 2-place relation $\models_{\mathcal{L}}$. $L$ assigns a set $L(\sigma)$ (the set of $\sigma$-sentences of $\mathcal{L}$) to each signature $\sigma$ (set of non-logical constants) such that the following holds: 1. If $\sigma_0 \subseteq \sigma_1$, $L(\sigma_0) \subseteq L(\sigma_1)$. 2. If $\mathfrak{A} \models_{\mathcal{L}} \phi$, then there is some signature $\sigma$ such that $\mathfrak{A}$ is a model interpreting $\sigma$ and $\phi \in L(\sigma)$. 3. (isomorphism condition) If $\mathfrak{A} \models_{\mathcal{L}} \phi$ and $\mathfrak{A}, \mathfrak{B}$ are isomorphic, then $\mathfrak{B} \models_{\mathcal{L}} \phi$. 4. If $\sigma_0 \subseteq \sigma_1, \phi \in L(\sigma_0)$, and $\mathfrak{A}$ is a model interpreting $\sigma_1$, then $\mathfrak{A} \models_{\mathcal{L}} \phi$ iff $\mathfrak{A}|_{\sigma_0} \models_{\mathcal{L}} \phi$ (where $\mathfrak{A}|_{\sigma_0}$ is the $\sigma_0$-model having the same domain as $\mathfrak{A}$ and coinciding with $\mathfrak{A}$ on $\sigma_0$).
Now first-order logic, $\mathcal{L}_{I}$, is the logic whose function $L_I$ assigns to each signature $\sigma$ the set of first-order $\sigma$-sentences and whose 2-place relation $\models_{\mathcal{L}_I}$ is the usual satisfaction relation between first-order models and first-order sentences. $\begingroup$ I like your approach (+1). Could you please tell me how to "speak" it ($\mathfrak{A} \models_{\mathcal{L}} \phi$ and $\phi \in L(\sigma)$ - I guess the second will be something like "the language derived from the set of non-logical constants sigma) $\endgroup$ – Martin Thoma Feb 24 '14 at 20:31 $\begingroup$ $\mathfrak{A} \models_{\mathcal{L}} \phi$ can be read as "sentence $\phi$ is true in model $\mathfrak{A}$ (relative to the semantic rules of logic $\mathcal{L})"$. Your guess about the second expression is quite right. $L(\sigma)$ is the set of sentences built up from the constants from $\sigma$ and logical expressions. My approach is that of so-called abstract model theory, which studies the general properties of logics stronger than first-order. Indeed, what I call logics figure prominently in Lindström's theorems, the first significant result of the discipline. $\endgroup$ – Jon Feb 24 '14 at 21:27 $\begingroup$ I think you are assuming that the "models" here are "structures" in the sense of first order logic - is that right? You may want to state that more explicitly, since I think you are trying to define not a logic in general, but a logic that has a semantics via first-order structures. $\endgroup$ – Carl Mummert Mar 31 '14 at 13:24 It is more helpful to view "predicate logic" as a taxonomic term (the same goes for the term "logic" itself). So the question becomes: what properties of a logic cause us to call it a "predicate logic"? That's a hard question partially because "logic" itself is so broad. We can identify at least a few common properties, but not every predicate logic will have all of them. 
The basic examples, of course, are the logics that are called "first-order logic" in the literature. But there are also higher-order logics, modal predicate logics, temporal predicate logics, etc. Here are a few common traits: Predicate logics may have variables that range over "individual" objects. There may be more than one sort of "individual". Predicate logics may have variables that range over higher types or predicates, with syntax to match. Predicate logics often have quantifiers over the individuals and other sorts of objects. Predicate logics often come with semantics in which the predicate symbols in formulas are interpreted as relations on a set of "individuals". Carl Mummert As you can read in the Wiki reference or in a lot of good mathematical logic textbooks, there are some basic concepts in play: formal system. Propositional logic is based on a specific language; first-order logic (or predicate logic) is based on a "wider" (i.e., more expressive) language. In both cases we need formation rules, to build "correct expressions", like terms and formulas. Then we need transformation rules, usually called inference rules (e.g., modus ponens): at least one, usually more than one, and axioms (zero or more). With this in place, we have a calculus or proof system: the basic concept of a proof system is that of derivation (from axioms or assumptions) of theorems. Up to now we have introduced the syntax; then we add the semantics, which allows us to "give meaning" to terms (they have denotations) and formulas (which stand for sentences). In first-order logic (or predicate calculus) a predicate is a symbol; when we interpret it, the "standard" semantics for a predicate is a subset of the domain of our interpretation. If we "apply" first-order logic to arithmetic, we may use a predicate like $\le$ and a term like $0$ in order to build a formula like $x \le 0$.
This formula is constructed with the binary predicate $\le$ (a binary relation), and it holds for all numbers that are less than or equal to $0$. So, in the domain of the natural numbers, this formula will be true for a number $k$ iff $k \in \{ n \in \mathbb{N} : n \le 0 \} = \{ 0 \}$. For a "formal" approach, you can see Heinz-Dieter Ebbinghaus & Jörg Flum & Wolfgang Thomas, Mathematical Logic (1984), Ch. XII: Characterizing First-Order Logic: some results, due to Lindström, [...] show that first-order logic occupies a unique place among logical systems [...] : (a) There is no logical system with more expressive power than first-order logic, for which both the compactness theorem and the Löwenheim-Skolem theorem hold. (b) There is no logical system with more expressive power than first-order logic, for which the Löwenheim-Skolem theorem holds and for which the set of valid sentences is enumerable. Definition 1.1. A logical system $\mathscr L$ consists of a function $L$ and a binary relation $\vDash_{\mathscr L}$. $L$ associates with every symbol set $S$ a set $L(S)$, the set of $S$-sentences of $\mathscr L$. The following properties are required: (a) If $S_0 \subset S_1$, then $L(S_0) \subset L(S_1)$. (b) If $\mathfrak A \vDash_{\mathscr L} \varphi$ (i.e., if $\mathfrak A$ and $\varphi$ are related under $\vDash_{\mathscr L}$), then, for some $S, \mathfrak A$ is an $S$-structure and $\varphi \in L(S)$. (c) (Isomorphism property) If $\mathfrak A \vDash_{\mathscr L} \varphi$ and $\mathfrak A \cong \mathfrak B$ then $\mathfrak B \vDash_{\mathscr L} \varphi$. (d) (Reduct property) If $S_0 \subset S_1, \varphi \in L(S_0)$, and $\mathfrak A$ is an $S_1$-structure then $\mathfrak A \vDash_{\mathscr L} \varphi$ iff $\mathfrak A \upharpoonright S_0 \vDash_{\mathscr L} \varphi$. [...]
in the case of [first-order logic] $\mathscr L_I$ we choose $L_I$ to be the function which assigns to a symbol set $S$ the set $L_I(S) := L_0^S$ of first-order $S$-sentences, and we take $\vDash_{\mathscr L}$ to be the usual satisfaction relation between structures and first-order sentences. Mauro ALLEGRANZA
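To make the question's working definition concrete: over a finite domain, a predicate $p:X \rightarrow \{true, false\}$ is just a Boolean-valued function, and the quantifiers reduce to exhaustive checks. This is an illustrative sketch only; `forall` and `exists` are hypothetical helper names, not a standard API.

```python
# A predicate on a set X is a Boolean-valued function p : X -> {True, False}.
# Over a *finite* domain, ∀ and ∃ reduce to Python's all()/any().
# `forall` and `exists` are illustrative names, not a standard library API.

def forall(domain, p):
    """Whether p(x) holds for every x in the domain (a finite ∀)."""
    return all(p(x) for x in domain)

def exists(domain, p):
    """Whether p(x) holds for some x in the domain (a finite ∃)."""
    return any(p(x) for x in domain)

# The predicate "x ≤ 0" from the last answer, over the domain {0, 1, ..., 9}:
le_zero = lambda n: n <= 0
X = range(10)

print(forall(X, le_zero))  # False: e.g., 1 ≤ 0 fails
print(exists(X, le_zero))  # True: 0 ≤ 0 holds, so {n in X : n ≤ 0} = {0}
```

Note that this only illustrates the semantics over finite structures; it says nothing about proof systems or infinite domains, which is where the formal machinery in the answers above becomes necessary.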
Kinematic Analysis of an Under-actuated, Closed-loop Front-end Assembly of a Dragline Manipulator
Muhammad A. Wardeh 1,2 and Samuel Frimpong 1
1 Department of Mining and Nuclear Engineering, Missouri University of Science and Technology, Rolla 65409, USA
2 Center for Infrastructure Engineering Studies, Missouri University of Science and Technology, Rolla 65409, USA
Citation: Muhammad A. Wardeh, Samuel Frimpong. Kinematic Analysis of an Under-actuated, Closed-loop Front-end Assembly of a Dragline Manipulator. International Journal of Automation and Computing, 2020, 17(4): 527-538. doi: 10.1007/s11633-019-1217-4
Muhammad A. Wardeh received the B. Eng. degree in mechanical design and production from Damascus University, Syria in 2007. He received two M. Eng. degrees in material science and engineering from the Universities of Paris 6 and 11, France in 2011 and 2012, respectively. He received the Ph. D. degree in mining engineering (engineering mechanics with a focus on computational multibody dynamics and virtual modeling) from Missouri University of Science and Technology (Missouri S&T), USA in 2018. From 2006 to 2009, he worked for multiple engineering firms in China and Europe. From 2010 to 2012, he was a graduate research assistant in a master's program (Master MAGIS, Materials and Engineering Sciences in Paris), France.
In 2019, he served as a research associate in the Center for Infrastructure Engineering Studies at Missouri S&T, USA. His research interests include computational dynamics, virtual modeling, finite element analysis, materials constitutive modeling, microstructural materials modeling, and their testing and surface characterization. E-mail: [email protected] (Corresponding author) ORCID iD: 0000-0001-6343-2087 Samuel Frimpong received the Ph. D. degree from the University of Alberta, Canada in 1992. He is a professor and the Robert H. Quenon Endowed Chair at Missouri S&T and Director of the Heavy Machinery Research Laboratory. His professional experience includes over 30 years of research and teaching, over 20 years of university administration, and several years of industry practice. He has been recognized with the 2018 Faculty External Recognition Award by Missouri S&T; 2018 Outstanding Faculty of the Year Award by Sigma Chi Fraternity at Missouri S&T; 2017 Daniel C. Jackling Award by the Society for Mining, Metallurgy and Exploration (SME); 2010 Missouri S&T Chancellor's Leadership Award; Robert H. Quenon Endowed Chair by Missouri S&T, USA (2004); Distinguished Lecturer Award by the Canadian Petroleum Institute (1998–2004); 1997 Award of Distinction by the World Mining Congress; University of Alberta/Canadian International Development Agency Ph. D. Scholar (1989–1992); Life Patron of the George Grant University of Mines and Technology Alumni Association (2001); 1989 Grand Award by the NW Mining Association; UNESCO Research Fellowship (1986–1988); and State Gold Mining Corporation (SGMC) Gold Scholar (1981–1986). Frimpong is a member of the APLU Board on Natural Resources, the College of Reviewers for the Canada Foundation for Innovation and Canada Research Chairs' Program, and the ASCE-UNESCO Scientific Committee on Emerging Energy Technologies (ASCE-UNESCO SCEET).
He is currently the Editor-In-Chief of the Journal of Powder Metallurgy and Mining; Editor-In-Chief of the International Journal of Mining Engineering and Technology; Editor of Research and Reports on Metals; Editorial Board Member for the International Journal of Mining Science; Editor of the Journal of MOJ Mining and Metallurgy; Editorial Board Member for the International Journal of Mining, Reclamation and Environment; and Associate Editor for Mining and Minerals Engineering. He is a registered professional engineer and a member of the Association of Professional Engineers and Geoscientists of Alberta, the Canadian Institute of Mining, Metallurgy and Petroleum, The Society for Mining, Metallurgy, and Exploration (SME), the American Society of Civil Engineers (ASCE), and the Society for Modeling & Simulation International. His research interests include formation excavation engineering, mine automation and intelligent mining systems, synthetic and renewable energy, machine dynamics and fatigue modeling, and mine safety and health. E-mail: [email protected] Figures (10) Tables (3) References (25) Keywords: Dragline mining manipulator / underactuated closed-loop mechanism / generalized speeds / Baumgarte's stabilization technique (BST) / feedforward displacement Abstract: Dragline excavators are closed-loop mining manipulators that operate using a rigid multilink framework and a rope and rigging system, which constitute the front-end assembly. The arrangement of the dragline front-end assembly provides the necessary motion of the dragline bucket within its operating radius. The assembly resembles a five-link closed kinematic chain that has two independent generalized coordinates of drag and hoist ropes and one dependent generalized coordinate of dump rope. Previous models failed to represent the actual closed loop of the dragline front-end assembly, nor did they describe the maneuverability of dragline ropes under imposed geometric constraints.
Therefore, a three-degree-of-freedom kinematic model of the dragline front-end is developed using the concept of generalized speeds. It contains all relevant configuration and kinematic constraint conditions to perform complete digging and swinging cycles. The model also uses three inputs: the linear displacements of the hoist and drag ropes and a rotational displacement of swinging along their trajectories. The inverse kinematics is resolved using a feedforward displacement algorithm coupled with the Newton-Raphson method to accurately estimate the trajectories of the ropes. The trajectories are solved only during the digging phase, and the singularity is eliminated using Baumgarte's stabilization technique (BST), with appropriate inequality constraint equations. It is shown that the feedforward displacement algorithm can produce accurate trajectories without the need to manually solve the inverse kinematics from the geometry. The research findings are well in agreement with the dragline's real operational limits, and they contribute to efficiency and the reduction in machine downtime through better control strategies of the dragline cycles. 1 The machine house is not part of the front-end assembly, but it has been incorporated into the model to create a robust model framework. Figure 1. Dragline model 9020 Figure 2. Dragline kinematics diagram and its vector loop closure Figure 3. Kinematic modeling of the dragline closed-loop Figure 4. Singularity positions of dragline front-end structure Figure 5. Trajectories of hoist, dump, and drag ropes versus time during digging phase Figure 6. Absolute errors in the trajectories of ropes of the front-end assembly Figure 7. Trajectories of ropes using Baumgarte's stabilization technique (BST) during digging phase Figure 8. Displacements of hoist and drag ropes Figure 9. Dump rope angular displacement Figure 10. Bucket trajectory in a 3D space during swinging-back phase Table 1.
Input data for the mathematical model (lengths in m):
Parameter  Value     Parameter  Value
L0         7         L11        2.29
L1         10.76     L12        7.14
L3         7.95      Rs         1.715
L4         7.95      D1E1       q8
L5         45.7      B1F1       q7
L6         45.7      E1F1       10.5
L7         1.715     B1D        110
L8         1.715     ca         5π/180
L9         5.25      q2         32π/180
L10        5.25      λ          37π/180
Table A1. Notation of the elements in the vector loop:
$L_0 = \overrightarrow{B_1B_2} \cdot \overrightarrow{b_2}$: machine house height
$L_1 = \overrightarrow{B^*B_1} \cdot \overrightarrow{b_1}$, $L_2 = -\overrightarrow{B^*B_1} \cdot \overrightarrow{b_2}$: distance from the first boom pinned point to the machine house center of mass (COM)
$L_3 = \overrightarrow{B^*B_2} \cdot \overrightarrow{b_1}$, $L_4 = \overrightarrow{B^*B_2} \cdot \overrightarrow{b_2}$: distance from the second boom pinned point to the machine house COM
$L_5 = -\overrightarrow{C^*B_2} \cdot \overrightarrow{c_1}$, $L_6 = \overrightarrow{C^*C_1} \cdot \overrightarrow{c_1}$: distance from the boom COM to its end
$L_7 = \overrightarrow{D^*C_1} \cdot \overrightarrow{c_2}$, $L_8 = \overrightarrow{D^*D_1} \cdot \overrightarrow{e_1}$: sheave radius $R_s$
$L_9 = \overrightarrow{F^*E_1} \cdot \overrightarrow{f_1}$, $L_{10} = -\overrightarrow{F^*F_1} \cdot \overrightarrow{f_1}$: half-length of the dump rope
$L_{11} = \overrightarrow{B_3B^*} \cdot \overrightarrow{b_1}$: distance from the machine house COM to the axis of swinging
$L_{12} = \overrightarrow{F^*H_1} \cdot \overrightarrow{f_1}$, $L_{13} = -\overrightarrow{F^*H_1} \cdot \overrightarrow{f_2}$: distance from the dump rope COM to the bucket
$q_7 = \overrightarrow{B_1F_1} \cdot \overrightarrow{g_1}$: drag rope displacement
$q_8 = \overrightarrow{E_1D_1} \cdot \overrightarrow{e_2}$: hoist rope displacement
$\lambda$: angle between $\overrightarrow{B_1D}$ and the horizontal axis
$q_2$: boom angle with the horizontal axis
Table 2. Initial angular displacements and angular velocities of ropes:
Rope        Initial angle (rad)     Initial velocity (rad/s)
Hoist rope  $q_4[0] = -0.0437$      $\dot{q}_4[0] = 0.0$
Dump rope   $q_5[0] = 0.2831$       $\dot{q}_5[0] = 0.0$
Drag rope   $q_6[0] = -0.4692$      $\dot{q}_6[0] = 0.0$
[1] Komatsu Company. Available: https://mining.komatsu/surface-mining/draglines, 2019. [2] G. Lumley, M. McKee. Mining for Efficiency, Technical Report (Mining Intelligence & Benchmarking), Pricewaterhouse Coopers, Sydney, Australia, 2014. [3] R. A. Carter. Moving and maintaining the world's biggest diggers. Engineering & Mining Journal, vol. 216, no. 11, pp. 40–59, 2015. [4] A. K. Kemp. Computerized system analyzed dragline performance prints out data. Coal Age, vol. 79, no. 9, pp. 92–97, 1974. [5] C. E. McCoy Jr, L. J. Crowgey. Anti-tightline control system for draglines used in the surface mining industry. In Proceedings of Conference Record, Industry Applications Society, IEEE-IAS Annual Meeting, Behrend College Graduate Center, Pennsylvania State University, Pennsylvania, USA, pp. 140–145, 1980. [6] N. Godfrey, A. Susanto. Partial automation of a dragline working in conjunction with a hopper/crusher/conveyor overburden removal system. In Proceedings of the 15th Large Open Pit Mining Conference, Institute of Electrical and Electronics Engineers, New York, USA, 1980. [7] H. L. Hartman. Introductory Mining Engineering, New York, USA: Wiley, Article number 633, 1987. [8] D. K. Haneman, H. Hayes, G. I. Lumley. Dragline performance evaluations for Tarong coal using physical modelling. In Proceedings of the 3rd Large Open Pit Mining Conference, The Australasian Institute of Mining and Metallurgy, Mackay, Australia, 1992. [9] S. W. P. Esterhuyse.
The Influence of Geometry on Dragline Bucket Filling Performance, Master dissertation, Stellenbosch University, Stellenbosch, South Africa, 1997. [10] P. F. Knights, D. H. Shanks. Dragline productivity improvements through short-term monitoring. In Proceedings of Institution of Engineers, Coal Handling and Utilization Conference, Sydney, Australia, pp. 100–103, 1990. [11] P. Corke, J. Roberts, G. Winstanley. Experiments and experiences in developing a large robot mining system. Experimental Robotics VI, P. Corke, J. Trevelyan, Eds., London, UK: Springer, pp. 183–192, 2000. DOI: 10.1007/BFb0119397. [12] J. Roberts, G. Winstanley, P. Corke. Three-dimensional imaging for a very large excavator. The International Journal of Robotics Research, vol. 22, no. 7–8, pp. 467–477, 2003. DOI: 10.1177/02783649030227003. [13] J. Kyle, M. Costello. Comparison of measured and simulated motion of a scaled dragline excavation system. Mathematical and Computer Modelling, vol. 44, no. 9–10, pp. 816–833, 2006. DOI: 10.1016/j.mcm.2006.02.015. [14] T. Yang, N. Sun, H. Chen, Y. C. Fang. Neural network-based adaptive antiswing control of an underactuated ship-mounted crane with roll motions and input dead zones. IEEE Transactions on Neural Networks and Learning Systems, to be published. DOI: 10.1109/TNNLS.2019.2910580. [15] H. J. Yang, M. Tan. Sliding mode control for flexible-link manipulators based on adaptive neural networks. International Journal of Automation and Computing, vol. 15, no. 2, pp. 239–248, 2018. DOI: 10.1007/s11633-018-1122-2. [16] G. W. Zhang, P. Yang, J. Wang, J. J. Sun, Y. Zhang. Integrated observer-based fixed-time control with backstepping method for exoskeleton robot. International Journal of Automation and Computing, to be published. DOI: 10.1007/s11633-019-1201-z. [17] M. Ponnusamy, T. Maity. Recent advancements in dragline control systems. Journal of Mining Science, vol. 52, no. 1, pp. 160–168, 2016. DOI: 10.1134/S106273911601025X. [18] Y. Liu, M. S. Hasan, H. N. 
Yu. Modelling and remote control of an excavator. International Journal of Automation and Computing, vol. 7, no. 3, pp. 349–358, 2010. DOI: 10.1007/s11633-010-0514-8. [19] X. M. Niu, G. Q. Gao, X. J. Liu, Z. D. Bao. Dynamics and control of a novel 3-DOF parallel manipulator with actuation redundancy. International Journal of Automation and Computing, vol. 10, no. 6, pp. 552–562, 2013. DOI: 10.1007/s11633-013-0753-6. [20] N. Demirel, S. Frimpong. Dragline dynamic modelling for efficient excavation. International Journal of Mining, Reclamation and Environment, vol. 23, no. 1, pp. 4–20, 2009. DOI: 10.1080/17480930802091166. [21] Y. Li, W. Y. Liu. Dynamic dragline modeling for operation performance simulation and fatigue life prediction. Engineering Failure Analysis, vol. 34, pp. 93–101, 2013. DOI: 10.1016/j.engfailanal.2013.07.020. [22] T. R. Kane, D. A. Levinson. Dynamics: Theory and Applications, New York, USA: McGraw-Hill, 1985. [23] A. K. Banerjee. Flexible Multibody Dynamics: Efficient Formulations and Applications, West Sussex, UK: John Wiley and Sons, Ltd, 2016. [24] M. Wardeh. Computational Dynamics and Virtual Dragline Simulation for Extended Rope Service Life, Ph. D. dissertation, Missouri University of Science and Technology, Rolla, USA, 2018. [25] J. Baumgarte. Stabilization of constraints and integrals of motion in dynamical systems. Computer Methods in Applied Mechanics and Engineering, vol. 1, no. 1, pp. 1–16, 1972. DOI: 10.1016/0045-7825(72)90018-7.
Muhammad A. Wardeh 1,2, Samuel Frimpong 1
1. Department of Mining and Nuclear Engineering, Missouri University of Science and Technology, Rolla 65409, USA
2. Center for Infrastructure Engineering Studies, Missouri University of Science and Technology, Rolla 65409, USA
Accepted Date: 2019-12-18 Available Online: 2020-03-05
The global increase in energy demands requires energy producers to develop methods and techniques that expedite the exploration and extraction of energy minerals. Strip mining is one such method in the surface coal mining industry. It is primarily used to excavate overburden (waste materials) and to expose the coal seams beneath the overburden. Dragline excavators play a key role in the stripping method due to their unique designs, large bulk handling capacity, and quick return on investment. Draglines are a special kind of cable-driven robotic excavator that operate using wire ropes, which reel in/out on sheaves and carry suspended loads of between 50 tons and 500 tons[1]. The ropes and rigging, boom, bucket, and their support structures are termed the front-end assembly, as depicted in Fig. 1 (b). Dragline operations are cyclic in nature and include digging by dragging the bucket to excavate materials, lifting and swinging, and dumping the materials in the bucket.
During the digging phase, the drag rope retracts while the hoist rope pays out to engage the bucket into the bench and achieve proper filling with the materials. During the dumping phase, the change of rope lengths is reversed, and the machine house swings to lift and maneuver the bucket for dumping on the spoil area, as shown in Fig. 1 (a). The dragline cycle constitutes the following phases: empty-bucket swinging back, dragging the bucket to excavate material, filled-bucket swinging, and dumping. A cycle usually lasts about 60 s, depending on operator skill, as well as machine availability, utilization, and bucket filling efficiency. Several researchers have investigated dragline performance and developed models to monitor its key performance indicators (KPIs). KPIs are critical parameters for defining dragline mining efficiency, including mine production targets, equipment reliability, maintenance schedules, and workplace safety and health[2]. For example, the dragline suspended load can be monitored instantly to ensure that the operator runs the machine within its limits. Other KPIs include, but are not limited to, bucket pose, swinging angle, and sequence of operation[3]. Kemp[4] used an on-board sensor monitor that tracks machine variables, such as total power consumption, hoist and drag rope wear indices, and swinging angle. A minicomputer prints out data on production and delays, and helps reduce power consumption per unit cost of operation. McCoy Jr. and Crowgey[5] designed static and dynamic anti-tight-line control systems that use signals derived from the ropes′ variable lengths and speeds. The systems can control the bucket movement and provide corrective actions that prevent it from colliding with the machine. Godfrey and Susanto[6] developed mathematical models that describe the static and dynamic characteristics of the hoist, drag, and swing drive systems.
Characteristics that pertain to the hoist and drag drives include magnetic and electrical models of the generators and motors, electrical torque generation, mechanical losses, inertia, and forces in the ropes. The characteristics of the swing drive include variable inertia, inclination of the tub of the machine house, and the pendulum effect of the bucket, but not the Coriolis and centrifugal forces. Their kinematic analysis showed general agreement with the physical model. However, these mathematical models cannot be extended to further analysis due to the limitations of their stated assumptions. A dragline weighs from 500 to 13 500 metric tons, and its boom length ranges from 100 ft to 400 ft. A dragline is a very capital-intensive investment for overburden stripping in surface coal mining operations, with very high operating costs[7]. This equipment can only be designed and fabricated upon customer request. Its productivity, the amount of material moved per hour (bucket size/hour), and its operating cost per ton ($/ton) are greatly impacted by poor mining processes, unscheduled maintenance, and breakdowns. There are also multiple factors that affect dragline productivity, such as cycle time, bucket size, availability, and utilization. Different scaled physical prototypes have been developed in order to measure the effect of each factor on productivity. Haneman et al.[8] performed multiple physical tests on small-scale and large-scale buckets to assess their effect on performance. The authors found that large-scale models do not affect the filling behavior, but they improve the productivity when compared to the field data. Esterhuyse[9] investigated the effects of bucket geometry on the filling behavior using a scaled-down physical prototype of a dragline. His findings indicate that filling behavior is the same for different bucket geometries.
Knights and Shanks[10] reported an increase in dragline productivity of 2% with the aid of short-term monitoring of a dragline bucket. Corke et al.[11] and Roberts et al.[12] tested a physical prototype of a dragline. The authors used a real-time kinematic (RTK)-GPS receiver to track the operations of a dragline and, with a 3D digital terrain map, were able to perform 50 autonomous cycles. Kyle and Costello[13] constructed a 1/16-scale dragline physical prototype to capture the dynamic effects in the bucket during digging. They also formulated a simplified analytical model that uses discrete-element ropes and three Euler angles for the bucket orientation. Their model has shown good agreement for the bucket motion and rope lengths, except for the bucket and hoist pitch dynamics due to unmodeled damping effects. Yang et al.[14] developed a dynamic model for a ship-mounted crane with adaptive anti-swing control. The controller design uses a double-layer neural-network structure to handle issues associated with dead zones and unidirectional input constraints. Lyapunov-based functions are used to prove the stability of the system and the convergence of the payload swing angle. According to the authors, payload swing is suppressed to zero degrees and rope positioning is achieved in finite time. Yang and Tan[15] designed a sliding mode control scheme to control the position of a single flexible-link manipulator based on an adaptive radial basis function (RBF) neural network. The authors used Lyapunov′s method based on an infinite-dimensional model to ensure the stability of the closed-loop system. Zhang et al.[16] developed second-order sliding mode controllers and a fixed-time disturbance observer for a 5 degrees of freedom (DOF) exoskeleton robot. The stability of the position and velocity subsystems is analyzed using Lyapunov theory. According to the authors, these designs achieve fast convergence and excellent tracking performance.
With recent developments in power devices, dragline control has evolved further, and new adaptive parameter identification and predictive control techniques are used in active front end (AFE) rectifiers[17]. Liu et al.[18] used computed torque control (CTC) and robust control (RC) approaches to control the arm of a hydraulic excavator. The authors show that the RC reduces the tracking error and improves the excavator performance. Niu et al.[19] applied adaptive sliding mode control (SMC) to handle the stability issues in a three degrees of freedom parallel manipulator with actuation redundancy. The authors claim a substantial improvement in the trajectory of the end-effector when the controller is coupled with a synchronization error. Demirel and Frimpong[20] used a vector approach and the simultaneous constraint method (SCM) to construct a 2D planar kinematic model of the dragline during the digging phase. However, their model is limited to a 6-link closed kinematic chain, and the bucket and its rigging are modeled as a single rigid element. Li and Liu[21] formulated a 3-link closed-chain kinematic model of a dragline using a vector approach and SCM. The bucket and its rigging, as well as the boom sheave, are modeled as point masses. Research findings from the mathematical formulations of dragline kinematics contribute immensely to the body of knowledge and provide a basis for better machinery design and analysis. Unfortunately, these studies and their results do not capture a real-world representation of the dragline front-end assembly, nor do they provide accurate measures of the kinematic and dynamic behaviors in a 3D operational space. For example, the exclusion of a heavy mechanical element adversely impacts the results. Moreover, the lack of field data and the use of simplified physical models make the kinematic and dynamic analyses unrealistic.
Thus, it is critical to formulate a new kinematic model of a dragline robotic excavator that can realistically capture the real dragline kinematics during its operational cycle, overcome the shortcomings of previous models, and improve mining machinery design and analysis. This paper presents a new formulation of dragline kinematics using the concept of generalized speeds from the Kane method of Kane and Levinson[22]. The kinematic model is a 3-DOF model that captures the dragging and swinging motions of the bucket in a 3D working space. The model accounts for components excluded from previous models of the front-end assembly, such as the boom sheave, the rigging system, and the sliding effect of the bucket. Once the kinematic model is formulated, the solution is sought using a feedforward solution algorithm. Then, the solution approach is analyzed against singularity using a minimal set of constraint equations. Finally, the linear and angular displacements of the ropes and bucket are plotted and verified against field data. 2. Geometry of the dragline front-end assembly The dragline is a massive mining machine whose unique design allows excavating, hauling, and dumping waste materials in a cyclic fashion within the mine area. Fig. 1 shows a dragline with a tubular structural steel boom pinned at its foot to the machine house and holding a boom-point sheave at its farthest end. The boom is fixed at an angle of 30° to 40° using galvanized bridge strands. The machine house sits on a tub with rollers and can rotate 360° around an axis that passes through the center of its tub. The house contains electric drives for swinging, hoisting, dragging, and propelling, as well as the operator cabin. The machine house (B)①, boom (C), boom sheave (D), ropes (E, F, G and H), and bucket (H1) constitute the front-end assembly, as shown in Fig. 2. The kinematic modeling is carried out under the following assumptions: 1) The inertia reference frame is fixed in a Newtonian reference frame N.
2) The swinging axis is coincident with $ {\vec {b}_2}$ . It is, however, not coincident with the machine house center of mass (COM). This allows capturing the inertia effect during the swinging motion. 3) The machine house, boom, ropes, and chains are inextensible and rigid. The hoist and drag ropes are, however, changeable in length and weight in reaction to the duty cycle of the machine. 4) The angular displacement of each element in the loop is measured from the y-axis and is positive if the link rotates clockwise. 5) The machine house has already made an angular displacement of $ {q_1}.{\vec b_2}$ to position the bucket in front of the bank to start the excavation process. 6) The linear velocities of the drag and hoist ropes, as well as the swinging displacement, are known. The machine house height is modeled as a vector $ \overrightarrow {{B_1}{B_2}} $ , and is followed by the boom, whose length vector is $ \overrightarrow {{B_2}{C_1}} $ and which is inclined by a constant angle (q2) with respect to a global reference frame $ {\vec n_i}$ . The boom-sheave interaction is represented by the vector $ \overrightarrow {{C_1}{D^*}} $ . The orientations of the hoist, dump, and drag ropes are not constant and are defined by $q_4 $ , $q_5 $ and $q_6 $ , respectively. The dump rope $ \left( {\overrightarrow {{E_1}{F_1}} } \right)$ has a fixed length during the filling and full-bucket swinging phases. During the digging phase, the position and orientation of the bucket are defined by the vector $ \left( {\overrightarrow {{E_1}{H_1}} } \right)$ and the angle ($q_5 $ ), respectively. During the full-bucket swinging phase, its motion and orientation are defined by the angles (${q_4},\;{q_5},\;{q_6},\;{q_7} $ and ${q_8} $ ). 3. Kinematic model of the front-end assembly The productivity of any excavator is a measure of the amount of material excavated and moved in tons per hour. This productivity is driven by a trajectory that the bucket should follow within machine reach and mine constraints.
This motion resembles the orientations and paths that a rigid multilink robot end-effector must follow from an initial position to a desired one in order to reach the required productivity. However, this path planning is not trivial in the case of a multilink robot-like excavator that has ropes or cables. The search for an optimal path that meets productivity targets becomes a challenging task, especially when the system is an under-actuated closed-loop mechanism. An under-actuated closed-loop mechanism has fewer inputs than DOFs. For these mechanisms, if the solution is to find the joint-space angles for a given bucket pose, the analyst may be tempted to exclude some important components of the mechanism. Consequently, this exclusion reduces the accuracy of the model and its capability of reflecting the real motion. The solution approach starts with the definition of a set of constraint equations that guarantee loop closure of the front-end assembly. Numerical procedures are developed with the aid of user-defined functions in Mathematica. At the beginning of digging, the configuration constraint equations are first solved in order to find the initial conditions of the rope and bucket trajectories. Input data for the mathematical models are given in Table 1 for a Marion 7800 dragline, and parameter notations are given in the Appendix (Table A1). Table A1. Notation of the elements in the vector loop 3.1. Configuration constraint equations Fig. 2 shows the closed-loop mechanism of the dragline front-end with three specified generalized coordinates q1, q7 and q8 and three dependent ones q4, q5 and q6. Generalized coordinates q1, q7 and q8 represent the angular displacement of the machine house, the linear displacement of the drag rope, and the linear displacement of the hoist rope, respectively. These entities are known functions of time, and they are defined from field observations of the machine during operation.
For example, the linear displacements of the hoist and drag ropes during digging are 2.54t+75 and –1.32t+75, respectively. The negative sign indicates that the drag rope is retracted during digging. The loop must satisfy the configuration constraint equations as provided in (1) and (2) and an additional equation (3) defined by an imaginary link D1F1. $$ \begin{split} & \left( {{L_5} + {L_6}} \right){c_2} + {L_7}{s_2} + {L_8}{c_4} - \\ & \qquad {q_8}{s_4} - \left( {{L_9} + {L_{10}}} \right){c_5} - {q_7}{c_6} = 0 \end{split} $$ $$ \begin{split} & {L_0} + \left( {{L_5} + {L_6}} \right){s_2} - {L_7}{c_2} - {L_8}{s_4} + \\ & \qquad {q_8}{c_4} - ({L_9} + {L_{10}}){s_5} + {q_7}{s_6} = 0 \end{split} $$ $$ \begin{split} & - 11\, 986.81 + 220c\left( {\frac{{37\pi }}{{180}} + {q_6}} \right){q_7} - q_7^2 +\\ & \quad\quad q_8^2 - 21c\left( {1.57 + {q_4} + {q_5}} \right)\sqrt {2.94 + q_8^2} = 0. \end{split} $$ Equations (1)–(3) are nonlinear, non-differential algebraic equations (AE) and are commonly found in the kinematic analysis of mechanisms or robots. These equations must be converted from AE to differential algebraic equations (DAE) by differentiating once with respect to time. Li denotes the length of the i-th link in the front-end assembly, and ci and si are abbreviations of the trigonometric functions cos(qi) and sin(qi), respectively. It is worth noting that the type of the mechanism and the functionality of its links affect the structure of the constraint equations. In other words, when a mechanical system has a closed-loop mechanism of many rigid and flexible links, it is very difficult to establish a geometrical relationship among them. For example, Fig. 3 (a) shows the same model with the vector loop of interest shown in dotted-line vectors, but some vectors are assembled to facilitate the analysis. However, this loop is not a pentagon because it does not have the properties of a pentagon.
Therefore, (3) is necessary to complement the analysis and interrelate the generalized coordinates. From Fig. 3 (a), it can be seen that wire flexibility is not included in this kinematic model. This assumption is valid during the digging phase and is based on the shapes of the hoist and drag ropes. Field observations show that the hoist rope is straight under the effect of the bucket weight, as is the drag rope under the effect of the direct tension provided by the drag motors. After filling the bucket, the operator partially releases the drag clutch and fully engages the hoist motors, which allows the drag rope to sag. Rope flexibility can be modeled using a finite-segment or finite-element approach, or by introducing a component model representation with cubic polynomial shape functions. The first method relies on discretizing the rope into a finite number of rigid links with torsional springs at the nodes between two elements, as shown in Fig. 3 (b), while the second approach depends on selecting modal displacements and modal coordinate matrices whose elements are functions of position vectors and time. For the first method, it is suggested to lump the masses at the nodes to eliminate the effect of the rigidity of the links. The stiffness, at the first discretized point, can be calculated by equating the bending moment at that point subject to a tip load P to the restoring moment of the spring at the joint (Banerjee[23]). For joint 1, at distance x, the stiffness coefficient κ of the spring and the deflection can be found as given in (4) and (5): $$ {\kappa _1}{\theta _1} = P\left( {nL - x} \right) $$ $$ {\theta _1} = \frac{{{\delta _2}}}{L} $$ where δ2 is the deflection at joint 2 of the rope, which can also be computed using linear beam theory, as given in (6): $$ {\delta _2} = \frac{P}{{6EI}}{x^2}\left( {3nL - x} \right) $$ where E is the elastic modulus and I is the second moment of area of the rope cross-section. Now, for the i-th joint, the relative rotation θi can be calculated using the finite difference method, as given in (7).
$$ {\theta _i} = \frac{{\left( {{\delta _{i + 1}} - 2{\delta _i} + {\delta _{i - 1}}} \right)}}{L}. $$ Other approaches exist for modeling rope flexibility, but this topic is beyond the scope of this paper. 3.2. Initial condition search The configuration constraint equations resulting from loop closure are a set of nonlinear, non-differential algebraic equations that cannot be solved without differentiation. Thus, to solve the kinematic and dynamic equations of the dragline front-end assembly, these equations must be differentiated and integrated at every time step. By taking time derivatives of equations (1)–(3), the equations take a matrix form, as given in (8). $$ \begin{split} & \left[\!\!\!{\begin{array}{*{20}{c}} {\left( {75 \!+\! 2.54\, t} \right){c_4} \!- \!1.715\,{s_4}}\!\!&\!\!{10.5\,{s_5}}\!\!&\!\!{\left( {75\! -\! 1.32\, t} \right){s_6}}\\ { - 1.715\,{c_4}\, \!- \!\left( {75 \!+\! 2.54\, t} \right){s_4}}\!\!&\!\!{ - 10.5\,{c_5}}\!&\!\!{\left( {75\! -\! 1.32\, t} \right){c_6}}\\ {{{Z}_6}}&{{{Z}_6}\,}&{{{Z}_7}} \end{array}}\!\!\!
\right]\times\\ &\quad\quad \left[ {\begin{array}{*{20}{c}} {{{{\dot{q}}}_4}}\\ {{{{\dot{q}}}_5}}\\ {{{{\dot{q}}}_6}} \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} { - 1.32\,{c_6} - 2.54\,{s_4}}\\ { - 2.54\,{c_4} + 1.32\,{s_6}}\\ {{{Z}_8}} \end{array}} \right]\\[-22pt] \end{split} $$ where Z6, Z7 and Z8 are intermediate variables, which can be written as $$ \begin{split} & {Z_6} = 21{\rm{}}\sqrt {2.94 + {{\left( {75 + 2.54t} \right)}^2}} s\left( {1.57 + {q_4} + {q_5}} \right)\\ & {Z_7} = 220\left( {75 - 1.32t} \right)s\left( {\frac{{37{\rm{\pi }}}}{{180}} + {q_6}} \right)\\ & {Z_8} = 2.64\left( {75 - 1.32t} \right) + 5.08{\rm{}}\left( {75 + 2.54t} \right) - \\ & \;\;\;\;\;\;\;\;\;\;\;\;\frac{{53.34\left( {75 + 2.54t} \right)c\left( {1.57 + {q_4} + {q_5}} \right)}}{{\sqrt {2.941 + {{\left( {75 + 2.54t} \right)}^2}} }} - \\ & \;\;\;\;\;\;\;\;\;\;\;290.4c\left( {\frac{{37{\rm{\pi }}}}{{180}} + {q_6}} \right) \end{split} $$ where $ {\dot q_i}$ is the time derivative of the i-th generalized coordinate. By inspecting (8), one can see that the linear displacements of the ropes are also included. The solution is achieved by integrating (8) over the interval [0, tdig] using a good guess of the trajectories q4, q5 and q6. Equation (8) will have a solution only when the Jacobian is a full-rank matrix. If the Jacobian is rank deficient, singularity will result during the integration. Singularity means that two or more links in the mechanism are coincident and the mechanism becomes very stiff. For more details about the constraint equations and the initial condition search, refer to Wardeh[24]. 3.3. Singularity of the dragline front-end mechanism Singularity is a common issue in the kinematic analysis of multibody systems when integrating a set of nonlinear constraint differential equations over a time interval [0, t]. During the integration, the step size becomes very small at the singular position, and the algorithm fails to converge to a solution with minimal error.
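This rank condition can be monitored numerically during integration. The following Python sketch (the computations in this work use Mathematica; the conditioning tolerance here is an assumed illustrative value, not one from the paper) transcribes the Jacobian entries of (8) and flags configurations where the integrator is likely to stall:

```python
import numpy as np

def jacobian(q4, q5, q6, t):
    """Constraint Jacobian transcribed from (8).
    Z6 fills two columns of the last row because q4 and q5 enter
    constraint (3) only through the sum q4 + q5."""
    q8 = 75 + 2.54 * t                  # hoist rope length during digging
    q7 = 75 - 1.32 * t                  # drag rope length during digging
    Z6 = 21 * np.sqrt(2.94 + q8**2) * np.sin(1.57 + q4 + q5)
    Z7 = 220 * q7 * np.sin(37 * np.pi / 180 + q6)
    return np.array([
        [q8 * np.cos(q4) - 1.715 * np.sin(q4),   10.5 * np.sin(q5),  q7 * np.sin(q6)],
        [-1.715 * np.cos(q4) - q8 * np.sin(q4), -10.5 * np.cos(q5),  q7 * np.cos(q6)],
        [Z6,                                     Z6,                 Z7],
    ])

def is_near_singular(J, tol=1e6):
    """Flag a near-singular configuration via the condition number;
    the threshold is an assumption for illustration."""
    return np.linalg.cond(J) > tol

# Evaluate at the initial guess of Section 3.3 at t = 0 s
J = jacobian(q4=10 * np.pi / 180, q5=30 * np.pi / 180, q6=-30 * np.pi / 180, t=0.0)
print(np.linalg.matrix_rank(J), is_near_singular(J))
```

At a singular position, where the drag and dump ropes become coincident, the rank drops below 3 and the condition number blows up, which corresponds to the step-size collapse of the integrator.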
Singular positions can be found by setting the determinant of the Jacobian matrix to zero. From a geometrical point of view, singularity of the dragline closed loop can result when the drag and dump ropes are coincident, making an angle of either 0° or 180°, as shown in Fig. 4. Singularity can be eliminated by applying appropriate numerical constraints to the solution algorithm. However, adding more constraints to the solution space would make the numerical model stiffer and may not lead to a quick solution. Thus, one must carefully restrict the parameters that greatly affect the solution to upper and lower bounds. This approach is equivalent to an optimization method with inequality constraints. For example, the kinematic constraints, which are the limits on the lengths of both hoist and drag ropes, are added to the solution algorithm and become the inequality constraints in (9). $$ \left. \begin{array}{l} 61.67 \le {q_7} \le 75.00\\ 75.00 \le {q_8} \le 100.18 \end{array} \right\} $$ where q7 and q8 are the linear displacements of the drag and hoist ropes, respectively. These are given functions of time. Starting the integration with a set of initial conditions, ${q_{4.1}}[0]=\dfrac{10\pi}{180} $ , ${q_{5.1}}[0]=\dfrac{30\pi}{180} $ , and ${q_{6.1}}[0]=\dfrac{-30\pi}{180} $ , whose values do not violate the machine′s limits, singularity appeared at t = 1.52 s of the integration time, as shown in Fig. 5 (a). It can be concluded that this initial guess makes the system stiffer and hinders the solution progress of the numerical solver. Integrating (8) over the interval [0, 10] s was performed successfully using zero initial conditions for the trajectories q4, q5 and q6, as given in Fig. 5 (b). The singularity problem was not an issue, but these trajectories are not bounded to the operational limits of the machine. Leaving the search space unbounded with a zero initial guess of the trajectories leads to violations and an undesirable evolution of the trajectories over time.
The reason is that, in the next step of the solution, the algorithm quickly tries to find consistent initial values to reduce the residual error. In addition, the initial conditions must be consistent and must satisfy constraint equations and their derivatives. It is, therefore, recommended to provide consistent initial conditions for all variables and their time derivatives. From these experiments, it can be concluded that there is a trade-off between the initial value search and numerical stability of the applied method. In addition, the resulting trajectories are not consistent with the real-world dragline kinematics due to the presence of hidden constraints upon differentiation of (8). The hidden constraints are shown in the right hand side (RHS) of (8) and they can be determined using the second term in the RHS of (10). $$ \frac{{\rm d}}{{{\rm d}t}}\left( {{{{F}}_j}\left( {{{{q}}_{{i}}}\left( t \right),{{t}}} \right)} \right) = J\left( {{{q}}\left( {{t}} \right),{{t}}} \right).{\dot{ q}}\left( {{t}} \right) + {{{h}}_t}\left( {{{q}}\left( {{t}} \right),{{t}}} \right) = 0. $$ Further differentiation will accumulate the numerical error due to the generation of additional hidden constraints as shown in (11). $$ \begin{split} \frac{{{{\rm d}^2}}}{{{\rm d}{t^2}}}\left( {{{{F}}_j}\left( {{{{q}}_{{i}}}\left( t \right),{{t}}} \right)} \right) =\, & J\left( {{{q}}\left( {{t}} \right),{{t}}} \right).{\ddot{ q}}\left( {{t}} \right) + {{{J}}_q}\left( {{{q}}\left( {{t}} \right),{{t}}} \right). \left( {\dot q . 
\dot q} \right) +\\ & {{{h}}_{tt}}\left( {{{q}}\left( {{t}} \right),{{t}}} \right) + 2{{{h}}_{tq}}\left( {{{q}}\left( {{t}} \right),{{t}}} \right).{\dot{ q}}\left( {{t}} \right) = 0 \end{split} $$ $$ \begin{split} & {{{J}}_q}\left( {{{q}}\left( {{t}} \right),{{t}}} \right)=\frac{{\partial {{J}}\left( {{{q}}\left( t \right),{{t}}} \right)}}{{\partial q}}\\ & {{{h}}_{tt}}\left( {{{q}}\left( {{t}} \right),{{t}}} \right) = \frac{{\partial ({h_t}\left( {{{q}}\left( t \right),{{t}}} \right))}}{{\partial t}}\\ & {{{h}}_{tq}}\left( {{{q}}\left( {{t}} \right),{{t}}} \right) = \frac{{\partial \left( {{h_t}\left( {{{q}}\left( t \right),{{t}}} \right)} \right)}}{{\partial q}}. \end{split} $$ 4. Solution approach 4.1. Baumgarte′s stabilization technique (BST) To overcome the stability issue of the numerical solver and to improve the accuracy of the results, the integration must include the constraints (8), as well as their first and second time derivatives, as per equations (10) and (11). This inclusion is called Baumgarte′s stabilization technique, following Baumgarte[25], and is given by (12). $$ {\ddot{ F}} + {\alpha _B}.{\dot{F}} + {\beta _B}.{{F}} = 0 $$ where αB and βB are user-defined parameters that must satisfy the rule given in (13): $$ {\alpha _B} \ge 0\; \& \; {\alpha _B}^2 = 4{\beta _B}. $$ The benefit of using BST is the reduction of the numerical error from the hidden constraints at the velocity and acceleration levels. As will be seen in the next section, the application of Baumgarte′s technique solves the system singularity problem and improves the accuracy of the resulting trajectories. 4.2. Feedforward displacement algorithm The productivity of a dragline excavator depends on the capacity and trajectory of its bucket, cycle time, and other factors. A reduction in cycle time by a few seconds can result in significant production increases with reductions in production costs.
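The damping effect of (12) under the critical choice (13) can be illustrated on a single constraint residual. This is a minimal Python sketch (the implementation in this work uses Mathematica), assuming an arbitrary initial constraint drift of 0.05:

```python
def integrate_residual(alpha_B, e0=0.05, edot0=0.0, dt=1e-3, T=5.0):
    """Integrate the stabilized residual dynamics F'' + aB*F' + bB*F = 0
    of (12), with bB = aB**2 / 4 from (13). Setting alpha_B = 0 reproduces
    the unstabilized case, where an initial constraint drift never decays."""
    beta_B = alpha_B**2 / 4.0
    e, edot = e0, edot0
    for _ in range(int(T / dt)):        # semi-implicit Euler integration
        edot += dt * (-alpha_B * edot - beta_B * e)
        e += dt * edot
    return e

print(integrate_residual(alpha_B=0.0))    # drift persists
print(integrate_residual(alpha_B=10.0))   # drift driven toward zero
```

With αB = 0 the residual retains its initial drift, while the critically damped choice drives it to zero without oscillation, which is the drift reduction at the velocity and acceleration levels noted above.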
The operations, therefore, must be performed using robust control schemes, which guarantee a short path and an effective filling of the bucket. Accurate bucket motion planning requires solving the inverse kinematics problem of the closed loop of the dragline front-end assembly. The inverse kinematics analysis resolves the unknown rope trajectories that are required to move the bucket through a desired path. It is a mapping between the task-space velocities of the bucket (end effector) and the angular velocities of the ropes (joint velocities). This definition is already shown in (8), and it requires substantial work for the kinematics model shown in Fig. 2. Instead of building the inverse kinematics from the closed-loop geometry, it is possible to derive the trajectories using the following scheme, based on the Newton method in Mathematica:
1) Input: the list of constraint algebraic equations G(qi) = 0, the kinematics model parameters, and the bucket trajectory based on q7 and q8
2) Initialize: trajectories q4init, q5init and q6init
3) Define trajectory vectors: q4 = {}, q5 = {} and q6 = {}
4) Set up: start time t0, end time tfinal, time increment dt, and max-error
5) Loop: While t0 < tfinal do
   Set the maximum rotational displacement to $\dfrac{\pi}{2}$;
   Find root: qiinit = Solve{G(qi(t = t0))} using the Newton method
   Update q4init = q4(t0), q5init = q5(t0), and q6init = q6(t0)
   Find the remainder of qinit: qinit = Mod[qiinit, $\dfrac{\pi}{2}$]
   Update q4Trjct = q4(t0), q5Trjct = q5(t0), and q6Trjct = q6(t0)
   maxerror1 = Max[Abs[G(qi(t))], t = t0];
   t0 = t0 + dt;
   If maxerror1 > max-error, then max-error = maxerror1;
   Interpolate: qiInterp = Interpolation[qiTrjct, InterpolationOrder]
   If qimin < qiInterp and qiInterp < qimax, then qi = qiInterp;
   Replace the constraint equations G(qi) = 0 by equation (12) and redo Step 5
6) Plot: output qi and maxerror1
If the algorithm outputs interpolated trajectory functions whose values are not consistent with the machine's operational limits, the structure of the constraint
equations or the initial trajectory guess must be changed to satisfy the consistency conditions. The output of the solution algorithm is used directly to calculate the velocities and accelerations of the links. These kinematic entities can be used to compute the forces and torques that actuate the dragline front-end mechanism. A solution of the kinematics model can also be obtained using neural networks, which are usually trained against field data to provide acceptable values of the trajectories. However, their application can result in over-fitting and requires model regularization or simplification. Over-fitting is not an issue in the solution algorithm due to the robustness of the kinematics model, the selection of appropriate initial conditions based on machine operational limits, the application of penalty constraints on the rope trajectories, and the ability to converge to a solution with minimal residual. The trajectories do not necessarily increase or decrease at the same rate in each cycle of operation, but they must be consistent with the machine's limits. In some cases, at the beginning of digging, the operator may throw the bucket into zones beyond the boom point sheave or accelerate the bucket during filling. Therefore, there is no unique solution for the trajectory's evolution with time. As a result, it is not recommended to solve this model using machine learning techniques. The solution algorithm takes only 2.125 s to compute the initial values of the trajectories of the ropes and 1.095 s to evaluate the trajectories over 10 s of digging. The machine used for the computation is a personal computer with 8 GB of RAM and an Intel Core (TM) i7-4710HQ CPU at 2.5 GHz. This confirms the robustness of the solution algorithm and the efficiency of using Baumgarte's stabilization technique (BST) to reduce the error. 5. Results and discussions 5.1.
Verification of the solution scheme Although the inverse kinematics is solved directly from the joint constraint equations and their first and second derivatives, at the velocity and acceleration levels, it is necessary to verify the model against the resulting errors. The kinematics model must also generate acceptable trajectories that meet the machine's operational limits. These limits are determined by the fixed structures, the maximum reach of the bucket, and the dynamic loading on the dragline structural components. Constraints (1)–(3) can be chosen as system invariants for calculating the numerical tolerance. An invariant means that the system properties do not change over time due to an external disturbance. Since these equations serve as a mathematical model that defines the geometric relationship among the links, their variations must be within the machine's design standards. Fig. 6 shows the plots of the numerical errors that resulted from the violation of the geometric constraint equations. The absolute error of each variable (q4, q5 and q6) is returned at the end of each numerical experiment. It can be seen that the trajectory's absolute error in each of the hoist and dump ropes is well bounded, but the drag rope trajectory error increases with time and reaches its maximum value at 5 s. It can be noticed that the errors of all trajectories are less than $10^{-4}$ degrees, which proves the accuracy of the kinematics model and the robustness of the solution scheme. 5.2. Verification of the machine operational limits Fig. 7 shows the trajectories generated using the BST for two sets of parameters: $\alpha_B = 1$, $\beta_B = 0.25$ and $\alpha_B = 6$, $\beta_B = 9$. From the experiments, it was noticed that small values of the parameters improve the results, whereas higher values tend to destabilize the algorithm. This behavior is due to the structure of the constraint equations and their derivatives, which are solved together as given by (8).
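The per-step residual check used in this verification (the maximum absolute violation of G(qi) = 0 after each solve) can be reproduced on a toy closed-loop constraint. The sketch below is a hypothetical stand-in for the scheme of Section 4.2, not the dragline model itself: Python/NumPy instead of Mathematica's root finder, a planar two-bar linkage instead of the front-end assembly, and an invented target path p(t). What it shares with Step 5 of the algorithm is the time sweep, a Newton solve at each increment, and the warm start from the previous step's solution.

```python
import numpy as np

def newton(G, J, q0, tol=1e-12, maxit=50):
    """Plain Newton iteration for G(q) = 0 with Jacobian J(q)."""
    q = np.asarray(q0, dtype=float).copy()
    for _ in range(maxit):
        r = G(q)
        if np.max(np.abs(r)) < tol:
            break
        q -= np.linalg.solve(J(q), r)
    return q

def sweep(t0=0.0, tf=1.0, dt=0.05):
    """Feedforward sweep on a toy constraint: a two-bar linkage with unit link
    lengths whose end point must track p(t) = (1.2 + 0.3 t, 0.4). Unknowns are
    the joint angles q = (q1, q2); each step is warm-started from the last."""
    def G_and_J(t):
        target = np.array([1.2 + 0.3 * t, 0.4])
        def G(q):
            return np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                             np.sin(q[0]) + np.sin(q[0] + q[1])]) - target
        def J(q):
            s1, c1 = np.sin(q[0]), np.cos(q[0])
            s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
            return np.array([[-s1 - s12, -s12],
                             [ c1 + c12,  c12]])
        return G, J
    q = np.array([-0.5, 1.7])            # initial guess near the t = 0 pose
    traj, maxerror = [], 0.0
    for t in np.arange(t0, tf + 1e-12, dt):
        G, J = G_and_J(t)
        q = newton(G, J, q)              # warm start = previous solution
        traj.append(q.copy())
        maxerror = max(maxerror, np.max(np.abs(G(q))))
    return np.array(traj), maxerror
```

Because each solve starts from the previous step's solution, Newton converges in a few iterations per step and the residual stays near machine precision, which is the same bounded-error behavior reported for the hoist and dump rope trajectories.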
It can be seen that $\alpha_B = 1$ and $\beta_B = 0.25$ minimize the influence of the configuration constraint equations, while not changing the velocity equations. These values resulted in trajectories that follow the real behavior and meet the machine's operational limits, as shown in Fig. 7 (a). For example, after 1 s of digging, there is initial tension in the hoist rope due to its self-weight and the bucket weight. By then, the operator has already engaged the drag motor clutch to correctly position the bucket against the bank. These actions control the orientations of the ropes with respect to the vertical axis in their local reference frames. Consequently, the hoist and drag ropes make angles of –10° and –20°, while the dump rope has already made –60°, since it has more mobility than the other ropes. Fig. 7 (b) analyzes the effect of selecting higher values of Baumgarte's parameters on the solution accuracy. These trajectories are not representative of the angular motions of the dragline ropes. Also, there are no quick variations in rope displacement over the short interval [0, 1] s. Thus, BST does not improve the simulation results when larger parameter values are used in this model. However, the technique may work efficiently with higher values in different mechanisms. The initial conditions of the ropes' angular displacements and their velocities are listed in Table 2. 5.3. Validation of the kinematics model The kinematics model of the dragline's closed-loop mechanism must also be validated against real-world data. The numerical simulations are performed during the digging phase to predict the evolution of the rope trajectories with time. Fig. 8 (a) shows that both the hoist and drag ropes have an initial length of 75 m at the beginning of digging, when the bucket is empty. As the bucket engages the bank, the hoist rope reels out and the drag rope is paid out by engaging the drag motor clutch and releasing the hoist motor clutch.
The operator performs these actions simultaneously to avoid toppling the bucket and to guarantee proper filling behavior. In the course of digging, the bucket becomes partially submerged in the bank and slides on the digging face. This control by the operator makes the hoist rope longer, reaching a maximum length of 100 m and a maximum angular displacement of –27° at 10 s, as given in Fig. 8 (b). The drag rope, however, becomes the shortest, at 61 m, when it has already made a maximum angular displacement of –65° at 10 s. The bucket and its rigging system have a trajectory defined by the trajectory of the dump rope, since the bucket and dump rope are assumed to be rigidly connected. This assumption is valid and is based on field observations within the digging phase, where these links do not change their orientations. From Fig. 9, it can be noticed that the angular displacement of the dump rope varies more rapidly than the angular displacements of the other ropes, especially in the time interval [0, 4] s. This is an important feature for a mining manipulator operated by ropes, such as a dragline. It means a better orientation of the bucket and a reduction in cycle time, which results in higher productivity. The maximum absolute value of the angular displacement of the dump rope is 25° at 6 s. This displacement varies only slightly after 6 s, to avoid spillage of the filled material from the bucket. As soon as the bucket is filled, the operator switches the clutches of the hoist and drag motors and engages the swinging motors to rotate the assembly toward the dumping area. At this time, the hoist rope retracts and the drag rope reels out due to reversing the displacements q7 and q8. In addition, a sufficient angular displacement q1 of the machine house lifts the filled bucket off the ground in a 3D space below the boom. Fig. 10 shows the machine house and its front-end moving the bucket under the effect of the three displacements q1, q7 and q8.
At the end of the full-bucket swinging, the hoist and drag ropes have rotated 25° and 15°, while the machine house has rotated –80°. As a result, the bucket is positioned below the boom point sheave and the operator is ready to dump the material in the spoil area. This work advances the dragline research frontiers by augmenting the number of links in the front-end assembly, applying a new kinematics formulation, and improving the accuracy of the trajectory estimates for every rope. These improvements have a positive impact on the mining industry through understanding the underlying effects of digging scenarios on the endurance of the dragline ropes and correlating it to the availability and productivity of the dragline and the economically useful life of the ropes. In other words, selecting appropriate trajectories can help not only to increase productivity with a reduction in cycle time, but also to reduce wear and tear in the wire ropes. As a result, failures in the ropes can be significantly reduced to ensure an efficient and reliable dragline operating system. Kinematics analysis of the closed-loop mechanisms of the dragline is an area of research that has not been studied in detail in the literature. Most kinematic mechanisms that have closed-loop structures are solved by splitting the mechanism at joints to convert it into two open-chain mechanisms. This results in additional unknowns and makes the analysis lengthy and computationally inefficient. This paper studies the kinematics of the closed-loop system of a dragline excavator using the method of generalized speeds. It also forms a basis for performing a full multibody dynamic analysis using Kane's method. The closed-loop dragline front-end assembly is an under-actuated mechanism, which has more degrees of freedom than actuators. The initial conditions search and the inverse kinematics are carried out by solving the configuration constraint equations using the feedforward solution algorithm.
Singularity was eliminated by augmenting the Jacobian through coupling the solution algorithm with Baumgarte′s stabilization technique. The kinematics model can produce accurate trajectories of the machine wire ropes and its bucket motion in a 3D operational space. It was shown that no interference is detected between the machine′s fixed and moving components. The bucket motion was successfully simulated during digging and full-bucket swinging phase. The kinematics model will be used in the dynamic analysis in a forthcoming paper. It can also serve as a basis for further research in the area of robotics or mechanisms operated by ropes. The authors are grateful to the anonymous referees for their valuable inputs. The funding from the Robert H. Quenon Endowment at Missouri S&T for this research is also greatly acknowledged.
How to calculate the trace of a product of matrices taking advantage of its properties? Hello, I'm trying to compute the trace of a product of matrices like this $$\begin{equation} \begin{pmatrix} a+\imath b &c &0 \\ c& d &c \\ 0& c &f+\imath g \end{pmatrix}^{-1} \begin{pmatrix} -2b &0 &0 \\ 0 & 0 &0 \\ 0& 0 &0 \end{pmatrix} \begin{pmatrix} a+\imath b &c &0 \\ c& d &c \\ 0& c &f+\imath g \end{pmatrix}^{*-1} \begin{pmatrix} 0 &0 &0 \\ 0 & 0 &0 \\ 0& 0 &-2g \end{pmatrix} \end{equation}$$ where $*$ is the conjugate transpose and all the components are real. This case is very easy to compute, but if I want to do it for a 50x50 matrix (the first is a tridiagonal matrix with $d$ in the main diagonal except the $(1,1)$ and $(N,N)$ elements, and $c$ in the other two diagonals; the second and the fourth have non-zero elements only in the $(1,1)$ and $(N,N)$ positions) it takes a lot of time. Taking advantage of the symmetry, and of the fact that at the end only the last column is non-zero (and therefore only the element $(N,N)$ will contribute), is there a way to compute it faster?
Edit: The code I'm using for a 10x10 matrix is the following M = {{a + I b, c, 0, 0, 0, 0, 0, 0, 0, 0}, {c, d, c, 0, 0, 0, 0, 0, 0, 0}, {0, c, d, c, 0, 0, 0, 0, 0, 0}, {0, 0, c, d, c, 0, 0, 0, 0, 0}, {0, 0, 0, c, d, c, 0, 0, 0, 0}, {0, 0, 0, 0, c, d, c, 0, 0, 0}, {0, 0, 0, 0, 0, c, d, c, 0, 0}, {0, 0, 0, 0, 0, 0, c, d, c, 0}, {0, 0, 0, 0, 0, 0, 0, c, d, c}, {0, 0, 0, 0, 0, 0, 0, 0, c, f + I g}}; G1 = {{-2 b, 0, 0, 0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0}}; G2 = {{0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0, 0, 0, -2 g}}; Inverse[M].G1.Inverse[ConjugateTranspose[M]].G2 // ComplexExpand To make the tridiagonal matrices I'm using the command SparseArray[{Band[{1, 2}] -> c, Band[{1, 1}] -> d, Band[{2, 1}] -> c}, 10] // Normal and then copying it, changing only the two elements I need to change, and then I operate. matrix performance-tuning linear-algebra Daniel $\begingroup$ Do you want a symbolic answer in terms of $a,b,c,d,f,g$, or are you happy with a function that takes numerical values for them and returns the trace? The two will obviously perform rather differently. $\endgroup$ – Emilio Pisanty Aug 12 '16 at 14:31 $\begingroup$ @EmilioPisanty A symbolic answer, because each of these terms is a function of other variables, but these variables always appear in the same way. $\endgroup$ – Daniel Aug 12 '16 at 14:35 $\begingroup$ Can you enter your matrices in copy-and-paste-able Mathematica code, properly formatted in code blocks? It will make it much easier for us to help you if we can just copy and paste into our own copies of Mathematica rather than have to type it in ourselves.
$\endgroup$ – march Aug 12 '16 at 15:35 $\begingroup$ @march I edited the post with the code I'm using for a 10x10 matrix $\endgroup$ – Daniel Aug 12 '16 at 15:45 $\begingroup$ I'll think about this, but it might be worth writing this using the Hadamard product (i.e. element-wise multiplication of matrices) in this form. $\endgroup$ – march Aug 12 '16 at 15:55 The following exploits sparse matrices, and uses LinearSolve to avoid computing the inverse, which is not sparse. I suspect that further optimisation would be possible, but it is certainly practical for problems of the scale you describe. (I have assumed that you require a numeric, rather than a symbolic, solution.) test[n_] := Module[{c = 1.3, b = 0.2, a = 4.1, f = 2.2, g = -0.3, d = 0.5, M, G1, G2}, M = SparseArray[{Band[{1, 2}] -> c, Band[{1, 1}] -> d, Band[{2, 1}] -> c}, n]; M[[1, 1]] = a + I b; M[[-1, -1]] = f + I g; G1 = SparseArray[{1, 1} -> -2 b, {n, n}]; G2 = SparseArray[{-1, -1} -> -2 g, {n, n}]; Tr[LinearSolve[M, G1].LinearSolve[ConjugateTranspose[M], G2]]] In my tests, this agrees with the formula that you gave. I have AbsoluteTiming[test[5000]] (* {9.70897, -0.0233494 + 7.80626*10^-17 I} *) mikado How can I compute the representation matrices of a point group under given basis functions? Add a sub-matrix of zeros in big matrix How can we multiply nested matrices? Express block matrix in terms of matrix basis Partitioned matrix operations Constructing a large tridiagonal matrix with alternating signs Expanding a matrix in a set of matrices Symbolic manipulation of matrix elements Search the full group $G$ based on the partial list of its matrix representations Selecting some vectors from a matrix
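A further speed-up is possible by exploiting the rank-one structure noted in the question: with $G_1 = -2b\,e_1 e_1^T$ and $G_2 = -2g\,e_N e_N^T$, the trace collapses to $\operatorname{Tr}[M^{-1}G_1 M^{-*}G_2] = 4bg\,|(M^{-1})_{N,1}|^2$, so a single linear solve for the first column of $M^{-1}$ suffices. Below is a numerical sketch of that identity in Python/NumPy (not Mathematica), with a brute-force check for small $n$; for large $n$ a banded or sparse solver would keep the cost $O(n)$.

```python
import numpy as np

def build_M(n, a, b, c, d, f, g):
    """Tridiagonal M from the question: d on the main diagonal except
    M[0,0] = a + i*b and M[n-1,n-1] = f + i*g, and c on both off-diagonals."""
    M = (np.diag(np.full(n, d, dtype=complex))
         + np.diag(np.full(n - 1, c + 0j), 1)
         + np.diag(np.full(n - 1, c + 0j), -1))
    M[0, 0] = a + 1j * b
    M[-1, -1] = f + 1j * g
    return M

def fast_trace(n, a, b, c, d, f, g):
    """Tr[M^-1 G1 M^-* G2] via the rank-one identity 4*b*g*|(M^-1)[n-1, 0]|^2:
    one linear solve for the first column of M^-1, no inverses, no products."""
    M = build_M(n, a, b, c, d, f, g)
    e1 = np.zeros(n, dtype=complex)
    e1[0] = 1.0
    x = np.linalg.solve(M, e1)          # first column of M^-1
    return 4 * b * g * abs(x[-1]) ** 2  # real by construction

def dense_trace(n, a, b, c, d, f, g):
    """Brute-force reference: explicit inverses and the full triple product."""
    M = build_M(n, a, b, c, d, f, g)
    G1 = np.zeros((n, n)); G1[0, 0] = -2 * b
    G2 = np.zeros((n, n)); G2[-1, -1] = -2 * g
    P = np.linalg.inv(M) @ G1 @ np.linalg.inv(M.conj().T) @ G2
    return np.trace(P).real
```

The identity follows from $\operatorname{Tr}(M^{-1}e_1 e_1^T M^{-*} e_N e_N^T) = (M^{-1})_{N,1}\,\overline{(M^{-1})_{N,1}}$, since $(M^{-*})_{1,N}$ is the conjugate of $(M^{-1})_{N,1}$.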
What is the volume of the universe? Of course it is not exactly known how big the universe is, but I thought that the universe is about 100 billion light years in diameter. But if that is true, can you also make an estimate of the volume of the universe? pela 6 years ago Just use your good old $V = \frac{4\pi}{3}R^3$ (and remember that we're talking about the _observable_ Universe; the rest may be infinite). Marijn 6 years ago What you give is the volume of a round sphere, but in this picture https://en.wikipedia.org/wiki/File:Ilc_9yr_moll4096.png it is not always shown as a perfect globe. Why is that? You can thank Karl Mollweide for that :) EastOfJupiter 6 years ago Fun exercise: Once you have calculated the volume, you can then calculate the average approximate density, too. (Estimated mass of an average star (in sols)) * (Estimated number of stars in an average galaxy) * (Total estimated number of galaxies in observable universe) / Volume of observable universe. I think the reason to use a Mollweide ellipse is to get the whole Earth on a map with similar proportions, but I can't see why this reason should be valid for the cosmos? @TexasTubbs: That will give you the density of stars only, which is only $\sim0.5$% of the total mass of the Universe. Based on your comments, I think your confusion comes from having seen the classic rugby-ball-shaped image of the CMB. The CMB we observe is not from the whole cosmos, but only from a thin and completely spherical shell centered on us, with a radius $R_\mathrm{CMB}\simeq45.6\,\mathrm{Gly}$ (using a Planck 2015 cosmology). Just as the shell of the Earth can be projected onto a rugby-shaped figure using a Mollweide projection, so can the shell of the CMB.
Here's a figure from Universe Adventure that can help visualize: Although we (still) can't see beyond the CMB, the observable Universe extends all the way out to redshift $z=\infty$, while the CMB comes from $z\simeq1100$, but the difference is not big; $R_\mathrm{obs.Uni.} \simeq 47\,\mathrm{Gly}$. Thus, the volume of the observable Universe is $$V = \frac{4\pi}{3}R_\mathrm{obs.Uni.}^3 = 435,\!000\,\mathrm{Gly}^3.$$ The total Universe is probably much, much larger, and may in fact easily be infinite. Mike G 6 years ago I guess I should read $\mathrm{Gly^3}$ as "cubic giga-light-year," not "billion cubic light-year." But it is still unclear why they mostly illustrate the CMB as rugby-shaped and not just spherical. As I said, for the Earth I can imagine why: to show all continents in one view. But there isn't such a reason for the CMB? It may help to think in terms of the celestial sphere. Plotting the stars and galaxies on a globe, as if they were a uniform distance from us, helps us make sense of their apparent positions in the sky. We can also transform this globe into a flat map, but most map projections of a whole sphere are not circular, and all have some distortion. @MikeG: Yes, $\mathrm{Gly}^3 = (10^9\,\mathrm{ly})^3 = 10^{27}\,\mathrm{ly}^3$. Also, as it seems you spotted, I made an error of a factor $10^3$. @Marijn: The reason is exactly the same as for the Earth: We want to show the entire sky in one view. That's not possible with a spherical projection, unless you invent some crazy projection algorithm that will not only heavily distort the peripheral regions (which to a certain degree will be the case for all projections), but also display some regions multiple times. NB Only true for a universe with no curvature. Acccumulation 5 years ago But that formula is for Euclidean space. Is the universe sufficiently flat for it to be a reasonable approximation?
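To make the arithmetic in the answer above explicit (taking the quoted $R_\mathrm{obs.Uni.} \simeq 47\,\mathrm{Gly}$):

```python
from math import pi

R = 47.0                        # comoving radius of the observable Universe, in Gly
V = 4.0 / 3.0 * pi * R**3       # sphere volume, in Gly^3
# V is about 4.35e5 Gly^3; with 1 Gly^3 = (1e9 ly)^3 = 1e27 ly^3 this is
# about 4.35e32 cubic light-years, consistent with the ~4e32 figure
# quoted in another answer below.
print(f"{V:.0f} Gly^3 = {V * 1e27:.2e} ly^3")
```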
@Acccumulation: Yes, the observable Universe is extremely (if not completely) flat, with $|\Omega_k| < 0.005$ (Planck Collaboration et al. 2016). Mike G Cosmologists estimate the age of the universe at 13.8 billion years. The most distant objects we could theoretically observe emitted light that long ago; light from more distant objects hasn't reached us yet. Due to the expansion of the universe, that horizon is now about 46 billion light-years away. If we assume uniform expansion in all directions, we can approximate the observable universe as a sphere of that radius, consistent with the diameter you'd heard. I figure its volume as roughly $4 \times 10^{32}$ cubic light-years. The shape and size of the unobservable universe are anyone's guess. I'm assuming you're talking about the visible universe, not the observable universe. If so, the comoving distance we could see is about 45.7 billion light years. Even though the universe is only 13.8 billion years old, due to expansion the light took longer than that to reach us. It'd be like walking up an escalator that's going down vs walking up stairs: I'll get there, but it'll just take much longer. A COMOVING distance relates better to the person on the stairs; it factors out expansion, thus making calculations much easier. Using the comoving distance, figuring out its volume is simple math, unless of course you plan on figuring out its volume without mass, just the space. Then of course you'd have to find each mass's volume, subtract, and blah, blah, and impossible, since we don't know every object in the visible universe. Using $V=\frac{4}{3}\pi r^3$, $V \approx 3.99795\times10^{32}\,\mathrm{ly}^3$, or 399 nonillion ly$^3$. Now imagine the escalator you're on starts running just as fast as you, and eventually faster; you'll never reach the top, in fact you'll move away from the top despite running towards it. This is what is happening to our visible universe: it's slowly receding faster than light due to expansion.
So the light will continue to travel towards us but will never reach us once it's emitted beyond about 4500 megaparsecs away, its event horizon, sometimes called the particle horizon. Point being, the visible universe's volume is constantly shrinking. I think your distinction between the observable and the visible Universe is not the one normally used. As I understand your answer, you use the term "visible Universe" to refer to the region where light _emitted today_ will be able to reach us. But this is rarely of any interest in a scientific context. Also, note that the 4500 Mpc you're referring to is the comoving distance to the Hubble sphere, outside which stuff recedes faster than the speed of light. But actually light emitted from outside this region right now, up to ~5000 Mpc, will still be able to reach us. As I read your answer, you use the term "visible Universe" for the region bounded by the _event horizon_, which is the horizon outside which light emitted today will never reach us. It is true that it is always shrinking in _comoving_ coordinates, but in physical coordinates it actually doesn't. It asymptotically increases toward a finite size (~17 Gly). How does this differ from Pela's answer? C. Towne Springer The Universe is infinite and always has been. Yes, it is expanding, but what does that mean when it exists everywhere? Yes, it can appear to have exploded from a point, but that "point" is a universe that is infinite in size and has no edge. There is no way to look at it or describe it from "outside". The diagrams you often see of the history of the Big Bang show a poor concept of expansion and are just misleading. They imply an outside view is possible. See the excellent answer to a similar question here: https://physics.stackexchange.com/questions/136860/did-the-big-bang-happen-at-a-point Chappo Hasn't Forgotten Monica 5 years ago This answer has a number of highly contentious statements (eg the universe "always has been" and it "exploded").
At the very least you need to back them up with references. C. Towne Springer 5 years ago If you can show that time existed before the Big Bang, then I will retract "always has been". No, that's not how science works. It's up to you to explain how your hypothesis resolves all possible scenarios (eg a cyclic universe). Are you two perhaps actually agreeing? It seems @Chappo objects to the "always has been" statement because this sounds like the Universe has existed forever. But maybe C.T. Springer just means "has been infinite since time started at the BB" (which is nevertheless a somewhat bold statement, since this is not really known, although it's true if the rest of the Universe is like the observable Universe). "No, that's not how science works." Every measurement, experiment, and observation indicates an infinite universe and the existence of time. The current theory of course has to fit these conditions. A singularity at a Big Bang is an inference. Your objection to "always has been" places the burden on you to show why you claim something outside the evidence is true.
Comparative transcriptome profiling reveals candidate genes related to insecticide resistance of Glyphodes pyloalis H. Su, Y. Gao, Y. Liu, X. Li, Y. Liang, X. Dai, Y. Xu, Y. Zhou, H. Wang Journal: Bulletin of Entomological Research , First View Glyphodes pyloalis Walker (Lepidoptera: Pyralididae) is a common pest in sericulture and has developed resistance to different insecticides. However, the mechanisms involved in insecticide resistance of G. pyloalis are poorly understood. Here, we present the first whole-transcriptome analysis of differential expression genes in insecticide-resistant and susceptible G. pyloalis. Clustering and enrichment analysis of DEGs revealed several biological pathways and enriched Gene Ontology terms were related to detoxification or insecticide resistance.
Genes involved in insecticide metabolic processes, including cytochrome P450, glutathione S-transferases and carboxylesterase, were identified in the larval midgut of G. pyloalis. Among them, CYP324A19, CYP304F17, CYP6AW1, CYP6AB10, GSTs5, and AChE-like were significantly increased after propoxur treatment, while CYP324A19, CCE001c, and AChE-like were significantly induced by phoxim, suggesting that these genes were involved in insecticide metabolism. Furthermore, the sequence variation analysis identified 21 single nucleotide polymorphisms within CYP9A20, CYP6AB47, and CYP6AW1. Our findings reveal many candidate genes related to insecticide resistance of G. pyloalis. These results provide novel insights into insecticide resistance and facilitate the development of insecticides with greater specificity to G. pyloalis. The potential application of plant wax markers from alfalfa for estimating the total feed intake of sheep H. Zhang, Y. P. Guo, W. Q. Chen, N. Liu, S. L. Shi, Y. J. Zhang, L. Ma, J. Q. Zhou Estimating the feed intake of grazing herbivores is critical for determining their nutrition, overall productivity and utilization of grassland resources. A 17-day indoor feeding experiment was conducted to evaluate the potential use of Medicago sativa as a natural supplement for estimating the total feed intake of sheep. A total of 16 sheep were randomly assigned to four diets (four sheep per diet) containing a known amount of M. sativa together with up to seven forages common to typical steppes. The diets were: diet 1, M. sativa + Leymus chinensis + Puccinellia distans; diet 2, species in diet 1 + Phragmites australis; diet 3, species in diet 2 + Chenopodium album + Elymus sibiricus; and diet 4, species in diet 3 + Artemisia scoparia + Artemisia tanacetifolia. After faecal marker concentrations were corrected by individual sheep recovery, treatment mean recovery or overall recovery, the proportions of M. 
sativa and other dietary forages were estimated from a combination of alkanes and long-chain alcohols using a least-square procedure. Total intake was the ratio of the known intake of M. sativa to its estimated dietary proportion. Each dietary component intake was obtained using total intake and the corresponding dietary proportions. The estimated values were compared with actual values to assess the estimation accuracy. The results showed that M. sativa exhibited a distinguishable marker pattern in comparison to the other dietary forage species. The accuracy of the dietary composition estimates was significantly (P < 0.001) affected by both diet diversity and the faecal recovery method. The proportion of M. sativa and total intake across all diets could be accurately estimated using the individual sheep or the treatment mean recovery methods. The largest differences between the estimated and observed total intake were 2.6 g and 19.2 g, respectively, representing only 0.4% and 2.6% of the total intake. However, they were significantly (P < 0.05) biased for most diets when using the overall recovery method. Due to the difficulty in obtaining individual sheep recovery under field conditions, treatment mean recovery is recommended. This study suggests that M. sativa, a natural roughage instead of a labelled concentrate, can be utilized as a dietary supplement to accurately estimate the total feed intake of sheep indoors and further indicates that it has potential to be used in steppe grassland of northern China, where the marker patterns of M. sativa differ markedly from commonly occurring plant species. Nucleotide variation in the ovine KRT31 promoter region and its association with variation in wool traits in Merino-cross lambs W. Chai, H. Zhou, H. Gong, J. Wang, Y. Luo, J. G. H. Hickford Journal: The Journal of Agricultural Science , First View Published online by Cambridge University Press: 10 June 2019, pp. 
1-7 Keratins are the main structural proteins of wool fibres, and it is thought that variation in the keratins may affect wool fibre characteristics. Polymerase chain reaction-single stranded conformational polymorphism (PCR-SSCP) analyses were used to investigate four regions of the ovine keratin gene KRT31 including a portion of the promoter, the exon 1, exon 3 and exon 7 regions. Initially, in a screening panel of 300 New Zealand Romney, Merino and White Dorper sheep obtained from 26 farms, three, two, two and two PCR-SSCP banding patterns were observed for these four regions, respectively. The promoter region, the exon 1 and exon 3 regions contained two single nucleotide polymorphisms (SNPs) and the exon 7 region contained one SNP. The effect of the variation found in the promoter region on wool traits was subsequently investigated in 485 Southdown × Merino-cross lambs from seven sire-lines. The three variants identified in the original 300 sheep (named A, B and C) were observed with frequencies of 56, 29 and 15%, respectively. The presence of A and B had no significant effect on wool traits, but the presence of C was found to be associated with an increase in greasy fleece weight (GFW), clean fleece weight (CFW) and mean staple length (MSL). There was an effect of genotype on CFW and MSL, with BC sheep producing wool of higher CFW and MSL than AA, AB, AC and BB sheep. These results suggest that ovine KRT31 might be a useful candidate gene for improving wool traits. The turbulent Kármán vortex J. G. Chen, Y. Zhou, R. A. Antonia, T. M. Zhou Journal: Journal of Fluid Mechanics / Volume 871 / 25 July 2019 Published online by Cambridge University Press: 17 May 2019, pp. 92-112 Print publication: 25 July 2019 This work focuses on the temperature (passive scalar) and velocity characteristics within a turbulent Kármán vortex using a phase-averaging technique. 
The vortices are generated by a circular cylinder, and the three components of the fluctuating velocity and vorticity vectors, $u_{i}$ and $\omega_{i}$ ($i=1,2,3$), are simultaneously measured, along with the fluctuating temperature $\theta$ and the temperature gradient vector, at nominally the same spatial point in the plane of mean shear at $x/d=10$, where $x$ is the streamwise distance from the cylinder axis and $d$ is the cylinder diameter. We believe this is the first time the properties of fluctuating velocity, temperature, vorticity and temperature gradient vectors have been explored simultaneously within the Kármán vortex in detail. The Reynolds number based on $d$ and the free-stream velocity is $2.5\times 10^{3}$. The phase-averaged distributions of $\theta$ and $u_{i}$ follow closely the Gaussian distribution for $r/d\leqslant 0.2$ ($r$ is the distance from the vortex centre), but not for $r/d>0.2$. The collapse of the distributions of the mean-square streamwise derivative of the velocity fluctuations within the Kármán vortex implies that the velocity field within the vortex tends to be more locally isotropic than the flow field outside the vortex. A possible physical explanation is that the large and small scales of velocity and temperature fields are statistically independent of each other near the Kármán vortex centre, but interact vigorously outside the vortex, especially in the saddle region, due to the action of coherent strain rate. Application of a long short-term memory neural network: a burgeoning method of deep learning in forecasting HIV incidence in Guangxi, China G. Wang, W. Wei, J. Jiang, C. Ning, H. Chen, J. Huang, B. Liang, N. Zang, Y. Liao, R. Chen, J. Lai, O. Zhou, J. Han, H. Liang, L. Ye Published online by Cambridge University Press: 09 May 2019, e194 Guangxi, a province in southwestern China, has the second highest reported number of HIV/AIDS cases in China.
This study aimed to develop an accurate and effective model to describe the trend of HIV and to predict its incidence in Guangxi. HIV incidence data of Guangxi from 2005 to 2016 were obtained from the database of the Chinese Center for Disease Control and Prevention. Long short-term memory (LSTM) neural network models, autoregressive integrated moving average (ARIMA) models, generalised regression neural network (GRNN) models and exponential smoothing (ES) were used to fit the incidence data. Data from 2015 and 2016 were used to validate the most suitable models. The model performances were evaluated using metrics including mean square error (MSE), root mean square error, mean absolute error and mean absolute percentage error. The LSTM model had the lowest MSE when the N value (time step) was 12. The most appropriate ARIMA models for incidence in 2015 and 2016 were ARIMA (1, 1, 2) (0, 1, 2)12 and ARIMA (2, 1, 0) (1, 1, 2)12, respectively. The accuracy of the GRNN and ES models in forecasting HIV incidence in Guangxi was relatively poor. All four performance metrics of the LSTM model were lower than those of the ARIMA, GRNN and ES models. The LSTM model was more effective than the other time-series models and is important for the monitoring and control of local HIV epidemics. RINGS WHOSE ELEMENTS ARE THE SUM OF A TRIPOTENT AND AN ELEMENT FROM THE JACOBSON RADICAL M. TAMER KOŞAN, TÜLAY YILDIRIM, Y. ZHOU Journal: Canadian Mathematical Bulletin / Impact of maternal HIV infection on pregnancy outcomes in southwestern China – a hospital registry based study M. Yang, Y. Wang, Y. Chen, Y. Zhou, Q. Jiang Published online by Cambridge University Press: 08 March 2019, e124 Globally, human immunodeficiency virus (HIV)/acquired immunodeficiency syndrome (AIDS) continues to be a major public health issue. With improved survival, the number of people living with HIV/AIDS is increasing, with over 2 million among pregnant women.
Investigating adverse pregnancy outcomes in the HIV-infected population and their associated factors is of great importance to maternal and infant health. Cross-sectional data collected from hospital delivery records of 4397 mother–infant pairs in southwestern China were analysed. Adverse pregnancy outcomes (including low birthweight/preterm delivery/low Apgar score) and maternal HIV status and other characteristics were measured. Two hundred thirteen (4.9%) mothers were HIV positive; maternal HIV infection, rural residence and pregnancy history were associated with all three indicators of adverse pregnancy outcomes. This research suggests that the maternal population in this region has a high prevalence of HIV infection. HIV-infected women had higher risks of experiencing adverse pregnancy outcomes. Rural residence predisposes women to adverse pregnancy outcomes. Findings of this study suggest that social and medical support for maternal–infant care is needed in this region, particularly in rural areas and for HIV-positive mothers. Prevalence and genotypes of anal human papillomavirus infection among HIV-positive vs. HIV-negative men in Taizhou, China X. Liu, H. Lin, X. Chen, W. Shen, X. Ye, Y. Lin, Z. Lin, S. Zhou, M. Gao, Y. Ding, N. He This study aims to investigate the prevalence and genotype distribution of anal human papillomavirus (HPV) infection among men with different sexual orientations with or without human immunodeficiency virus (HIV) in China. A cross-sectional study was conducted during 2016–2017 in Taizhou City, Zhejiang Province. Convenience sampling was used to recruit male participants from HIV voluntary counselling and testing clinics and the Center for Disease Control and Prevention. A face-to-face questionnaire interview was administered and an anal-canal swab was collected for HPV genotyping. A total of 160 HIV-positive and 113 HIV-negative men participated in the study.
The prevalence of any-type HPV was 30.6% for heterosexual men, 74.1% for homosexual men and 63.6% for bisexual men among HIV-positive participants, while the prevalence was 8.3%, 29.2% and 23.8%, respectively, among HIV-negative men. The most prevalent genotypes were HPV-58 (16.9%), HPV-6 (15.6%) and HPV-11 (15.0%) among HIV-positive men, and HPV-16 (4.4%), HPV-52 (4.4%) and HPV-6 (3.5%) among HIV-negative men. Having ever had haemorrhoids and having ever seen blood on tissue after defaecation were associated with HPV infection. One-fourth of the HPV infections in this study population could be covered by the quadrivalent vaccine on the market. The highly prevalent anal HPV infection among men, especially HIV-infected men, calls for close observation and further investigation for anal cancer prevention. Association of meteorological factors with seasonal activity of influenza A subtypes and B lineages in subtropical western China M. Pan, H. P. Yang, J. Jian, Y. Kuang, J. N. Xu, T. S. Li, X. Zhou, W. L. Wu, Z. Zhao, C. Wang, W. Y. Li, M. Y. Li, S. S. He, L. L. Zhou Published online by Cambridge University Press: 04 March 2019, e72 The seasonality of individual influenza subtypes/lineages and the association of influenza epidemics with meteorological factors in the tropics/subtropics are not well understood. The impact of the 2009 H1N1 pandemic on the prevalence of seasonal influenza viruses remains to be explored. Using wavelet analysis, the periodicities of A/H3N2, seasonal A/H1N1, A/H1N1pdm09, Victoria and Yamagata were identified in Panzhihua during 2006–2015. As a subtropical city in southwestern China, Panzhihua is the first industrial city in the upper reaches of the Yangtze River. The relationship between influenza epidemics and local climatic variables was examined based on regression models.
The temporal distribution of influenza subtypes/lineages during the pre-pandemic (2006–2009), pandemic (2009) and post-pandemic (2010–2015) years was described and compared. A total of 6892 respiratory specimens were collected and 737 influenza viruses were isolated. A/H3N2 showed an annual cycle with a peak in summer–autumn, while A/H1N1pdm09, Victoria and Yamagata exhibited an annual cycle with a peak in winter–spring. Regression analyses demonstrated that relative humidity was positively associated with A/H3N2 activity but negatively associated with Victoria activity. Higher prevalence of A/H1N1pdm09 and Yamagata was driven by lower absolute humidity. The role of weather conditions in regulating influenza epidemics could be complicated, given the diverse viral transmission modes and mechanisms. Differences in seasonality and in associations with meteorological factors among influenza subtypes/lineages should be considered in epidemiological studies in the tropics/subtropics. The development of subtype- and lineage-specific prevention and control measures is of significant importance. On-ground lateral direction control for an unswept flying-wing UAV Z. Y. Ma, X. P. Zhu, Z. Zhou Journal: The Aeronautical Journal / Volume 123 / Issue 1261 / March 2019 To solve the on-ground lateral direction control problem of an unswept flying-wing unmanned aerial vehicle (UAV) without a rudder, steering system or braking system, a control approach that uses differential propeller thrust to control the lateral direction is proposed. First, a mathematical model of the on-ground motion of the unswept flying-wing UAV is established. Second, based on the active disturbance rejection control (ADRC) theory, a yaw angle controller is designed by using the differential propeller thrust as the control output. Finally, a straight-line trajectory tracking control law is designed by improving the vector field path following method.
Experimental results show that the proposed control laws have a shorter response time, better robustness and better control precision compared with a proportional-integral-derivative (PID) controller. The proposed controller has low computational complexity and a simple parameter-setting process, and uses practically measurable physical quantities, providing a reference solution for further engineering applications. Active drag reduction of a high-drag Ahmed body based on steady blowing B. F. Zhang, K. Liu, Y. Zhou, S. To, J. Y. Tu Journal: Journal of Fluid Mechanics / Volume 856 / 10 December 2018 Print publication: 10 December 2018 Active drag reduction of an Ahmed body with a slant angle of $25^{\circ}$, corresponding to the high-drag regime, has been experimentally investigated at Reynolds number $Re=1.7\times 10^{5}$, based on the square root of the model cross-sectional area. Four individual actuations, produced by steady blowing, are applied separately around the edges of the rear window and vertical base, producing a drag reduction of up to 6–14%. However, the combination of the individual actuations results in a drag reduction of 29%, higher than any previous drag reduction achieved experimentally and very close to the target (30%) set by automotive industries. Extensive flow measurements are performed, with and without control, using force balance, pressure scanner, hot-wire, flow visualization and particle image velocimetry techniques. A marked change in the flow structure is captured in the wake of the body under control, including the flow separation bubbles, over the rear window or behind the vertical base, and the pair of C-pillar vortices at the two side edges of the rear window. The change is linked to the pressure rise on the slanted surface and the base. The mechanisms behind the effective control are proposed. The control efficiency is also estimated. Variation in the ovine keratin-associated protein 15-1 gene affects wool yield W. Li, H. Gong, H.
Zhou, J. Wang, X. Liu, S. Li, Y. Luo, J. G. H. Hickford Journal: The Journal of Agricultural Science / Volume 156 / Issue 7 / September 2018 Keratin-associated proteins (KAPs) are constituents of wool and hair fibres and are believed to play an important role in determining the characteristics of the fibres. In the current study, a polymerase chain reaction-single stranded conformational polymorphism (PCR-SSCP) approach was used to screen for variation in the ovine KAP15-1 gene (KRTAP15-1). Four PCR-SSCP banding patterns, representing four different variants (named A to D), were detected. Four single nucleotide polymorphisms were found within the coding region, and three of these were non-synonymous. The effect of this genetic variation on wool traits was investigated in 396 Merino × Southdown-cross sheep. Of the three variants found in these sheep (A, B and C), the presence of B was found to be associated with decreased wool yield, while C was associated with increased wool yield and decreased fibre diameter standard deviation. Sheep of genotype AC had a higher wool yield than those of genotype AA or AB. Generation of uniform transverse beam distributions for high-energy electron radiography Q.T. Zhao, S.C. Cao, R. Cheng, Y.C. Du, X.K. Shen, Y.R. Wang, J.H. Xiao, Y. Zong, Y.L. Zhu, Y.W. Zhou, Y.T. Zhao, Z.M. Zhang, W. Gai Journal: Laser and Particle Beams / Volume 36 / Issue 3 / September 2018 High-energy electron radiography (HEER) has been proposed for time-resolved imaging of materials, of high-energy density matter, and for inertial confinement fusion. The areal-density resolution, determined by the image intensity information, is critical for these types of diagnostics. In preliminary experimental studies, targets of different materials with the same thickness and targets with the same areal density have been imaged and analyzed.
Although there are some discrepancies between experiment and theoretical analysis, the results show that the density distribution can indeed be obtained from HEER. The reason for the discrepancies has been investigated and indicates the importance of uniformity in the transverse distribution of the beam illuminating the target. Furthermore, a method for generating a beam with a uniform transverse distribution using octupole magnets was studied and verified by simulations. The simulations also confirm that the octupole field does not affect the angle-position correlation in the central part of the beam, a critical requirement for the imaging lens. A more practical method for HEER using collimators and octupoles to generate more uniform beams is also described. Detailed experimental results and simulation studies are presented in this paper. Imaging Capabilities, Performance and Applications of the Hard X-ray Nanoprobe Beamline at NSLS-II H. Yan, X. Huang, E. Nazaretski, N. Bouet, J. Zhou, W. Xu, P. Ilinski, Y. S. Chu Journal: Microscopy and Microanalysis / Volume 24 / Issue S2 / August 2018 FIB/SEM Processing of Biological Samples Annalena Wolff, Yinghong Zhou, Jinying Lin, Yong Y Peng, John A.M. Ramshaw, Yin Xiao Seroprevalence and risk factors of Toxoplasma gondii infection in oral cancer patients in China: a case–control prospective study N. Zhou, X. Y. Zhang, Y. X. Li, L. Wang, L. L. Wang, W. Cong Journal: Epidemiology & Infection / Volume 146 / Issue 15 / November 2018 In recent years, potential associations between Toxoplasma gondii (T. gondii) infection and cancer risk have attracted a lot of attention. Nevertheless, the association between T. gondii infection and oral cancer remains relatively unexplored. We performed a case–control study of 861 oral cancer patients and 861 control subjects from eastern China with the aim of detecting antibodies to T. gondii by enzyme-linked immunosorbent assay (ELISA) in these patients.
The results showed that oral cancer patients (21.72%, 187/861) had a significantly higher seroprevalence than control subjects (8.25%, 71/861) (P < 0.001). Among them, 144 (16.72%) oral cancer patients and 71 (8.25%) control subjects were positive for IgG antibodies to T. gondii, while 54 (6.27%) oral cancer patients and 9 (1.05%) controls were positive for IgM antibodies to T. gondii. In addition, multiple logistic analysis showed that T. gondii infection in oral cancer patients was associated with blood transfusion history, keeping cats at home, and oyster consumption. To our knowledge, this is the first study to provide serological evidence of an association between T. gondii infection and oral cancer. However, further studies are necessary to elucidate the role of T. gondii in oral cancer patients. Postnatal differential expression of chemoreceptors of free fatty acids along the gastrointestinal tract of supplemental feeding v. grazing kid goats T. Ran, Y. Liu, J. Z. Jiao, C. S. Zhou, S. X. Tang, M. Wang, Z. X. He, Z. L. Tan, W. Z. Yang, K. A. Beauchemin Journal: animal / Volume 13 / Issue 3 / March 2019 The gastrointestinal tract (GIT) of animals is capable of sensing various kinds of nutrients via G-protein coupled receptor-mediated signal transduction pathways, a process known as 'gut nutrient chemosensing'. GPR40, GPR41, GPR43 and GPR119 are chemoreceptors for free fatty acids (FFAs) and lipid derivatives, but they have not been well studied in small ruminants. The objective of this study was to determine the expression of GPR40, GPR41, GPR43 and GPR119 along the GIT of kid goats under supplemental feeding (S) v. grazing (G) during early development. In total, 44 kid goats (initial weight 1.35±0.12 kg) were slaughtered for sampling (rumen, abomasum, duodenum, jejunum, ileum, cecum, colon and rectum) between days 0 and 70.
The expression of GPR41 and GPR43 was measured at both the mRNA and protein levels, whereas GPR40 and GPR119 were assayed at the protein level only. The effects of age and feeding system on their expression were variable depending upon GIT segment, chemoreceptor and expression level (mRNA or protein), and sometimes feeding system × age interactions (P<0.05) were observed. Supplemental feeding enhanced expression of GPR40, GPR41 and GPR43 in most segments of the GIT of goats, whereas G enhanced expression of GPR119. GPR41 and GPR43 were mainly expressed in the rumen, abomasum and cecum, with different responses to age and feeding system. GPR41 and GPR43 expression in the abomasum at the mRNA level was greatly (P<0.01) affected by both age and feeding system, whereas at the protein level their expression in the rumen and abomasum differed: feeding system greatly (P<0.05) affected GPR41 expression but had no effect (P>0.05) on GPR43 expression, and there were no feeding system × age interactions (P>0.05) on GPR41 and GPR43 protein expression. The expression of GPR41 and GPR43 in the rumen and abomasum increased linearly (P<0.01) with increasing age (from days 0 to 70). Meanwhile, age was the main factor affecting GPR40 expression throughout the GIT. These outcomes indicate that age and feeding system are the two factors affecting the expression of chemoreceptors for FFAs and lipid derivatives in the GIT of kid goats: S enhanced the expression of chemoreceptors for FFAs, whereas G gave rise to greater expression of chemoreceptors for lipid derivatives. Our results suggest that enhanced expression of chemoreceptors for FFAs might be one of the benefits of early supplemental feeding offered to young ruminants during early development. A novel miRNA, miR-13664, targets CpCYP314A1 to regulate deltamethrin resistance in Culex pipiens pallens X. H. Sun, N. Xu, Y. Xu, D. Zhou, Y. Sun, W. J. Wang, L. Ma, C. L. Zhu, B.
Shen Journal: Parasitology / Volume 146 / Issue 2 / February 2019 Extensive insecticide use has led to the resistance of mosquitoes to these insecticides, posing a major barrier to mosquito control. Previous Solexa high-throughput sequencing of Culex pipiens pallens in the laboratory has revealed that the abundance of a novel microRNA (miRNA), miR-13664, was higher in a deltamethrin-sensitive (DS) strain than a deltamethrin-resistant (DR) strain. Real-time quantitative PCR revealed that the miR-13664 transcript level was lower in the DR strain than in the DS strain. MiR-13664 oversupply in the DR strain increased the susceptibility of these mosquitoes to deltamethrin, whereas inhibition of miR-13664 made the DS strain more resistant to deltamethrin. Results of bioinformatic analysis, quantitative reverse-transcriptase polymerase chain reaction, luciferase assay and miR mimic/inhibitor microinjection revealed CpCYP314A1 to be a target of miR-13664. In addition, downregulation of CpCYP314A1 expression in the DR strain reduced the resistance of mosquitoes to deltamethrin. Taken together, our results indicate that miR-13664 could regulate deltamethrin resistance by interacting with CpCYP314A1, providing new insights into mosquito resistance mechanisms. Parametric study and scaling of jet manipulation using an unsteady minijet A. K. Perumal, Y. Zhou Journal: Journal of Fluid Mechanics / Volume 848 / 10 August 2018 A parametric study is conducted for the control of a turbulent jet using a single unsteady minijet. A number of control parameters that influence the decay rate $K$ of the jet centreline mean velocity are investigated, including the mass flow rate ratio $C_{m}$, excitation frequency ratio $f_{e}/f_{0}$ and exit diameter ratio $d/D$ of the minijet to main jet, along with the duty cycle ($\alpha$) of the minijet injection. Extensive hot-wire, particle image velocimetry and flow visualization measurements were performed in the manipulated jet.
Various flow structures have been identified, such as the flapping flow, non-flapping flow and that showing a manipulable thrust vector, depending on $C_{m}$, $f_{e}/f_{0}$ and $\alpha$. Empirical scaling analysis reveals that, prior to the minijet impingement upon the wall of the nozzle and the generation of turbulence, the relationship $K=g_{1}(C_{m}, f_{e}/f_{0}, d/D, \alpha)$ may be reduced to $K=g_{2}(\xi)$, where $g_{1}$ and $g_{2}$ are different functions and the scaling factor $\xi=(\sqrt{MR}/\alpha)(d/D)^{n}$ ($\sqrt{MR}\equiv C_{m}(D/d)$ is the momentum ratio and $n$ is a constant that depends on $\alpha$) is physically the effective momentum ratio per pulse or effective penetration depth. Discussion is conducted based on $K=g_{2}(\xi)$, which provides important insight into the jet control physics. Undernutrition in childhood resulted in bad dietary behaviors and the increased risk of hypertension in a middle-aged Chinese population L. Zhang, J. Sheng, Y. Xuan, P. Xuan, J. Zhou, Y. Fan, X. Zhu, K. Liu, L. Yang, F. Tao, S. Wang Journal: Journal of Developmental Origins of Health and Disease / Volume 9 / Issue 5 / October 2018 This study was designed to explore the association between undernutrition in the growth period and cardiovascular risk factors in a middle-aged Chinese population. A total of 1756 subjects, aged 45–60 years, were invited to participate in the Hefei Nutrition and Health Study and divided into three groups according to their self-reported animal food intake in the growth period. Group 1, Group 2 and Group 3 were defined as the undernutrition, nutritional improvement and good nutrition groups, respectively. Among the three groups, the subjects in Groups 1 and 2 had higher oil and salt intake (P<0.001) and lower egg and milk intake (P<0.001) when compared with the subjects in Group 3.
After adjusting for age, education, smoking status and other confounding factors, it was found that male participants who experienced nutritional improvement before age 18 had a higher risk of hypertension [odds ratio (OR)=1.68; 95% confidence interval (CI): 1.05, 2.69] than those with good nutrition, and female participants with undernutrition (OR=1.52; 95% CI: 1.01, 2.29) and nutritional improvement (OR=1.68; 95% CI: 1.04, 2.69) before age 18 had a higher risk of hypertension than those with good nutrition. For diabetes, obesity, hypercholesterolemia and hypertriglyceridemia, our results did not find differences among the three groups in either males or females. Our findings indicated that nutritional deficiency in childhood was associated with bad dietary behaviors and a significantly increased risk of hypertension in middle age. Therefore, adequate nutrition early in life is very important for the prevention of non-communicable diseases later.
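As a point of reference for odds ratios like those reported above, the following is a minimal Python sketch of how an OR and its Woolf (log-scale) 95% CI are computed from a 2×2 exposure-outcome table. The counts used here are purely illustrative, not taken from the study:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and approximate 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    # Woolf method: standard error of log(OR)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Illustrative counts only -- not the study's data.
or_, lo, hi = odds_ratio_ci(40, 160, 25, 168)
print(f"OR = {or_:.2f}, 95% CI: {lo:.2f}-{hi:.2f}")
```

In practice the study's ORs come from multivariable logistic regression (adjusted for the listed confounders), so they will not match a crude 2×2 calculation exactly; the sketch only illustrates the unadjusted case.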
Efficacy of acupuncture in children with asthma: a systematic review Chi Feng Liu & Li Wei Chien We performed a systematic review of the efficacy of various types of acupuncture in the treatment of asthma in children. We searched the MEDLINE, Embase, and Cochrane Library databases up to October 20, 2014. Randomized controlled trials (RCTs) of children and adolescents (<18 years of age) with asthma were included. Data extraction was performed, and methodologic quality was assessed. A total of 32 articles were assessed for eligibility, and seven studies comprising 410 patients were included in the systematic review. Two RCTs showed significant improvement in peak expiratory flow (PEF) variability for acupuncture (traditional and laser) vs. control, with one showing significant improvement in asthma-specific anxiety level, but no significant differences in other lung function parameters or quality of life. Another RCT reported significant benefits of laser acupuncture on lung function parameters but did not describe or report statistical analyses. One crossover RCT showed significant improvements in response to both acupuncture and placebo acupuncture, with better improvements with acupuncture compared to placebo acupuncture (forced expiratory volume in 1 s [FEV1], PEF). Two additional crossover RCTs showed no significant differences between single sessions of laser acupuncture and placebo acupuncture on baseline, postacupuncture, and postinduced bronchoconstriction values (% predicted FEV1, maximum expiratory flow). A recent study showed a significant effect of acupuncture paired with acupressure on medication use and symptoms in preschool-age children. Methodologic and reporting variability remains an issue. However, the results suggest that acupuncture may have a beneficial effect on PEF or PEF variability in children with asthma. The efficacy of acupuncture on other outcome measures is unclear.
Large-scale RCTs are needed to further assess the efficacy of acupuncture in the treatment of asthma in children. Asthma is a common inflammatory disease in children as well as adults, characterized by airway hyperresponsiveness, obstruction, mucus hyperproduction, and airway wall remodeling [1]. Worldwide, asthma is estimated to affect more than 300 million people [2], with variable prevalence of symptoms and severity in children estimated from 1 % to 37 %, depending on country [3]. The development of childhood asthma is associated with a broad range of factors including genetics, environmental factors, and lifestyle [4]. Guidelines for treatment have been published, but their implementation has been reported to be less than optimal [5]. In addition, many of the commonly prescribed drug treatments for asthma have undesirable side effects [6], including effects on growth in children by inhaled corticosteroids [7]. Acupuncture, in the form of traditional Chinese medicine with needles, as well as electroacupuncture, laser acupuncture, and transcutaneous electrical nerve stimulation, has been used for the treatment of asthma. Randomized controlled trials (RCTs) have shown efficacy of acupuncture for the treatment of allergic rhinitis, and some studies have shown positive effects of acupuncture for the treatment of asthma, atopic dermatitis, and itch [8]. In addition, acupuncture plus routine care for the treatment of patients with allergic bronchial asthma has been reported to result in additional costs but better quality of life [9]. However, several meta-analyses, including a Cochrane Review and its update, have shown no consistent evidence of a significant benefit of various forms of acupuncture in terms of efficacy [10–12]. With respect to children, a recent systematic review assessed the efficacy of laser acupuncture in the treatment of asthma, and reported no compelling evidence for the efficacy of laser acupuncture in children with asthma [13]. 
The objective of the present systematic review was to assess the efficacy of acupuncture in all forms on the treatment of asthma in children. Search strategy, study selection, and data extraction We searched the MEDLINE, Embase, and Cochrane Library databases up to October 20, 2014. The following search terms were used: acupuncture, electroacupuncture, transcutaneous electrical nerve stimulation, children, asthma, asthmatic. We also searched reference lists of initially identified articles. The inclusion criteria were as follows: RCTs; study subjects children and adolescents (<18 years of age) with bronchial asthma; treatment with acupuncture, electroacupuncture, laser acupuncture, or transcutaneous electrical nerve stimulation versus sham (placebo, defined as acupoint[s] not considered related to asthma or no laser light emitted) acupuncture or no acupuncture; and reporting of quantitative outcomes of interest. Studies assessing acupuncture combined with other Chinese medicine and those of children with allergic conditions other than bronchial asthma (eg, allergic rhinitis, atopic dermatitis) were excluded. Only English language articles were included. Data regarding patient characteristics, intervention, and outcome were extracted by two independent reviewers. A third reviewer was consulted for resolution of disagreements. Outcome measures extracted included pulmonary function and quality of life. Quality assessment was performed with the Cochrane "assessing risk of bias" table [14]. This assessment includes six domains—random sequence generation, allocation concealment, blinding of patients and personnel, blinding of outcome assessment, incomplete outcome data, and selective reporting risk. The search process is detailed in Fig. 1. The search resulted in a total of 102 articles. Seventy of these articles were deemed not relevant, and 32 were assessed for eligibility. 
A total of 25 articles were excluded for the following reasons: performed in adults and not in children (n = 7), included adults as well as children (n = 12), not acupuncture vs. control (n = 4), acupuncture and control groups not compared (n = 1), and no outcome of interest (n = 1). A total of seven articles were included in the systematic review [15–21]. Flow diagram of study selection Characteristics of the seven RCTs, including 410 patients, are listed in Table 1. Four studies assessed traditional acupuncture [15, 16, 20, 21], and three studies assessed laser acupuncture [17–19]. The stage of asthma varied, with one study assessing intermittent or mild asthma [18], one assessing acute bronchial obstruction [17], two not specifying other than bronchial asthma [15, 16], and three assessing exercise-induced asthma [19–21]. With the exception of one study of preschool-age children [15], patient age was generally in the adolescent range (9–16 years of age), and the gender distribution was balanced in the three studies that reported gender. Table 1 Characteristics of included studies Lung parameter outcomes were reported in six of the seven RCTs (Table 2), and medication use, symptoms, and quality of life outcomes were reported in three of the seven studies (Table 3). The most recent study, by Karlson and Bennicke [15], assessed the effect of 10 sessions of acupuncture plus ongoing acupressure treatment over a period of 3 months in preschool-age children with asthma (the youngest age group among the studies). The outcome measures of medication use (inhaled steroids [IHS]) and symptom reduction were measured via diary. The results showed a statistically significant reduction of symptoms (P = 0.0376) and IHS use (P = 0.0005) with acupuncture/acupressure compared to no acupuncture/acupressure at 3 months; however, this effect was not sustained at a 12-month follow-up (Table 3).
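Two of the included RCTs report diurnal PEF variability as their positive outcome. As a point of reference, here is a minimal Python sketch of that calculation (evening-minus-morning amplitude as a percentage of the morning/evening mean, the formula this review later reports for the Stockert et al. study); the peak-flow readings are illustrative, not study data:

```python
def pef_variability(pef_morning, pef_evening):
    """Diurnal PEF variability (%): evening-minus-morning amplitude
    as a percentage of the mean of the two readings."""
    mean = (pef_evening + pef_morning) * 0.5
    return (pef_evening - pef_morning) * 100 / mean

# Illustrative readings in L/min (not from any of the included studies):
print(pef_variability(300, 340))  # prints 12.5
```

A fall in this variability index is typically read as improved airway stability, which is why both trials report it as their primary positive finding.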
Table 2 Outcomes for lung function parameters (data reported for six of seven studies)

Table 3 Outcomes for medication use, symptoms, and quality of life (data reported for three of seven studies)

The study by Scheewe et al. [16] assessed the effect of 12 acupuncture sessions over a period of 4 weeks on bronchial asthma in the inpatient setting. The control group participated in a discussion group. Patients maintained drug therapy throughout. Of the lung function parameters assessed, a statistically significant (P < 0.01) improvement in peak expiratory flow (PEF) variability (calculation information not included in the report) was shown in the acupuncture group vs. the control group (Table 2). No significant differences between groups were observed with respect to the other lung function parameters (the ratio of forced expiratory volume in 1 s [FEV1] to forced vital capacity [FVC], ΔFEV1, and maximum expiratory flow at 50 % of FVC [MEF50]) or to medication use, symptoms, or quality-of-life variables (Table 3), although a significant improvement in asthma-specific anxiety level was reported. The study by Nedeljković et al. [17] assessed the effect of laser therapy applied at acupuncture points in the hand (Su Jok therapy) for a total of 10 sessions over a period of 12 days in the outpatient setting. Subjects continued conservative drug therapy throughout the study period. Lung function parameters (FEV1, FVC, and PEF, Table 2; as well as forced expiratory flow [FEF], not shown in Table 2) were assessed on Days 0, 5, and 12. The text states that lung function measurements in the acupuncture group returned to normal values by Day 12, whereas those in the control group did not, and that the results obtained in the acupuncture group and control group differed with a high rate of statistical significance. However, no statistical methods were reported, nor were any supporting results (ie, SD values, P values) shown.
Therefore, it is not possible to determine whether the reported % changes in lung function parameters between groups or at Day 12 vs. Day 0 are indeed significant. The study by Stockert et al. [18] assessed the effect of 10 treatments of once-weekly laser acupuncture plus probiotic drops three times daily for 7 weeks (the combined regimen referred to as Traditional Chinese Medicine) compared to placebo laser therapy plus placebo drops on intermittent or mild asthma. Of the lung function parameters assessed, a statistically significant (P = 0.034) improvement in PEF variability was shown in the acupuncture group vs. the placebo acupuncture group and also for acupuncture vs. baseline (P = 0.015) (Table 2). PEF variability was calculated as follows:

$$ \mathrm{PEF\ variability} = \frac{(\mathrm{PEF}_{\mathrm{evening}} - \mathrm{PEF}_{\mathrm{morning}}) \times 100}{(\mathrm{PEF}_{\mathrm{evening}} + \mathrm{PEF}_{\mathrm{morning}}) \times 0.5} $$

No significant differences between groups were observed with respect to the other lung function parameters (FEV1), medication use, or quality of life (Table 3). The study by Gruber et al. [19] assessed the effects of single sessions of laser and placebo laser therapy applied in random order over two consecutive days (crossover design) on mild to moderate exercise-induced asthma in the outpatient setting, as measured by cold dry air hyperventilation–induced bronchoconstriction. Medications were withheld for 12 (bronchodilator) to 24 (long-term medications) hours before the study. Results showed no statistically significant differences between the laser acupuncture and placebo acupuncture groups in baseline (control), postacupuncture, and 3- and 15-min post-induced bronchoconstriction values (% predicted FEV1 as well as % predicted MEF25). The study by Fung et al.
[20] assessed the effects of acupuncture and placebo acupuncture on mild to moderate exercise-induced asthma in the outpatient setting. A first weekly session was performed without any acupuncture (control), followed by two weekly sessions of acupuncture or placebo acupuncture in random order applied 20 min before exercise (crossover design). The subjects refrained from the use of any medication for 24 h before the study day. Both acupuncture and placebo acupuncture showed statistically significant improvements in lung function parameters (difference in mean percentage fall in FEV1, FVC, and PEF) compared to control (P < 0.02 by two-way ANOVA) (Table 2). However, pairwise comparisons showed greater improvements in response to acupuncture compared to placebo acupuncture (FEV1, P < 0.01; PEF, P < 0.05). The statistical difference in mean percentage fall in FVC between the acupuncture and placebo acupuncture groups was not reported. The study by Chow et al. [21] assessed the effects of acupuncture and placebo acupuncture on exercise-induced asthma. A first weekly session was performed without any acupuncture (control), followed by two weekly sessions of acupuncture or placebo acupuncture in random order applied 10 min before exercise (crossover design). The subjects refrained from the use of medications for 8–24 h before testing. The results showed no significant differences in FEV1 between groups (Table 2). Results of the quality assessment are shown in Table 4. All seven studies were RCTs, but the randomization procedures were not described. One acupuncture study [20] and two laser acupuncture studies [18, 19] were double-blind (patient and technician assessing outcomes) with respect to the type of acupuncture (acupuncture or placebo) performed. One additional study [21] was single-blind (patients were blind to the type of acupuncture performed).
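As a worked example, the PEF variability formula reported for the Stockert et al. study [18] can be evaluated numerically. The peak-flow readings below are illustrative values, not data from any of the reviewed studies:

```python
def pef_variability(pef_evening, pef_morning):
    """Diurnal PEF variability (%): the daily amplitude expressed as a
    percentage of the daily mean, following the formula reported for
    Stockert et al. [18]."""
    amplitude = (pef_evening - pef_morning) * 100
    mean = (pef_evening + pef_morning) * 0.5
    return amplitude / mean

# Hypothetical peak-flow readings in L/min: a 40 L/min evening-morning
# swing around a mean of 380 L/min.
print(round(pef_variability(400, 360), 1))
```

A variability above roughly 10-13 % is commonly taken as a sign of poor asthma control, which is why this derived quantity, rather than the raw readings, was the reported outcome.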
Table 4 Quality assessment of included studies

The aim of this review was to assess published reports regarding the efficacy of acupuncture for the treatment of asthma in children. The original aim was to perform an updated meta-analysis; however, the quality and presentation of the data in the seven articles were insufficient for statistical comparison. Therefore, we performed a systematic review. The results indicate that acupuncture, in the form of traditional needle acupuncture or laser acupuncture, may provide benefit with respect to PEF and perhaps FEV1, along with asthma-specific anxiety level. However, these findings are based on the results of three studies and should be interpreted with caution. In addition, the results of Karlson and Bennicke [15] indicate a significant effect of acupuncture on medication use and symptoms in younger children. The quality assessment shows that study quality was not high, with four of the studies being blinded. In addition, the acupuncture points used varied among studies. The presentation of the outcome measures assessed in the seven studies makes it difficult to compare results directly. Although six of the seven studies assessed objective lung function (FEV1, FVC, PEF), the presentation of results as PEF variability, FEV1/FVC, % fall, or % predicted value, and in graphic vs. numeric form, prevented direct comparison. Other outcome measures, such as quality of life, anxiety, and respiratory symptoms assessed via visual analog scale, were not consistently reported. In addition to variable outcome measure presentation, heterogeneity in the acupuncture methods used (type of acupuncture, application placement, depth, time, number of sessions), experimental conditions (induced vs. noninduced asthma), testing environment (inpatient vs. outpatient), and controls (no treatment vs. placebo) may alter conditions sufficiently to mask any real effect of acupuncture in this context.
For example, the single sessions used in the study by Gruber et al. [19] may not have been sufficient to elicit an effect. In addition, three of the studies in the present analysis [19–21] were excluded from a Cochrane review because the asthma assessed was induced [10]. Moreover, the significant but limited beneficial effects reported by Scheewe et al. [16] were obtained in the inpatient environment; the authors suggest that stronger effects might be obtained in the outpatient environment. Finally, the finding of significant effects in response to placebo acupuncture (but less so than in response to acupuncture) in the study by Fung et al. [20] suggests the possibility of a placebo effect, but there is concern that the placebo points used may not be strictly placebo (ie, they may be relevant to asthma) [10], as well as concern regarding the relevance of the acupuncture points used to treat asthma. It should also be considered that acupuncture might indirectly affect allergic diseases by modulating the production of inflammatory cytokines [22]. Methodologic variability remains an issue. However, the results suggest that acupuncture may have a beneficial effect with respect to PEF or PEF variability in children with asthma. The efficacy of acupuncture on other outcome measures is unclear. Along with the issues discussed above, limitations of the present review include the small number of included studies, with relatively few patients. Large-scale RCTs using similar methodology and assessing similar outcome measures are needed to further assess the efficacy of acupuncture in the treatment of children with asthma.

FEV: Forced expiratory volume
FVC: Forced vital capacity
PEF: Peak expiratory flow

1. Kudo M, Ishigatsubo Y, Aoki I. Pathology of asthma. Front Microbiol. 2013;4:263.
2. Bousquet J, Clark TJ, Hurd S, Khaltaev N, Lenfant C, O'byrne P, et al. GINA guidelines on asthma and beyond. Allergy. 2007;62:102–12.
3. Lai CK, Beasley R, Crane J, Foliaki S, Shah J, Weiland S; International Study of Asthma and Allergies in Childhood Phase Three Study Group. Global variation in the prevalence and severity of asthma symptoms: phase three of the International Study of Asthma and Allergies in Childhood (ISAAC). Thorax. 2009;64:476–83.
4. Ding G, Ji R, Bao Y. Risk and protective factors for the development of childhood asthma. Paediatr Respir Rev. 2014;pii:S1526-0542(14)00082-7.
5. Hakimeh D, Tripodi S. Recent advances on diagnosis and management of childhood asthma and food allergies. Ital J Pediatr. 2013;39:80.
6. Cates CJ, Jaeschke R, Schmidt S, Ferrer M. Regular treatment with formoterol and inhaled steroids for chronic asthma: serious adverse events. Cochrane Database Syst Rev. 2013;6:CD006924.
7. Zhang L, Prietsch SO, Ducharme FM. Inhaled corticosteroids in children with persistent asthma: effects on growth. Cochrane Database Syst Rev. 2014;7:CD009471.
8. Pfab F, Schalock PC, Napadow V, Athanasiadis GI, Huss-Marp J, Ring J. Acupuncture for allergic disease therapy–the current state of evidence. Expert Rev Clin Immunol. 2014;10:831–41.
9. Reinhold T, Brinkhaus B, Willich SN, Witt C. Acupuncture in patients suffering from allergic asthma: is it worth additional costs? J Altern Complement Med. 2014;20:169–77.
10. McCarney RW, Brinkhaus B, Lasserson TJ, Linde K. Acupuncture for chronic asthma. Cochrane Database Syst Rev. 2004;2004(1):CD000008 [update of Linde et al., 2000].
11. Linde K, Jobst K, Panton J. Acupuncture for chronic asthma. Cochrane Database Syst Rev. 2000;2:CD000008.
12. Martin J, Donaldson AN, Villarroel R, Parmar MK, Ernst E, Higginson IJ. Efficacy of acupuncture in asthma: systematic review and meta-analysis of published data from 11 randomised controlled trials. Eur Respir J. 2002;20:846–52.
13. Zhang J, Li X, Xu J, Ernst E. Laser acupuncture for the treatment of asthma in children: a systematic review of randomized controlled trials. J Asthma. 2012;49:773–7.
14. Higgins JPT, Green S.
Cochrane handbook for systematic reviews of interventions, version 5.1.0 [updated March 2011]. Chichester, West Sussex, UK: The Cochrane Collaboration; 2011.
15. Karlson G, Bennicke P. Acupuncture in asthmatic children: a prospective, randomized, controlled clinical trial of efficacy. Altern Ther Health Med. 2013;19:13–9.
16. Scheewe S, Vogt L, Minakawa S, Eichmann D, Welle S, Stachow R, et al. Acupuncture in children and adolescents with bronchial asthma: a randomised controlled study. Complement Ther Med. 2011;19:239–46.
17. Nedeljković M, Ljustina-Pribić R, Savić K. Innovative approach to laser acupuncture therapy of acute obstruction in asthmatic children. Med Pregl. 2008;61:123–30 [article in English, Serbian].
18. Stockert K, Schneider B, Porenta G, Rath R, Nissel H, Eichler I. Laser acupuncture and probiotics in school age children with asthma: a randomized, placebo-controlled pilot study of therapy guided by principles of Traditional Chinese Medicine. Pediatr Allergy Immunol. 2007;18:160–6. Erratum: Pediatr Allergy Immunol. 2007;18:272.
19. Gruber W, Eber E, Malle-Scheid D, Pfleger A, Weinhandl E, Dorfer L, et al. Laser acupuncture in children and adolescents with exercise induced asthma. Thorax. 2002;57:222–5.
20. Fung KP, Chow OK, So SY. Attenuation of exercise-induced asthma by acupuncture. Lancet. 1986;2:1419–22.
21. Chow OK, So SY, Lam WK, Yu DY, Yeung CY. Effect of acupuncture on exercise-induced asthma. Lung. 1983;161:321–6.
22. McDonald JL, Cripps AW, Smith PK, Smith CA, Xue CC, Golianu B. The anti-inflammatory effects of acupuncture and their relevance to allergic rhinitis: a narrative review and proposed model. Evid Based Complement Alternat Med. 2013;2013:591796.

Writing assistance provided by Lauren P. Baker, PhD, ELS, in association with MedCom Asia Inc.

Graduate Institute of Integration of Traditional Chinese Medicine with Western Nursing, National Taipei University of Nursing and Health Sciences, No.
365 Ming-De Road, Beitou, Taipei, 11211, Taiwan
Chi Feng Liu

Department of Obstetrics and Gynecology, Taipei Medical University Hospital, Taipei, Taiwan
Li Wei Chien

Correspondence to Chi Feng Liu.

CFL: guarantor of integrity of the entire study, study concepts, study design, definition of intellectual content, literature research, data extraction, experimental studies, data acquisition, manuscript editing, manuscript review. LWC: literature research, data extraction, data analysis, manuscript preparation. Both authors read and approved the final manuscript.

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Liu, C.F., Chien, L.W. Efficacy of acupuncture in children with asthma: a systematic review. Ital J Pediatr 41, 48 (2015). https://doi.org/10.1186/s13052-015-0155-1
Sessions managed by Wolfgang Steiner.
Day, time and place: Tuesdays at 14:30, online.

This is an international online seminar on numeration systems and related topics. If you want to participate, please write to [email protected]. For more information, in particular slides and videos of past talks, visit the Numeration - OWNS homepage.

Next session

Tuesday, January 31, 2023, 14:00, Online
Slade Sanderson (Universiteit Utrecht)
Matching for parameterised symmetric golden maps

In 2020, Dajani and Kalle investigated invariant measures and frequencies of digits of signed binary expansions arising from a parameterised family of piecewise linear interval maps of constant slope 2. Central to their study was a property called 'matching,' where the orbits of the left and right limits of discontinuity points agree after some finite number of steps. We obtain analogous results for a parameterised family of 'symmetric golden maps' of constant slope $\beta$, with $\beta$ the golden mean. Matching is again central to our methods, though the dynamics of the symmetric golden maps are more delicate than in the binary case. We characterize the matching phenomenon in our setting, present explicit invariant measures and frequencies of digits of signed $\beta$-expansions, and—time permitting—show further implications for a family of piecewise linear maps which arise as jump transformations of the symmetric golden maps. Joint work with Karma Dajani.

Kiko Kawamura (University of North Texas)
The partial derivative of Okamoto's functions with respect to the parameter

Okamoto's functions were introduced in 2005 as a one-parameter family of self-affine functions, defined via the ternary expansion of x on the interval [0,1]. Varying the parameter produces interesting examples: Perkins' nowhere differentiable function, the Bourbaki-Katsuura function, and Cantor's Devil's staircase function.
In this talk, we consider the partial derivative of Okamoto's functions with respect to the parameter a. We place particular focus on a = 1/3 to describe the properties of a nowhere differentiable function K(x) for which the set of points of infinite derivative provides an example of a measure zero set with Hausdorff dimension 1. This is joint work with T. Mathis and M. Paizanis (undergraduate students) and N. Dalaklis (graduate student). The talk is very accessible and includes many computer graphics.

Roswitha Hofer (JKU Linz)
Exact order of discrepancy of normal numbers

In the talk we discuss some previous results on the discrepancy of normal numbers and consider the still open question of Korobov: What is the best possible order of the discrepancy $D_N$ in $N$ that a sequence $(\{b^n\alpha\})_{n\geq 0}$, $b\in\mathbb{N}$, $b\geq 2$, can have for some real number $\alpha$? If $\lim_{N\to\infty} D_N=0$, then $\alpha$ is called normal in base $b$. So far the best upper bounds for $D_N$ for explicitly known normal numbers in base $2$ are of the form $ND_N\ll\log^2 N$. The first example is due to Levin (1999), which was later generalized by Becher and Carton (2019). In this talk we discuss the recent result, in joint work with Gerhard Larcher, that guarantees $ND_N\gg \log^2 N$ for Levin's binary normal number. So either $ND_N\ll \log^2N$ is the best possible order for $D_N$ in $N$ of a normal number, or there exists another example of a binary normal number with a better growth of $ND_N$ in $N$. The recent result for Levin's normal number might support the conjecture that $ND_N\ll \log^2N$ is the best order for $D_N$ in $N$ that a normal number can attain.

Tuesday, December 13, 2022, 14:00, Online
Hiroki Takahasi (Keio University)
Distribution of cycles for one-dimensional random dynamical systems

We consider an independently and identically distributed random dynamical system generated by finitely many non-uniformly expanding Markov interval maps with a finite number of branches.
Assuming a topologically mixing condition and the uniqueness of the equilibrium state for the associated skew product map, we establish a samplewise (quenched) almost-sure level-2 weighted equidistribution of "random cycles", with respect to a natural stationary measure, as the periods of the cycles tend to infinity. This result implies an analogue of Bowen's theorem on periodic orbits of topologically mixing Axiom A diffeomorphisms. This talk is based on the preprint arXiv:2108.05522. If time permits, I will mention some future perspectives in this project.

Tuesday, December 6, 2022, 14:00, Online
Christoph Bandt (Universität Greifswald)
Automata generated topological spaces and self-affine tilings

Numeration assigns symbolic sequences as addresses to points in a space X. There are points which get multiple addresses. It is known that these identifications describe the topology of X and can often be determined by an automaton. Here we define a corresponding class of automata and discuss their properties and interesting examples. Various open questions concern the realization of such automata by iterated functions and the uniqueness of such an implementation. Self-affine tiles form a simple class of examples.

Tuesday, November 29, 2022, 14:00, Online
Manuel Hauke (TU Graz)
The asymptotic behaviour of Sudler products

Given an irrational number $\alpha$, we study the asymptotic behaviour of the Sudler product defined by $P_N(\alpha) = \prod_{r=1}^N 2 \lvert \sin \pi r \alpha \rvert$, which appears in many different areas of mathematics. In this talk, we explain the connection between the size of $P_N(\alpha)$ and the Ostrowski expansion of $N$ with respect to $\alpha$. We show that $\liminf_{N \to \infty} P_N(\alpha) = 0$ and $\limsup_{N \to \infty} P_N(\alpha)/N = \infty$ whenever the sequence of partial quotients in the continued fraction expansion of $\alpha$ exceeds $7$ infinitely often, and show that the value $7$ is optimal.
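The Sudler product defined above is straightforward to experiment with numerically. A minimal sketch of our own (the choice of the golden mean as test value is ours, not from the abstract):

```python
import math

def sudler_product(alpha, n):
    """Sudler product P_N(alpha) = prod_{r=1}^{N} 2 * |sin(pi * r * alpha)|."""
    p = 1.0
    for r in range(1, n + 1):
        p *= 2.0 * abs(math.sin(math.pi * r * alpha))
    return p

# For the golden mean, all partial quotients equal 1 (far below the
# threshold 7 of the abstract), and P_N is known to stay bounded away from 0.
phi = (math.sqrt(5.0) - 1.0) / 2.0
values = [sudler_product(phi, n) for n in (10, 100, 1000)]
```

For rational $\alpha$ the product vanishes as soon as $r\alpha$ hits an integer, which is why only irrational $\alpha$ are of interest here.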
For Lebesgue-almost every $\alpha$, we can prove more: we show that for every non-decreasing function $\psi: (0,\infty) \to (0,\infty)$ with $\sum_{k=1}^{\infty} \frac{1}{\psi(k)} = \infty$ and $\liminf_{k \to \infty} \psi(k)/(k \log k)$ sufficiently large, the conditions $\log P_N(\alpha) \leq -\psi(\log N)$ and $\log P_N(\alpha) \geq \psi(\log N)$ hold on sets of upper density $1$ and $1/2$, respectively.

Faustin Adiceam (Université Paris-Est Créteil)
Badly approximable vectors and Littlewood-type problems

Badly approximable vectors are fractal sets enjoying rich Diophantine properties. In this respect, they play a crucial role in many problems well beyond Number Theory and Fractal Geometry (e.g., in signal processing, in mathematical physics and in convex geometry). After outlining some of the latest developments in this very active area of research, we will take an interest in the Littlewood conjecture (c. 1930) and in its variants, which all admit a natural formulation in terms of properties satisfied by badly approximable vectors. We will then show how ideas emerging from the mathematical theory of quasicrystals, from numeration systems and from the theory of aperiodic tilings have recently been used to refute the so-called t-adic Littlewood conjecture. All necessary concepts will be defined in the talk. Joint work with Fred Lunnon (Maynooth) and Erez Nesharim (Technion, Haifa).

Seul Bee Lee (Institute for Basic Science)
Regularity properties of Brjuno functions associated with by-excess, odd and even continued fractions

An irrational number is called a Brjuno number if the series $\sum_{n} \log(q_{n+1})/q_n$ converges, where $q_n$ is the denominator of the $n$-th principal convergent of the regular continued fraction. The importance of Brjuno numbers comes from the study of one-variable analytic small divisor problems. In 1988, J.-C. Yoccoz introduced the Brjuno function, which characterizes the Brjuno numbers, to estimate the size of Siegel disks.
In this talk, we introduce Brjuno-type functions associated with by-excess, odd and even continued fractions, with a number-theoretical motivation. Then we discuss the $L^p$ and Hölder regularity properties of the difference between the classical Brjuno function and the Brjuno-type functions. This is joint work with Stefano Marmi.

Tuesday, November 8, 2022, 14:00, Online
Wen Wu (South China University of Technology)
From the Thue-Morse sequence to the apwenian sequences

In this talk, we will introduce a class of $\pm 1$ sequences, called the apwenian sequences. The Hankel determinants of these $\pm 1$ sequences share the same property as the Hankel determinants of the Thue-Morse sequence found by Allouche, Peyrière, Wen and Wen in 1998. In particular, the Hankel determinants of apwenian sequences do not vanish. This allows us to discuss the Diophantine properties of the values of their generating functions at $1/b$, where $b\geq 2$ is an integer. Moreover, the number of $\pm 1$ apwenian sequences is given explicitly. Similar questions are also discussed for $0$-$1$ apwenian sequences. This talk is based on joint work with Y.-J. Guo and G.-N. Han.

Tuesday, October 25, 2022, 14:00, Online
Álvaro Bustos-Gajardo (The Open University)
Quasi-recognizability and continuous eigenvalues of torsion-free S-adic systems

We discuss combinatorial and dynamical descriptions of S-adic systems generated by sequences of constant-length morphisms between alphabets of bounded size. For this purpose, we introduce the notion of quasi-recognisability, a strictly weaker version of recognisability which is nevertheless enough to reconstruct several classical arguments of the theory of constant-length substitutions in this more general context. Furthermore, we identify a large family of directive sequences, which we call "torsion-free", for which quasi-recognisability is obtained naturally and can be improved to actual recognisability with relative ease.
Using these notions we give S-adic analogues of the notions of column number and height for substitutions, including dynamical and combinatorial interpretations of each, and give a general characterisation of the maximal equicontinuous factor of the identified family of S-adic shifts, showing as a consequence that in this context all continuous eigenvalues must be rational. We also employ the tools developed in a first approach to the measurable case. This is joint work with Neil Mañibo and Reem Yassawi.

Yufei Chen (TU Delft)
Matching of orbits of certain N-expansions with a finite set of digits

In this talk we consider a class of continued fraction expansions: the so-called $N$-expansions with a finite digit set, where $N\geq 2$ is an integer. For fixed $N$ they are steered by a parameter $\alpha\in (0,\sqrt{N}-1]$. For $N=2$ an explicit interval $[A,B]$ was determined such that for all $\alpha\in [A,B]$ the entropy $h(T_{\alpha})$ of the underlying Gauss map $T_{\alpha}$ is constant. In this paper we show that for all $N\in\mathbb{N}$, $N\geq 2$, such plateaux exist. In order to show that the entropy is constant on such plateaux, we obtain the underlying planar natural extension of the maps $T_{\alpha}$ and the $T_{\alpha}$-invariant measure, prove ergodicity, and show that for any two $\alpha,\alpha'$ from the same plateau the natural extensions are metrically isomorphic, with the isomorphism given explicitly. The plateaux are found via a property called matching.

Lukas Spiegelhofer (Montanuniversität Leoben)
Primes as sums of Fibonacci numbers

We prove that the Zeckendorf sum-of-digits function of prime numbers, $z(p)$, is uniformly distributed in residue classes. The main ingredient that made this proof possible is the study of very sparse arithmetic subsequences of $z(n)$. In other words, we will meet the level of distribution.
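The Zeckendorf sum-of-digits function $z(n)$ studied in this talk can be sketched with the usual greedy algorithm (our own illustration, not code from the talk):

```python
def zeckendorf_digit_sum(n):
    """z(n): the number of summands in the Zeckendorf representation of n,
    i.e. the unique decomposition of n into non-consecutive Fibonacci
    numbers, obtained greedily from the largest Fibonacci number down."""
    fibs = [1, 2]
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    count = 0
    for f in reversed(fibs):
        if f <= n:
            n -= f
            count += 1
    return count

# Example: 17 = 13 + 3 + 1, so z(17) = 3.
```

The greedy choice automatically avoids consecutive Fibonacci numbers, since after subtracting the largest fit, the remainder is smaller than the preceding Fibonacci number.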
Our proof of this central result is based on a combination of the "Mauduit−Rivat−van der Corput method" for digital problems and an estimate of a Gowers norm related to $z(n)$. Our method of proof yields examples of substitutive sequences that are orthogonal to the Möbius function (cf. Sarnak's conjecture). This is joint work with Michael Drmota and Clemens Müllner (TU Wien).

Tuesday, October 4, 2022, 14:00, Online
David Siukaev (Higher School of Economics)
Exactness and Ergodicity of Certain Markovian Multidimensional Fraction Algorithms

A multidimensional continued fraction (MCF) algorithm is a generalization of the well-known continued fraction algorithms of small dimensions: Gauss and Euclidean. Ergodic properties of Markov MCF algorithms (ergodicity, nonsingularity, exactness, bi-measurability) affect their convergence (if the MCF algorithm is a Markov algorithm, there is a relationship between its spectral properties and its convergence). In 2013 T. Miernowski and A. Nogueira proved that the Euclidean algorithm and the non-homogeneous Rauzy induction satisfy the intersection property and, as a consequence, are exact. At the end of that article it is stated that other non-homogeneous Markovian algorithms (Selmer, Brun and Jacobi-Perron) also satisfy the intersection property and are also exact; however, no proof of this is given. In our paper this proof is obtained by adapting the structure of the exactness proof for the Euclidean algorithm, generalized and refined for multidimensional algorithms. The resulting proofs are technically involved and differ from those of T. Miernowski and A. Nogueira in the difficulties posed by the multidimensional case.

Tuesday, October 4, 2022, 14:30, Online
Alexandra Skripchenko (Higher School of Economics)
Bruin-Troubetzkoy family of interval translation mappings: a new glance

In 2002 H. Bruin and S.
Troubetzkoy described a special class of interval translation mappings on three intervals. They showed that in this class the typical ITM can be reduced to an interval exchange transformation. They also proved that a generic ITM of their class that cannot be reduced to an IET is uniquely ergodic. We suggest an alternative proof of the first statement and obtain a stronger version of the second one. This is joint work in progress with Mauro Artigiani and Pascal Hubert.

Tuesday, September 27, 2022, 14:00, Online
Niels Langeveld (Montanuniversität Leoben)
$N$-continued fractions and $S$-adic sequences

Given the $N$-continued fraction of a number $x$, we construct $N$-continued fraction sequences in the same spirit in which Sturmian sequences can be constructed from regular continued fractions. These sequences are infinite words over a two-letter alphabet obtained as the limit of a directive sequence of certain substitutions (they are S-adic sequences). Viewing them as a generalisation of Sturmian sequences, it is natural to study balancedness. We will see that the sequences we construct are not 1-balanced but C-balanced for $C=N^2$. Furthermore, we construct a dual sequence which is related to the natural extension of the $N$-continued fraction algorithm. This talk is joint work with Lucía Rossi and Jörg Thuswaldner.

Tuesday, September 13, 2022, 14:30, Online
Benedict Sewell (Alfréd Rényi Institute)
An upper bound on the box-counting dimension of the Rauzy gasket

The Rauzy gasket is a subset of the standard two-simplex and an important subset of parameter space in various settings. It is a parabolic, non-conformal fractal attractor, meaning that even the most trivial upper bounds on its Hausdorff or box-counting dimensions are hard to obtain. In this talk (featuring joint work with Mark Pollicott), we discuss how an elementary method leads to the best known upper bound on these dimensions.
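The $N$-continued fraction expansions discussed in the abstracts above can be generated by a Gauss-type map. The following is a simplified sketch of our own (it uses no finite digit set and no parameter $\alpha$, unlike the settings of the talks):

```python
import math

def n_cf_digits(x, N, k):
    """First k digits of an N-continued fraction expansion
    x = N/(d1 + N/(d2 + N/(d3 + ...))), generated by iterating the
    Gauss-type map T(x) = N/x - floor(N/x) on (0, 1)."""
    digits = []
    for _ in range(k):
        if x == 0:
            break
        d = math.floor(N / x)
        digits.append(d)
        x = N / x - d
    return digits

def n_cf_value(digits, N):
    """Evaluate the finite N-continued fraction with the given digits."""
    value = 0.0
    for d in reversed(digits):
        value = N / (d + value)
    return value

# Truncated expansions reconstruct x to high accuracy:
approx = n_cf_value(n_cf_digits(0.7, 2, 25), 2)
```

For $N=1$ this reduces to the regular continued fraction algorithm; for larger $N$ every digit is at least $N$, and the inverse branches contract, so the truncations converge geometrically.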
Tuesday, July 12, 2022, 14:30, Online
Ruofan Li (South China University of Technology)
Rational numbers in ×b-invariant sets

Let $b \ge 2$ be an integer and $S$ be a finite non-empty set of primes not containing divisors of $b$. For any $\times b$-invariant, non-dense subset $A$ of $[0,1)$, we prove the finiteness of the set of rational numbers in $A$ whose denominators are divisible only by primes in $S$. A quantitative result on the largest prime divisors of the denominators of rational numbers in $A$ is also obtained. This is joint work with Bing Li and Yufeng Wu.

Tuesday, July 5, 2022, 14:30, Online
Charlene Kalle (Universiteit Leiden)
Random Lüroth expansions

Since the introduction of Lüroth expansions by Lüroth in his paper from 1883, many results have appeared on their approximation properties. In 1990 Kalpazidou, Knopfmacher and Knopfmacher introduced alternating Lüroth expansions and studied their properties. A comparison between the two and other comparable number systems was then given by Barrionuevo, Burton, Dajani and Kraaikamp in 1996. In this talk we introduce a family of random dynamical systems that produce many Lüroth-type expansions at once. Topics that we consider are periodic expansions, universal expansions, speed of convergence and approximation coefficients. This talk is based on joint work with Marta Maggioni.

Tuesday, June 21, 2022, 14:30, Online
James A. Yorke (University of Maryland)
Large and Small Chaos Models

To set the scene, I will discuss one large model, a whole-Earth model for predicting the weather, and how to initialize such a model and what aspects of chaos are essential. Then I will discuss a couple of related "very simple" maps that tell us a great deal about very complex models. The results on simple models are new. I will discuss the logistic map mx(1-x). Its dynamics can make us rethink climate models.
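The logistic map mentioned above is simple to iterate, and a few lines suffice to exhibit the sensitive dependence on initial conditions that underlies the talk's theme. A sketch of our own, not code from the talk:

```python
def logistic_orbit(m, x0, n):
    """Iterate the logistic map x -> m * x * (1 - x) for n steps."""
    xs = [x0]
    for _ in range(n):
        xs.append(m * xs[-1] * (1 - xs[-1]))
    return xs

# Two seeds differing by 1e-9 separate to order one within a few dozen
# iterations at the fully chaotic parameter m = 4.
a = logistic_orbit(4.0, 0.3, 40)
b = logistic_orbit(4.0, 0.3 + 1e-9, 40)
gap = max(abs(x - y) for x, y in zip(a, b))
```

Since perturbations roughly double per step at m = 4, an initial error of 1e-9 reaches order one after about 30 iterations, which is the quantitative content of the "butterfly effect" alluded to in the abstract.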
Also, we have created a piecewise linear map on a 3D cube that is unstable in 2 dimensions in some places and unstable in 1 dimension in others. It has a dense set of periodic points that are 1D unstable and another dense set of periodic points that are 2D unstable. I will also discuss a new project whose tentative title is "Can the flap of a butterfly's wings shift a tornado out of Texas – without chaos?" Mardi 7 juin 2022, 14 heures 30, Online Sophie Morier-Genoud (Université Reims Champagne Ardenne) q-analogues of real numbers Classical sequences of numbers often lead to interesting q-analogues. The most popular among them are certainly the q-integers and the q-binomial coefficients, which both appear in various areas of mathematics and physics. With Valentin Ovsienko we recently suggested a notion of q-rationals based on combinatorial properties and continued fraction expansions. The definition of q-rationals naturally extends that of q-integers and leads to a ratio of polynomials with positive integer coefficients. I will explain the construction and give the main properties. In particular I will briefly mention connections with the combinatorics of posets, cluster algebras, Jones polynomials and homological algebra. Finally I will also present further developments of the theory, leading to the notion of q-irrationals and q-unimodular matrices. Mardi 31 mai 2022, 14 heures 30, Online Verónica Becher (Universidad de Buenos Aires & CONICET Argentina) Poisson generic real numbers Years ago Zeev Rudnick defined the Poisson generic real numbers as those where the number of occurrences of long strings in the initial segments of their fractional expansions in some base has the Poisson distribution. Yuval Peres and Benjamin Weiss proved that almost all real numbers, with respect to Lebesgue measure, are Poisson generic. They also showed that Poisson genericity implies Borel normality but the two notions do not coincide, witnessed by the famous Champernowne constant.
We recently showed that there are computable Poisson generic real numbers and that all Martin-Löf random real numbers are Poisson generic. This is joint work with Nicolás Álvarez and Martín Mereb. Émilie Charlier (Université de Liège) Spectrum, algebraicity and normalization in alternate bases The first aim of this work is to give information about the algebraic properties of alternate bases determining sofic systems. We exhibit two conditions: one necessary and one sufficient. Comparing the setting of alternate bases to that of one real base, these conditions exhibit a new phenomenon: the bases should be expressible as rational functions of their product. The second aim is to provide an analogue of Frougny's result concerning normalization of real base representations. Under a suitable condition (i.e., our previous sufficient condition for being a sofic system), we prove that the normalization function is computable by a finite Büchi automaton, and furthermore, we effectively construct such an automaton. An important tool in our study is the spectrum of numeration systems associated with alternate bases. For our purposes, we use a generalized concept of spectrum associated with a complex base and complex digits, and we study its topological properties. This is joint work with Célia Cisternino, Zuzana Masáková and Edita Pelantová. Vilmos Komornik (Université de Strasbourg et Shenzhen University) Topology of univoque sets in real base expansions We report on a recent joint paper with Martijn de Vries and Paola Loreti. Given a positive integer $M$ and a real number $1 < q\le M+1$, an expansion of a real number $x \in \left[0,M/(q-1)\right]$ over the alphabet $A=\{0,1,\ldots,M\}$ is a sequence $(c_i) \in A^{\mathbb{N}}$ such that $x=\sum_{i=1}^{\infty}c_iq^{-i}$.
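Expansions of the form in Komornik's abstract can be produced greedily: rescale by $q$ and take the largest admissible digit at each step. The sketch below is my own illustration, not code from the paper; for $1 < q \le M+1$ the remainder stays in $[0, M/(q-1)]$, so the greedy digit is always well defined.

```python
def greedy_expansion(x, q, M, depth):
    """Greedy digits (c_i) in {0,...,M} with x ~ sum_i c_i * q**(-i)."""
    assert 1 < q <= M + 1 and 0 <= x <= M / (q - 1) + 1e-12
    digits = []
    for _ in range(depth):
        x *= q
        c = min(M, int(x))  # largest digit not exceeding the rescaled remainder
        digits.append(c)
        x -= c
    return digits
```

With $q = 2$ and $M = 1$ this is the usual binary expansion; for non-integer $q$ the same greedy rule yields one of the (typically continuum many) valid expansions.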
Generalizing many earlier results, we investigate the topological properties of the set $U_q$ consisting of numbers $x$ having a unique expansion of this form, and the combinatorial properties of the set $U_q'$ consisting of their corresponding expansions. Mardi 3 mai 2022, 14 heures 30, Online Nicolas Chevallier (Université de Haute Alsace) Best Diophantine approximations in the complex plane with Gaussian integers Starting with the minimal vectors in lattices over Gaussian integers in $\mathbb{C}^2$, we define an algorithm that finds the sequence of minimal vectors of any unimodular lattice in $\mathbb{C}^2$. Restricted to lattices associated with complex numbers, this algorithm finds all the best Diophantine approximations of a complex number. Following Doeblin, Lenstra, Bosma, Jager and Wiedijk, we study the limit distribution of the sequence of products $(u_{n1}u_{n2})_n$ where $(u_n=( u_{n1},u_{n2} ))_n$ is the sequence of minimal vectors of a lattice in $\mathbb{C}^2$. We show that there exists a measure in $\mathbb{C}$ which is the limit distribution of this sequence of products for almost all unimodular lattices. Mardi 19 avril 2022, 14 heures 30, Online Paulina Cecchi Bernales (Universidad de Chile) Coboundaries and eigenvalues of finitary S-adic systems An S-adic system is a shift space obtained by performing an infinite composition of morphisms defined over possibly different finite alphabets. It is said to be finitary if these morphisms are taken from a finite set. S-adic systems are a generalization of substitution shifts. In this talk we will discuss spectral properties of finitary S-adic systems. Our departure point will be a theorem by B. Host which characterizes eigenvalues of substitution shifts, and where coboundaries appear as a key tool. We will introduce the notion of S-adic coboundaries and present some results which show how they are related to eigenvalues of S-adic systems.
We will also present some applications of our results to constant-length finitary S-adic systems. This is joint work with Valérie Berthé and Reem Yassawi. Eda Cesaratto (Univ. Nac. de Gral. Sarmiento & CONICET, Argentina) Lochs-type theorems beyond positive entropy Lochs' theorem and its generalizations are conversion theorems that relate the number of digits determined in one expansion of a real number as a function of the number of digits given in some other expansion. In its original version, Lochs' theorem related decimal expansions with continued fraction expansions. Such conversion results can also be stated for sequences of interval partitions under suitable assumptions, with results holding almost everywhere, or in measure, involving the entropy. This is the viewpoint we develop here. In order to deal with sequences of partitions beyond positive entropy, this paper introduces the notion of log-balanced sequences of partitions, together with their weight functions. These are sequences of interval partitions such that the logarithms of the measures of their intervals at each depth are roughly the same. We then state Lochs-type theorems which work even in the case of zero entropy, in particular for several important log-balanced sequences of partitions of a number-theoretic nature. This is joint work with Valérie Berthé (IRIF), Pablo Rotondo (U. Gustave Eiffel) and Martín Safe (Univ. Nac. del Sur & CONICET, Argentina). Mardi 5 avril 2022, 14 heures 30, Online Jungwon Lee (University of Warwick) Dynamics of Ostrowski skew-product: Limit laws and Hausdorff dimensions We discuss a dynamical study of the Ostrowski skew-product map in the context of inhomogeneous Diophantine approximation. We plan to outline the setup/ strategy based on transfer operator analysis and applications in arithmetic of number fields (joint with Valérie Berthé). 
Mardi 29 mars 2022, 14 heures 30, Online Tingyu Zhang (East China Normal University) Random β-transformation on fat Sierpiński gasket We define the notions of greedy, lazy and random transformations on the fat Sierpiński gasket. We determine the bases for which the system has a unique measure of maximal entropy and an invariant measure of product type, with one coordinate being absolutely continuous with respect to Lebesgue measure. This is joint work with K. Dajani and W. Li. Pierre Popoli (Université de Lorraine) Maximum order complexity for some automatic and morphic sequences along polynomial values Automatic sequences are not suitable for cryptographic applications since both their subword complexity and their expansion complexity are small, and their correlation measure of order 2 is large. These sequences are highly predictable despite having a large maximum order complexity. However, recent results show that polynomial subsequences of automatic sequences, such as the Thue-Morse sequence or the Rudin-Shapiro sequence, are better candidates for pseudorandom sequences. A natural generalization of automatic sequences is the class of morphic sequences, given by a fixed point of a prolongable morphism that is not necessarily uniform. In this talk, I will present my results on lower bounds for the maximum order complexity of the Thue-Morse sequence, the Rudin-Shapiro sequence and the sum-of-digits function in Zeckendorf base, which are automatic and morphic sequences, respectively. Mardi 8 mars 2022, 14 heures 30, Online Michael Coons (Universität Bielefeld) A spectral theory of regular sequences A few years ago, Michael Baake and I introduced a probability measure associated to Stern's diatomic sequence, an example of a regular sequence; regular sequences generalise constant-length substitutions to infinite alphabets. In this talk, I will discuss extensions of these results to more general regular sequences as well as further properties of these measures.
This is joint work with several people, including Michael Baake, James Evans, Zachary Groth and Neil Manibo. Daniel Krenn (Universität Salzburg) k-regular sequences: Asymptotics and Decidability A sequence $x(n)$ is called $k$-regular if the set of subsequences $x(k^j n + r)$ is contained in a finitely generated module. In this talk, we will consider the asymptotic growth of $k$-regular sequences. When is it possible to compute it? …and when not? If possible, how precisely can we compute it? If not, is it just a lack of methods, or are the underlying decision questions not recursively solvable (i.e., undecidable in a computational sense)? We will discuss answers to these questions. To round off the picture, we will consider further decidability questions around $k$-regular sequences and the subclass of $k$-automatic sequences. Mardi 15 février 2022, 14 heures 30, Online Wolfgang Steiner (IRIF) Unique double base expansions For pairs of real bases $\beta_0, \beta_1 > 1$, we study expansions of the form $\sum_{k=1}^\infty i_k / (\beta_{i_1} \beta_{i_2} \cdots \beta_{i_k})$ with digits $i_k \in \{0,1\}$. We characterise the pairs admitting non-trivial unique expansions as well as those admitting uncountably many unique expansions, extending recent results of Neunhäuserer (2021) and Zou, Komornik and Lu (2021). Similarly to the study of unique $\beta$-expansions with three digits by the speaker (2020), this boils down to determining the cardinality of binary shifts defined by lexicographic inequalities. Labarca and Moreira (2006) characterised when such a shift is empty, at most countable or uncountable, depending on the position of the lower and upper bounds with respect to Thue–Morse–Sturmian words. This is joint work with Vilmos Komornik and Yuru Zou.
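To get a feel for the double-base expansions in Steiner's abstract, the value $\sum_{k\ge1} i_k/(\beta_{i_1}\cdots\beta_{i_k})$ of a finite digit string can be accumulated term by term; this is an illustrative sketch of the definition, not code from the paper.

```python
def double_base_value(digits, beta0, beta1):
    """Value of sum_k i_k / (beta_{i_1} * ... * beta_{i_k}) for digits i_k in {0,1}."""
    betas = (beta0, beta1)
    prod, value = 1.0, 0.0
    for d in digits:
        prod *= betas[d]   # product beta_{i_1} * ... * beta_{i_k} so far
        value += d / prod  # digit 0 contributes nothing but still scales later terms
    return value
```

With $\beta_0 = \beta_1 = 2$ this reduces to the ordinary binary expansion; for $\beta_0 \ne \beta_1$ the weight of each digit depends on all earlier digits, which is what makes uniqueness questions delicate.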
Mardi 8 février 2022, 14 heures 30, Online Magdaléna Tinková (České vysoké učení technické v Praze) Universal quadratic forms, small norms and traces in families of number fields In this talk, we will discuss universal quadratic forms over number fields and their connection with additively indecomposable integers. In particular, we will focus on Shanks' family of the simplest cubic fields. This is joint work with Vítězslav Kala. Jonas Jankauskas (Vilniaus universitetas) Digit systems with rational base matrix over lattices Let $A$ be a matrix with rational entries and no eigenvalue of absolute value smaller than 1. Let $\mathbb{Z}^d[A]$ be the minimal $A$-invariant $\mathbb{Z}$-module, generated by integer vectors and the matrix $A$. In 2018, we showed that one can find a finite set $D$ of vectors such that each element of $\mathbb{Z}^d[A]$ has a finite radix expansion in base $A$ using only the digits from $D$, i.e. $\mathbb{Z}^d[A]=D[A]$. This is called 'the finiteness property' of a digit system. In the present talk I will review more recent developments in the mathematical machinery that enable us to build finite digit systems over lattices using reasonably small digit sets, and even to do some practical computations with them on a computer. The tools we use are generalized rotation bases with digit sets that have 'good' convex properties, semi-direct ('twisted') sums of such rotational digit systems, and a special 'restricted' version of remainder division that preserves the lattice $\mathbb{Z}^d$ and can be extended to $\mathbb{Z}^d[A]$. This is joint work with J. Thuswaldner, "Rational Matrix Digit Systems", to appear in "Linear and Multilinear Algebra" (arXiv preprint: https://arxiv.org/abs/2107.14168).
Mardi 25 janvier 2022, 14 heures 30, Online Claudio Bonanno (Università di Pisa) Infinite ergodic theory and a tree of rational pairs The study of the continued fraction expansions of real numbers by ergodic methods is now a classical and well-known part of the theory of dynamical systems. Less is known for the multi-dimensional expansions. I will present an ergodic approach to a two-dimensional continued fraction algorithm introduced by T. Garrity, and show how to get a complete tree of rational pairs by using the Farey sum of fractions. The talk is based on joint work with A. Del Vigna and S. Munday. Agamemnon Zafeiropoulos (NTNU) The order of magnitude of Sudler products Given an irrational $\alpha \in [0,1] \smallsetminus \mathbb{Q}$, we define the corresponding Sudler product by $$ P_N(\alpha) = \prod_{n=1}^{N}2|\sin (\pi n \alpha)|. $$ In joint work with C. Aistleitner and N. Technau, we show that when $\alpha = [0;b,b,b…]$ is a quadratic irrational with all partial quotients in its continued fraction expansion equal to some integer b, the following hold: - If $b\leq 5$, then $\liminf_{N\to \infty}P_N(\alpha) >0$ and $\limsup_{N\to \infty} P_N(\alpha)/N < \infty$. - If $b\geq 6$, then $\liminf_{N\to \infty}P_N(\alpha) = 0$ and $\limsup_{N\to \infty} P_N(\alpha)/N = \infty$. We also present an analogue of the previous result for arbitrary quadratic irrationals (joint work with S. Grepstad and M. Neumueller). Philipp Gohlke (Universität Bielefeld) Zero measure spectrum for multi-frequency Schrödinger operators Cantor spectrum of zero Lebesgue measure is a striking feature of Schrödinger operators associated with certain models of aperiodic order, like primitive substitution systems or Sturmian subshifts. This is known to follow from a condition introduced by Boshernitzan that establishes that on infinitely many scales words of the same length appear with a similar frequency. 
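The Sudler products in Zafeiropoulos's abstract are straightforward to evaluate numerically. The sketch below (my own illustration) computes $P_N(\alpha)$ together with the quadratic irrational $\alpha = [0;b,b,b,\dots]$, which satisfies $\alpha = 1/(b+\alpha)$ and hence $\alpha = (\sqrt{b^2+4}-b)/2$.

```python
import math

def sudler(alpha, N):
    """P_N(alpha) = prod_{n=1}^N 2*|sin(pi*n*alpha)|."""
    p = 1.0
    for n in range(1, N + 1):
        p *= 2.0 * abs(math.sin(math.pi * n * alpha))
    return p

def quadratic_alpha(b):
    """alpha = [0; b, b, b, ...], the positive root of alpha = 1/(b + alpha)."""
    return (math.sqrt(b * b + 4) - b) / 2.0
```

Plotting `sudler(quadratic_alpha(b), N)` against $N$ makes the dichotomy at $b = 6$ in the abstract visible: for small $b$ the products stay bounded away from $0$, while for $b \ge 6$ they dip toward $0$ along subsequences.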
Building on works of Berthé–Steiner–Thuswaldner and Fogg–Nous, we show that on the two-dimensional torus, Lebesgue almost every translation admits a natural coding such that the associated subshift satisfies the Boshernitzan criterion (joint work with J. Chaika, D. Damanik and J. Fillman). Mardi 21 décembre 2021, 14 heures 30, Online Fan Lü (Sichuan Normal University) Multiplicative Diophantine approximation in the parameter space of beta-dynamical system Beta-transformation is a special kind of expanding dynamics, the total information of which can be determined by the orbits of some critical points (e.g., the point 1). Letting $T_{\beta}$ be the beta-transformation with $\beta>1$ and $x$ be a fixed point in $(0,1]$, we consider the set of parameters $(\alpha, \beta)$ such that the product $\|T^n_{\alpha}(x)\|\|T^n_{\beta}(x)\|$ is well approximated or badly approximated. The Gallagher-type question, the Jarník-type question, as well as the badly approximable pairs (i.e., the Littlewood-type question) are studied in detail. Mardi 7 décembre 2021, 14 heures 30, Online Jamie Walton (University of Nottingham) Extending the theory of symbolic substitutions to compact alphabets In this work, joint with Neil Mañibo and Dan Rust, we consider an extension of the theory of symbolic substitutions to infinite alphabets, by requiring the alphabet to carry a compact, Hausdorff topology for which the substitution is continuous. Such substitutions have been considered before, in particular by Durand, Ormes and Petite for zero-dimensional alphabets, and by Queffélec in the constant length case. We find a simple condition which ensures that an associated substitution operator is quasi-compact, which we conjecture to always be satisfied for primitive substitutions on countable alphabets. In the primitive case this implies the existence of a unique natural tile length function and, for a recognisable substitution, that the associated shift space is uniquely ergodic.
The main tools come from the theory of positive operators on Banach spaces. Very few prerequisites will be assumed, and the theory will be demonstrated via examples. Mardi 23 novembre 2021, 14 heures 30, Online Sascha Troscheit (Universität Wien) Analogues of Khintchine's theorem for random attractors Khintchine's theorem is an important result in number theory which links the Lebesgue measure of certain limsup sets with the convergence/divergence of naturally occurring volume sums. This behaviour has been observed for deterministic fractal sets and inspired by this we investigate the random settings. Introducing randomisation into the problem makes some parts more tractable, while posing separate new challenges. In this talk, I will present joint work with Simon Baker where we provide sufficient conditions for a large class of stochastically self-similar and self-affine attractors to have positive Lebesgue measure. Lucía Rossi (Montanuniversität Leoben) Rational self-affine tiles associated to (nonstandard) digit systems In this talk we will introduce the notion of rational self-affine tiles, which are fractal-like sets that arise as the solution of a set equation associated to a digit system that consists of a base, given by an expanding rational matrix, and a digit set, given by vectors. They can be interpreted as the set of "fractional parts" of this digit system, and the challenge of this theory is that these sets do not live in a Euclidean space, but on more general spaces defined in terms of Laurent series. Steiner and Thuswaldner defined rational self-affine tiles for the case where the base is a rational matrix with irreducible characteristic polynomial. We present some tiling results that generalize the ones obtained by Lagarias and Wang: we consider arbitrary expanding rational matrices as bases, and simultaneously allow the digit sets to be nonstandard (meaning they are not a complete set of residues modulo the base). 
We also state some topological properties of rational self-affine tiles and give a criterion to guarantee positive measure in terms of the digit set. Mardi 9 novembre 2021, 14 heures 30, Online Zhiqiang Wang (East China Normal University) How inhomogeneous Cantor sets can pass a point For $x > 0$, we define $\Upsilon(x) = \{ (a,b): x\in E_{a,b}, a>0, b>0, a+b \le 1 \}$, where the set $E_{a,b}$ is the unique nonempty compact invariant set generated by the inhomogeneous IFS $\{ f_0(x) = a x, f_1(x) = b(x+1) \}$. We show the set $\Upsilon(x)$ is a Lebesgue null set with full Hausdorff dimension in $\mathbb{R}^2$, and that the intersection of the sets $\Upsilon(x_1), \Upsilon(x_2), \cdots, \Upsilon(x_\ell)$ still has full Hausdorff dimension in $\mathbb{R}^2$ for any finitely many positive real numbers $x_1, x_2, \cdots, x_\ell$. Younès Tierce (Université de Rouen Normandie) Extensions of the random beta-transformation Let $\beta \in (1,2)$ and $I_\beta := [0,\frac{1}{\beta-1}]$. Almost every real number in $I_\beta$ has infinitely many expansions in base $\beta$, and the random $\beta$-transformation generates all these expansions. We present the construction of a "geometrico-symbolic" extension of the random $\beta$-transformation, providing a new proof of the existence and uniqueness of an absolutely continuous invariant probability measure, and an expression for the density of this measure. This extension exhibits some nice renewal times, and we use these to prove that the natural extension of the system is a Bernoulli automorphism. Pieter Allaart (University of North Texas) On the existence of Trott numbers relative to multiple bases Trott numbers are real numbers in the interval (0,1) whose continued fraction expansion equals their base-b expansion, in a certain liberal but natural sense. They exist in some bases, but not in all. In a previous OWNS talk, T. Jones sketched a proof of the existence of Trott numbers in base 10.
In this talk I will discuss some further properties of these Trott numbers, and focus on the question: Can a number ever be Trott in more than one base at once? While the answer is almost certainly "no", a full proof of this seems currently out of reach. But we obtain some interesting partial answers by using a deep theorem from Diophantine approximation. Mardi 26 octobre 2021, 14 heures 30, Online Michael Baake (Universität Bielefeld) Spectral aspects of aperiodic dynamical systems One way to analyse aperiodic systems employs spectral notions, either via dynamical systems theory or via harmonic analysis. In this talk, we will look at two particular aspects of this, after a quick overview of how the diffraction measure can be used for this purpose. First, we consider some consequences of inflation rules on the spectra via renormalisation, and how to use them to exclude absolutely continuous components. Second, we take a look at a class of dynamical systems of number-theoretic origin, how they fit into the spectral picture, and what (other) methods there are to distinguish them. Mélodie Lapointe (IRIF) q-analog of the Markoff injectivity conjecture The Markoff injectivity conjecture states that $w\mapsto\mu(w)_{12}$ is injective on the set of Christoffel words, where $\mu:\{\mathtt{0},\mathtt{1}\}^*\to\mathrm{SL}_2(\mathbb{Z})$ is a certain homomorphism and $M_{12}$ is the entry above the diagonal of a $2\times2$ matrix $M$. Recently, Leclere and Morier-Genoud (2021) proposed a $q$-analog $\mu_q$ of $\mu$ such that $\mu_{q\to1}(w)_{12}=\mu(w)_{12}$ is the Markoff number associated to the Christoffel word $w$. We show that there exists an order $<_{radix}$ on $\{\mathtt{0},\mathtt{1}\}^*$ such that for every balanced sequence $s \in \{\mathtt{0},\mathtt{1}\}^\mathbb{Z}$ and for all factors $u, v$ in the language of $s$ with $u <_{radix} v$, the difference $\mu_q(v)_{12} - \mu_q(u)_{12}$ is a nonzero polynomial in the indeterminate $q$ with nonnegative integer coefficients.
Therefore, for every $q>0$, the map $\{\mathtt{0},\mathtt{1}\}^*\to\mathbb{R}$ defined by $w\mapsto\mu_q(w)_{12}$ is increasing, and thus injective, on the language of a balanced sequence. The proof uses an equivalence between balanced sequences satisfying some Markoff property and indistinguishable asymptotic pairs. Liangang Ma (Binzhou University) Inflection points in the Lyapunov spectrum for IFS on intervals We plan to present a general picture of the regularity of the Lyapunov spectrum for some iterated function systems, with emphasis on its inflection points in the case that the spectrum is smooth. Some sharp or moderate relationships between the number of Lyapunov inflections and the (essential) branch number of a linear system are clarified. As most numeration systems are non-linear, the corresponding relationships for these systems remain rather mysterious compared with the linear case. Lulu Fang (Nanjing University of Science and Technology) On upper and lower fast Khintchine spectra in continued fractions Let $\psi: \mathbb{N} \to \mathbb{R}^+$ be a function satisfying $\psi(n)/n \to \infty$ as $n \to \infty$. We investigate, from a multifractal analysis point of view, the growth speed of the sums $\sum_{k=1}^n \log a_k(x)$ with respect to $\psi(n)$, where $x = [a_1(x),a_2(x),\dots]$ denotes the continued fraction expansion of $x \in (0,1)$. The (upper, lower) fast Khintchine spectrum is defined as the Hausdorff dimension of the set of points $x \in (0,1)$ for which the (upper, lower) limit of $\frac{1}{\psi(n)} \sum_{k=1}^n \log a_k(x)$ is equal to 1. These three spectra have been studied by Fan, Liao, Wang & Wu (2013, 2016) and Liao & Rams (2016). In this talk, we will give a new look at the fast Khintchine spectrum, and provide a full description of the upper and lower fast Khintchine spectra. The latter improves a result of Liao and Rams (2016).
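The quantities in Fang's abstract are easy to experiment with numerically: the digits $a_k(x)$ come from iterating the Gauss map $x \mapsto 1/x - \lfloor 1/x \rfloor$, and the Birkhoff sums $\sum_{k=1}^n \log a_k(x)$ follow. This is a rough sketch; floating point limits how many digits are reliable.

```python
import math

def cf_digits(x, n):
    """First n continued fraction digits a_k of x in (0,1), via the Gauss map."""
    digits = []
    for _ in range(n):
        if x == 0:
            break
        x = 1.0 / x
        a = int(x)       # a_k = floor(1/x_{k-1})
        digits.append(a)
        x -= a           # Gauss map: fractional part of 1/x
    return digits

def log_digit_sum(x, n):
    """Birkhoff sum sum_{k=1}^n log a_k(x)."""
    return sum(math.log(a) for a in cf_digits(x, n))
```

For example, $\sqrt{2}-1 = [0;2,2,2,\dots]$, so its Birkhoff sums grow like $n\log 2$; for the classical Khintchine spectrum one instead normalises by $n$, while Fang's setting divides by a faster-growing $\psi(n)$.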
Taylor Jones (University of North Texas) On the Existence of Numbers with Matching Continued Fraction and Decimal Expansion A Trott number in base 10 is one whose continued fraction expansion agrees with its base 10 expansion in the sense that [0;a_1,a_2,…] = 0.(a_1)(a_2)… where (a_i) represents the string of digits of a_i. As an example [0;3,29,54,7,…] = 0.329547… An analogous definition may be given for a Trott number in any integer base b>1, the set of which we denote by T_b. The first natural question is whether T_b is empty, and if not, for which b? We discuss the history of the problem, and give a heuristic process for constructing such numbers. We show that T_{10} is indeed non-empty, and uncountable. With more delicate techniques, a complete classification may be given to all b for which T_b is non-empty. We also discuss some further results, such as a (non-trivial) upper bound on the Hausdorff dimension of T_b, as well as the question of whether the intersection of T_b and T_c can be non-empty. Philipp Hieronymi (Universität Bonn) A strong version of Cobham's theorem Let k,l>1 be two multiplicatively independent integers. A subset X of N^n is k-recognizable if the set of k-ary representations of X is recognized by some finite automaton. Cobham's famous theorem states that a subset of the natural numbers is both k-recognizable and l-recognizable if and only if it is Presburger-definable (or equivalently: semilinear). We show the following strengthening. Let X be k-recognizable, let Y be l-recognizable such that both X and Y are not Presburger-definable. Then the first-order logical theory of (N,+,X,Y) is undecidable. This is in contrast to a well-known theorem of Büchi that the first-order logical theory of (N,+,X) is decidable. Our work strengthens and depends on earlier work of Villemaire and Bès. The essence of Cobham's theorem is that recognizability depends strongly on the choice of the base k. 
Our result strengthens this: two non-Presburger-definable sets that are recognizable in multiplicatively independent bases are not only distinct, but together computationally intractable over Presburger arithmetic. This is joint work with Christian Schulz. Maria Siskaki (University of Illinois at Urbana-Champaign) The distribution of reduced quadratic irrationals arising from continued fraction expansions It is known that the reduced quadratic irrationals arising from regular continued fraction expansions are uniformly distributed when ordered by their length with respect to the Gauss measure. In this talk, I will describe a number-theoretical approach developed by Kallies, Ozluk, Peter and Snyder, and then by Boca, that gives the error in the asymptotic behavior of this distribution. Moreover, I will present the respective result for the distribution of reduced quadratic irrationals that arise from even (joint work with F. Boca) and odd continued fractions. Steve Jackson (University of North Texas) Descriptive complexity in numeration systems Descriptive set theory gives a means of calibrating the complexity of sets, and we focus on some sets occurring in numeration systems. Also, the descriptive complexity of the difference of two sets gives a notion of the logical independence of the sets. A classic result of Ki and Linton says that the set of normal numbers for a given base is a Π_3^0 complete set. In work with Airey, Kwietniak, and Mance we extend this to other numeration systems such as continued fractions, $\beta$-expansions, and GLS expansions. In work with Mance and Vandehey we show that the set of numbers which are continued fraction normal but not base-b normal is complete at the expected level of D_2(Π_3^0). An immediate corollary is that this set is uncountable, a result (due to Vandehey) previously known only under the generalized Riemann hypothesis.
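To make the notion of base-$b$ normality in Jackson's abstract concrete, here is a toy check of single-digit frequencies in a truncated expansion (this tests simple normality for single digits only; genuine normality requires every block of every length to equidistribute, which no finite computation can certify).

```python
from collections import Counter

def digit_frequencies(digits, b):
    """Empirical frequency of each digit 0..b-1 in a finite digit string."""
    counts = Counter(digits)
    n = len(digits)
    return {d: counts.get(d, 0) / n for d in range(b)}
```

Running this on a prefix of Champernowne's constant $0.123456789101112\dots$ (which is normal in base 10) shows the frequencies drifting toward $1/10$, although short prefixes are visibly biased toward small leading digits.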
Mardi 7 septembre 2021, 14 heures 30, Online Oleg Karpenkov (University of Liverpool) On Hermite's problem, Jacobi-Perron type algorithms, and Dirichlet groups In this talk we introduce a new modification of the Jacobi-Perron algorithm in the three dimensional case. This algorithm is periodic for the case of totally-real conjugate cubic vectors. To the best of our knowledge this is the first Jacobi-Perron type algorithm for which the cubic periodicity is proven. This provides an answer in the totally-real case to the question of algebraic periodicity for cubic irrationalities posed in 1848 by Ch. Hermite. We will briefly discuss a new approach which is based on the geometry of numbers. In addition we point out one important application of Jacobi-Perron type algorithms to the computation of independent elements in the maximal groups of commuting matrices of algebraic irrationalities. Niclas Technau (University of Wisconsin - Madison) Littlewood and Duffin-Schaeffer-type problems in diophantine approximation Gallagher's theorem describes the multiplicative diophantine approximation rate of a typical vector. Recently Sam Chow and I established a fully-inhomogeneous version of Gallagher's theorem, and a diophantine fibre refinement. In this talk I outline the proof, and the tools involved in it. Polina Vytnova (University of Warwick) Hausdorff dimension of Gauss-Cantor sets and their applications to the study of classical Markov spectrum The classical Lagrange and Markov spectra are subsets of the real line which arise in connection with some problems in Diophantine approximation theory. In 1921 O. Perron gave a definition in terms of continued fractions, which made it possible to study the Markov and Lagrange spectra using limit sets of iterated function schemes. In this talk we will see how the first transition point, where the Markov spectrum acquires full measure, can be computed by means of estimating the Hausdorff dimension of certain Gauss-Cantor sets.
The talk is based on a joint work with C. Matheus, C. G. Moreira and M. Pollicott. Lingmin Liao (Université Paris-Est Créteil Val de Marne) Simultaneous Diophantine approximation of the orbits of the dynamical systems x2 and x3 We study the sets of points whose orbits under the dynamical systems x2 and x3 simultaneously approach a given point, with a given speed. A zero-one law for the Lebesgue measure of such sets is established. The Hausdorff dimensions are also determined for some special speeds. One of the dimension formulas is established under the abc conjecture. At the same time, we also study the Diophantine approximation of the orbits of a diagonal matrix transformation of a torus, for which the properties of the (negative) beta transformations are involved. This is a joint work with Bing Li, Sanju Velani and Evgeniy Zorin. Sam Chow (University of Warwick) Dyadic approximation in the Cantor set We investigate the approximation rate of a typical element of the Cantor set by dyadic rationals. This is a manifestation of the times-two-times-three phenomenon, and is joint work with Demi Allen and Han Yu. Shigeki Akiyama (University of Tsukuba) Counting balanced words and related problems Balanced words and Sturmian words are ubiquitous and appear in the intersection of many areas of mathematics. In this talk, I try to explain an idea of S. Yasutomi for studying finite balanced words. His method gives a nice way to enumerate the number of balanced words of a given length, slope and intercept. Applying this idea, we can obtain a precise asymptotic formula for the number of balanced words. The result is connected to some classical topics in number theory, such as Farey fractions, the Riemann Hypothesis and the large sieve inequality.
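Balancedness, the property counted in Akiyama's talk, can be brute-force checked for short binary words: a word is balanced if any two of its factors of equal length contain numbers of 1s differing by at most one. A naive sketch (fine for short words, quadratic in the length per factor length):

```python
def is_balanced(w):
    """Check that any two equal-length factors of the 0-1 word w (a list of
    ints) have numbers of 1s differing by at most one."""
    n = len(w)
    for length in range(1, n + 1):
        ones = [sum(w[i:i + length]) for i in range(n - length + 1)]
        if max(ones) - min(ones) > 1:
            return False
    return True
```

Prefixes of Sturmian words, such as the Fibonacci word 01001010…, pass this test, while a word containing both 11 and 00 as factors fails it.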
Bastián Espinoza (Université de Picardie Jules Verne and Universidad de Chile)
Automorphisms and factors of finite topological rank systems
Finite topological rank systems are a type of minimal S-adic subshift that includes many of the classical minimal systems of zero entropy (e.g. linearly recurrent subshifts, interval exchanges and some Toeplitz sequences). In this talk I am going to present results concerning the number of automorphisms and factors of systems of finite topological rank, as well as closure properties of this class with respect to factors and related combinatorial operations.

Charles Fougeron (IRIF)
Dynamics of simplicial systems and multidimensional continued fraction algorithms
Motivated by the richness of the Gauss algorithm, which makes it possible to efficiently compute the best rational approximations of a real number, many mathematicians have suggested generalisations to study Diophantine approximations of vectors in higher dimensions. Examples include Poincaré's algorithm, introduced at the end of the 19th century, or those of Brun and Selmer in the middle of the 20th century. From the beginning of the 1990s to the present day, there have been many works studying the convergence and dynamics of these multidimensional continued fraction algorithms. In particular, Schweiger and Broise have shown that the approximation sequences built using the Selmer and Brun algorithms converge to the right vector, with an extra ergodic property. On the other hand, Nogueira demonstrated that the algorithm proposed by Poincaré almost never converges. Starting from the classical case of Farey's algorithm, which is an "additive" version of Gauss's algorithm, I will present a combinatorial point of view on these algorithms which allows us to use a random walk approach. In this model, choosing a random vector with respect to the Lebesgue measure corresponds to following a random walk with memory in a labelled graph called a simplicial system.
The laws of probability for this random walk are elementary and we can thus develop probabilistic techniques to study their generic dynamical behaviour. This will lead us to describe a purely graph-theoretic criterion to check the convergence of a continued fraction algorithm.

Joseph Vandehey (University of Texas at Tyler)
Solved and unsolved problems in normal numbers
We will survey a variety of problems on normal numbers, some old, some new, some solved, and some unsolved, in the hope of spurring some new directions of study. Topics will include constructions of normal numbers, normality in two different systems simultaneously, normality seen through the lens of informational or logical complexity, and more.

Giulio Tiozzo (University of Toronto)
The bifurcation locus for numbers of bounded type
We define a family B(t) of compact subsets of the unit interval which provides a filtration of the set of numbers whose continued fraction expansion has bounded digits. This generalizes to a continuous family the well-known sets of numbers whose continued fraction expansion is bounded above by a fixed integer. We study how the set B(t) changes as the parameter t ranges in [0,1], and describe precisely the bifurcations that occur as the parameter changes. Further, we discuss continuity properties of the Hausdorff dimension of B(t) and its regularity. Finally, we establish a precise correspondence between these bifurcations and the bifurcations for the classical family of real quadratic polynomials. Joint with C. Carminati.

Tuesday 4 May 2021, 16:00, Online

Tushar Das (University of Wisconsin - La Crosse)
Hausdorff Hensley Good & Gauss
Several participants of the One World Numeration Seminar (OWNS) will know Hensley's haunting bounds (c. 1990) for the dimension of irrationals whose regular continued fraction expansion partial quotients are all at most N; while some might remember Good's great bounds (c.
1940) for the dimension of irrationals whose partial quotients are all at least N. We will report on relatively recent results in https://arxiv.org/abs/2007.10554 that allow one to extend such fabulous formulae to unexpected expansions. Our technology may be utilized to study various systems arising from numeration, dynamics, or geometry. The talk will be accessible to students and beyond, and I hope to present a sampling of open questions and research directions that await exploration.

Boris Adamczewski (CNRS, Université Claude Bernard Lyon 1)
Expansions of numbers in multiplicatively independent bases: Furstenberg's conjecture and finite automata
It is commonly expected that expansions of numbers in multiplicatively independent bases, such as 2 and 10, should have no common structure. However, it seems extraordinarily difficult to confirm this naive heuristic principle in some way or another. In the late 1960s, Furstenberg suggested a series of conjectures, which became famous and aim to capture this heuristic. The work I will discuss in this talk is motivated by one of these conjectures. Despite recent remarkable progress by Shmerkin and Wu, it remains totally out of reach of the current methods. While Furstenberg's conjectures take place in a dynamical setting, I will use instead the language of automata theory to formulate some related problems that formalize and express in a different way the same general heuristic. I will explain how the latter can be solved thanks to some recent advances in Mahler's method, a method in transcendental number theory initiated by Mahler at the end of the 1920s. This is a joint work with Colin Faverjon.

Ayreena Bakhtawar (La Trobe University)
Metrical theory for the set of points associated with the generalized Jarnik-Besicovitch set
From Lagrange's (1770) and Legendre's (1808) results we conclude that to find good rational approximations to an irrational number we only need to focus on its convergents.
Let [a_1(x),a_2(x),…] be the continued fraction expansion of a real number x ∈ [0,1). The Jarnik-Besicovitch set in terms of continued fractions consists of all those x ∈ [0,1) which satisfy a_{n+1}(x) ≥ e^{τ (log|T'x|+⋯+log|T'(T^{n-1}x)|)} for infinitely many n ∈ N, where a_{n+1}(x) is the (n+1)-th partial quotient of x and T is the Gauss map. In this talk, I will focus on determining the Hausdorff dimension of the set of real numbers x ∈ [0,1) such that for any m ∈ N the following holds for infinitely many n ∈ N: a_{n+1}(x)a_{n+2}(x)⋯a_{n+m}(x) ≥ e^{τ(x)(f(x)+⋯+f(T^{n-1}x))}, where f and τ are positive continuous functions. Also we will see that for appropriate choices of m, τ(x) and f(x) our result implies various classical results, including the famous Jarnik-Besicovitch theorem.

Andrew Mitchell (University of Birmingham)
Measure theoretic entropy of random substitutions

Michael Drmota (TU Wien)
(Logarithmic) Densities for Automatic Sequences along Primes and Squares
It is well known that every letter α of an automatic sequence a(n) has a logarithmic density, and it can be decided when this logarithmic density is actually a density. For example, the letters 0 and 1 of the Thue-Morse sequence t(n) both have frequency 1/2. [The Thue-Morse sequence is the binary sum-of-digits function modulo 2.] The purpose of this talk is to present a corresponding result for subsequences of general automatic sequences along primes and squares. This is a far-reaching generalization of two breakthrough results of Mauduit and Rivat from 2009 and 2010, where they solved two conjectures by Gelfond on the densities of 0 and 1 of t(p_n) and t(n^2) (where p_n denotes the sequence of primes). More technically, one has to develop a method to transfer density results for primitive automatic sequences to logarithmic-density results for general automatic sequences.
Then as an application one can deduce that the logarithmic densities of any automatic sequence along squares (n^2)_{n≥0} and primes (p_n)_{n≥1} exist and are computable. Furthermore, if densities exist then they are (usually) rational. This is a joint work with Boris Adamczewski and Clemens Müllner.

Godofredo Iommi (Pontificia Universidad Católica de Chile)
Arithmetic averages and normality in continued fractions
Every real number can be written as a continued fraction. There exists a dynamical system, the Gauss map, that acts as the shift in the expansion. In this talk, I will comment on the Hausdorff dimension of two types of sets: one of them defined in terms of arithmetic averages of the digits in the expansion, and the other related to (continued fraction) normal numbers. In both cases, the non-compactness that stems from the fact that we use countably many partial quotients in the continued fraction plays a fundamental role. Some of the results are joint work with Thomas Jordan and others together with Aníbal Velozo.

Alexandra Skripchenko (Higher School of Economics)
Double rotations and their ergodic properties
Double rotations are the simplest subclass of interval translation mappings. A double rotation is of finite type if its attractor is an interval and of infinite type if it is a Cantor set. It is easy to see that the restriction of a double rotation of finite type to its attractor is simply a rotation. It is known, due to Suzuki-Ito-Aihara and Bruin-Clark, that double rotations of infinite type are defined by a subset of zero measure in the parameter set. We introduce a new renormalization procedure on double rotations, which is reminiscent of the classical Rauzy induction. Using this renormalization we prove that the set of parameters which induce infinite-type double rotations has Hausdorff dimension strictly smaller than 3.
Moreover, we construct a natural invariant measure supported on these parameters and show that, with respect to this measure, almost all double rotations are uniquely ergodic. In my talk I plan to outline this proof, which is based on a recent result by Ch. Fougeron on simplicial systems. I also hope to discuss briefly some challenging open questions and further research plans related to double rotations. The talk is based on a joint work with Mauro Artigiani, Charles Fougeron and Pascal Hubert.

Natalie Priebe Frank (Vassar College)
The flow view and infinite interval exchange transformation of a recognizable substitution
A flow view is the graph of a measurable conjugacy between a substitution or S-adic subshift or tiling space and an exchange of infinitely many intervals in [0,1]. The natural refining sequence of partitions of the sequence space is transferred to [0,1] with Lebesgue measure using a canonical addressing scheme, a fixed dual substitution, and a shift-invariant probability measure. On the flow view, sequences are shown horizontally at a height given by their image under the conjugacy. In this talk I'll explain how it all works and state some results and questions. There will be pictures.

Tuesday 2 March 2021, 16:00, Online

Vitaly Bergelson (Ohio State University)
Normal sets in (ℕ,+) and (ℕ,×)
We will start with discussing the general idea of a normal set in a countable cancellative amenable semigroup, which was introduced and developed in the recent paper "A fresh look at the notion of normality" (joint work with Tomasz Downarowicz and Michał Misiurewicz). We will then move on to discussing and juxtaposing combinatorial and Diophantine properties of normal sets in the semigroups (ℕ,+) and (ℕ,×). We will conclude the lecture with a brief review of some interesting open problems.
Seulbee Lee (Scuola Normale Superiore di Pisa)
Odd-odd continued fraction algorithm
The classical continued fraction gives the best rational approximations of an irrational number. We define a new continued fraction, the odd-odd continued fraction, which gives the best approximating rational numbers whose numerators and denominators are both odd. We see that a jump transformation associated to the Romik map induces the odd-odd continued fraction. We discuss properties of the odd-odd continued fraction expansions. This is joint work with Dong Han Kim and Lingmin Liao.

Gerardo González Robert (Universidad Nacional Autónoma de México)
Good's Theorem for Hurwitz Continued Fractions
In 1887, Adolf Hurwitz introduced a simple procedure to write any complex number as a continued fraction with Gaussian integers as partial denominators and with partial numerators equal to 1. While similarities between regular and Hurwitz continued fractions abound, there are important differences too (for example, as shown in 1974 by R. Lakein, Serret's theorem on equivalent numbers does not hold in the complex case). In this talk, after giving a short overview of the theory of Hurwitz continued fractions, we will state and sketch the proof of a complex version of I. J. Good's theorem on the Hausdorff dimension of the set of real numbers whose regular continued fraction digits tend to infinity. Finally, we will discuss some open problems.

Clemens Müllner (TU Wien)
Multiplicative automatic sequences
It was shown by Mariusz Lemańczyk and the author that automatic sequences are orthogonal to bounded and aperiodic multiplicative functions. This is a manifestation of the disjointness of additive and multiplicative structures. We continue this path by presenting in this talk a complete classification of complex-valued sequences which are both multiplicative and automatic. This shows that the intersection of these two worlds has a very special (and simple) form.
This is joint work with Mariusz Lemańczyk and Jakub Konieczny.

Samuel Petite (Université de Picardie Jules Verne)
Interplay between finite topological rank minimal Cantor systems, S-adic subshifts and their complexity
The family of minimal Cantor systems of finite topological rank includes Sturmian subshifts, codings of interval exchange transformations, odometers and substitutive subshifts. They are known to have dynamical rigidity properties. In a joint work with F. Durand, S. Donoso and A. Maass, we provide a combinatorial characterization of such subshifts in terms of S-adic systems. This enables us to obtain some links with the factor complexity function and some new rigidity properties depending on the rank of the system.

Carlo Carminati (Università di Pisa)
Prevalence of matching for families of continued fraction algorithms: old and new results
We will give an overview of the phenomenon of matching, which was first observed in the family of Nakada's α-continued fractions, but is also encountered in other families of continued fraction algorithms. Our main focus will be the matching property for the family of Ito-Tanaka continued fractions: we will discuss the analogies with Nakada's case (such as prevalence of matching), but also some unexpected features which are peculiar to this case. The core of the talk is about some recent results obtained in collaboration with Niels Langeveld and Wolfgang Steiner.

Tom Kempton (University of Manchester)
Bernoulli Convolutions and Measures on the Spectra of Algebraic Integers
Given an algebraic integer β and alphabet A = {-1,0,1}, the spectrum of β is the set Σ(β) := { ∑_{i=1}^n a_i β^i : n ∈ ℕ, a_i ∈ A }. In the case that β is Pisot, one can study the spectrum of β dynamically using substitutions or cut and project schemes, and this allows one to see lots of local structure in the spectrum. There are higher-dimensional analogues for other algebraic integers.
In this talk we will define a random walk on the spectrum of β and show how, with appropriate renormalisation, this leads to an infinite stationary measure on the spectrum. This measure has local structure analogous to that of the spectrum itself. Furthermore, this measure has deep links with the Bernoulli convolution, and in particular new criteria for the absolute continuity of Bernoulli convolutions can be stated in terms of the ergodic properties of these measures.

Tuesday 5 January 2021, 14:30, Online

Claire Merriman (Ohio State University)
alpha-odd continued fractions
The standard continued fraction algorithm comes from the Euclidean algorithm. We can also describe this algorithm using a dynamical system on [0,1), where the transformation that takes x to the fractional part of 1/x is said to generate the continued fraction expansion of x. From there, we ask two questions: What happens to the continued fraction expansion when we change the domain to something other than [0,1)? What happens to the dynamical system when we impose restrictions on the continued fraction expansion, such as finding the nearest odd integer instead of the floor? This talk will focus on the case where we first restrict to odd integers, then shift the domain to [α-2, α). This talk is based on joint work with Florin Boca, with animations by Xavier Ding, Gustav Jennetten, and Joel Rozhon as part of an Illinois Geometry Lab project.

Lukas Spiegelhofer (Montanuniversität Leoben)
The digits of n+t
We study the binary sum-of-digits function s_2 under addition of a constant t. For each integer k, we are interested in the asymptotic density δ(k,t) of integers n such that s_2(n+t) - s_2(n) = k. In this talk, we consider the following two questions. (1) Do we have c_t = δ(0,t) + δ(1,t) + … > 1/2? This is a conjecture due to T. W. Cusick (2011). (2) What does the probability distribution defined by k → δ(k,t) look like?
We prove that indeed c_t > 1/2 if the binary expansion of t contains at least M blocks of contiguous ones, where M is effective. Our second theorem states that δ(k,t) usually behaves like a normal distribution, which extends a result by Emme and Hubert (2018). This is joint work with Michael Wallner (TU Wien).

Tanja Isabelle Schindler (Scuola Normale Superiore di Pisa)
Limit theorems on counting large continued fraction digits
We establish a central limit theorem for counting large continued fraction digits (a_n), that is, we count occurrences {a_n > b_n}, where (b_n) is a sequence of positive integers. Our result improves a similar result by Philipp, which additionally assumes that b_n tends to infinity. Moreover, we also show this kind of central limit theorem for counting occurrences in which the continued fraction entry lies between d_n and d_n(1+1/c_n), for given sequences (c_n) and (d_n). For such intervals we also give a refinement of the famous Borel-Bernstein theorem regarding the event that the nth continued fraction digit lies in this interval infinitely often. As a side result, we explicitly determine the first φ-mixing coefficient for the Gauss system, a result we actually need to improve Philipp's theorem. This is joint work with Marc Kesseböhmer.

Michael Barnsley (Australian National University)
Rigid fractal tilings
I will describe recent work, joint with Louisa Barnsley and Andrew Vince, concerning a symbolic approach to self-similar tilings. This approach uses graph-directed iterated function systems to analyze both classical tilings and also generalized tilings of what may be unbounded fractal subsets of R^n. A notion of rigid tiling systems is defined. Our key theorem states that when the system is rigid, all the conjugacies of the tilings can be described explicitly. In the seminar I hope to prove this for the case of standard IFSs.
Jacques Sakarovitch (IRIF, CNRS et Télécom Paris)
The carry propagation of the successor function
Given any numeration system, the carry propagation at an integer N is the number of digits that change between the representations of N and N+1. The carry propagation of the numeration system as a whole is the average of the carry propagations at the first N integers, as N tends to infinity, if this limit exists. In the case of the usual base p numeration system, it can be shown that the limit indeed exists and is equal to p/(p-1). We recover a similar value for those numeration systems we consider and for which the limit exists. The problem is less the computation of the carry propagation than the proof of its existence. We address it for various kinds of numeration systems: abstract numeration systems, rational base numeration systems, greedy numeration systems and beta-numeration. This problem is tackled with three different types of techniques: combinatorial, algebraic, and ergodic, each of them being relevant for different kinds of numeration systems. This work has been published in Advances in Applied Mathematics 120 (2020). In this talk, we shall focus on the algebraic and ergodic methods. Joint work with V. Berthé (Irif), Ch. Frougny (Irif), and M. Rigo (Univ. Liège).

Pieter Allaart (University of North Texas)
On the smallest base in which a number has a unique expansion
For x>0, let U(x) denote the set of bases q in (1,2] such that x has a unique expansion in base q over the alphabet {0,1}, and let f(x) = inf U(x). I will explain that the function f(x) has a very complicated structure: it is highly discontinuous and has infinitely many infinite level sets. I will describe an algorithm for numerically computing f(x) that often gives the exact value in just a small finite number of steps. The Komornik-Loreti constant, which is f(1), will play a central role in this talk. This is joint work with Derong Kong, and builds on previous work by Kong (Acta Math. Hungar.
150(1):194-208, 2016).

Tomáš Vávra (University of Waterloo)
Distinct unit generated number fields and finiteness in number systems
A distinct unit generated field is a number field K such that every algebraic integer of the field is a sum of distinct units. In 2015, Dombek, Masáková, and Ziegler studied totally complex quartic fields, leaving 8 cases unresolved. Because in this case there is only one fundamental unit u, their method involved the study of finiteness in positional number systems with base u and digits arising from the roots of unity in K. First, we consider the more general problem of positional representations with base beta and an arbitrary digit alphabet D. We will show that it is decidable whether a given pair (beta, D) allows eventually periodic or finite representations of elements of O_K. We are then able to prove the conjecture that the 8 remaining cases are indeed distinct unit generated.

Mélodie Andrieu (Aix-Marseille University)
A Rauzy fractal unbounded in all directions of the plane
Until 2001 it was believed that, as for Sturmian words, the imbalance of Arnoux-Rauzy words was bounded, or at least finite. Cassaigne, Ferenczi and Zamboni disproved this conjecture by constructing an Arnoux-Rauzy word with infinite imbalance, i.e. a word whose broken line deviates further and further from its average direction. Today, we hardly know anything about the geometrical and topological properties of these unbalanced Rauzy fractals. The Oseledets theorem suggests that these fractals are contained in a strip of the plane: indeed, if the Lyapunov exponents of the matrix product associated with the word exist, at least one of these exponents is nonpositive, since their sum equals zero. This talk aims at disproving this belief.
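To make the notion of imbalance in the abstract above concrete, here is a small brute-force sketch (function names are mine, and this is only an illustration of the definition, not of the construction in the talk). It computes the imbalance of a prefix of the Tribonacci word, the simplest Arnoux-Rauzy word, which is known to be 2-balanced by a result of Richomme, Saari and Zamboni; the words of Cassaigne, Ferenczi and Zamboni admit no such bound.

```python
def tribonacci_word(length):
    """Prefix of the Tribonacci word, the fixed point of 1->12, 2->13, 3->1."""
    sub = {1: (1, 2), 2: (1, 3), 3: (1,)}
    w = [1]
    while len(w) < length:
        w = [s for c in w for s in sub[c]]
    return w[:length]

def imbalance(w, alphabet):
    """Max over letters a and factor lengths l of the difference between the
    largest and smallest number of occurrences of a in a factor of length l."""
    best = 0
    n = len(w)
    for a in alphabet:
        ind = [1 if c == a else 0 for c in w]
        for l in range(1, n):
            counts = [sum(ind[i:i + l]) for i in range(n - l + 1)]
            best = max(best, max(counts) - min(counts))
    return best

print(imbalance(tribonacci_word(200), (1, 2, 3)))
```

Any prefix can only have smaller imbalance than the infinite word, so the printed value is at most 2 here, whereas for an infinitely unbalanced word these values would grow without bound as the prefix length increases.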
Paul Surer (University of Natural Resources and Life Sciences, Vienna)
Representations for complex numbers with integer digits
In this talk we present the zeta-expansion as a complex version of the well-known beta-expansion. It allows us to expand complex numbers with respect to a complex base by using integer digits. Our concept fits into the framework of the recently published rotational beta-expansions. We also establish relations with piecewise affine maps of the torus and with shift radix systems.

Kan Jiang (Ningbo University)
Representations of real numbers on fractal sets
There are many ways to represent real numbers, for instance β-expansions, continued fractions and so forth. Representations of real numbers on fractal sets were pioneered by H. Steinhaus, who proved in 1917 that C+C=[0,2] and C−C=[−1,1], where C is the middle-third Cantor set. Equivalently, for any x ∈ [0,2], there exist some y, z ∈ C such that x=y+z. In this talk, I will present similar results for some other fractal sets.

Francesco Veneziano (University of Genova)
Finiteness and periodicity of continued fractions over quadratic number fields
We consider continued fractions with partial quotients in the ring of integers of a quadratic number field K; a particular example of these continued fractions is the β-continued fraction introduced by Bernat. We show that for any quadratic Perron number β, the β-continued fraction expansion of elements in Q(β) is either finite or eventually periodic. We also show that for four particular quadratic Perron numbers β, the β-continued fraction represents finitely all elements of the quadratic field Q(β), thus answering questions of Rosen and Bernat. Based on a joint work with Zuzana Masáková and Tomáš Vávra.

Marta Maggioni (Leiden University)
Random matching for random interval maps
In this talk we extend the notion of matching for deterministic transformations to random matching for random interval maps.
For a large class of piecewise affine random systems of the interval, we prove that this property of random matching implies that any invariant density of a stationary measure is piecewise constant. We provide examples of random matching for a variety of families of random dynamical systems, which includes generalised beta-transformations, continued fraction maps and a family of random maps producing signed binary expansions. We finally apply the property of random matching and its consequences to this last family to study minimal weight expansions. Based on a joint work with Karma Dajani and Charlene Kalle.

Yotam Smilansky (Rutgers University)
Multiscale Substitution Tilings
Multiscale substitution tilings are a new family of tilings of Euclidean space that are generated by multiscale substitution rules. Unlike the standard setup of substitution tilings, which is a basic object of study within the aperiodic order community and includes examples such as the Penrose and the pinwheel tilings, multiple distinct scaling constants are allowed, and the defining process of inflation and subdivision is a continuous one. Under a certain irrationality assumption on the scaling constants, this construction gives rise to a new class of tilings, tiling spaces and tiling dynamical systems, which are intrinsically different from those that arise in the standard setup. In the talk I will describe these new objects and discuss various structural, geometrical, statistical and dynamical results. Based on joint work with Yaar Solomon.

Jeffrey Shallit (University of Waterloo)
Lazy Ostrowski Numeration and Sturmian Words
In this talk I will discuss a new connection between the so-called "lazy Ostrowski" numeration system and periods of the prefixes of Sturmian characteristic words. I will also give a relationship between periods and the so-called "initial critical exponent".
This builds on work of Frid, Berthé-Holton-Zamboni, Epifanio-Frougny-Gabriele-Mignosi, and others, and is joint work with Narad Rampersad and Daniel Gabric.

Bing Li (South China University of Technology)
Some fractal problems in beta-expansions
For greedy beta-expansions, we study some fractal sets of real numbers whose orbits under the beta-transformation share some common properties: for example, the partial sums of the greedy beta-expansion converge with the same order, the orbit is not dense, the orbit always stays far from that of another point, etc. The usual tool is to approximate the beta-transformation dynamical system by Markov subsystems. We also discuss similar problems for intermediate beta-expansions.

Bill Mance (Adam Mickiewicz University in Poznań)
Hotspot Lemmas for Noncompact Spaces
We will explore a correction of several previously claimed generalizations of the classical hotspot lemma. Specifically, there is a common mistake that has been repeated in proofs going back more than 50 years. Corrected versions of these theorems are increasingly important, as there has been more work in recent years focused on studying various generalizations of the concept of a normal number to numeration systems with infinite digit sets (for example, various continued fraction expansions, the Lüroth series expansion and its generalizations, and so on). Also, highlighting this (elementary) mistake may be helpful for those looking to study these numeration systems further and wishing to avoid some common pitfalls.

Attila Pethő (University of Debrecen)
On Diophantine properties of generalized number systems: finite and periodic representations
In this talk we investigate elements with special patterns in their representations in number systems in algebraic number fields. We concentrate on periodicity and on the representation of rational integers.
We prove under natural assumptions that there are only finitely many S-units whose representation is periodic with a fixed period. We prove that the same holds for the set of values of polynomials at rational integers.

Hajime Kaneko (University of Tsukuba)
Analogy of Lagrange spectrum related to geometric progressions
The classical Lagrange spectrum is defined via Diophantine approximation properties of arithmetic progressions. The theory of the Lagrange spectrum is related to number theory and symbolic dynamics. In our talk we introduce closely analogous results on a Lagrange spectrum in the uniform distribution theory of geometric progressions. In particular, we discuss geometric sequences whose common ratios are Pisot numbers. For studying the fractional parts of geometric sequences, we introduce a certain numeration system. This talk is based on a joint work with Shigeki Akiyama.

Niels Langeveld (Leiden University)
Continued fractions with two non-integer digits
In this talk, we will look at a family of continued fraction expansions for which the digits in the expansions can attain two different (typically non-integer) values, denoted α1 and α2, with α1α2 ≤ 1/2. If α1α2 < 1/2, we can associate to these expansions a dynamical system with a switch region, and therefore with lazy and greedy expansions. We will explore the parameter space and highlight certain values for which we can construct the natural extension (such as a family for which the lowest digit cannot be followed by itself). We end the talk with a list of open problems.

Derong Kong (Chongqing University)
Univoque bases of real numbers: local dimension, Devil's staircase and isolated points
Given a positive integer M and a real number x, let U(x) be the set of all bases q in (1,M+1] such that x has a unique q-expansion with respect to the alphabet {0,1,…,M}. We will investigate the local dimension of U(x) and prove a 'variation principle' for unique non-integer base expansions.
We will also determine the critical values and the topological structure of U(x).

Carlos Matheus (CNRS, École Polytechnique)
Approximations of the Lagrange and Markov spectra
The Lagrange and Markov spectra are closed subsets of the positive real numbers defined in terms of Diophantine approximations. Their topological structures are quite involved: they begin with an explicit discrete subset accumulating at 3, they end with a half-infinite ray of the form [4.52…,∞), and the portions between 3 and 4.52… contain complicated Cantor sets. In this talk, we describe polynomial-time algorithms to approximate (in the Hausdorff topology) these spectra.

Simon Baker (University of Birmingham)
Equidistribution results for self-similar measures
A well-known theorem due to Koksma states that for Lebesgue almost every x>1 the sequence (x^n) is uniformly distributed modulo one. In this talk I will discuss an analogue of this statement that holds for fractal measures. As a corollary of this result, we show that if C is the middle-third Cantor set and t≥1, then almost every x in C+t is such that (x^n) is uniformly distributed modulo one. Here almost every is with respect to the natural measure on C+t.

Henna Koivusalo (University of Vienna)
Linear repetition in polytopal cut and project sets
Cut and project sets are aperiodic point patterns obtained by projecting an irrational slice of the integer lattice to a subspace. One way of classifying aperiodic sets is to study the repetition of finite patterns, where sets with linear pattern repetition can be considered the most ordered aperiodic sets. Repetitivity of a cut and project set depends on the slope and shape of the irrational slice. The cross-section of the slice is known as the window.
In an earlier work it was shown that for cut and project sets with a cube window, linear repetitivity holds if and only if the following two conditions are satisfied: (i) the set has minimal complexity and (ii) the irrational slope satisfies a certain Diophantine condition. In a new joint work with Jamie Walton, we give a generalisation of this result for other polytopal windows, under mild geometric conditions. A key step in the proof is a decomposition of the cut and project scheme, which allows us to make sense of condition (ii) for general polytopal windows. Célia Cisternino (University of Liège) Ergodic behavior of transformations associated with alternate base expansions We consider a p-tuple of real numbers greater than 1, beta=(beta_1,…,beta_p), called an alternate base, to represent real numbers. Since these representations generalize the beta-representation introduced by Rényi in 1958, a lot of questions arise. In this talk, we will study the transformation generating the alternate base expansions (greedy representations). First, we will compare the beta-expansion and the (beta_1*…*beta_p)-expansion over a particular digit set and study the cases when the equality holds. Next, we will talk about the existence of a measure equivalent to Lebesgue, invariant for the transformation corresponding to the alternate base and also about the ergodicity of this transformation. This is a joint work with Émilie Charlier and Karma Dajani. Boris Solomyak (University of Bar-Ilan) On singular substitution Z-actions We consider primitive aperiodic substitutions on d letters and the spectral properties of associated dynamical systems. In an earlier work we introduced a spectral cocycle, related to a kind of matrix Riesz product, which extends the (transpose) substitution matrix to the d-dimensional torus. The asymptotic properties of this cocycle provide local information on the (fractal) dimension of spectral measures. 
In the talk I will discuss a sufficient condition for the singularity of the spectrum in terms of the top Lyapunov exponent of this cocycle. This is a joint work with A. Bufetov. Olivier Carton (Université de Paris) Preservation of normality by selection We first recall Agafonov's theorem which states that finite state selection preserves normality. We also give two slight extensions of this result to non-oblivious selection and suffix selection. We also propose a similar statement in the more general setting of shifts of finite type by defining selections which are compatible with the shift. Narad Rampersad (University of Winnipeg) Ostrowski numeration and repetitions in words One of the classical results in combinatorics on words is Dejean's Theorem, which specifies the smallest exponent of repetitions that are avoidable on a given alphabet. One can ask if it is possible to determine this quantity (called the repetition threshold) for certain families of infinite words. For example, it is known that the repetition threshold for Sturmian words is 2+phi, and this value is reached by the Fibonacci word. Recently, this problem has been studied for balanced words (which generalize Sturmian words) and rich words. The infinite words constructed to resolve this problem can be defined in terms of the Ostrowski-numeration system for certain continued-fraction expansions. They can be viewed as Ostrowski-automatic sequences, where we generalize the notion of k-automatic sequence from the base-k numeration system to the Ostrowski numeration system.
Combinatorial games with infinite paths, and generalized Sprague-Grundy theory Generalized Sprague-Grundy theory has been used to analyze finite impartial loopy games with normal play. There is a nice short account by Mark S. in this answer. It was introduced by Cedric Smith in the 1960s, and developed independently by Fraenkel and others. The theory is usually described in the setting of finite directed graphs (see for example section IV.4 of Siegel's Combinatorial Game Theory). I see that Fraenkel and Rahat (2001) looked at the case of infinite directed graphs which are locally path-bounded, i.e. where for any vertex $v$, the supremum of the lengths of all paths starting at $v$ is finite. I'm interested in the case of directed graphs which are allowed to contain infinite paths. Suppose we assume that every vertex has finite outdegree. It seems to me that much of the structure (e.g. all the results of Section IV.4 of Siegel) extends unchanged to this setting. In particular, every such game has a "loopy nim value" which is either a non-negative integer (in which case the game is equivalent to a finite nim heap) or of the form $\infty(\mathcal{A})$ for some subset $\mathcal{A}$ of $\mathbb{N}$ (see the answer by Mark S. linked above). (Once we allow infinite paths, it's no longer important to allow cycles, since we can create multiple "copies" of each node. Without loss of generality one could consider "game trees", i.e. rooted trees directed away from the root, with all degrees finite.) Can someone point me to a reference covering such games? Does the generalized Sprague-Grundy theory go through straightforwardly to that context as I imagine? reference-request combinatorial-game-theory James Martin I am not sure what you imagine, but once one makes the move to games with infinite play, then various set-theoretic issues come to light, and the subject becomes more set-theoretic and less like combinatorial game theory. 
The keyword is determinacy, and there is a very rich literature. A Gale-Stewart game is a game allowing infinite play, where the winner is determined by a certain set $A$ of plays. So player I is trying to play into the payoff set and player II is trying to play out of the payoff set. The Gale-Stewart theorem (1953) asserts that any game whose payoff set is open in the product topology is determined, meaning that one of the players has a winning strategy. This is generalized by Martin's theorem establishing Borel determinacy, generalizing from open sets to Borel sets. The axiom of choice implies that there are non-determined sets, but meanwhile, if one drops the axiom of choice, then the assertion that every game has a winning strategy, known as the axiom of determinacy, has many extremely attractive consequences, such as the fact that all sets of reals are Lebesgue measurable and every set has the property of Baire, among others. The axiom of determinacy is equiconsistent over ZF+DC with the existence of infinitely many Woodin cardinals. $\begingroup$ Thanks Joel. The theory is all fine for "loopy" games, defined on finite graphs but allowing cycles. There one already has infinite play. I am asking whether that theory (the "generalized Sprague-Grundy theory") extends to the case of infinite graphs with finite degrees. Very specifically, if you like, whether every such game has a "loopy nim value". See the answer by Mark S. that I linked to, or Chapter IV.4 of Siegel's book. I think the assumption of finite degrees gets around the need for any difficult set theory here. $\endgroup$ – James Martin Aug 13 '18 at 21:04 $\begingroup$ Yes, my answer is that in the general context of infinite play, then for games on countable graphs, even with finite degree, the analysis becomes completely set-theoretic, and quickly things become totally unlike the case of combinatorial game theory. 
The simplest case of the new realm may be clopen games, which are games that all end in finitely many moves (but not necessarily uniformly bounded), and in this case, one still has determinacy, by the fundamental theorem of finite games, also known as Zermelo's theorem. $\endgroup$ – Joel David Hamkins Aug 13 '18 at 21:20 $\begingroup$ Finite degree is a limitation on the number of moves for a player at each turn, but these set-theoretic issues arise even with games on $\{0,1\}$, where each player has only two moves. $\endgroup$ – Joel David Hamkins Aug 13 '18 at 21:21 $\begingroup$ Those directions are fascinating, but I don't think relevant to my actual question. Maybe I should make clear that this is all about games with "normal play", i.e. the game terminates if a player has no move, and that player loses. There are three outcome-classes with optimal play: 1st-player win, 2nd-player win, draw. From the finite-degree assumption you get compactness: if a player has a winning strategy, then they can guarantee to win within some finite number of moves. I am pretty confident that the loopy Sprague-Grundy theory does extend, but I don't know if it's written anywhere. $\endgroup$ – James Martin Aug 13 '18 at 21:42 $\begingroup$ It seems you are not interested in the larger possibilities offered by infinite play, which are extremely fascinating. Meanwhile, your context of normal play does allow for arbitrary clopen games, however, which never have draws, and I'm not sure the extent to which the Sprague-Grundy analysis extends into that realm. Perhaps that would be the real answer to your question, which I encourage others to provide. $\endgroup$ – Joel David Hamkins Aug 13 '18 at 21:51 
Performance Management of Supply Chain Sustainability in Small and Medium-Sized Enterprises Using a Combined Structural Equation Modelling and Data Envelopment Analysis Prasanta Kumar Dey1, Guo-liang Yang2, Chrysovalantis Malesios ORCID: orcid.org/0000-0003-0378-39391, Debashree De1 & Konstantinos Evangelinos3 Computational Economics volume 58, pages 573–613 (2021) Although the contribution of small and medium-sized enterprises (SMEs) to economic growth is beyond doubt, they collectively affect the environment and society negatively. As SMEs have to perform in a very competitive environment, they often find it difficult to achieve their environmental and social targets. Therefore, making SMEs sustainable is one of the most daunting tasks for both policy makers and SME owners/managers alike. Prior research argues that through measuring SMEs' supply chain sustainability performance and deriving means of improvement one can make SMEs' business more viable, not only from an economic perspective, but also from the environmental and social point of view. Prior studies apply data envelopment analysis (DEA) for measuring the performance of groups of SMEs using multiple criteria (inputs and outputs) by segregating efficient and inefficient SMEs and suggesting improvement measures for each inefficient SME through benchmarking it against the most successful one. However, DEA is limited to recommending means of improvement solely for inefficient SMEs. To bridge this gap, the use of structural equation modelling (SEM) enables developing relationships between the criteria and sub-criteria for sustainability performance measurement that facilitates to identify improvement measures for every SME within a region through a statistical modelling approach. 
As SEM suggests improvements not from the perspective of individual SMEs but for the totality of SMEs involved, this tool is more suitable for policy makers than for individual company owners/managers. However, a performance measurement heuristic that combines DEA and SEM could make use of the best of each technique, and thereby could be the most appropriate tool for both policy makers and individual SME owners/managers. Additionally, SEM results can be utilized by DEA as inputs and outputs for more effective and robust results since the latter are based on more objective measurements. Although DEA and SEM have been applied separately to study the sustainability of organisations, according to the authors' knowledge, there is no published research that has combined both the methods for sustainable supply chain performance measurement. The framework proposed in the present study has been applied in two different geographical locations—Normandy in France and Midlands in the UK—to demonstrate the effectiveness of sustainable supply chain performance measurement using the combined DEA and SEM approach. Additionally, the state of the companies' sustainability in both regions is revealed with a number of comparative analyses. Climate change presents one of the most serious environmental challenges faced by humanity today (Dey et al. 2018). The achievement of sustainability is a major issue for organizations worldwide. Enterprises need to both maintain and improve their market position and fulfill their environmental and social responsibilities (Halkos and Evangelinos 2002). There is a growing literature analyzing the interactions between economy, society and the environment. The focus of most studies up to date, however, has been on the activities of large-scale companies, while less is known about the operations of small and medium-sized enterprises (SMEs) (Johnson and Schaltegger 2016), which the majority of published studies on sustainability have largely ignored. 
SMEs make significant contributions to the global economy; hence governments increasingly promote the development of these enterprises in recognition of the critical role they play in the socio-economy (Vidal 2013). While SMEs are of crucial importance in the economic growth across the world, they also impose collectively considerable pressures on the environment (Mollenkopf 2008; Johnson and Schaltegger 2016). In comparison with large companies, small and medium-sized enterprises tend to be less engaged in sustainability management practices, thus limiting their potential for reducing environmental and social impacts. SMEs are much less likely to have sustainability goals and practices in place (Johnson 2015) and are potentially more ill-prepared than their larger counterparts to cope with sustainability challenges. This is due to limited financial and managerial resources, lack of time and shortage of (staff) skills in sustainability-related practices (Sullivan-Taylor and Branicki 2011). SMEs tend to plan in the short term, simply reacting to arising situations, and focusing on their very survival. Likewise, they share less formalized structures and codified policies while they are most usually owner-managed resulting in a command and control management culture (e.g. Ates et al. 2013). Other reasons for their inefficiency include the lack of internal capacity (e.g. financial resources, human resources, technologies, business processes and R&D activities), weak supporting frameworks and, in many cases, political indulgence by policy makers (see Zhu and Sarkis 2004; Dey and Cheffi 2012). The theoretical and practical underpinning of this study relates to the sustainability performance (Seuring and Muller 2008) of SMEs. 
Our aim is to develop a performance measurement (PM) method of supply chain sustainability and a management framework for SMEs through revealing the characteristics, issues and challenges of sustainable supply chain management, with a view to enhancing the sustainability performance of SMEs. Although previous research explores various sustainability PM models (see, e.g., Büyüközkan and Karabulut 2018; Acquaye et al. 2017), supply chain sustainability measurements for SMEs are scant. Up till now there has not been a thorough research project on the impact of implementing sustainable supply chain management practices on the performance of SMEs. Available models either analyze the sustainability performance of SMEs using multiple criteria and objectively distinguish between efficient and inefficient companies or identify causal relationships among criteria so as to objectively come up with overall improvement measures. More specifically, one methodological approach in prior research applies Data Envelopment Analysis (DEA) for measuring the performance of a group of SMEs using multiple criteria (inputs and outputs), segregating in this way inefficient SMEs from efficient ones and suggesting improvement initiatives for each inefficient company through benchmarking it against the most successful one. However, this approach is limited to suggesting means of improvement solely for the individual inefficient SMEs under study. To mend this gap, another methodological approach employs Structural Equation Modelling (SEM) which enables the development of more general relationships between the criteria and sub-criteria for sustainability performance measurements and helps identify improvement measures for every SME within a region through a statistical modelling technique based on selected sampling of SMEs (Malesios et al. 2018). 
As SEM suggests improvements in criteria level not from the point of view of an individual SME, this tool can be considered as more suitable for policy makers compared to individual SME owners/managers. However, a performance measurement heuristic that combines both DEA and SEM could make use of the best of each technique, and thereby could be the most appropriate tool for both policy makers and individual SME owners/managers. Typically, for studies measuring sustainability performance it can be assumed that the importance of the sustainability criteria and sub-criteria is equal, e.g. through the simple aggregation of individual SME sustainability performance scores in each one of sustainability criteria. Our modelling approach offers the accommodation of criteria weights. Among the advantages of a combined SEM and DEA analysis is the ability to obtain through SEM more suitable input and output latent constructs of sustainability practices and performances, instead of utilizing average and aggregate scores from individual observed items (see, e.g. Thanassoulis et al. 2017). We believe that the combined approach is more robust since it essentially comprises a suitably adjusted index of practices/performances that is weighted according to the magnitude of the effect each individual observed item depicts on the corresponding latent construct of sustainability. However, according to the authors' knowledge, there is no research available that combines the above two aspects within the same analysis framework in order to take advantage of the best of the two approaches. In view of the above, the objective of this study is to develop a framework to measure the supply chain sustainability of SMEs using a combined DEA and SEM approach. 
SEM is employed for measuring the sustainability performance of a group of SMEs using combined data, whereas DEA utilizes the individual SME sustainability performance score rankings produced by SEM and segregates between efficient and inefficient SMEs. The proposed framework has been applied to a carefully selected sample of SMEs in two geographic locations—viz. Normandy of France and Midlands of the UK—in order to not only validate the framework but also reveal the characteristics of sustainable supply chains of SMEs in both regions. The grouping was performed so as to ensure a high degree of validity by comparing the manufacturing enterprises in two developed nations, namely the UK and France. The regions where the SMEs are based in each country are industrial areas. The two developed countries have similar supply chain management drivers and pressure for sustainability from the perspective of regulations and policy makers. The remainder of the paper is organised as follows. Section 2 critically reviews the contemporary literature on sustainability performance of SMEs and identifies knowledge gaps that this research intends to bridge. Section 3 elaborates on the methodology that we adopt to achieve the research objectives. Section 4 introduces the proposed performance measurement model/framework. Section 5 demonstrates the application of the proposed modelling approaches. Section 6 provides the discussion and conclusions of this paper. Performance measurement (PM) is described as the process of quantifying efficiency and effectiveness of action (Neely et al. 1995) or the process of using measurement information to support managers in decision-making situations and to link strategy to operations (Bititci et al. 2012). Different PM tools have been developed in recent decades from different perspectives (Ramezankhani et al. 2018). Nudurupati et al. 
(2011) report that performance management is an organisation-wide shared vision that surrounds performance measurement activity. Marr and Creelman (2011) define performance management as "the execution of the organization mission through the coordinated effort of others". In the past few years, performance management has become much more common in government managed organisations (Poister 2003) and has evolved into a popular research topic in the field of management (e.g. Taticchi et al. 2010; Nudurupati et al. 2011). Thomas and Griffin (1996) propose that supply chain management (SCM) reflects the most advanced state in the evolutionary development of purchasing, procurement and other supply chain activities. Hall (2000) argues that today's organisations face pressure to enhance sustainable behaviour from several sources, including regulations, consumers, etc. As noted also by Dey and Cheffi (2012), the pressure from various stakeholders to commit to sustainable practices and performance management results in the rapid increase of interest in sustainable supply chains and their management on the part of government regulators, NGOs, academics and industrial players. Measurement has been recognized as a crucial element to improve business performance (Sharma et al. 2005). Consequently, there has been a vast body of literature on designing and implementing performance measurement tools as well as developing sustainability performance plans in supply chains (e.g. Gunasekaran et al. 2004; Chan and Qi 2003; Shepherd and Gunter 2005; Dey and Cheffi 2013). Gunasekaran et al. (2004) claim that performance measurement and metrics have an important role to play in setting objectives, evaluating performance, and determining future courses of action. However, they point out that performance measurement and metrics pertaining to SCM have not received adequate attention from researchers or practitioners. 
Therefore, they have developed a framework to promote a better understanding of the importance of SCM measurement and metrics. Taticchi et al. (2013) report that supply chain sustainability has been of great interest in the last decade for academia and the industrial world because of pressures from various stakeholders to adopt a commitment to sustainability practices. Chan and Qi (2003) have proposed a PM method to contribute to the development of SCM, which employs a process-based systematic perspective to build an effective model that measures the holistic performance of complex supply chains. Shepherd and Gunter (2005) have developed several metrics and classified them into different groups as follows: (a) Whether they are qualitative or quantitative, (b) What they measure (i.e. cost vs non-cost; quality, resource utilization, delivery and flexibility, visibility, trust and innovativeness), (c) Their operational, tactical or strategic focus, and (d) The process in the supply chain they relate to. Taticchi et al. (2013) have also adapted this classification of metrics. Dey and Cheffi (2013) have developed a framework for green supply chain performance measurement consisting of two higher order constructs based on environmental practices and sustainable performances across the supply chain using the analytic hierarchy process (AHP). Bhattacharya et al. (2013) identify the green causal relationships between the constructs (i.e. organisational commitment etc.) based on a green supply chain PM framework and test them with a collaborative decision-making approach using a fuzzy analytical network process based green balanced scorecard. Bhagwat and Sharma (2007) have also applied a balanced scorecard approach for measuring and evaluating day-to-day business operations. Zhu et al. (2007a) reveal that external relationships in green supply chain management may receive less attention than might be expected. 
Kainuma and Tawara (2006) suggest that quantitative methods can be useful, when considering the complexity involved in making a supply chain leaner and greener, in assessing the value of specific initiatives to the overall greenness of the supply chain (Zhu et al. 2007b) or in deciding what to do first (Kleindorfer et al. 2005; Orsato 2006). Enterprises have to measure, monitor and manage performance in multiple dimensions using a balanced and dynamic set of measures that facilitates decision-making processes (Nudurupati et al. 2011; Taticchi et al. 2009). Kaplan and Norton (1996) argue that the concept of "balance" refers to the need of using different metrics and perspectives, that are linked together to give a holistic view of the organization, e.g. financial versus non-financial; quantitative versus qualitative; internal versus external; etc. See also Lynch and Cross (1991); Fitzgerald et al. (1991); and Neely et al. (2002). Garengo et al. (2005) claim that the word "dynamic" implies the need of developing a system that constantly monitors the internal and external context and reviews objectives and priorities up to date. For enterprises or companies it is necessary to measure, monitor and manage organizational performance in its multiple dimensions to compete in complex and continuously changing environments. Research within this topic focuses on the ongoing development of both qualitative and quantitative metrics and frameworks (Taticchi et al. 2009). The implementation of a performance measurement system in SMEs, especially in the manufacturing sector, has been the popular research question due to the important changes occurring recently, e.g. the increasingly competitive environment, the proneness of growing in dimension, the evolution of quality concept, the increased focus on continuous improvement and the significant advances in information and communication technologies (Garengo et al. 2005). 
In recent years, with more and more focus shifting towards environmental protection, a rich body of literature has emerged devoted to performance measurement and green supply chains, e.g. Lee and Klassen (2008), Gavronski et al. (2011), Halkos and Polemis (2018) and Kim et al. (2011). Lee and Klassen (2008) have mapped the factors that initiated and improved environmental capabilities in small- and medium-sized enterprises over time using a case study method including multiple suppliers of two large buying firms. They point out that buyers' green supply chain management initiated and then enabled the improvement of suppliers' environmental capabilities through several specific mechanisms. Gavronski et al. (2011) have provided a model for the development of green supply chain capabilities that is grounded on the resource-based view of the firm as the theoretical background. Halkos and Polemis (2018) estimate the environmental efficiency of the power generation sector in the USA by using Window Data Envelopment Analysis (W-DEA) (see also Halkos et al. (2015a, b) for similar approaches at country level). Kim et al. (2011) have presented a model for the optimal design of biomass supply chain networks under uncertainties impacting overall profitability and design. Moreover, Bjorklund et al. (2012) have identified five dimensions of performance measurement for green supply chain: (a) stakeholder perspective, (b) purpose of measuring, (c) managerial levels of measuring, (d) measuring across the supply chain and (e) combination of measurements. Shi et al. (2012) have identified causal links between institutional drivers, intra-organisational and inter-organisational environmental practices that affect green supply chain management. Hassini et al. 
(2012) review the literature on sustainable supply chains during the 2000–2010 period to design an original framework for sustainable supply chain management and performance measurement, which provides a link to closed-loop supply chains incorporating sourcing, transformation, delivery, value proposition, customers and product use along with reuse, recycle and return concepts. The above literature highlights the necessity to create an integrated framework for measuring the performance of supply chains. Walker and Jones (2012) have pointed out that there is a wide gap between what practitioners say and actually do about the sustainability of supply chains because they only provide lip service to sustainable supply chain management. To the authors' knowledge, there is a rather limited volume of published studies on benchmarking the performance of sustainable supply chains using DEA as the quantitative tool. Wong and Wong (2007) employ DEA for measuring supply chain performance and Azadia et al. (2015) have developed a DEA-based model that allows evaluation of suppliers in sustainable supply chain management. Although DEA is capable of segregating efficient and inefficient SMEs with respect to sustainability performance and suggest improvement measures solely for inefficient SMEs, it is unable to recommend improvement actions for the efficient SMEs or to come up with relationships between sustainability criteria and sustainability performances that can be generalized for the entire population of SMEs based on the statistical analysis of a specific sample. On the other hand, SEM is able to establish correlations among all the criteria by formulating a PM model. Using this model, suitable steps for improving the sustainability performance of both efficient and inefficient SMEs could be undertaken in a more unified way. This research utilises the pros of both DEA and SEM. 
While DEA classifies SMEs into efficient and inefficient ones in line with their sustainability performance, SEM puts forward improvement measures for every SME within the region. This combined information may help both policy makers and SME owners/managers. Another comparative advantage of the combined use of SEM and DEA analysis is feeding DEA with more suitable overall input and output data of latent constructs through SEM, instead of the typical approach of averaging or aggregating individual observed variables. The proposed research processes commence with an in-depth literature review on climate change issues and environmental practices of small and medium-sized enterprises across the world in order to identify issues and challenges facing them, policies being adopted, strategies being deployed and framework being used for achieving superior environmental performance. Specifically, the study undertakes the following steps: (a) reviewing literature to identify constructs for supply chain characteristics and sustainability performance measurement of SMEs, (b) developing a questionnaire to derive supply chain characteristics of SMEs, (c) selecting SMEs for survey and performance measurement, and conducting the survey with the selected SMEs, (d) formulating a DEA-based supply chain sustainability PM model, (e) processing the data to feed into the SEM and DEA models, (f) running the SEM models and suggesting improvement initiatives for the participating SMEs by using SEM, (g) running the DEA models and deriving the supply chain sustainability performance of the participating enterprises and (h) deriving a set of propositions from the research. Figure 1 represents the research method framework. Research method framework The following paragraphs describe the collected data and subsequently demonstrate the methods (SEM and DEA) that have been considered for developing the proposed performance measurement and benchmarking framework. 
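To make the idea of feeding SEM-derived latent scores into DEA concrete, the following is a deliberately simplified sketch. All loadings and company responses are hypothetical, the loading-weighted composite is only a stand-in for proper SEM factor scores (the study fits its models by WLS in Amos), and the efficiency step uses the single-input/single-output special case of DEA, where CCR efficiency reduces to the output/input ratio normalized by the best ratio in the sample:

```python
# Toy two-stage sketch: (1) SEM-style factor scores, (2) single-input/
# single-output DEA ratio efficiency. All numbers are hypothetical.

# Hypothetical standardized loadings, as a SEM fit might estimate them,
# for a "practices" construct (input) and a "performances" construct (output).
practice_loadings = [0.8, 0.6, 0.7]   # three observed practice items
performance_loadings = [0.9, 0.5]     # two observed performance items

def factor_score(items, loadings):
    """Loading-weighted composite (a stand-in for SEM factor scores)."""
    return sum(l * x for l, x in zip(loadings, items)) / sum(loadings)

# Hypothetical 1-5 Likert responses for four SMEs: (practice items, performance items).
smes = {
    "SME_A": ([4, 5, 3], [4, 4]),
    "SME_B": ([2, 3, 2], [3, 2]),
    "SME_C": ([5, 4, 5], [5, 5]),
    "SME_D": ([3, 3, 4], [2, 3]),
}

# Stage 1: SEM-style aggregate input (practices) and output (performances).
scores = {k: (factor_score(p, practice_loadings),
              factor_score(q, performance_loadings))
          for k, (p, q) in smes.items()}

# Stage 2: with one input and one output, CCR efficiency reduces to the
# output/input ratio divided by the best ratio observed in the sample.
ratios = {k: out / inp for k, (inp, out) in scores.items()}
best = max(ratios.values())
efficiency = {k: r / best for k, r in ratios.items()}

for k in sorted(efficiency):
    print(k, round(efficiency[k], 3))
```

The benchmark unit (here the SME with the highest ratio) scores 1.0; every other SME's score expresses how far it sits below that frontier.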
Sample Collection and Data For the purposes of the current research, a questionnaire has been constructed in order to study the sustainability practices and performance of a number of French and British SMEs. The analysis and outcome of the questionnaire survey into SMEs covering aspects such as manufacturing, processing and construction help develop a sustainability PM and management framework. This framework is then applied to a sample of selected companies in the UK (30 SMEs) and France (54 SMEs) to measure their current sustainable performance using combined structural equation modelling and data envelopment analysis and suggest improvement measures through SEM. The sample of SMEs is from operating sectors whose environmental impact is generally stronger compared with SMEs in other sectors. Specifically, an interview protocol was formed and the survey was designed and conducted aiming at capturing the perceptions of the SME owners and managers about the sustainable supply chain practices and performance of SMEs in the UK and France. Initially a workshop was organized with the involvement of selected researchers and owners/managers of a small number of SMEs to derive the suitable questionnaire for achieving the objectives of the study. Secondly, an initial pre-sample survey was carried out into some SMEs in the Midlands, UK, and Normandy, France. Sample size selection for our sample framework has been determined by utilizing simple random sampling, where we have used \( \hat{p} = 0.5 \) as an estimate of population proportion that shares a certain characteristic on one of the (categorical) variables in the survey, e = 10% the proportion of error we are prepared to accept and t = 1.96 the value from the standard normal distribution reflecting the 95% confidence level, indicating a sample size of approximately 90 companies. 
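The sample-size calculation above follows Cochran's formula for simple random sampling. A small sketch follows; note that the uncorrected formula with these values gives n ≈ 96, so the finite-population correction shown (with a hypothetical eligible population of N = 1400 SMEs, our assumption, not a figure from the paper) is one way the reported figure of roughly 90 could arise:

```python
import math

def cochran_n(p=0.5, e=0.10, t=1.96):
    """Cochran's sample size for a proportion: n0 = t^2 * p(1-p) / e^2."""
    return (t ** 2) * p * (1 - p) / e ** 2

def fpc(n0, N):
    """Finite-population correction: n = n0 / (1 + (n0 - 1)/N)."""
    return n0 / (1 + (n0 - 1) / N)

n0 = cochran_n()        # about 96.04 for p = 0.5, e = 10%, 95% confidence
print(math.ceil(n0))    # 97

# With a hypothetical finite population of N = 1400 eligible SMEs,
# the corrected sample size comes down to 90.
print(math.ceil(fpc(n0, 1400)))    # 90
```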
An interview protocol has been created for capturing the supply chain characteristics of SMEs (available on demand) through the aforementioned questionnaire. This helps identify criteria and sub-criteria for the supply chain sustainability performance of the SMEs under investigation. Table 1 shows the criteria and sub-criteria of the proposed DEA-based PM model and of the SEM model for establishing relationships among the criteria, as set through the questionnaires. Practices are considered as inputs and performances as outputs. In order to obtain a unified index for the corresponding practices/performances that can subsequently be entered as inputs and outputs into the DEA model, we utilize factor scores derived from the SEM analysis. Detailed information on the relevant literature studied for the selection of the specific indicators is provided in Table 5 in the "Appendix".

Table 1: Criteria and sub-criteria for input and output

Analysis Based on Combined SEM and DEA

The two stages of the analysis are presented in this section. In the first stage, SEM is applied to the observed items of sustainability practices and performances listed in Table 1 to derive the overall associations between practices and performances in the two countries. In addition, the latent factor scores of sustainability inputs (sustainability practices) and outputs (sustainability performances) are extracted in order to be subsequently used for the DEA analysis. In the second stage, the DEA analysis is run on the inputs/outputs obtained through SEM. The methodologies used in this analysis are briefly described below.

Structural Equation Modelling

In order to test the influence of the various latent items of sustainability practices on the sustainability performance of SMEs, we fit two structural equation models (see Bollen 1989), one for the French and one for the British data.
Specifically, we hypothesize a unified conceptual sustainability model in which the four latent constructs of sustainability practices (i.e. economic, environmental, social and operational) cause the four corresponding latent constructs of sustainability performance (see Fig. 2).

Fig. 2: Conceptual model for SEM analysis

The distinguishing characteristic of SEM is that variables can be directly observed, latent, or a mixture of both (like the eight latent constructs of sustainability practices and performances), a feature not found in other standard regression-type analysis techniques, such as multiple regression analysis. By combining each latent sustainability factor of practices and performances with the corresponding observed items obtained from the questionnaire data, and by associating all latent constructs of practices with all latent constructs of performances, we derive the hypothetical model of Fig. 2. This model is fitted separately to the French and British data by the method of Weighted Least Squares (WLS) (Jöreskog 1994), due to the discrete nature of the observed collected data. The two models were fitted using the Amos statistical software (Arbuckle 2014). As regards assessing the fit of a SEM model, there exists a large variety of goodness-of-fit measures, most of which are functions of the model's Chi square. Typical examples of such indices are the GFI (goodness-of-fit index) and the AGFI (adjusted goodness-of-fit index). Another popular measure is the Root Mean Square Error of Approximation (RMSEA). If the fit of the model is good, GFI and AGFI should approach 1, whereas RMSEA should be small (typically less than 0.05).

Data Envelopment Analysis

The DEA method (Charnes et al. 1978; Banker et al. 1984) is a linear programming technique based on relative efficiency. The major advantage of DEA is that it is applicable to situations with many inputs and outputs without any prior information on weights.
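As a side note on the SEM fit indices mentioned above, the RMSEA point estimate can be computed directly from a fitted model's chi-square statistic. A minimal sketch using Steiger's standard formula and hypothetical fit statistics (not values from the fitted models in this study):

```python
import math

def rmsea(chi_sq: float, df: int, n: int) -> float:
    """Point estimate of the Root Mean Square Error of Approximation,
    computed from a fitted model's chi-square, its degrees of freedom
    and the sample size n; truncated at zero when chi-square <= df."""
    return math.sqrt(max(chi_sq - df, 0.0) / (df * (n - 1)))

# Hypothetical fit statistics; a value below 0.05 indicates good fit.
print(round(rmsea(chi_sq=48.0, df=40, n=84), 3))  # 0.049
```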
To gain a better understanding of the idea of DEA, the single-input single-output case is the most intuitive (see Fig. 3). Suppose there is a Decision Making Unit (DMU) K producing certain outputs with given inputs using the technology defined by the curve "True frontier", which, unfortunately, is not observable. DEA frontiers have been proved to be maximum likelihood estimates of the "True frontier" (see e.g. Banker 1993; Cao et al. 2016) under certain assumptions (Korostelev et al. 1995). It is obvious that K is not efficient, since it could produce the same output with much less input. Thus, its inefficiency can be measured from its relative distance to the frontier. In the input direction, it is the factor by which the inputs could be downscaled so that K would lie on the frontier. However, in real cases the true frontier is not observable, so this measure cannot be calculated directly. The basic idea of DEA is to estimate the unobserved true frontier by a frontier constructed from existing observations. The model developed by Charnes et al. (1978) (the CCR model) assumes constant returns to scale (RTS), which implies that the frontier is a straight line from the origin. Under a variable RTS assumption, Banker et al. (1984) proposed the BCC model with an extra convexity constraint. The BCC frontier is the one formed by the line segments EFH in Fig. 3.

Fig. 3: The illustration of production frontiers

Under different returns to scale assumptions, the derived efficiencies of certain DMUs can vary. In Fig. 3, the true efficiency, output-based CCR efficiency and BCC efficiency of unit K can be represented as the reciprocals of \( E^{\prime \prime } E_{0} /KE_{0} \), \( EE_{0} /KE_{0} \) and \( E^{\prime } E_{0} /KE_{0} \), respectively. Furthermore, we can easily see that the CCR and BCC efficiencies will by construction overestimate the true efficiency (see also Cao et al. 2016; Shen et al. 2016).
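In the single-input single-output case under constant returns to scale, the frontier reduces to the best observed output/input ratio, so each DMU's input-oriented efficiency can be computed without a linear program. A minimal numerical sketch with illustrative data (not the study's SMEs):

```python
def ccr_efficiency(inputs, outputs):
    """Input-oriented efficiency under constant returns to scale for
    the single-input, single-output case: each DMU's output/input
    ratio relative to the best observed ratio (the CCR frontier)."""
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Five hypothetical DMUs (illustrative numbers only):
x = [2.0, 3.0, 4.0, 5.0, 6.0]
y = [1.0, 3.0, 3.0, 4.0, 5.0]
print([round(e, 3) for e in ccr_efficiency(x, y)])  # [0.5, 1.0, 0.75, 0.8, 0.833]
```

The multi-input, multi-output CCR and BCC models generalise this ratio idea into linear programs, which is why DEA is described above as a linear programming technique.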
However, it has been proved that if the sample becomes large, the estimated frontier converges to the true one (e.g. Smith 1997; Kneip et al. 2008). Using DEA analysis, a specific DMU (i.e. an SME in our application) can enhance its sustainability performance by setting its projection on the frontier as its performance target. For the purposes of the current analysis, the BCC-DEA modelling approach has been employed.

Combining SEM factor scores as inputs/outputs for DEA analysis

The combined application of SEM and DEA involves introducing the raw factor score values of the practices and performances latent structures, estimated by the fit of the weighted least squares SEM models, into the BCC-DEA models. However, raw factor scores from SEM analysis include both positive and (small in magnitude) negative values, which raises methodological issues in the context of DEA analysis. To this end, and following the suggestions of the relevant theoretical literature (Ali and Seiford 1990), the factor score data are normalised to comply with the scale of data suitable for DEA analysis. The normalised SEM results are then fed into the DEA models. The results of the SEM analysis (i.e. raw SEM factor scores) are attached in Tables 6 and 7 in the "Appendix" for the British and French SMEs, respectively. These results are rescaled before entering the DEA model. The data transformation has been performed through the following equation: $$ x^{\prime} = 1 - \frac{x - \min x}{\max x - \min x} $$ The transformed data are then entered into the DEA conceptual model. The results of the DEA analysis are shown in Tables 8 and 9 for the British SMEs and, similarly, Tables 10 and 11 for the French SMEs.
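The transformation above can be sketched as follows, using hypothetical raw factor scores (the study's actual scores are in the Appendix tables):

```python
def rescale(values):
    """The paper's inverted min-max transformation, mapping raw SEM
    factor scores (which may be negative) onto the [0, 1] interval:
    x' = 1 - (x - min) / (max - min). Note that, as written, the
    largest raw score maps to 0 and the smallest to 1."""
    lo, hi = min(values), max(values)
    return [1.0 - (v - lo) / (hi - lo) for v in values]

# Hypothetical raw factor scores with small negative values:
scores = [-0.4, 0.0, 0.8, 1.2]
print([round(v, 2) for v in rescale(scores)])  # [1.0, 0.75, 0.25, 0.0]
```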
Supply chain data were gathered from SME representatives in two specific geographical locations (Normandy in France and the Midlands in the UK) in order to validate the effectiveness of the proposed method. The information was collected through the specially constructed interview protocol (the questionnaire utilized for the study is available upon request from the corresponding author). The responses were processed via the SEM models in order to subsequently be fed into the DEA models for each input and output criterion. In the following sub-sections, the results derived from the combined application of the SEM and DEA methods to the SME sample are presented in detail.

Deriving Associations Between Sustainability Practices and Performance of SMEs Through SEM Modelling

SEM enables developing relationships between the criteria and sub-criteria for sustainability performance measurement, which facilitates identifying improvement measures for every SME within a region through a statistical modelling approach. Additionally, the obtained weighted estimates (in the form of factor scores) of the overall constructs of sustainability practices and performances can be utilized for the subsequent DEA analysis. Here we present the results of fitting the SEM models to the French and British data. In particular, the SEM results for the British and French SMEs are depicted in Figs. 4 and 5 respectively, where one can see the standardized coefficients of the associations between sustainability practices and the corresponding performances.

Fig. 4: Estimated SEM Model (British data). *p < 0.05; solid right arrow: significant direct positive effect; dashed right arrow: non-significant direct effect

Fig. 5: Estimated SEM Model (French data).
*p < 0.05; solid right arrow: significant direct positive effect; dashed right arrow: non-significant direct effect

As seen from the obtained results, in the Midlands, UK, the economic practices of SMEs are interconnected with their economic, environmental and operational performance, and each set of practices is linked to the respective type of performance (Fig. 4). Therefore, any given SME located in the Midlands is quite likely to enhance its economic, environmental and operational performance through effective management of its economic practices. However, in order to improve their social performance, SMEs are required to address their social practices closely. By contrast, for the French SMEs, economic practices are connected only to economic performance (Fig. 5). Operational practices have a strong correlation with operational, environmental and social performance; environmental practices are associated with both environmental and operational performance. Therefore, SMEs in Normandy, in order to optimize their environmental and social performance, need to address all operational, environmental and social issues and challenges. Additionally, improving environmental practices will contribute to higher operational performance. However, economic performance depends only on economic practices and remains unaffected by any other variable. The information generated by the SEM analysis makes it possible for both policy makers and individual SME owners/managers to upgrade their company's performance. Additionally, both efficient and inefficient SMEs stay dynamically informed about their performance. Even SMEs that have not taken part in the analysis can gain valuable insights from the supply chain practices and performances of enterprises in their region, which will ultimately help them re-examine their own practices and performance in order to make the right decisions.
Benchmarking Inefficient SMEs Through DEA

In the current section, the results of the DEA analysis utilizing the input and output scores derived via SEM are presented in detail. Specifically, Tables 6 and 7 in the "Appendix" include the British and French SMEs' input/output data respectively, referring to the raw factor scores obtained through SEM. Tables 2 and 3 show the efficient and inefficient SMEs of the UK and France respectively, following application of the DEA method. In Table 2 we can see that, as per the above data analysis using VRSTE, the UK SMEs coded 3, 4, 6, 8, 11, 13, 14, 15, 16, 18, 19, 21, 23, 24, 26, 27, 29 and 30 are efficient. There are 12 inefficient SMEs according to the BCC-DEA model. There are 1, 13 and 16 SMEs with increasing returns to scale (IRS), constant returns to scale (CRS) and decreasing returns to scale (DRS), respectively. Column 4 of Table 2 shows the scale efficiencies of those 30 British SMEs.

Table 2: Efficiency summary of 30 UK SMEs
Table 3: Efficiency summary of 54 French SMEs

In Table 3, which shows the efficiency summary of the participating French SMEs, we can see that, as per the above data analysis using VRSTE, 30 French SMEs are efficient, whereas 24 are inefficient according to the BCC-DEA model. There are 0, 28 and 26 French SMEs with increasing returns to scale (IRS), constant returns to scale (CRS) and decreasing returns to scale (DRS), respectively. Column 4 of Table 3 shows the scale efficiencies of those 54 French SMEs (see also Fig. 6 for a comparative graph of the two countries' SMEs).

Fig. 6: Distributions of the three RTS types, by country

Tables 8, 9, 10 and 11 in the "Appendix" include the slacks and targets of the inefficient British and French SMEs of our sample, respectively. The term 'slacks' refers to the residual input excesses and output shortfalls of the variables in the DEA model. The input and output targets denote the benchmarks for performance improvement of the assessed SMEs.
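Two of the quantities reported in these summary and target tables can be sketched simply: the scale efficiency of a DMU (the ratio of its CCR to its BCC efficiency score) and the percentage movement from an observed value to its frontier-projected target. The numbers below are illustrative, not the study's data:

```python
def scale_efficiency(crs_eff, vrs_eff):
    """Scale efficiency: ratio of constant-returns (CCR) to
    variable-returns (BCC) technical efficiency. A value of 1 means
    the DMU operates at its most productive scale size."""
    return [c / v for c, v in zip(crs_eff, vrs_eff)]

def required_change_pct(original, target):
    """Percentage movement from an observed input/output value to its
    DEA-projected target (negative = reduction required)."""
    return 100.0 * (target - original) / original

# Illustrative (CCR, BCC) efficiency pairs for three hypothetical DMUs:
crs = [0.60, 0.90, 1.00]
vrs = [0.75, 0.90, 1.00]
print([round(s, 2) for s in scale_efficiency(crs, vrs)])  # [0.8, 1.0, 1.0]

# An input whose frontier projection requires roughly a 4.8% reduction:
print(round(required_change_pct(original=1.00, target=0.952), 1))  # -4.8
```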
In order to demonstrate the usefulness and applicability of the DEA analysis performed in combination with structural equation modelling, Table 4 shows an illustrative example based on the detailed results of a randomly chosen DMU in the UK (DMU 1), highlighting how important it is to suggest actions for improving the performance of this specific inefficient company.

Table 4: Performance analysis of DMU 1 in the UK

The original values for each economic, operational, environmental and social practice and performance, in line with the survey responses, are shown in row 1 of the table. The difference % in the third row shows that the desired movement in the input is 4.8%, indicating that a decrease is required. The final projected value for achieving efficiency is shown in row 2 for the economic, operational, environmental and social performances respectively. A similar analysis was undertaken for all the inefficient SMEs and improvement measures were derived. These measures enable the enterprises to develop business cases by benchmarking against the practices of the specific benchmark SMEs. The efficient SMEs might also have room for improvement, but the DEA-based approach is unable to suggest improvement plans in terms of BCC-efficiency in the BCC-DEA model. However, those BCC-efficient DMUs could still improve their performance by (1) improving their scale efficiencies while (2) retaining a pure technical efficiency of unity. This research theoretically contributes to the relevant literature by proposing a new supply chain sustainability performance measurement and management framework for small and medium-sized enterprises. At the same time, it empirically determines the state of the economic, operational, social and environmental practices and performance of the participating SMEs, and puts forward for consideration a set of improvement measures through benchmarking against sectoral best practice. Prior research on sustainability PM relies on economic, environmental and social criteria.
We have added operational criteria, too, so as to objectively derive improvement measures across business processes along with resources and infrastructure. The proposed sustainable supply chain PM model combines the SEM and DEA methodologies. We have presented all the practices as inputs and all the performances as outputs in an effective way by combining the SEM and DEA analyses. Specifically, the combined SEM and DEA analysis has offered the ability to extract, through SEM, suitable overall input and output latent constructs of sustainability practices and performances. This approach is more robust than the typical method of using averages to derive a unified input/output score for each sustainability practice/performance, since it produces a suitably adjusted overall index of each construct of sustainability practices/performances (i.e. environmental, social, economic and operational) that is weighted according to the magnitude of the effect of each individual observed item on the corresponding latent structure of sustainability. This also helps derive improvement measures through specific interventions in each practice for achieving the desired performance. Moreover, this is the first model proposed for DEA-based sustainability supply chain PM for SMEs. The proposed combined modelling approach is validated through applications in two different settings: the Midlands in the UK and Normandy in France. While DEA can segregate efficient and inefficient units (i.e. individual SMEs) using a number of input and output criteria, and recommend improvement measures for inefficient units by benchmarking them against the most efficient units, the complementary SEM helps develop causal relationships among the criteria through hypothesis testing, which also enables SMEs to decide how to enhance their performance effectively.
Individual SMEs would be able to adopt this model to measure their current state of practices and performance, and accordingly implement targeted improvement initiatives. Additionally, the tool will be beneficial to policy makers, as it can not only depict the state of SMEs' sustainability in a specific region but also predict the characteristics of a specific SME's sustainable performance even if that SME is not included in the sample. Although prior research has used DEA (West 2014) and SEM (Hanim et al. 2017) separately for sustainability analysis, to date the two methods have not been used in an integrated way that captures the benefits of both while reducing their individual shortcomings. To the authors' knowledge, this is the first attempt to combine DEA and SEM for sustainability performance measurement. Any group of SMEs with the objective of benchmarking their sustainability performance could adopt this model. First, a consensus must be reached on the input and output criteria and their proxies. In line with the proxies, an interview protocol is formed and all the participating SMEs are asked to respond. Their responses are processed in a spreadsheet by linking them with each criterion (refer to Table 5 in the "Appendix"). This helps derive the value for each practice and performance criterion. These criteria are run in a DEA-based PM model to derive the efficiency of the participating SMEs and objective improvement measures for the inefficient ones. This DEA-based supply chain sustainability PM method is equally suitable for policy makers and individual SME owners/managers. While individual participating SMEs can take advantage of the study results and develop business cases for improving their sustainability performance, policy makers can develop focused schemes in order to work towards and ensure the long-term viability of all the inefficient SMEs.
Table 5: Bibliographic sources for the selection of practices and performances indicators

The proposed combined DEA and SEM-based supply chain sustainability PM model for SMEs is robust. DEA analyzes the performance of the participating SMEs using benchmarking principles and suggests improvement strategies for the inefficient SMEs only. However, there may be room for improving the performance of the efficient SMEs as well, something that the DEA model is unable to derive. By contrast, the SEM approach helps to objectively extract, from a sample study, the causal relationships between the sustainable supply chain practices (inputs) and performances (outputs) of the SMEs within a region. The characteristics of any SME's sustainable supply chain within the study region could be estimated using the SEM approach. This enables deriving not only what to do (via DEA) but also how to do it (via SEM) in order to enhance sustainability performance, irrespective of whether SMEs are efficient or inefficient (the latter being a major shortcoming of DEA). The present study sheds light on the fact that SMEs in both regions could reduce inefficiency by optimising their economic and operational practices. SMEs in the Midlands, UK, need to do more to achieve sustainability than their French counterparts. Environmental aspects are also very important for achieving sustainability, and most of the inefficient SMEs need to integrate the environmental dimension into their development process. Social aspects are of less concern, since social practices and performance have a strong synergy. Overall, inefficient SMEs must adopt a lean (efficiency-focused) approach in order to increase their sustainability. Efficient SMEs would attain a higher level of sustainability through adopting cost-intensive approaches.
Economic practices are likely to affect economic, environmental and operational performances in the Midlands, UK, whereas operational practices are likely to impact operational, environmental and social performances in Normandy, France. Besides these managerial and policy implications, several potential avenues for future research emerge from the current study. Sensitivity analysis could be undertaken with the consideration of various inputs and outputs along with modifications of their values. The modelling implementation, however, has certain limitations. We need a reasonably large number of SMEs, in addition to the present sample, to benchmark and establish causal relationships among the criteria and sub-criteria. A further avenue for improvement would be obtaining a more refined body of ideas for improving specific characteristics of sustainability practices and performances in individual SMEs, since at its current state the DEA modelling of inputs/outputs operates at a higher level (overall concepts of sustainability practices and performances). Therefore, the combined SEM- and DEA-based PM modelling approach could be integrated with other standalone PM models to lead to a more refined and robust approach. This has been kept outside the scope of this study and could be undertaken in the future.

Abdul-Rashid, S. H., Sakundarini, N., Raja Ghazilla, R. A., & Thurasamy, R. (2017). The impact of sustainable manufacturing practices on sustainability performance: Empirical evidence from Malaysia. International Journal of Operations & Production Management, 37(2), 182–204. Acquaye, A., Feng, K., Oppon, E., Salhi, S., Ibn-Mohammed, T., Genovese, A., et al. (2017). Measuring the environmental sustainability performance of global supply chains: A multi-regional input–output analysis for carbon, sulphur oxide and water footprints. Journal of Environmental Management, 187, 571–585. Ali, A.
I., & Seiford, L. M. (1990). Translation invariance in data envelopment analysis. Operations Research Letters, 9, 403–405. Arbuckle, J. L. (2014). Amos 23.0 user's guide. Chicago: IBM SPSS. Ates, A., Garengo, P., Cocca, P., & Bititci, U. (2013). The development of SME managerial practice for effective performance management. Journal of Small Business and Enterprise Development, 20(1), 28–54. Azadi, M., Jafarian, M., Saen, R. F., & Mirhedayatian, S. M. (2015). A new fuzzy DEA model for evaluation of efficiency and effectiveness of suppliers in sustainable supply chain management context. Computers & Operations Research, 54, 274–285. Banker, R. D. (1993). Maximum likelihood, consistency and data envelopment analysis: A statistical foundation. Management Science, 39(10), 1265–1273. Banker, R. D., Charnes, A., & Cooper, W. W. (1984). Some models for estimating technical and scale inefficiencies in data envelopment analysis. Management Science, 30(9), 1078–1092. Bhagwat, R., & Sharma, M. K. (2007). Performance measurement of supply chain management: A balanced scorecard approach. Computers & Industrial Engineering, 53(1), 43–62. Bhattacharya, A., Mohapatra, P., Kumar, V., Dey, P. K., Brady, M., Tiwari, M. K., et al. (2013). Green supply chain performance measurement using fuzzy ANP-based balanced scorecard: A collaborative decision-making approach. Production Planning & Control: The Management of Operations. https://doi.org/10.1080/09537287.2013.798088. Bititci, U., Garengo, P., Dörfler, V., & Nudurupati, S. (2012). Performance measurement: Challenges for tomorrow. International Journal of Management Reviews, 14(3), 305. Bjorklund, M., Martinsen, U., & Abrahamsson, M. (2012). Performance measurements in the greening of supply chains. Supply Chain Management: An International Journal, 17(1), 29–39. Bollen, K. A. (1989). Structural equations with latent variables. New York: Wiley-Interscience. Büyüközkan, G., & Karabulut, Y. (2018).
Sustainability performance evaluation: Literature review and future directions. Journal of Environmental Management, 217, 253–267. Cao, J., Chen, G., Khoveyni, M., Eslami, R., & Yang, G. (2016). Specification of a performance indicator using the evidential-reasoning approach. Knowledge-Based Systems, 92, 138–150. Chan, F. T. S., & Qi, H. J. (2003). An innovative performance measurement method for supply chain management. Supply Chain Management: An International Journal, 8(3), 209–223. Charnes, A., Cooper, W. W., & Rhodes, E. (1978). Measuring the efficiency of decision making units. European Journal of Operational Research, 2(6), 429–444. Chay, T., Xu, Y., Tiwari, A., & Chay, F. (2015). Towards lean transformation: The analysis of lean implementation frameworks. Journal of Manufacturing Technology Management, 26(7), 1031–1052. Devins, D., Johnson, S., & Sutherland, J. (2004). Employer characteristics and employee training outcomes in UK SMEs: A multivariate analysis. Journal of Small Business and Enterprise Development, Bradford, 11(4), 449–457. Dey, P. K., & Cheffi, W. (2012). Green supply chain performance measurement using the analytic hierarchy process: A comparative analysis of manufacturing organisations. Production Planning and Control: The Management of Operations, 24(8–9), 702–720. Dey, P. K., & Cheffi, W. (2013). Managing supply chain integration: Contemporary approaches and scope for further research. Production Planning and Control: The Management of Operations, 24(8–9), 653–657. Dey, P. K., Malesios, C., De, D., Chowdhury, S., & Abdelaziz, F. B. (2018). Could lean practices and process innovation enhance supply chain sustainability of small and medium sized enterprises? Business Strategy and the Environment. https://doi.org/10.1002/bse.2266. Fen Tseng, Y., Jim Wu, Y., Wu, W., & Chen, C. (2010). Exploring corporate social responsibility education. Management Decision, 48(10), 1514–1528. 
Fitzgerald, L., Johnston, R., Brignall, S., Silvestro, R., & Voss, C. (1991). Performance measurement in service businesses. London: CIMA. Garengo, P., Biazzo, S., & Bititci, U. S. (2005). Performance measurement systems in SMEs: A review for a research agenda. International Journal of Management Reviews, 7(1), 25–47. Gavronski, I., Klassen, R. D., Vachon, S., & do Nascimento, L. F. M. (2011). A resource-based view of green supply management. Transportation Research Part E: Logistics and Transportation Review, 47(6), 872–885. Groves, C., Frater, L., Lee, R., & Stokes, H. (2011). Is there room at the bottom for CSR? Corporate social responsibility and nanotechnology in the UK. Journal of Business Ethics, 101(4), 525–552. Gunasekaran, A., Patel, C., & McGaughey, R. E. (2004). A framework for supply chain performance measurement. International Journal of Production Economics, 87(3), 333–347. Halkos, G. E., & Evangelinos, K. (2002). Determinants of environmental management systems standards implementation: Evidence from Greek industry. Business Strategy and the Environment, 11(6), 360–375. Halkos, G. E., & Polemis, G. (2018). The impact of economic growth on environmental efficiency of the electricity sector: A hybrid window DEA methodology for the USA. Journal of Environmental Management, 211, 334–346. Halkos, G. E., Tzeremes, N. G., & Kourtzidis, S. A. (2015a). Regional sustainability efficiency index in Europe: An additive two-stage DEA approach. Operational Research, 15(1), 1–23. Halkos, G. E., Tzeremes, N. G., & Kourtzidis, S. A. (2015b). Measuring sustainability efficiency using a two-stage data envelopment analysis approach. Journal of Industrial Ecology, 20(5), 1159–1175. Hall, J. (2000). Environmental supply chain dynamics. Journal of Cleaner Production, 8, 455–471. Hanim, H., Bakri, M., Mohamed, N., & Said, J. (2017). Mitigating asset misappropriation through integrity and fraud risk elements: Evidence from emerging economies. Journal of Financial Crime, 24(2).
Hassini, E., Surti, C., & Searcy, C. (2012). A literature review and a case study of sustainable supply chains with a focus on metrics. International Journal of Production Economics, 140(1), 69–82. Hoejmose, S., Brammer, S., & Millington, A. (2013). An empirical examination of the relationship between business strategy and socially responsible supply chain management. International Journal of Operations & Production Management, 33(5), 589–621. Hu, Q., Mason, R., Williams, S. J., & Found, P. (2015). Lean implementation within SMEs: A literature review. Journal of Manufacturing Technology Management, 26(7), 980–1012. Jamali, D., Zanhour, M., & Keshishian, T. (2009). Peculiar strengths and relational attributes of SMEs in the context of CSR. Journal of Business Ethics, 87, 355–367. Johnson, M. P. (2015). Sustainability management and small and medium-sized enterprises: Managers awareness and implementation of innovative tools. Corporate Social Responsibility and Environmental Management, 22(5), 281–285. Johnson, M. P., & Schaltegger, S. (2016). Two decades of sustainability management tools for SMEs: How far have we come? Journal of Small Business Management, 54(2), 481–505. Jöreskog, K. G. (1994). Structural equation modeling with ordinal variables. Multivariate analysis and its applications (pp. 297–310). Hayward, CA: Institute of Mathematical Statistics. Kainuma, Y., & Tawara, N. (2006). A multiple attribute utility theory approach to lean and green supply chain management. International Journal of Production Economics, 101, 99–108. Kaplan, R. S., & Norton, D. (1996). Using the balanced scorecard as a strategic management system. Harvard Business Review, 74(1), 75–85. Kim, J., Realff, M. J., & Lee, J. H. (2011). Optimal design and global sensitivity analysis of biomass supply chain networks for biofuels under uncertainty. Computers & Chemical Engineering, 35(9), 1738–1751. Kleindorfer, P.-R., Singhal, K., & Wassenhove, L.-N.-V. (2005). 
Sustainable operations management. Production and Operations Management, 14(4), 482–492. Kneip, A., Simar, L., & Wilson, P. W. (2008). Asymptotics and consistent bootstraps for DEA estimators in non-parametric frontier models. Econometric Theory, 24, 1663–1697. Koh, S. C. L., & Simpson, M. (2005). Change and uncertainty in SME manufacturing environments using ERP. Journal of Manufacturing Technology Management, 16(5/6), 629–653. Korostelev, A., Simar, L., & Tsybakov, A. B. (1995). On estimation of monotone and convex boundaries. Publications de l'Institut de Statistique de l'Université de Paris, XXXIX(1), 3–18. Lee, S. Y., & Klassen, R. D. (2008). Drivers and enablers that foster environmental management capabilities in small- and medium-sized suppliers in supply chains. Production and Operations Management, 17(6), 573–586. Lewis, W. G., Pun, K. F., & Lalla, T. R. M. (2006). Empirical investigation of the hard and soft criteria of TQM in ISO 9001 certified small and medium-sized enterprises. The International Journal of Quality & Reliability Management, 23(8), 964–985. Lynch, R., & Cross, K. (1991). Measure up! Yardsticks for continuous improvement. Cambridge: Blackwell. Malesios, C., Dey, P. K., & Abdelaziz, F. B. (2018). Supply chain sustainability performance measurement of small and medium sized enterprises using structural equation modeling. Annals of Operations Research. https://doi.org/10.1007/s10479-018-3080-z. Marr, B., & Creelman, J. (2011). More with less: Maximizing value in the public sector. Basingstoke: Palgrave Macmillan. Mollenkopf, D. (2008). Drivers and barriers to environmental supply chain management practices: Lessons from the public and private sectors. Journal of Purchasing and Supply Management, 14, 69–85. Neely, A., Adams, C., & Kennerley, M. (2002). The performance prism: The scorecard for measuring and managing stakeholder relationship. London: Prentice Hall. Neely, A., Gregory, M., & Platts, K. (1995).
Performance measurement system design: A literature review and research agenda. International Journal of Operations & Production Management, 15(4), 80–116. Nguyen, T. H., & Waring, T. S. (2013). The adoption of customer relationship management (CRM) technology in SMEs: An empirical study. Journal of Small Business and Enterprise Development, 20(4), 824–848. Nudurupati, S. S., Bititci, U. S., Kumar, V., & Chan, F. T. S. (2011). State of the art literature review on performance measurement. Computers & Industrial Engineering, 60(2), 279–290. Orsato, R.-J. (2006). Competitive Environmental strategies: When does it pay to be green? California Management Review, 48(2), 127–143. Pala, M., Edum-Fotwe, F., Ruikar, K., Doughty, N., & Peters, C. (2014). Contractor practices for managing extended supply chain tiers. Supply Chain Management, 19(1), 31–45. Patyal, V. S., & Koilakuntla, M. (2015). Infrastructure and core quality practices in Indian manufacturing organizations: Scale development and validation. Journal of Advances in Management Research, Bingley, 12(2), 141–175. Poister, T. H. (2003). Measuring performance in public and nonprofit organizations. San Francisco, CA: Jossey-Bass. Quah, H. S., & Udin, Z. M. (2011). Supply chain management from the perspective of value chain flexibility: An exploratory study. Journal of Manufacturing Technology Management, 22(4), 506–526. Ramezankhani, M. J., Ali Torabi, S., & Vahidi, F. (2018). Supply chain performance measurement and evaluation: A mixed sustainability and resilience approach. Computers & Industrial Engineering, 126, 531–548. Santos, M. (2011). CSR in SMEs: strategies, practices, motivations and obstacles. Social Responsibility Journal, 7(3), 490–508. Seuring, S., & Muller, M. (2008). From a literature review to a conceptual framework for sustainable supply chain management. Journal of Cleaner Production, 16, 1699–1710. Sharma, M. K., Bhagwat, R., & Dangayach, G. S. (2005). 
Practice of performance measurement: Experience from Indian SMEs. International Journal of Globalisation and Small Business, 1(2), 183–213. Shen, W., Zhang, D., Liu, W., & Yang, G. (2016). Increasing discrimination of DEA evaluation by utilizing distances to anti-efficient frontiers. Computers & Operations Research, 75, 163–173. Shepherd, C., & Gunter, H. (2005). Measuring supply chain performance: Current research and future directions. International Journal of Productivity and Performance Management, 55(3–4), 242–258. Shi, Q., Zuo, J., & Zillante, G. (2012). Exploring the management of sustainable construction at the programme level—A Chinese case study. Construction Management and Economics, 30(6), 425–440. Smith, P. (1997). Model misspecification in data envelopment analysis. Annals of Operations Research, 73, 233–252. Sullivan-Taylor, B., & Branicki, L. (2011). Creating resilient SMEs: Why one size might not fit all. International Journal of Production Research, 49(18), 5565–5579. Su-Yol, L. (2008). Drivers for the participation of small and medium-sized suppliers in green supply chain initiatives. Supply Chain Management, 13(3), 185–198. Taticchi, P., Cagnazzo, L., & Tonelli, F. (2010). Performance measurement and management: A literature review and a research agenda. Measuring Business Excellence, 14(1), 4–18. Taticchi, P., Tonelli, F., & Cagnazzo, L. (2009). A decomposition and hierarchical approach for business performance measurement and management. Measuring Business Excellence, 13(4), 47–57. Taticchi, P., Tonelli, F., & Pasqualino, R. (2013). Performance measurement of sustainable supply chains. A literature review and a research agenda. International Journal of Productivity and Performance Management, 62(8), 782–804. Thanassoulis, E., Dey, P. K., Petridis, K., Goniadis, I., & Georgiou, A. C. (2017). Evaluating higher education teaching performance using combined analytic hierarchy process and data envelopment analysis. 
Journal of the Operational Research Society, 68, 431–554. Thomas, D. J., & Griffin, P. M. (1996). Co-ordinated supply chain management. European Journal of Operational Research, 94(3), 1–15. Towers, N., & Burnes, B. (2008). A composite framework of supply chain management and enterprise planning for small and medium-sized manufacturing enterprises. Supply Chain Management, 13(5), 349–355. Vidal, J. (2013). UK government failing legal duty on air pollution. Supreme Court rules, Environment, guardian.co.uk, Guardian. Walker, H., & Jones, N. (2012). Sustainable supply chain management across the UK private sector. Supply Chain Management, 17(1), 15–28. West, J. (2014). Challenges of funding open innovation platforms: Lessons from Symbian Ltd. In H. Chesbrough, W. Vanhaverbeke, J. West (Eds.), New Frontiers in Open Innovation (pp.29–49). Oxford: Oxford University Press. Whyman, P. B., & Petrescu, A. I. (2015). Workplace flexibility practices in SMEs: Relationship with performance via redundancies, absenteeism, and financial turnover. Journal of Small Business Management, 53(4), 1097. Wolff, J. A., & Pett, T. L. (2006). Small-firm performance: Modeling the role of product and process improvements. Journal of Small Business Management, 44(2), 268–284. Wong, W. P., & Wong, K. Y. (2007). Supply chain performance measurement system using DEA modelling. Industrial management & Data system, 107(3), 361–381. Wyld, J., Pugh, G., & Tyrrall, D. (2012). Can powerful buyers "exploit" SME suppliers? Journal of Small Business and Enterprise Development, 19(2), 322–334. Zhu, Q., & Sarkis, J. (2004). Relationships between operational practices and performance among early adopters of green supply chain management practices in Chinese manufacturing enterprises. Journal of Operations Management, 22, 265–289. Zhu, Q., Sarkis, J., & Lai, K.-H. (2007a). Green supply chain management: Pressures, practices and performance within the Chinese automobile industry. 
This research has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Grant Agreement No. 788692. The second author would like to acknowledge the support from the National Natural Science Foundation of China (NSFC, No. 71671181).

Aston Business School, Aston University, Birmingham, B4 7ET, UK: Prasanta Kumar Dey, Chrysovalantis Malesios & Debashree De
Institutes of Science and Development, Chinese Academy of Sciences, Beijing, 100190, China: Guo-liang Yang
Department of Environment, University of the Aegean, 81100, Mytilene, Greece: Konstantinos Evangelinos

Correspondence to Chrysovalantis Malesios.

See Tables 5, 6, 7, 8, 9, 10 and 11.
Table 6: Raw factor scores obtained from SEM analysis for the 30 British SMEs
Table 7: Raw factor scores obtained from SEM analysis for the 54 French SMEs
Table 8: The input slacks of the inefficient SMEs in the UK
Table 9: The output targets of the inefficient SMEs in the UK
Table 10: The input slacks of the inefficient SMEs in France
Table 11: The output targets of the inefficient SMEs in France

Dey, P.K., Yang, G.-l., Malesios, C. et al. Performance Management of Supply Chain Sustainability in Small and Medium-Sized Enterprises Using a Combined Structural Equation Modelling and Data Envelopment Analysis. Comput Econ 58, 573–613 (2021). https://doi.org/10.1007/s10614-019-09948-1

Issue Date: October 2021
Equivalence tests for non-normal data?

I have some data that I can't necessarily assume to be drawn from normal distributions, and I would like to conduct tests of equivalence between groups. For normal data, there are techniques like TOST (two one-sided t-tests). Is there anything analogous to TOST for non-normal data?

hypothesis-testing equivalence tost

Ryan C. Thompson

I'm not familiar with TOST, but are you looking for Mann-Whitney? This is a nonparametric test (in the sense that no assumptions on the distributions are made) that may provide evidence that two groups come from different distributions. – Nick Sabbe Mar 21 '13 at 7:52

I'm looking for a test where the null hypothesis is that there is a difference, and the alternative hypothesis is that there is (almost) no difference. – Ryan C. Thompson Mar 21 '13 at 17:19

For small samples, you might have a look at the answers in stats.stackexchange.com/questions/49782/…. For larger samples, the classic approach with t tests is fine thanks to the Central Limit Theorem. – Michael M Dec 30 '13 at 10:41

Nothing in the phrase "two one-sided tests" - nor the underlying logic - implies normal theory. It should be perfectly possible to adapt it to a location-shift alternative with a non-normal distribution. But beware - in many cases with non-normal data what you really want is a scale-shift kind of equivalence test, and with other kinds of data, something else instead. Knowing what is needed really depends on what you're measuring and what problem you're solving. Rather than trying to squeeze your peg into a round hole, it pays to examine the peg.
– Glen_b Jan 29 '14 at 4:53

The logic of TOST employed for Wald-type t and z test statistics (i.e. $\theta / s_{\theta}$ and $\theta / \sigma_{\theta}$, respectively) can be applied to the z approximations for nonparametric tests like the sign, sign rank, and rank sum tests. For simplicity I assume that equivalence is expressed symmetrically with a single term, but extending my answer to asymmetric equivalence terms is straightforward.

One issue that arises when doing this is that if one is accustomed to expressing the equivalence term (say, $\Delta$) in the same units as $\theta$, then the equivalence term must be expressed in units of the particular sign, signed rank, or rank sum statistic, which is both abstruse and dependent on N.

However, one can also express TOST equivalence terms in units of the test statistic itself. Consider that in TOST, if $z = \theta/\sigma_{\theta}$, then $z_{1} = (\Delta - \theta)/\sigma_{\theta}$, and $z_{2} = (\theta + \Delta)/\sigma_{\theta}$. If we let $\varepsilon = \Delta / \sigma_{\theta}$, then $z_{1} = \varepsilon - z$, and $z_{2} = z + \varepsilon$. (The statistics expressed here are both evaluated in the right tail: $p_{1} = \text{P}(Z > z_{1})$ and $p_{2} = \text{P}(Z > z_{2})$.) Using units of the z distribution to define the equivalence/relevance threshold may be preferable for non-parametric tests, since the alternative defines the threshold in units of signed ranks or rank sums, which may be substantively meaningless to researchers and difficult to interpret.

If we recognize that (for symmetric equivalence intervals) it is not possible to reject any TOST null hypothesis when $\varepsilon \le z_{1-\alpha}$, then we might proceed to make decisions on the appropriate size of the equivalence term accordingly, for example $\varepsilon = z_{1-\alpha} + 0.5$. This approach has been implemented with options for continuity correction, etc.
in the package tost for Stata (which now includes specific TOST implementations for the Shapiro-Wilk and Shapiro-Francia tests), which you can access by typing in Stata:

Edit: While the logic of TOST is sound, and equivalence test formulations have been applied to omnibus tests, I have been persuaded that my solution was based on a deep misunderstanding of the approximate statistics for the Shapiro-Wilk and Shapiro-Francia tests.

Alexis

It's not a TOST per se, but the Kolmogorov-Smirnov test allows one to test for the significance of the difference between a sample distribution and a second reference distribution you can specify. You can use this test to rule out a specific kind of different distribution, but not different distributions in general (at least, not without controlling for error inflation across tests of all possible alternatives... if that's somehow possible itself). The alternative hypothesis for any one test will remain the less specific "catch-all" hypothesis, as usual.

If you can settle for a test of distributional differences between two groups where the null hypothesis is that the two groups are equivalently distributed, you can use the Kolmogorov-Smirnov test to compare one group's distribution to the other group's. That's probably the conventional approach: ignore the differences if they're not statistically significant, and justify this decision with a test statistic.

In any case, you may want to consider some deeper issues arising from the "all-or-nothing" approach to rejecting a null hypothesis. One such issue is very popular here on Cross Validated: "Is normality testing 'essentially useless'?" People like to answer normality-testing questions with a question: "Why do you want to test this?" The intention, I assume, is generally to invalidate the reason for testing, which may ultimately lead in the right direction.
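If it helps to see the two-sample comparison concretely, here is a minimal pure-Python sketch of the two-sample Kolmogorov-Smirnov statistic (the function name and the simulated data are my own, purely for illustration; in practice scipy.stats.ks_2samp computes this together with a p-value):

```python
import random

def ks_2samp_stat(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: the largest vertical
    gap between the two empirical CDFs (O(n^2), fine for small samples)."""
    xs, ys = sorted(x), sorted(y)
    nx, ny = len(xs), len(ys)
    d = 0.0
    for v in xs + ys:
        fx = sum(1 for xi in xs if xi <= v) / nx
        fy = sum(1 for yi in ys if yi <= v) / ny
        d = max(d, abs(fx - fy))
    return d

random.seed(0)
a = [random.gauss(0, 1) for _ in range(200)]
b = [random.gauss(0, 1) for _ in range(200)]   # same distribution as a
c = [random.gauss(1, 1) for _ in range(200)]   # shifted by one sd

d_same = ks_2samp_stat(a, b)    # expected to be small
d_shift = ks_2samp_stat(a, c)   # expected to be large
```

Note that rejecting with this statistic is evidence of a difference; failing to reject is not evidence of equivalence, which is exactly the caveat the answer raises.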
The gist of useful responses to the question I've linked here seems to be as follows:

If you're concerned about violations of parametric test assumptions, you should just find a nonparametric test that doesn't make distributional assumptions instead. Don't test whether you need to use the nonparametric test; just use it!

You should replace the question, "Is my distribution significantly non-normal?" with, "How non-normal is my distribution, and how is this likely to affect my analyses of interest?" For instance, tests regarding central tendency (especially involving means) may be more sensitive to skewness than to kurtosis, and vice versa for tests regarding (co)variance. Nonetheless, there are robust alternatives for most analytic purposes that aren't very sensitive to either kind of non-normality.

If you still wish to pursue a test of equivalence, here's another popular discussion on Cross Validated that involves equivalence testing.

Nick Stauner

Equivalence testing is well established, and you misunderstand its null hypotheses, which are generally of the form H$^{-}_{0}: |\theta-\theta_{0}| \ge \Delta$. This is an interval hypothesis which can translate, for example, into two one-sided tests (TOST): H$^{-}_{01}: \theta-\theta_{0} \ge \Delta$, or H$^{-}_{02}: \theta-\theta_{0} \le -\Delta$. If one rejects H$^{-}_{01}$ & H$^{-}_{02}$, then you must conclude that $-\Delta < \theta - \theta_{0} < \Delta$, i.e. that your groups are equivalent within the interval $[-\Delta,\Delta]$. – Alexis Apr 24 '14 at 4:23

Fair enough; I was probably a bit misleading. I've removed the parts to which you seem to object. However, I think you've worded your comment a bit too strongly. Despite the fact that the forced dichotomous fail to/reject approach is well-established, most samples can't completely preclude the possibility that the null is true.
There is almost always some chance of false rejection error if one insists on rejection, which is usually not literally necessary. That was probably the more important point I intended to make originally. Hopefully it's a little clearer now without the deleted stuff. – Nick Stauner Apr 24 '14 at 4:41

Well, in my opinion, the strength of equivalence tests (e.g. H$^{-}_{0}$) comes from combining them with the familiar tests for difference (e.g. H$^{+}_{0}$). Check it out: (1) Reject H$^{+}_{0}$ & Not Reject H$^{-}_{0}$, conclude relevant difference; (2) Not Reject H$^{+}_{0}$ & Reject H$^{-}_{0}$, conclude equivalence (for $\Delta$); (3) Reject H$^{+}_{0}$ & Reject H$^{-}_{0}$, conclude trivial difference (i.e. it's there, but you don't care); and (4) Not Reject H$^{+}_{0}$ & Not Reject H$^{-}_{0}$, conclude indeterminacy / underpowered tests. Puts power usefully into the analysis. – Alexis Apr 24 '14 at 4:49

Of course, issues of sensitivity and specificity, PPV and NPV do not go away. – Alexis Apr 24 '14 at 4:50

Equivalence is never something we can test. Think about the hypothesis: $\mathcal{H}_0: f_x \ne f_y$ vs $\mathcal{H}_1: f_x = f_y$. NHST theory tells us that, under the null, we can choose anything under $\mathcal{H}_0$ that best fits the data. That means we can almost always get arbitrarily close to the distribution. For instance, if I want to test $f_x \sim \mathcal{N}(0, 1)$, the probability model that allows for separate distributions of $\hat{f}_x$ and $\hat{f}_y$ will always be more likely under the null, a violation of critical testing assumptions. Even if the sample $X=Y$ identically, I can get a likelihood ratio that is arbitrarily close to 1 with $f_y \approx f_x$. If you know a suitable probability model for the data, you can use a penalized information criterion to rank alternate models.
One way is to use the BICs of the two probability models (the one estimated under $\mathcal{H}_0$ and the one estimated under $\mathcal{H}_1$). I've used a normal probability model, but you can easily get a BIC from any type of maximum likelihood procedure, either by hand or using the GLM. This Stackoverflow post gets into the nitty-gritty of fitting distributions. An example of doing this is here:

    set.seed(123)
    p <- replicate(1000, {
      ## generate data under the null
      x <- rnorm(100)
      g <- sample(0:1, 100, replace=T)
      BIC(lm(x~1)) > BIC(lm(x~g))
    })
    mean(p)

    > mean(p)
    [1] 0.034

$p$ here is the proportion of times that the BIC of the null model (separate models) is better (lower) than the alternative model (equivalent model). This is remarkably close to the nominal 0.05 level of statistical tests. On the other hand if we take:

    x <- x + 0.4*g

Gives:

As with NHST there are subtle issues of power and false positive error rates that should be explored with simulation before making definitive conclusions. I think a similar (perhaps more general) method is using Bayesian stats to compare the posterior estimated under either probability model.

AdamO

AdamO you seem to be conflating "testing equality" with "testing equivalence". There is a decades old and solid literature in the methods and application of the latter. – Alexis Apr 3 '18 at 23:24

See, for example, Wellek, S. (2010). Testing Statistical Hypotheses of Equivalence and Noninferiority. Chapman and Hall/CRC Press, second edition. – Alexis Apr 3 '18 at 23:31

@Alexis hmm, we don't have access to a library unfortunately. Are you saying equivalence is the same as non-inferiority insofar as estimates lying within a margin are considered equivalent? – AdamO Apr 4 '18 at 14:33

Not quite: non-inferiority is a one-sided test of whether a new treatment performs no worse than some standard minus a smallest relevant difference that is specified a priori.
Tests for equivalence are tests of the null hypothesis that two (or more) quantities are different—in either direction—by more than a smallest relevant difference specified a priori. Some seminal papers: – Alexis Apr 4 '18 at 15:46

Schuirmann, D. A. (1987). A comparison of the two one-sided tests procedure and the power approach for assessing the equivalence of average bioavailability. Journal of Pharmacokinetics and Biopharmaceutics, 15(6):657–680. – Alexis Apr 4 '18 at 15:46
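Since Schuirmann's TOST comes up in the comments above: the procedure itself is only a few lines. Here is a minimal sketch in Python using a normal (z) approximation, with made-up numbers purely for illustration (this is a sketch of the general TOST recipe, not any particular package's implementation):

```python
import math

def norm_sf(z):
    """Upper-tail probability P(Z > z) for a standard normal."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def tost_z(theta_hat, se, delta):
    """Schuirmann-style TOST with a z approximation.

    Tests H0: |theta| >= delta against H1: |theta| < delta by splitting
    H0 into two one-sided hypotheses; the overall TOST p-value is the
    larger of the two one-sided p-values.
    """
    p1 = norm_sf((delta - theta_hat) / se)   # one-sided test of theta >= delta
    p2 = norm_sf((theta_hat + delta) / se)   # one-sided test of theta <= -delta
    return max(p1, p2)

# Estimated difference well inside the equivalence margin:
p_equiv = tost_z(theta_hat=0.1, se=0.2, delta=1.0)   # small p: conclude equivalence
# Estimated difference outside the margin:
p_diff = tost_z(theta_hat=1.5, se=0.2, delta=1.0)    # large p: cannot conclude equivalence
```

Rejecting (small max p) supports equivalence within $[-\Delta, \Delta]$; a large value is simply inconclusive, mirroring case (4) in the comment thread above.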
What's the difference between estimating equations and method of moments estimators?

From my understanding, both are estimators that are based on first providing an unbiased statistic $T(X)$ and obtaining the root to the equation: $$c(X) \left( T(X) - E(T(X)) \right) = 0$$

Secondly, both are in some sense "nonparametric" in that, regardless of what the actual probability model for $X$ may be, if you think of $T(\cdot)$ as a meaningful summary of the data, then you will be consistently estimating that "thing" regardless of whether that thing has any probabilistic connection with the actual probability model for the data (e.g. estimating the sample mean from Weibull distributed failure times without censoring).

However, method of moments seems to insinuate that the $T(X)$ of interest must be a moment for a readily assumed probability model; yet one estimates it with an estimating equation and not maximum likelihood (even though they may agree, as is the case for means of normally distributed random variables). Calling something a "moment" to me has the connotation of insinuating a probability model.

However, supposing for instance we have log normally distributed data, is the method of moments estimator for the 3rd central moment based on the 3rd sample moment, e.g. $$\hat{\mu_3} = \frac{1}{n}\sum_{i=1}^n \left( X_i - \bar{X} \right)^3$$

Or does one estimate the first and second moment, transform them to estimate the probability model parameters, $\mu$ and $\sigma$ (whose estimates I will denote with hat notation), and then use these estimates as plug-ins for the derived skewness of lognormal data, i.e.
$$ \hat{\mu_3} = \left( \exp \left( \hat{\sigma}^2 \right) + 2\right) \sqrt{\exp \left( \hat{\sigma}^2 \right) - 1}$$

mathematical-statistics estimation maximum-likelihood gee method-of-moments

kjetil b halvorsen

AdamO

The most common justification of the method of moments is simply the law of large numbers, which would seem to make your suggestion of estimating $\mu_3$ by $\hat{\mu}_3$ "method of moments" (and I'd be inclined to call it MoM in any case). However, a number of books and documents, such as this for example (and to some extent the wikipedia page on method of moments) imply that you take the lowest $k$ moments* and estimate the required quantities for the given probability model from that, as you imply by estimating $\mu_3$ from the first two moments.

*(where you need to estimate $k$ parameters to obtain the required quantity)

Ultimately, I guess it comes down to "who defines what counts as method of moments?" Do we look to Pearson? Do we look to the most common conventions? Do we accept any convenient definition? Any of those choices has problems, and benefits.

The interesting bit, to me, is whether one can always or almost always reparameterize a parametric family to characterize an estimation problem in EE as the solution to the moments of a (possibly bizarre) distribution function?

Clearly there are large classes of distribution for which method of moments would be useless. For an obvious example, the mean of the Cauchy distribution is undefined. Even when moments exist and are finite, there could be a large number of situations where the set of equations $f(\mathbf{\theta},\mathbf{y})=0$ has 0 solutions (think of some curve that never crosses the x-axis) or multiple solutions (one that crosses the axis repeatedly -- though multiple solutions aren't necessarily an insurmountable problem if you have a way to choose between them).
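The two routes in the question are easy to compare numerically. Here is a quick sketch in Python (standard library only; the simulation settings are my own) that estimates the lognormal skewness both directly from the sample moments and by plugging $\hat{\mu}, \hat{\sigma}$ into the model-derived skewness formula $(e^{\sigma^2}+2)\sqrt{e^{\sigma^2}-1}$:

```python
import math
import random

random.seed(1)
mu, sigma = 0.0, 0.5
n = 50_000
x = [math.exp(random.gauss(mu, sigma)) for _ in range(n)]

# Route 1: use the sample central moments of x directly.
xbar = sum(x) / n
m2 = sum((xi - xbar) ** 2 for xi in x) / n
m3 = sum((xi - xbar) ** 3 for xi in x) / n
skew_direct = m3 / m2 ** 1.5

# Route 2: estimate mu and sigma on the log scale, then plug into
# the skewness implied by the lognormal model.
logs = [math.log(xi) for xi in x]
mu_hat = sum(logs) / n
s2_hat = sum((v - mu_hat) ** 2 for v in logs) / n
skew_plugin = (math.exp(s2_hat) + 2) * math.sqrt(math.exp(s2_hat) - 1)

# True value for sigma = 0.5, for comparison.
skew_true = (math.exp(sigma ** 2) + 2) * math.sqrt(math.exp(sigma ** 2) - 1)
```

Both routes target the same quantity when the lognormal model is right; the plug-in route typically has much lower sampling variability here but leans on the model, while the direct sample moment does not.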
Of course, we also commonly see situations where a solution exists but doesn't lie in the parameter space (there may even be cases where there's never a solution in the parameter space, but I don't know of any -- it would be an interesting question to discover if some such cases exist). I imagine there can be more complicated situations still, though I don't have any in mind at the moment.

Glen_b -Reinstate Monica

The wikipedia article seems to say that MoM is used when you are interested in estimating the DF of iid data, $X_1, X_2, \ldots, X_n \sim_{iid} F$ where $F$ has a known parametric distribution and by estimating the necessary $k$ moments for solving systems of equations for the $p$ parameters, $p < k???$ you get the appropriate estimator. The interesting bit, to me, is whether one can always or almost always reparameterize a parametric family to characterize an estimation problem in EE as the solution to the moments of a (possibly bizarre) distribution function? – AdamO Nov 3 '14 at 19:18

Hi AdamO -- I wrote a comment but it started to get too long so I moved it up to the end of my answer. If you can expand on where that interest in your last sentence lies, I think that might make for some stimulating areas to explore (I don't claim to know any more than you on this, but I'd love to try to learn more). – Glen_b -Reinstate Monica Nov 3 '14 at 21:06

Estimating equations is a more general method; it doesn't specify where you get the estimating equation from. Maximum likelihood is also an example of estimating equations, as it leads to the score equation. Various forms of quasi- (or pseudo-) likelihood are other examples, as is the method of moments.
kjetil b halvorsen

Likelihoods with local extrema do not have unique solutions when solving their corresponding score functions (but directly maximizing the likelihood would address this), so to me, there are some pathological cases where ML is more general than EE... but for somewhat regular parametric models, their ML solution can be obtained with score equations as EEs. I am not as familiar with MoM. – AdamO Nov 3 '14 at 23:10
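As a concrete illustration of the "ML as an estimating equation" point (a standard textbook example, not from the thread): for i.i.d. Poisson($\lambda$) data, setting the score to zero gives the estimating equation

$$\sum_{i=1}^{n} \left( \frac{x_i}{\lambda} - 1 \right) = 0 \quad \Rightarrow \quad \hat{\lambda} = \bar{x},$$

which has the same root as the first-moment estimating equation $\sum_{i=1}^{n} (x_i - \lambda) = 0$ obtained by matching $E(X) = \lambda$ to the sample mean. In the notation of the question, the score equation is the moment equation multiplied by $c(X) = 1/\lambda$, so here ML, MoM, and the generic EE formulation all coincide.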
Active Calculus
Matthew Boelkins

Section 3.5 Related Rates

Motivating Questions

If two quantities that are related, such as the radius and volume of a spherical balloon, are both changing as implicit functions of time, how are their rates of change related? That is, how does the relationship between the values of the quantities affect the relationship between their respective derivatives with respect to time?
In most of our applications of the derivative so far, we have been interested in the instantaneous rate at which one variable, say \(y\text{,}\) changes with respect to another, say \(x\text{,}\) leading us to compute and interpret \(\frac{dy}{dx}\text{.}\) We next consider situations where several variable quantities are related, but where each quantity is implicitly a function of time, which will be represented by the variable \(t\text{.}\) Through knowing how the quantities are related, we will be interested in determining how their respective rates of change with respect to time are related.

For example, suppose that air is being pumped into a spherical balloon so that its volume increases at a constant rate of 20 cubic inches per second. Since the balloon's volume and radius are related, by knowing how fast the volume is changing, we ought to be able to discover how fast the radius is changing. We are interested in questions such as: can we determine how fast the radius of the balloon is increasing at the moment the balloon's diameter is 12 inches?

Preview Activity 3.5.1. A spherical balloon is being inflated at a constant rate of 20 cubic inches per second. How fast is the radius of the balloon changing at the instant the balloon's diameter is 12 inches? Is the radius changing more rapidly when \(d = 12\) or when \(d = 16\text{?}\) Why?

(a) Draw several spheres with different radii, and observe that as volume changes, the radius, diameter, and surface area of the balloon also change.
(b) Recall that the volume of a sphere of radius \(r\) is \(V = \frac{4}{3} \pi r^3\text{.}\) Note well that in the setting of this problem, both \(V\) and \(r\) are changing as time \(t\) changes, and thus both \(V\) and \(r\) may be viewed as implicit functions of \(t\text{,}\) with respective derivatives \(\frac{dV}{dt}\) and \(\frac{dr}{dt}\text{.}\) Differentiate both sides of the equation \(V = \frac{4}{3} \pi r^3\) with respect to \(t\) (using the chain rule on the right) to find a formula for \(\frac{dV}{dt}\) that depends on both \(r\) and \(\frac{dr}{dt}\text{.}\)

(c) At this point in the problem, by differentiating we have "related the rates" of change of \(V\) and \(r\text{.}\) Recall that we are given in the problem that the balloon is being inflated at a constant rate of 20 cubic inches per second. Is this rate the value of \(\frac{dr}{dt}\) or \(\frac{dV}{dt}\text{?}\) Why?

(d) From part (c), we know the value of \(\frac{dV}{dt}\) at every value of \(t\text{.}\) Next, observe that when the diameter of the balloon is 12, we know the value of the radius. In the equation \(\frac{dV}{dt} = 4\pi r^2 \frac{dr}{dt}\text{,}\) substitute these values for the relevant quantities and solve for the remaining unknown quantity, which is \(\frac{dr}{dt}\text{.}\) How fast is the radius changing at the instant \(d = 12\text{?}\)

(e) How is the situation different when \(d = 16\text{?}\) When is the radius changing more rapidly, when \(d = 12\) or when \(d = 16\text{?}\)

Subsection 3.5.1 Related Rates Problems

In problems where two or more quantities can be related to one another, and all of the variables involved are implicitly functions of time, \(t\text{,}\) we are often interested in how their rates are related; we call these related rates problems. Once we have an equation establishing the relationship among the variables, we differentiate implicitly with respect to time to find connections among the rates of change.

Example 3.5.1.
Sand is being dumped by a conveyor belt onto a pile so that the sand forms a right circular cone, as pictured in Figure 3.5.2. How are the instantaneous rates of change of the sand's volume, height, and radius related to one another? Figure 3.5.2. A conical pile of sand. As sand falls from the conveyor belt, several features of the sand pile will change: the volume of the pile will grow, the height will increase, and the radius will get bigger, too. All of these quantities are related to one another, and the rate at which each is changing is related to the rate at which sand falls from the conveyor. We begin by identifying which variables are changing and how they are related. In this problem, we observe that the radius and height of the pile are related to its volume by the standard equation for the volume of a cone, \begin{equation*} V = \frac{1}{3} \pi r^2 h\text{.} \end{equation*} Viewing each of \(V\text{,}\) \(r\text{,}\) and \(h\) as functions of \(t\text{,}\) we differentiate implicitly to arrive at an equation that relates their respective rates of change. Taking the derivative of each side of the equation with respect to \(t\text{,}\) we find \begin{equation*} \frac{d}{dt}[V] = \frac{d}{dt}\left[\frac{1}{3} \pi r^2 h\right]\text{.} \end{equation*} On the left, \(\frac{d}{dt}[V]\) is simply \(\frac{dV}{dt}\text{.}\) On the right, the situation is more complicated, as both \(r\) and \(h\) are implicit functions of \(t\text{.}\) Hence we need the product and chain rules. We find that \begin{align*} \frac{dV}{dt} &= \frac{d}{dt}\left[\frac{1}{3} \pi r^2 h\right]\\ &= \frac{1}{3} \pi r^2 \frac{d}{dt}[h] + \frac{1}{3} \pi h \frac{d}{dt}[r^2]\\ &= \frac{1}{3} \pi r^2 \frac{dh}{dt} + \frac{1}{3} \pi h 2r \frac{dr}{dt} \end{align*} (Note particularly how we are using ideas from Section 2.7 on implicit differentiation. 
There we found that when \(y\) is an implicit function of \(x\text{,}\) \(\frac{d}{dx}[y^2] = 2y \frac{dy}{dx}\text{.}\) The same principles are applied here when we compute \(\frac{d}{dt}[r^2] = 2r \frac{dr}{dt}\text{.}\)) The equation \begin{equation*} \frac{dV}{dt} = \frac{1}{3} \pi r^2 \frac{dh}{dt} + \frac{2}{3} \pi rh \frac{dr}{dt}\text{,} \end{equation*} relates the rates of change of \(V\text{,}\) \(h\text{,}\) and \(r\text{.}\) If we are given sufficient additional information, we may then find the value of one or more of these rates of change at a specific point in time.

In the setting of Example 3.5.1, suppose we also know the following: (a) sand falls from the conveyor in such a way that the height of the pile is always half the radius, and (b) sand falls from the conveyor belt at a constant rate of 10 cubic feet per minute. How fast is the height of the sandpile changing at the moment the radius is 4 feet?

The information that the height is always half the radius tells us that for all values of \(t\text{,}\) \(h = \frac{1}{2}r\text{.}\) Differentiating with respect to \(t\text{,}\) it follows that \(\frac{dh}{dt} = \frac{1}{2} \frac{dr}{dt}\text{.}\) These relationships enable us to relate \(\frac{dV}{dt}\) to just one of \(r\) or \(h\text{.}\) Substituting the expressions involving \(r\) and \(\frac{dr}{dt}\) for \(h\) and \(\frac{dh}{dt}\text{,}\) we now have that \begin{equation} \frac{dV}{dt} = \frac{1}{3} \pi r^2 \cdot \frac{1}{2} \frac{dr}{dt} + \frac{2}{3} \pi r \cdot \frac{1}{2}r \cdot \frac{dr}{dt}\text{.}\tag{3.5.1} \end{equation} Since sand falls from the conveyor at the constant rate of 10 cubic feet per minute, the value of \(\frac{dV}{dt}\text{,}\) the rate at which the volume of the sand pile changes, is \(\frac{dV}{dt} = 10\) ft\(^3\)/min.
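As an optional aside (not part of the original example), the substitution into Equation (3.5.1) can be double-checked with a computer algebra system such as SymPy. The sketch below sets \(r = 4\) and \(\frac{dV}{dt} = 10\) and solves for \(\frac{dr}{dt}\text{,}\) then uses \(\frac{dh}{dt} = \frac{1}{2}\frac{dr}{dt}\text{:}\)

```python
import sympy as sp

r, drdt = sp.symbols('r drdt', positive=True)

# Equation (3.5.1): dV/dt = (1/3) pi r^2 (1/2) dr/dt + (2/3) pi r (1/2) r dr/dt
dVdt = sp.Rational(1, 3) * sp.pi * r**2 * sp.Rational(1, 2) * drdt \
     + sp.Rational(2, 3) * sp.pi * r * sp.Rational(1, 2) * r * drdt

# Substitute r = 4, set dV/dt = 10 ft^3/min, and solve for dr/dt.
rate = sp.solve(sp.Eq(dVdt.subs(r, 4), 10), drdt)[0]
print(rate, float(rate))           # 5/(4*pi), about 0.39789 ft/min

# The height satisfies dh/dt = (1/2) dr/dt at all times.
print(rate / 2, float(rate / 2))   # 5/(8*pi), about 0.19894 ft/min
```

The output agrees with the hand computation that follows: the radius grows at \(\frac{10}{8\pi}\) feet per minute and the height at \(\frac{5}{8\pi}\) feet per minute when \(r = 4\text{.}\)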
We are interested in how fast the height of the pile is changing at the instant when \(r = 4\text{,}\) so we substitute \(r = 4\) and \(\frac{dV}{dt} = 10\) into Equation (3.5.1), to find \begin{equation*} 10 = \frac{1}{3} \pi 4^2 \cdot \frac{1}{2} \left. \frac{dr}{dt} \right|_{r=4} + \frac{2}{3} \pi 4 \cdot \frac{1}{2}4 \cdot \left. \frac{dr}{dt} \right|_{r=4} = \frac{8}{3}\pi \left. \frac{dr}{dt} \right|_{r=4} + \frac{16}{3} \pi \left. \frac{dr}{dt} \right|_{r=4}\text{.} \end{equation*} Only the value of \(\left. \frac{dr}{dt} \right|_{r=4}\) remains unknown. We combine like terms on the right side of the equation above to get \(10 = 8 \pi \left. \frac{dr}{dt} \right|_{r=4}\text{,}\) and solve for \(\left. \frac{dr}{dt} \right|_{r=4}\) to find \begin{equation*} \left. \frac{dr}{dt} \right|_{r=4} = \frac{10}{8\pi} \approx 0.39789 \end{equation*} feet per minute. Because we were interested in how fast the height of the pile was changing at this instant, we want to know \(\frac{dh}{dt}\) when \(r = 4\text{.}\) Since \(\frac{dh}{dt} = \frac{1}{2} \frac{dr}{dt}\) for all values of \(t\text{,}\) it follows \begin{equation*} \left. \frac{dh}{dt} \right|_{r=4} = \frac{5}{8\pi} \approx 0.19894 \ \text{ft/min}\text{.} \end{equation*} Note the difference between the notations \(\frac{dr}{dt}\) and \(\left.
\frac{dr}{dt} \right|_{r=4}\text{.}\) The former represents the rate of change of \(r\) with respect to \(t\) at an arbitrary value of \(t\text{,}\) while the latter is the rate of change of \(r\) with respect to \(t\) at a particular moment, the moment when \(r = 4\text{.}\)

Had we known that \(h = \frac{1}{2}r\) at the beginning of Example 3.5.1, we could have immediately simplified our work by writing \(V\) solely in terms of \(r\) to have \begin{equation*} V = \frac{1}{3} \pi r^2 \left(\frac{1}{2}r\right) = \frac{1}{6} \pi r^3\text{.} \end{equation*} From this last equation, differentiating with respect to \(t\) implies \begin{equation*} \frac{dV}{dt} = \frac{1}{2} \pi r^2 \frac{dr}{dt}\text{,} \end{equation*} from which the same conclusions can be made.

Our work with the sandpile problem above is similar in many ways to our approach in Preview Activity 3.5.1, and these steps are typical of most related rates problems. In certain ways, they also resemble work we do in applied optimization problems, and here we summarize the main approach for consideration in subsequent problems.

Note 3.5.4.

1. Identify the quantities in the problem that are changing and choose clearly defined variable names for them. Draw one or more figures that clearly represent the situation.
2. Determine all rates of change that are known or given and identify the rate(s) of change to be found.
3. Find an equation that relates the variables whose rates of change are known to those variables whose rates of change are to be found.
4. Differentiate implicitly with respect to \(t\) to relate the rates of change of the involved quantities.
5. Evaluate the derivatives and variables at the information relevant to the instant at which a certain rate of change is sought.
6. Use proper notation to identify when a derivative is being evaluated at a particular instant, such as \(\left. \frac{dr}{dt} \right|_{r=4}\text{.}\)

When identifying variables and drawing a picture, it is important to think about the dynamic ways in which the quantities change. Sometimes a sequence of pictures can be helpful; for some pictures that can be easily modified as applets built in Geogebra, see the following links, which represent how a circular oil slick's area grows as its radius increases; how the location of the base of a ladder and its height along a wall change as the ladder slides; how the water level changes in a conical tank as it fills with water at a constant rate (compare the problem in Activity 3.5.2); and how a skateboarder's shadow changes as he moves past a lamppost. Drawing well-labeled diagrams and envisioning how different parts of the figure change is a key part of understanding related rates problems and being successful at solving them.

Activity 3.5.2. A water tank has the shape of an inverted circular cone (point down) with a base of radius 6 feet and a depth of 8 feet. Suppose that water is being pumped into the tank at a constant instantaneous rate of 4 cubic feet per minute.

(a) Draw a picture of the conical tank, including a sketch of the water level at a point in time when the tank is not yet full. Introduce variables that measure the radius of the water's surface and the water's depth in the tank, and label them on your figure.
(b) Say that \(r\) is the radius and \(h\) the depth of the water at a given time, \(t\text{.}\) What equation relates the radius and height of the water, and why?
(c) Determine an equation that relates the volume of water in the tank at time \(t\) to the depth \(h\) of the water at that time.
(d) Through differentiation, find an equation that relates the instantaneous rate of change of water volume with respect to time to the instantaneous rate of change of water depth at time \(t\text{.}\)
(e) Find the instantaneous rate at which the water level is rising when the water in the tank is 3 feet deep.
(f) When is the water rising most rapidly: at \(h = 3\text{,}\) \(h = 4\text{,}\) or \(h = 5\text{?}\)

Hint. Think about similar triangles. Recall that the volume of a cone is \(V = \frac{1}{3} \pi r^2 h\text{.}\) Remember to differentiate implicitly with respect to \(t\text{.}\) Use \(h = 3\) and the fact that the value of \(\frac{dV}{dt}\) is given. Consider the linked visualization. Why does this phenomenon occur?

Recognizing which geometric relationships are relevant in a given problem is often the key to finding the equation that relates the quantities involved. For instance, although the problem in Activity 3.5.2 is about a conical tank, the most important fact is that there are two similar right triangles involved. In another setting, we might use the Pythagorean Theorem to relate the legs of the triangle. But in the conical tank, the fact that the water fills the tank so that the ratio of radius to depth is constant turns out to be the important relationship. In other situations where a changing angle is involved, trigonometric functions may provide the means to find relationships among various parts of the triangle.

A television camera is positioned 4000 feet from the base of a rocket launching pad. The angle of elevation of the camera has to change at the correct rate in order to keep the rocket in sight. In addition, the auto-focus of the camera has to take into account the increasing distance between the camera and the rocket. We assume that the rocket rises vertically. (A similar problem is discussed and pictured dynamically; exploring the applet at the link will be helpful to you in answering the questions that follow.)

(a) Draw a figure that summarizes the given situation. What parts of the picture are changing? What parts are constant? Introduce appropriate variables to represent the quantities that are changing.

(b) Find an equation that relates the camera's angle of elevation to the height of the rocket, and then find an equation that relates the instantaneous rate of change of the camera's elevation angle to the instantaneous rate of change of the rocket's height (where all rates of change are with respect to time).

(c) Find an equation that relates the distance from the camera to the rocket to the rocket's height, as well as an equation that relates the instantaneous rate of change of distance from the camera to the rocket to the instantaneous rate of change of the rocket's height (where all rates of change are with respect to time).

(d) Suppose that the rocket's speed is 600 ft/sec at the instant it has risen 3000 feet. How fast is the distance from the television camera to the rocket changing at that moment? If the camera is following the rocket, how fast is the camera's angle of elevation changing at that same moment?

(e) If from an elevation of 3000 feet onward the rocket continues to rise at 600 feet/sec, will the rate of change of distance with respect to time be greater when the elevation is 4000 feet than it was at 3000 feet, or less? Why?

Hint. Let \(\theta\) represent the camera angle and note that one leg of the right triangle is constant. Which two are changing? Think trigonometrically. Think like Pythagoras. Use the facts that \(h = 3000\) and \(\frac{dh}{dt} = 600\) in your preceding work. You can answer this question intuitively or by changing the value of \(h\) in your work in (d).

In addition to finding instantaneous rates of change at particular points in time, we can often make more general observations about how particular rates themselves will change over time. For instance, when a conical tank is filling with water at a constant rate, it seems obvious that the depth of the water should increase more slowly over time.
Note how carefully we must phrase the relationship: we mean to say that while the depth, \(h\text{,}\) of the water is increasing, its rate of change, \(\frac{dh}{dt}\text{,}\) is decreasing (both as a function of \(t\) and as a function of \(h\)). We make this observation by solving the equation that relates the various rates for one particular rate, without substituting any particular values for known variables or rates. For instance, in the conical tank problem in Activity 3.5.2, where the radius of the water's surface is always \(\frac{3}{4}\) of its depth, we established that \begin{equation*} \frac{dV}{dt} = \frac{9}{16} \pi h^2 \frac{dh}{dt}\text{,} \end{equation*} and hence \begin{equation*} \frac{dh}{dt} = \frac{16}{9 \pi h^2} \frac{dV}{dt}\text{.} \end{equation*} Provided that \(\frac{dV}{dt}\) is constant, it is immediately apparent that as \(h\) gets larger, \(\frac{dh}{dt}\) will get smaller but remain positive. Hence, the depth of the water is increasing at a decreasing rate.

As pictured in the linked applet, a skateboarder who is 6 feet tall rides under a 15 foot tall lamppost at a constant rate of 3 feet per second. We are interested in understanding how fast his shadow is changing at various points in time.

(a) Draw an appropriate right triangle that represents a snapshot in time of the skateboarder, lamppost, and his shadow. Let \(x\) denote the horizontal distance from the base of the lamppost to the skateboarder and \(s\) represent the length of his shadow. Label these quantities, as well as the skateboarder's height and the lamppost's height on the diagram.

(b) Observe that the skateboarder and the lamppost represent parallel line segments in the diagram, and thus similar triangles are present. Use similar triangles to establish an equation that relates \(x\) and \(s\text{.}\)

(c) Use your work in (b) to find an equation that relates \(\frac{dx}{dt}\) and \(\frac{ds}{dt}\text{.}\)

(d) At what rate is the length of the skateboarder's shadow increasing at the instant the skateboarder is 8 feet from the lamppost?

(e) As the skateboarder's distance from the lamppost increases, is his shadow's length increasing at an increasing rate, increasing at a decreasing rate, or increasing at a constant rate?

(f) Which is moving more rapidly: the skateboarder or the tip of his shadow? Explain, and justify your answer.

Hint. Note that the lengths of the legs of the right triangle will be \(15\) for the vertical one and \(x + s\) for the horizontal one. The small triangle formed by the skateboarder and his shadow, with legs \(6\) and \(s\text{,}\) is similar to the large triangle that has the lamppost as one of its legs. Simplify the equation in (b) as much as possible before differentiating implicitly with respect to \(t\text{.}\) Find \(\left. \frac{ds}{dt} \right|_{x=8}\text{.}\) Does the equation that relates \(\frac{dx}{dt}\) and \(\frac{ds}{dt}\) involve \(x\text{?}\) Is \(\frac{dx}{dt}\) changing or constant? Let \(y\) represent the location of the tip of the shadow, so that \(y = x + s\text{.}\)

In the first three activities of this section, we provided guided instruction to build a solution in a step by step way. For the closing activity and the following exercises, most of the detailed work is left to the reader.

A baseball diamond is \(90'\) square. A batter hits a ball along the third base line and runs to first base. At what rate is the distance between the ball and first base changing when the ball is halfway to third base, if at that instant the ball is traveling \(100\) feet/sec? At what rate is the distance between the ball and the runner changing at the same instant, if at the same instant the runner is \(1/8\) of the way to first base running at \(30\) feet/sec?

Hint. Let \(x\) denote the position of the ball along the third base line at time \(t\text{,}\) and \(z\) the distance from the ball to first base. Note that the basepaths meet at 90 degree angles.
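As an optional check (not part of the original activity), the first question about the ball and first base can be sketched in SymPy. Since the basepaths meet at right angles, the Pythagorean Theorem gives \(z = \sqrt{x^2 + 90^2}\text{,}\) and differentiating with respect to \(t\) relates \(\frac{dz}{dt}\) to \(\frac{dx}{dt}\text{:}\)

```python
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')(t)       # ball's distance down the third-base line (ft)
z = sp.sqrt(x**2 + 90**2)     # distance from the ball to first base, by Pythagoras

dzdt = sp.diff(z, t)          # chain rule: dz/dt = x * dx/dt / sqrt(x^2 + 8100)

# Halfway to third base: x = 45 ft, with the ball traveling 100 ft/sec.
val = dzdt.subs(sp.Derivative(x, t), 100).subs(x, 45)
print(sp.simplify(val), float(val))   # 20*sqrt(5), about 44.72 ft/sec
```

So at that instant the ball-to-first-base distance is growing at \(20\sqrt{5} \approx 44.7\) feet per second; the ball-to-runner question in the activity follows the same pattern, with both legs of the triangle changing.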
Subsection 3.5.2 Summary

When two or more related quantities are changing as implicit functions of time, their rates of change can be related by implicitly differentiating the equation that relates the quantities themselves. For instance, if the sides of a right triangle are all changing as functions of time, say having lengths \(x\text{,}\) \(y\text{,}\) and \(z\text{,}\) then these quantities are related by the Pythagorean Theorem: \(x^2 + y^2 = z^2\text{.}\) It follows by implicitly differentiating with respect to \(t\) that their rates are related by the equation \begin{equation*} 2x \frac{dx}{dt} + 2y\frac{dy}{dt} = 2z \frac{dz}{dt}\text{,} \end{equation*} so that if we know the values of \(x\text{,}\) \(y\text{,}\) and \(z\) at a particular time, as well as two of the three rates, we can deduce the value of the third.

Exercises 3.5.3 Exercises

1. Height of a conical pile of gravel. Gravel is being dumped from a conveyor belt at a rate of \(10\) cubic feet per minute. It forms a pile in the shape of a right circular cone whose base diameter and height are always the same. How fast is the height of the pile increasing when the pile is \(23\) feet high? Recall that the volume of a right circular cone with height \(h\) and radius of the base \(r\) is given by \(V= \frac{1}{3}\pi r^2h\text{.}\) When the pile is \(23\) feet high, its height is increasing at ______ feet per minute.

2. Movement of a shadow. A street light is at the top of a 13 foot tall pole. A 6 foot tall woman walks away from the pole with a speed of 6 ft/sec along a straight path. How fast is the tip of her shadow moving when she is 30 feet from the base of the pole? The tip of the shadow is moving at ______ ft/sec.

3. A leaking conical tank. Water is leaking out of an inverted conical tank at a rate of \(9600.0\) \(\textrm{cm}^3/\textrm{min}\) at the same time that water is being pumped into the tank at a constant rate.
The tank has height \(7.0 \ \textrm{m}\) and the diameter at the top is \(5.0 \ \textrm{m}\text{.}\) If the water level is rising at a rate of \(22.0 \ \textrm{cm}/\textrm{min}\) when the height of the water is \(1.5 \ \textrm{m}\text{,}\) find the rate at which water is being pumped into the tank in cubic centimeters per minute. Answer: ______ \(\textrm{cm}^3/\textrm{min}\)

Hint. Let \(R\) be the unknown rate at which water is being pumped in. Then you know that if \(V\) is the volume of water, \(\frac{dV}{dt}=R-9600.0\text{.}\) Use geometry (similar triangles) to find the relationship between the height of the water and the volume of the water at any given time. Recall that the volume of a cone with base radius \(r\) and height \(h\) is given by \(\frac{1}{3} \pi r^2 h\text{.}\)

4. A sailboat is sitting at rest near its dock. A rope attached to the bow of the boat is drawn in over a pulley that stands on a post on the end of the dock that is 5 feet higher than the bow. If the rope is being pulled in at a rate of 2 feet per second, how fast is the boat approaching the dock when the length of rope from bow to pulley is 13 feet?

5. A swimming pool is \(60\) feet long and \(25\) feet wide. Its depth varies uniformly from \(3\) feet at the shallow end to \(15\) feet at the deep end, as shown in Figure 3.5.5.

Figure 3.5.5. The swimming pool.

Suppose the pool has been emptied and is now being filled with water at a rate of \(800\) cubic feet per minute. At what rate is the depth of water (measured at the deepest point of the pool) increasing when it is \(5\) feet deep at that end? Over time, describe how the depth of the water will increase: at an increasing rate, at a decreasing rate, or at a constant rate. Explain.

6. A baseball diamond is a square with sides \(90\) feet long. Suppose a baseball player is advancing from second to third base at the rate of \(24\) feet per second, and an umpire is standing on home plate.
Let \(\theta\) be the angle between the third baseline and the line of sight from the umpire to the runner. How fast is \(\theta\) changing when the runner is \(30\) feet from third base?

7. Sand is being dumped off a conveyor belt onto a pile in such a way that the pile forms in the shape of a cone whose radius is always equal to its height. Assuming that the sand is being dumped at a rate of \(10\) cubic feet per minute, how fast is the height of the pile changing when there are \(1000\) cubic feet on the pile?

Links referenced in this section: we again refer to the work of Prof. Marc Renault of Shippensburg University, found at gvsu.edu/s/5p. The individual applets are at gvsu.edu/s/9n, gvsu.edu/s/9o, gvsu.edu/s/9p, gvsu.edu/s/9q, and gvsu.edu/s/9t.

Hosted on Runestone
A class of fourth-order hyperbolic equations with strongly damped and nonlinear logarithmic terms. Electronic Research Archive, 2021, 29 (6) : 3867-3887. doi: 10.3934/era.2021066 Jianhua Huang, Yanbin Tang, Ming Wang. Singular support of the global attractor for a damped BBM equation. Discrete & Continuous Dynamical Systems - B, 2021, 26 (10) : 5321-5335. doi: 10.3934/dcdsb.2020345 Antonio Segatti. Global attractor for a class of doubly nonlinear abstract evolution equations. Discrete & Continuous Dynamical Systems, 2006, 14 (4) : 801-820. doi: 10.3934/dcds.2006.14.801 Francesca Bucci, Igor Chueshov, Irena Lasiecka. Global attractor for a composite system of nonlinear wave and plate equations. Communications on Pure & Applied Analysis, 2007, 6 (1) : 113-140. doi: 10.3934/cpaa.2007.6.113 Hiroshi Takeda. Global existence of solutions for higher order nonlinear damped wave equations. Conference Publications, 2011, 2011 (Special) : 1358-1367. doi: 10.3934/proc.2011.2011.1358 Masahoto Ohta, Grozdena Todorova. Remarks on global existence and blowup for damped nonlinear Schrödinger equations. Discrete & Continuous Dynamical Systems, 2009, 23 (4) : 1313-1325. doi: 10.3934/dcds.2009.23.1313 Sandra Lucente. Global existence for equivalent nonlinear special scale invariant damped wave equations. Discrete & Continuous Dynamical Systems - S, 2021 doi: 10.3934/dcdss.2021159 Davit Martirosyan. Exponential mixing for the white-forced damped nonlinear wave equation. Evolution Equations & Control Theory, 2014, 3 (4) : 645-670. doi: 10.3934/eect.2014.3.645 Olivier Goubet, Ezzeddine Zahrouni. On a time discretization of a weakly damped forced nonlinear Schrödinger equation. Communications on Pure & Applied Analysis, 2008, 7 (6) : 1429-1442. doi: 10.3934/cpaa.2008.7.1429 Milena Stanislavova. On the global attractor for the damped Benjamin-Bona-Mahony equation. Conference Publications, 2005, 2005 (Special) : 824-832. doi: 10.3934/proc.2005.2005.824 Zhijian Yang, Zhiming Liu. 
Global attractor for a strongly damped wave equation with fully supercritical nonlinearities. Discrete & Continuous Dynamical Systems, 2017, 37 (4) : 2181-2205. doi: 10.3934/dcds.2017094 Brahim Alouini. Global attractor for a one dimensional weakly damped half-wave equation. Discrete & Continuous Dynamical Systems - S, 2021, 14 (8) : 2655-2670. doi: 10.3934/dcdss.2020410 Brahim Alouini. Finite dimensional global attractor for a class of two-coupled nonlinear fractional Schrödinger equations. Evolution Equations & Control Theory, 2021 doi: 10.3934/eect.2021013 Varga K. Kalantarov, Edriss S. Titi. Global stabilization of the Navier-Stokes-Voight and the damped nonlinear wave equations by finite number of feedback controllers. Discrete & Continuous Dynamical Systems - B, 2018, 23 (3) : 1325-1345. doi: 10.3934/dcdsb.2018153 Olivier Goubet Ezzeddine Zahrouni
CommonCrawl
Multiset filters Amr Zakaria (ORCID: orcid.org/0000-0003-4071-0042), Sunil Jacob John & K. P. Girish A multiset is a collection of objects in which repetition of elements is essential. This paper is an attempt to generalize the notion of filters in the multiset context. In addition, many deviations between multiset filters and ordinary filters have been presented. The relation between multiset filters and multiset ideals has been mentioned. Many properties of multiset filters, multiset ultrafilters, and convergence of multiset filters have been introduced. Also, the notions of basis and subbasis have been mentioned in the multiset context. Finally, several examples have been studied. In classical set theory, a set is a well-defined collection of distinct objects. If repeated occurrences of any object are allowed in a set, then the resulting mathematical structure is known as a multiset (mset [1] or bag [2], for short). Thus, an mset differs from a set in the sense that each element has a multiplicity: a natural number, not necessarily one, that indicates how many times it is a member of the mset. One of the most natural and simplest examples is the mset of prime factors of a positive integer n. The number 400 has the factorization 400=2^4·5^2, which gives the mset {2,2,2,2,5,5}. Also, the cubic equation x^3−5x^2+3x+9=0 has roots 3, 3, and −1, which give the mset {3,3,−1}. Classical set theory is a basic concept to represent various situations in mathematical notation where repeated occurrences of elements are not allowed. But in various circumstances, repetition of elements becomes mandatory to the system. For example, there are loops in a graph, many hydrogen atoms, many water molecules, many strands of identical DNA, etc. This leads to effectively three possible relations between any two physical objects: they are different, they are the same but separate, or they are coinciding and identical.
For example, ammonia NH3 has three hydrogen atoms, say H, H, and H, and one nitrogen atom, say N. Clearly, H and N are different. However, H, H, and H are the same but separate, while H and H are coinciding and identical. There are many other examples, for instance, carbon dioxide CO2, sulfuric acid H2SO4, and water H2O. This paper is an attempt to explore the theoretical aspects of msets by extending the notions of filters, ultrafilters, and convergence of filters to the mset context. The "Preliminaries and basic definitions" section has a collection of all basic definitions and notions for further study. In the "On multiset topologies" section, examples of new mset topologies are introduced. In the "Filters in multiset context" section, the notion of mset filters has been introduced. Further, many properties of this notion have been mentioned. In the "Basis and subbasis in multiset filters" section, basis and subbasis of mset filters are mentioned. In the "Multiset ultrafilter" section, the concept of mset ultrafilter has been presented and several examples and properties of this notion are introduced. In the "Convergence of multiset filters" section, convergence of mset filters and its properties are studied. Preliminaries and basic definitions In this section, a brief survey of the notion of msets as introduced by Yager [2], Blizard [1, 3], and Jena et al. [4] has been collected. Furthermore, the different types of collections of msets, the basic definitions, and notions of relations and functions in the mset context as introduced by Girish and John [5–8] are presented. Other important research about multiset theory and its applications can be found in [9–16]. Definition 1 A collection of elements containing duplicates is called an mset. Formally, if X is a set of elements, an mset M drawn from the set X is represented by a function count M or CM defined as \(C_{M}: X\rightarrow \mathbb {N}\) where \(\mathbb {N}\) represents the set of nonnegative integers.
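As a quick illustration of the count function \(C_{M}\), the sketch below builds the mset of prime factors of 400 from the introduction, using Python's `collections.Counter` as a stand-in for an mset (the helper name `prime_factor_mset` is ours, not from the paper):

```python
from collections import Counter

def prime_factor_mset(n):
    """Return the mset of prime factors of n as a Counter (the count function C_M)."""
    factors = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

M = prime_factor_mset(400)   # 400 = 2^4 * 5^2
print(M)                     # Counter({2: 4, 5: 2}), i.e. the mset {2,2,2,2,5,5}
print(M[2], M[5], M[3])      # C_M(2)=4, C_M(5)=2, and C_M(3)=0 for an absent element
```

Elements not in the mset get count zero automatically, matching the convention stated above.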
Let M be an mset from the set X={x1,x2,…,xn} with x appearing n times in M. It is denoted by x∈nM. The mset M drawn from the set X is denoted by M={k1/x1,k2/x2,…,kn/xn} where M is an mset with x1 appearing k1 times, x2 appearing k2 times, and so on. In Definition 1, CM(x) is the number of occurrences of the element x in the mset M. However, those elements which are not included in the mset M have zero count. An mset M is a set if CM(x)=0 or 1 ∀ x∈X. A domain X is defined as a set of elements from which msets are constructed. The mset space [X]m is the set of all msets whose elements are in X such that no element in the mset occurs more than m times. The set [X]∞ is the set of all msets over a domain X such that there is no limit on the number of occurrences of an element in an mset. Let M,N∈[X]m. Then, the following are defined: M is a submset of N, denoted by (M⊆N), if CM(x)≤CN(x) ∀ x∈X. M=N if M⊆N and N⊆M. M is a proper submset of N, denoted by (M⊂N), if CM(x)≤CN(x) ∀ x∈X and there exists at least one element x∈X such that CM(x)<CN(x). P=M∪N if CP(x)= max{CM(x),CN(x)} for all x∈X. P=M∩N if CP(x)= min{CM(x),CN(x)} for all x∈X. Addition of M and N results in a new mset P=M⊕N such that CP(x)= min{CM(x)+CN(x),m} for all x∈X. Subtraction of M and N results in a new mset P=M⊖N such that CP(x)= max{CM(x)−CN(x),0} for all x∈X, where ⊕ and ⊖ represent mset addition and mset subtraction, respectively. An mset M is empty if CM(x)=0 ∀ x∈X. The support set of M, denoted by M∗, is a subset of X and M∗={x∈X:CM(x)>0}; that is, M∗ is an ordinary set and it is also called the root set. The cardinality of an mset M drawn from a set X is Card\((M)=\sum \limits _{x\in X} C_{M}(x)\). M and N are said to be equivalent if and only if Card(M)=Card(N). Let M∈[X]m and N⊆M. Then, the complement Nc of N in [X]m is an element of [X]m such that Nc=M⊖N. A submset N of M is a whole submset of M with each element in N having full multiplicity as in M; that is, CN(x)=CM(x) for every x∈N∗.
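The operations just defined translate directly into code. The following sketch (function names are ours) models msets as `Counter`s over the mset space [X]m; zero counts are stripped so that equality behaves the same across Python versions:

```python
from collections import Counter

def _clean(counts):
    # Drop zero counts so Counters compare cleanly as msets.
    return Counter({x: c for x, c in counts.items() if c > 0})

def mset_union(M, N):          # C_P(x) = max{C_M(x), C_N(x)}
    return _clean({x: max(M[x], N[x]) for x in set(M) | set(N)})

def mset_intersection(M, N):   # C_P(x) = min{C_M(x), C_N(x)}
    return _clean({x: min(M[x], N[x]) for x in set(M) | set(N)})

def mset_add(M, N, m):         # C_P(x) = min{C_M(x) + C_N(x), m}
    return _clean({x: min(M[x] + N[x], m) for x in set(M) | set(N)})

def mset_subtract(M, N):       # C_P(x) = max{C_M(x) - C_N(x), 0}
    return _clean({x: max(M[x] - N[x], 0) for x in set(M)})

def is_submset(M, N):          # M ⊆ N iff C_M(x) <= C_N(x) for all x
    return all(c <= N[x] for x, c in M.items())

M = Counter({'a': 2, 'b': 3})
N = Counter({'a': 1, 'b': 4})
print(mset_union(M, N))        # Counter({'b': 4, 'a': 2})
print(mset_subtract(M, N))     # Counter({'a': 1}): the 'b' count clips at 0
print(is_submset(Counter({'a': 1, 'b': 1}), M))  # True
```

Note how ⊕ caps counts at the bound m of the mset space, while ⊖ clips negative counts at zero, exactly as in the definitions above.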
A submset N of M is a partial whole submset of M with at least one element in N having full multiplicity as in M; that is, CN(x)=CM(x) for some x∈N∗. A submset N of M is a full submset of M if each element in M is an element in N with the same or lesser non-zero multiplicity as in M; that is, M∗=N∗ with CN(x)≤CM(x) for every x∈N∗. Let M∈[X]m. The power whole mset of M, denoted by PW(M), is defined as the set of all whole submsets of M. Let M∈[X]m. The power full mset of M, PF(M), is defined as the set of all full submsets of M. The cardinality of PF(M) is the product of the counts of the elements in M. Let M∈[X]m. The power mset P(M) of M is the set of all submsets of M. We have N∈P(M) if and only if N⊆M. If N=ϕ, then N∈1P(M), and if N≠ϕ, then N∈kP(M) such that \(k=\prod _{z}\dbinom {|[M]_{z}|}{|[N]_{z}|}\), where the product \(\prod _{z}\) is taken over distinct elements z of the mset N, |[M]z|=m iff z∈mM, and |[N]z|=n iff z∈nN; then \(\dbinom {|[M]_{z}|}{|[N]_{z}|}=\dbinom {m}{n}=\frac {m!}{n!(m-n)!}\). The power set of an mset is the support set of the power mset and is denoted by P∗(M). Definition 10 Let M1 and M2 be two msets drawn from a set X, then the Cartesian product of M1 and M2 is defined as M1×M2={(m/x,n/y)/mn:x∈mM1, y∈nM2}. Here, the entry (m/x,n/y)/mn in M1×M2 denotes that x is repeated m times in M1, y is repeated n times in M2, and the pair (x,y) is repeated mn times in M1×M2. A submset R of M1×M2 is said to be an mset relation on M if every member (m/x,n/y) of R has a count, the product of C1(x,y) and C2(x,y). m/x related to n/y is denoted by (m/x)R(n/y).
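The multiplicity formula \(k=\prod _{z}\dbinom {|[M]_{z}|}{|[N]_{z}|}\) for how many times a submset N occurs in the power mset P(M) can be evaluated directly; a minimal sketch (the helper name `submset_multiplicity` is ours):

```python
from collections import Counter
from math import comb

def submset_multiplicity(N, M):
    """k such that N occurs k times in the power mset P(M):
    k = product over distinct z in N of C(|[M]_z|, |[N]_z|)."""
    k = 1
    for z, n in N.items():
        k *= comb(M[z], n)   # binomial coefficient C(m, n)
    return k

M = Counter({'a': 2, 'b': 3})
print(submset_multiplicity(Counter({'a': 1, 'b': 2}), M))  # C(2,1)*C(3,2) = 6
print(submset_multiplicity(Counter(), M))                  # empty product = 1
```

The empty product conventionally gives 1, matching the statement that the empty submset occurs once in P(M).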
The domain of the mset relation R on M is defined as follows: $$Dom(R)=\{x\in^{r}M: \exists\;y\in^{s}M\;such\;that\;(r/x)R(s/y)\}, \;where$$ $$C_{Dom\;(R)}(x)=Sup\{C_{1}(x,y):x\in^{r}M\}.$$ Also, the range of the mset relation R on M is defined as follows: $$Ran(R)=\{y\in^{s}M: \exists\;x\in^{r}M\;such\;that\;(r/x)R(s/y)\},\; where$$ $$C_{Ran\;(R)}(y)=Sup\{C_{2}(x,y):y\in^{s}M\}.$$ An mset relation f is called an mset function if for every element m/x in Dom f, there is exactly one n/y in Ran f such that (m/x,n/y) is in f, with the pair occurring with count the product of C1(x,y) and C2(x,y). An mset function f is one-one (injective) if no two elements in Dom f have the same image under f, with C1(x,y)≤C2(x,y) for all (x,y) in f; that is, if m1/x1,m2/x2 in Dom f and m1/x1≠m2/x2, then f(m1/x1)≠f(m2/x2). Thus, a one-one mset function maps distinct elements of the domain to distinct elements of the range. An mset function f is onto (surjective) if Ran f is equal to co-dom f and C1(x,y)≥C2(x,y) for all (x,y) in f. It may be noted that images of distinct elements of the domain need not be distinct elements of the range. Let M be an mset drawn from a set X and τ⊆P∗(M). Then, τ is called an mset topology if τ satisfies the following properties: ϕ and M are in τ. The union of the elements of any subcollection of τ is in τ. The intersection of the elements of any finite subcollection of τ is in τ. An mset topological space is an ordered pair (M,τ) consisting of an mset M and an mset topology τ⊆P∗(M) on M. Note that τ is an ordinary set whose elements are msets, and the mset topology is abbreviated as an M-topology. Also, a submset U of M is an open mset of M if U belongs to the collection τ. Moreover, a submset N of M is a closed mset if M⊖N is an open mset. Let (M,τ) be an M-topological space and N be a submset of M.
Then, the interior of N is defined as the mset union of all open msets contained in N and is denoted by No; that is, No=∪{V⊆M:V is an open mset and V⊆N} and \(C_{N^{o}}(x)=\max \{C_{V}(x): V\subseteq N\}\). Let (M,τ) be an M-topological space and N be a submset of M. Then, the closure of N is defined as the mset intersection of all closed msets containing N and is denoted by \(\overline {N}\); that is, \(\overline {N}=\cap \{K\subseteq M: K\) is a closed mset and N⊆K} and \(C_{\overline {N}}(x)=\min \{C_{K}(x): N\subseteq K\}\). An mset M is called simple if all its elements are the same, for example, {k/x}. In addition, k/x is called a simple multipoint (mpoint, for short). Let (M,τ) be an M-topological space, x∈kM, and N⊆M. Then, N is said to be a neighborhood of k/x if there is an open mset V in τ such that x∈kV and CV(y)≤CN(y) for all y≠x; that is, \(\mathcal {N}_{k/x}=\{N\subseteq M: \exists \;V\in \tau \;\)such that x∈kV and CV(y)≤CN(y) for all y≠x} is the collection of all τ-neighborhoods of k/x. On multiset topologies Let f:M1→M2 be an mset function, V⊆M2 and N⊆M1. Then: f−1(M2⊖V)=f−1(M2)⊖f−1(V). N⊆f−1(f(N)), equality holds if f is one-one. f(f−1(V))⊆V, equality holds if f is onto. Let x∈kf−1(M2⊖V). Hence f(k/x)∈(M2⊖V). So f(k/x)∉V; that is, k/x∈f−1(M2) and k/x∉f−1(V). Thus, f−1(M2⊖V)⊆f−1(M2)⊖f−1(V). Also, let x∈kf−1(M2)⊖f−1(V). It follows that f(k/x)∈M2 and f(k/x)∉V. Consequently, x∈kf−1(M2⊖V). Therefore, f−1(M2)⊖f−1(V)⊆f−1(M2⊖V). Hence, f−1(M2⊖V)=f−1(M2)⊖f−1(V). Let x∈kN. Hence, f(k/x)∈f(N). So x∈kf−1(f(N)), and hence, N⊆f−1(f(N)). Now, let f be one-one and x∈kf−1(f(N)). It follows that f(k/x)∈f(N). So there exists y∈rN such that f(k/x)=f(r/y). Since f is one-one, then k/x=r/y. Therefore, x∈kN. Thus, if f is one-one, then f−1(f(N))⊆N. Let x∈kf(f−1(V)). It follows that there exists y∈rf−1(V) such that f(k/x)=f(r/y) and f(r/y)∈V. So f(k/x)∈V. Thus, f(f−1(V))⊆V. Also, suppose f is onto and let x∈kV. Since f is onto, there exists r/y∈f−1(V) such that f(r/y)=k/x, so k/x∈f(f−1(V)).
Therefore, if f is onto, then V⊆f(f−1(V)). Let N1 and N2 be submsets of an mset M. Then: If \(C_{(N_{1}\cap N_{2})}(x)=0\) for all x∈M∗, then \(C_{N_{1}}(x)\leq C_{(M\ominus N_{2})}(x)\) for all x∈M∗. \(C_{N_{1}}(x)\leq C_{N_{2}}(x)\Leftrightarrow C_{(M\ominus N_{2})}(x)\leq C_{(M\ominus N_{1})}(x)\) for all x∈M∗. Suppose \(C_{(N_{1}\cap N_{2})}(x)=0\) for all x∈M∗. Since \(C_{(N_{1}\cap N_{2})}(x)=\min \{C_{N_{1}}(x), C_{N_{2}}(x)\}\), then \(C_{N_{1}}(x)=0\) or \(C_{N_{2}}(x)=0\) for all x∈M∗. It follows that \(C_{N_{1}}(x)+C_{N_{2}}(x)\leq C_{M}(x)\) for all x∈M∗, and hence, \(C_{N_{1}}(x)\leq C_{M}(x)-C_{N_{2}}(x)=C_{(M\ominus N_{2})}(x)\) for all x∈M∗, and the result follows. \(C_{N_{1}}(x)\leq C_{N_{2}}(x)\Leftrightarrow -C_{N_{2}}(x)\leq -C_{N_{1}}(x)\Leftrightarrow C_{M}(x)-C_{N_{2}}(x)\leq C_{M}(x)-C_{N_{1}}(x)\Leftrightarrow C_{(M\ominus N_{2})}(x)\leq C_{(M\ominus N_{1})}(x)\) for all x∈M∗. The following example shows that the converse of Theorem 2 is not true in general. Let M={2/a,4/b,5/c}, N1={1/a,1/b,2/c}, and N2={1/a,1/b}. Hence, M⊖N2={1/a,3/b,5/c}. It is clear that N1⊆M⊖N2 but N1∩N2={1/a,1/b}≠ϕ. The following example shows that N1⊖N2≠N1∩(M⊖N2) in general. Let M={3/x,4/y}, N1={2/x,3/y}, and N2={1/x,2/y}. Hence, M⊖N2={2/x,2/y}, N1⊖N2={1/x,1/y}, and N1∩(M⊖N2)={2/x,2/y}. Let X be an infinite set and let M={kα/xα:α∈Λ} be an infinite mset drawn from X. That is, the infinite mset M drawn from X is denoted by M={k1/x1,k2/x2,k3/x3,… }. The mset space \([\!X]_{\infty }^{m}\) is the set of all infinite msets whose elements are in X such that no element in the mset occurs more than m times. It may be noted that the following examples of mset topologies have not been considered before. Let \(M\in [\!X]_{\infty }^{m}\) and {k0/x0} be a simple submset of M.
Then, the collection \(\tau _{(k_{0}/x_{0})}=\{V\subseteq M: C_{V}(x_{0})\geq k_{0}\}\cup \{\emptyset \}\) is an M-topology on M called the particular point M-topology. Let \(M\in [\!X]_{\infty }^{m}\) and {k0/x0} be a simple submset of M. Then, the collection \(\tau _{k_{0}/x_{0}}=\{V\subseteq M: C_{V}(x_{0})< k_{0}\}\cup \{M\}\) is an M-topology on M called the excluded point M-topology. Let \(M\in [\!X]_{\infty }^{m}\). Then, the collection τ={V⊆M:M⊖V is finite}∪{∅} is an M-topology on M called the cofinite M-topology. Let \(M\in [\!X]_{\infty }^{m}\) and N be a submset of M. Then, the collection τ(N)={V⊆M:CN(x)≤CV(x) for all x∈M∗}∪{∅} is an M-topology on M. Let \(M\in [\!X]_{\infty }^{m}\) and N be a submset of M. Then, the collection τN={V⊆M:CN(x)≥CV(x) for all x∈M∗}∪{M} is an M-topology on M. Filters in multiset context An mset filter \(\mathcal {F}\) on an mset M is a nonempty collection of nonempty submsets of M with the properties: \(({\mathcal {M}\mathcal {F}}_{1})\) \(\phi \not \in \mathcal {F}\), \(({\mathcal {M}\mathcal {F}}_{2})\) If \(N_{1}, N_{2}\in \mathcal {F}\), then \(N_{1}\cap N_{2}\in \mathcal {F}\), \(({\mathcal {M}\mathcal {F}}_{3})\) If \(N_{1}\in \mathcal {F}\) and \(C_{N_{1}}(x)\leq C_{N_{2}}(x)\) for all x∈M∗, then \(N_{2}\in \mathcal {F}\). It should be noted that \(\mathcal {F}\) is an ordinary set whose elements are msets and the multiset filter is abbreviated as an M-filter. Let \(\mathcal {F}\) be an M-filter on a nonempty mset M. Then: \(M\in \mathcal {F}\), Finite intersections of members of \(\mathcal {F}\) are in \(\mathcal {F}\). The result follows immediately from the definition of an M-filter. □ It should be noted that the collection of complements of msets in a proper M-filter is a nonempty collection closed under the operations of subsets and finite unions. Such a collection is called an M-ideal [17]. PF(M) is an M-filter on M. P∗(M) is not an M-filter. For one thing, the empty set belongs to it. Secondly, it contains disjoint msets, whose intersection is empty.
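For a finite mset, the axioms \(({\mathcal {M}\mathcal {F}}_{1})\)–\(({\mathcal {M}\mathcal {F}}_{3})\) can be checked by brute force. The sketch below (all helper names are ours) confirms that PF(M) is an M-filter while P∗(M) is not:

```python
from collections import Counter
from itertools import product

def submsets(M):
    """All submsets of a finite mset M, including the empty mset."""
    elems = sorted(M)
    for counts in product(*(range(M[x] + 1) for x in elems)):
        yield Counter({x: c for x, c in zip(elems, counts) if c > 0})

def is_submset(A, B):
    return all(c <= B[x] for x, c in A.items())

def is_m_filter(F, M):
    """Check (MF1)-(MF3) for a collection F of submsets of M."""
    F = list(F)
    if not F or any(not N for N in F):
        return False                        # (MF1): F nonempty, phi not in F
    for A in F:
        for B in F:
            raw = {x: min(A[x], B[x]) for x in A}
            inter = Counter({x: c for x, c in raw.items() if c > 0})
            if inter not in F:
                return False                # (MF2): closed under intersection
    for A in F:
        for N in submsets(M):
            if is_submset(A, N) and N not in F:
                return False                # (MF3): closed under supermsets in M
    return True

M = Counter({'a': 2, 'b': 3})
PF = [N for N in submsets(M) if set(N) == set(M)]   # full submsets of M
P_star = list(submsets(M))                          # all submsets, with the empty mset
print(is_m_filter(PF, M))       # True
print(is_m_filter(P_star, M))   # False
```

Here PF(M) has 2·3 = 6 members, the product of the counts of the elements of M, as stated earlier.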
Let \(\mathcal {F}=\{M\}\). Then, \(\mathcal {F}\) is an M-filter. This is the smallest M-filter one can define on M and is called the indiscrete M-filter on M. Let x∈kM and <k/x>={N⊆M:k/x∈N}. Then, <k/x> is an M-filter called the principal M-filter at k/x. More generally, let N be a nonempty submset of M and <N>={G⊆M:N⊆G}. Then, <N> is an M-filter called the principal M-filter at N. In addition to that, the indiscrete M-filter is the principal M-filter at M. Let M be an infinite mset and \(\mathcal {F}=\{N\subseteq M: N^{c} \)is finite}. Then, \(\mathcal {F}\) is called the cofinite M-filter on M. Let (M,τ) be an M-topological space and x∈kM. Then, \(\mathcal {N}_{k/x}\) is an M-filter on M. It should be noted that an M-filter may contain a submset and its complement, because the intersection of a submset and its complement is not necessarily empty. Let M={2/a,3/b} and \(\mathcal {F}=\{M, \{1/a\}, \{2/a\}, \{1/a, 1/b\}, \{1/a, 2/b\}, \{1/a, 3/b\}, \{2/a, 1/b\}, \{2/a, 2/b\}\}\). It is clear that \(\mathcal {F}\) is an M-filter and {1/a} and its complement {1/a,3/b} belong to \(\mathcal {F}\). Let M be a nonempty mset and \(\mathcal {F}_{1}\), \(\mathcal {F}_{2}\) be two M-filters on M. Then, \(\mathcal {F}_{1}\) is said to be coarser or smaller than \(\mathcal {F}_{2}\), denoted by \(\mathcal {F}_{1}\leq \mathcal {F}_{2}\), if \(\mathcal {F}_{1}\subseteq \mathcal {F}_{2}\); alternatively, \(\mathcal {F}_{2}\) is said to be finer or stronger than \(\mathcal {F}_{1}\). Let M be an mset and \(\{\mathcal {F}_{i}\}\), i∈I, be a nonempty family of M-filters on M. Then, \(\mathcal {F}=\cap _{i\in I}\mathcal {F}_{i}\) is an M-filter on M. Since \(M\in \mathcal {F}_{i}\) for each i∈I, hence \(M\in \cap _{i\in I}\mathcal {F}_{i}\); that is, \(M\in \mathcal {F}\). Moreover, \(({\mathcal {M}\mathcal {F}}_{1})\) implies \(\phi \not \in \mathcal {F}_{i}\) for each i∈I, and hence \(\phi \not \in \mathcal {F}\). Therefore, \(\mathcal {F}\) is a nonempty collection of nonempty submsets of M.
Let \(N_{1}, N_{2}\in \mathcal {F}\), then \(N_{1}, N_{2}\in \mathcal {F}_{i}\) for each i∈I. Since \(\mathcal {F}_{i}\) is an M-filter for each i∈I, hence \(({\mathcal {M}\mathcal {F}}_{2})\) implies \(N_{1}\cap N_{2}\in \mathcal {F}_{i}\) for each i∈I. Thus, \(N_{1}\cap N_{2}\in \mathcal {F}\). Now let \(N_{1}\in \mathcal {F}\) and \(C_{N_{1}}(x)\leq C_{N_{2}}(x)\) for all x∈M∗. It follows that \(N_{1}\in \mathcal {F}_{i}\) for each i∈I. Hence, \(({\mathcal {M}\mathcal {F}}_{3})\) implies that \(N_{2}\in \mathcal {F}_{i}\) for each i∈I. Therefore, \(N_{2}\in \mathcal {F}\), and hence, the result follows. □ The following example shows that the union of two M-filters on a nonempty mset M is not necessarily an M-filter on M. Let \(M=\{3/a, 4/b, 2/c, 5/d\},\;\mathcal {F}_{1}=\{M, \{3/a, 4/b, 2/c\}\}\), and \(\mathcal {F}_{2}=\{M, \{3/a, 4/b, 5/d\}\}\). Then, \(\mathcal {F}_{1}\cup \mathcal {F}_{2}=\{M, \{3/a, 4/b, 2/c\}, \{3/a, 4/b, 5/d\}\}\). Although \(\mathcal {F}_{1}\) and \(\mathcal {F}_{2}\) are two M-filters on M, \(\mathcal {F}_{1}\cup \mathcal {F}_{2}\) is not an M-filter, since \(\{3/a, 4/b, 2/c\}, \{3/a, 4/b, 5/d\}\in \mathcal {F}_{1}\cup \mathcal {F}_{2}\) but \(\{3/a, 4/b, 2/c\}\cap \{3/a, 4/b, 5/d\}=\{3/a, 4/b\}\not \in \mathcal {F}_{1}\cup \mathcal {F}_{2}\). Basis and subbasis in multiset filters Let \(\mathcal {B}\) be a nonempty collection of nonempty submsets of M. Then, \(\mathcal {B}\) is called an M-filter basis on M if \(({\mathcal {M}\mathcal {B}}_{1})\) \(\phi \not \in \mathcal {B}\), \(({\mathcal {M}\mathcal {B}}_{2})\) If \(B_{1}, B_{2}\in \mathcal {B}\), then there exists a \(B\in \mathcal {B}\) such that \(C_{B}(x)\leq C_{(B_{1}\cap B_{2})}(x)\) for all x∈M∗. Let \(\mathcal {B}\) be an M-filter basis on M, and let \(\mathcal {F}\) consist of all msets which contain a member of \(\mathcal {B}\); that is, \(\mathcal {F}=\{N\subseteq M: \;\forall \;x\in M^{*}\;C_{N}(x)\geq C_{N_{1}}(x),\;\)for some\(\;N_{1}\in \mathcal {B}\}. 
\)Then, \(\mathcal {F}\) is an M-filter on M. Furthermore, it is the smallest M-filter which contains \(\mathcal {B}\). It is called the M-filter generated by \(\mathcal {B}\). Since \(\mathcal {F}\) consists of all msets which contain a member of \(\mathcal {B}\), every member of \(\mathcal {B}\) is also a member of \(\mathcal {F}\). Consequently, \(\mathcal {B}\subseteq \mathcal {F}\) and hence \(\mathcal {F}\neq \phi \). Since \(\mathcal {F}\) contains all submsets of M which contain a member of \(\mathcal {B}\) and \(\phi \not \in \mathcal {B}\), hence \(\phi \not \in \mathcal {F}\). Thus, \(\mathcal {F}\) satisfies \(({\mathcal {M}\mathcal {F}}_{1})\). To prove that \(\mathcal {F}\) satisfies \(({\mathcal {M}\mathcal {F}}_{2})\), let \(N_{1}, N_{2}\in \mathcal {F}\). Hence, for all x∈M∗, \(C_{N_{1}}(x)\geq C_{B_{1}}(x)\) and \(C_{N_{2}}(x)\geq C_{B_{2}}(x)\) for some \(B_{1}, B_{2}\in \mathcal {B}\). It follows that there exists \(B\in \mathcal {B}\) such that \(C_{B}(x)\leq C_{(B_{1}\cap B_{2})}(x)\) for all x∈M∗ and hence \(C_{(N_{1}\cap N_{2})}(x)\geq C_{(B_{1}\cap B_{2})}(x)\geq C_{B}(x)\) for all x∈M∗. Consequently, \(N_{1}\cap N_{2}\in \mathcal {F}\). For \(({\mathcal {M}\mathcal {F}}_{3})\), let \(N_{1}\in \mathcal {F}\) and \(C_{N_{1}}(x)\leq C_{N_{2}}(x)\) for all x∈M∗. It follows that for all \(x\in M^{*} C_{N_{1}}(x)\geq C_{B}(x)\) for some \(B\in \mathcal {B}\). Therefore, for all \(x\in M^{*} C_{N_{2}}(x)\geq C_{N_{1}}(x)\geq C_{B}(x)\) for some \(B\in \mathcal {B}\). Thus, \(N_{2}\in \mathcal {F}\). Hence, \(\mathcal {F}\) is an M-filter on M. Now, let \(\mathcal {F}_{1}\) be an M-filter which contains \(\mathcal {B}\). Let \(N\in \mathcal {F}\). It follows that for all \(x\in M^{*}\;C_{N}(x)\geq C_{N_{1}}(x)\), for some \(N_{1}\in \mathcal {B}\).
This result, combined with \(N_{1}\in \mathcal {F}_{1}\) and \(({\mathcal {M}\mathcal {F}}_{3})\), implies \(N\in \mathcal {F}_{1}\). Hence, \(\mathcal {F}\leq \mathcal {F}_{1}\). Thus, \(\mathcal {F}\) is the smallest M-filter which contains \(\mathcal {B}\). □ Every M-filter is trivially an M-filter basis of itself. \(\mathcal {B}=\{\{k/x\}\}\) is an M-filter basis and generates the principal M-filter at k/x. \(\mathcal {B}=\{N\}\) is an M-filter basis and generates the principal M-filter at N. Let M={k1/x1,k2/x2,k3/x3,…,kn/xn}. Then, \(\mathcal {B}=\{\{1/x_{1}, 1/x_{2}, 1/x_{3},\dots, 1/x_{n}\}\}\) is an M-filter basis and generates PF(M). Let M be a nonempty mset, \(\mathcal {B}\) an M-filter basis which generates \(\mathcal {F}_{1}\), and \(\mathcal {B^{*}}\) an M-filter basis which generates \(\mathcal {F}_{2}\). Then, \(\mathcal {F}_{1}\leq \mathcal {F}_{2}\) if and only if every member of \(\mathcal {B}\) contains a member of \(\mathcal {B^{*}}\). Suppose \(\mathcal {F}_{1}\leq \mathcal {F}_{2}\) and \(B\in \mathcal {B}\). Since \(\mathcal {B}\) is an M-filter basis which generates \(\mathcal {F}_{1}\), then \(B\in \mathcal {F}_{1}\). Since \(\mathcal {F}_{1}\leq \mathcal {F}_{2}\), thus \(B\in \mathcal {F}_{2}\), which implies that there exists \(B^{*}\in \mathcal {B^{*}}\) such that \(C_{B^{*}}(x)\leq C_{B}(x)\) for all x∈M∗. Therefore, every member of \(\mathcal {B}\) contains a member of \(\mathcal {B^{*}}\). On the other hand, let every member of \(\mathcal {B}\) contain a member of \(\mathcal {B^{*}}\) and \(F\in \mathcal {F}_{1}\). Since \(\mathcal {B}\) is an M-filter basis which generates \(\mathcal {F}_{1}\), it follows that there exists \(B\in \mathcal {B}\) such that CB(x)≤CF(x) for all x∈M∗. From the assumption, there exists \(B^{*}\in \mathcal {B^{*}}\) such that \(C_{B^{*}}(x)\leq C_{B}(x)\leq C_{F}(x)\) for all x∈M∗.
This result, combined with the fact that \(\mathcal {B^{*}}\) is an M-filter basis which generates \(\mathcal {F}_{2}\), implies \(F\in \mathcal {F}_{2}\). Consequently, \(\mathcal {F}_{1}\leq \mathcal {F}_{2}\). □ Two M-filter bases (M-filter subbases) are said to be equivalent if they generate the same M-filter. Let \(\mathcal {B}\) and \(\mathcal {B^{*}}\) be M-filter bases on a nonempty mset M. Then, \(\mathcal {B}\) and \(\mathcal {B^{*}}\) are equivalent if and only if every member of \(\mathcal {B}\) contains a member of \(\mathcal {B^{*}}\) and every member of \(\mathcal {B^{*}}\) contains a member of \(\mathcal {B}\). The result follows immediately from Theorem 4. □ Let M1 and M2 be two nonempty msets drawn from X and Y, respectively, f:M1→M2 be an mset function, \(\mathcal {B}\) be an M-filter basis on M1, and \(\mathcal {B^{*}}\) be an M-filter basis on M2. Then: \(K_{1}=\{f(B) : B\in \mathcal {B}\}\) is an M-filter basis on M2. If every member of \(\mathcal {B^{*}}\) intersects f(M1), then \(K_{2}=\{f^{-1}(B^{*}) : B^{*}\in \mathcal {B^{*}}\}\) is an M-filter basis on M1. Since \(\mathcal {B}\) is an M-filter basis on M1, it follows that \(\mathcal {B}\neq \phi \). So, K1≠ϕ. For \(({\mathcal {M}\mathcal {B}}_{1})\), since \(\phi \not \in \mathcal {B}\), hence ϕ∉K1. To prove \(({\mathcal {M}\mathcal {B}}_{2})\), let \(f(B_{1}), f(B_{2})\in K_{1}\), where \(B_{1}, B_{2}\in \mathcal {B}\). Since \(\mathcal {B}\) is an M-filter basis on M1, it follows that there exists \(B\in \mathcal {B}\) such that \(C_{B}(x)\leq C_{(B_{1}\cap B_{2})}(x)\) for all x∈M∗. Thus, \(C_{f(B)}(y)\leq C_{f(B_{1}\cap B_{2})}(y)\leq C_{f(B_{1})\cap f(B_{2})}(y)\) for all y∈Y. Therefore, there exists f(B)∈K1 such that \(C_{f(B)}(y)\leq C_{f(B_{1})\cap f(B_{2})}(y)\) for all y∈Y. Hence, K1 is an M-filter basis on M2. The proof is similar to part (1).
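The filter generated by a basis can be enumerated for a small mset. The sketch below (helper names are ours) checks the earlier claim that the basis consisting of the single mset {1/x1,…,1/xn} generates PF(M):

```python
from collections import Counter
from itertools import product

def submsets(M):
    """All submsets of a finite mset M, including the empty mset."""
    elems = sorted(M)
    for counts in product(*(range(M[x] + 1) for x in elems)):
        yield Counter({x: c for x, c in zip(elems, counts) if c > 0})

def is_submset(A, B):
    return all(c <= B[x] for x, c in A.items())

def generated_filter(basis, M):
    """The M-filter generated by a basis: all submsets of M containing a member."""
    return [N for N in submsets(M) if any(is_submset(B, N) for B in basis)]

M = Counter({'x1': 2, 'x2': 3})
basis = [Counter({'x1': 1, 'x2': 1})]              # the single mset {1/x1, 1/x2}
PF = [N for N in submsets(M) if set(N) == set(M)]  # full submsets of M

F = generated_filter(basis, M)
same = {frozenset(N.items()) for N in F} == {frozenset(N.items()) for N in PF}
print(same)  # True: the basis generates exactly PF(M)
```

A superset of {1/x1, 1/x2} within M is precisely a submset whose support equals M∗, which is why the two collections coincide.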
Multiset ultrafilter An M-filter \(\mathcal {F}\) is called an mset ultrafilter on M, M-ultrafilter for short, if there is no strictly finer M-filter than \(\mathcal {F}\). That is, if \(\mathcal {F}^{*}\) is an M-filter and \(\mathcal {F^{*}}\geq \mathcal {F}\), then \(\mathcal {F^{*}}=\mathcal {F}\). Let M={2/a,3/b}. Then, \(\mathcal {F}_{1}=\{M, \{1/a\},\{2/a\},\{1/a, 1/b\}, \{1/a, 2/b\},\{1/a, 3/b\}, \{2/a, 1/b\},\{2/a, 2/b\}\}\) and \(\mathcal {F}_{2}=\{M, \{1/b\}, \{2/b\}, \{3/b\}, \{1/a, 1/b\}, \{2/a, 1/b\}, \{1/a, 2/b\}, \{2/a, 2/b\}, \{1/a, 3/b\}, \{2/a, 3/b\}\}\) are M-ultrafilters on M. Let M be a nonempty mset. An M-ultrafilter \(\mathcal {F}\) on M contains every submset of M which intersects every member of \(\mathcal {F}\). Let \(\mathcal {F}\) be an M-ultrafilter on M and N be a submset of M such that F∩N≠ϕ for all \(F\in \mathcal {F}\). Now, we want to show that the collection \(\mathcal {F}^{*}=\{F^{*}: \forall \;x\in M^{*}\;C_{F^{*}}(x)\geq C_{(N\cap F)}(x)\;\)for some \(F\in \mathcal {F}\}\) is an M-filter on M. For \(({\mathcal {M}\mathcal {F}}_{1})\), since F∩N≠ϕ for all \(F\in \mathcal {F}\), every member of \(\mathcal {F}^{*}\) contains a nonempty mset of the form N∩F. Thus, \(\phi \not \in \mathcal {F}^{*}\). For \(({\mathcal {M}\mathcal {F}}_{2})\), let \(F_{1}^{*}\), \(F_{2}^{*}\in \mathcal {F}^{*}\). Hence, for all \(x\in M^{*} C_{F_{1}^{*}}(x)\geq C_{(N\cap F_{1})}(x)\) and \(C_{F_{2}^{*}}(x)\geq C_{(N\cap F_{2})}(x)\) for some \(F_{1},F_{2}\in \mathcal {F}\). It follows that for all \(x\in M^{*} C_{(F_{1}^{*}\cap F_{2}^{*})}(x)\geq C_{[N\cap (F_{1}\cap F_{2})]}(x)\). Therefore, \(F_{1}^{*}\cap F_{2}^{*}\in \mathcal {F}^{*}\). To prove that \(\mathcal {F}^{*}\) satisfies \(({\mathcal {M}\mathcal {F}}_{3})\), let \(F_{1}^{*}\in \mathcal {F}^{*}\) and \(C_{F_{1}^{*}}(x)\leq C_{F_{2}^{*}}(x)\) for all x∈M∗.
It follows that for all x∈M∗, \(C_{F_{2}^{*}}(x)\geq C_{F_{1}^{*}}(x)\geq C_{(N\cap F)}(x)\) for some \(F\in \mathcal {F}\). Consequently, \(F_{2}^{*}\in \mathcal {F}^{*}\). Hence, \(\mathcal {F}^{*}\) is an M-filter on M. Since CF(x)≥C(N∩F)(x) for all x∈M∗, we have \(\mathcal {F}^{*}\geq \mathcal {F}\). Since \(\mathcal {F}\) is an M-ultrafilter on M, it follows that \(\mathcal {F}^{*}=\mathcal {F}\). Moreover, \(N\in \mathcal {F}\), since CN(x)≥C(N∩F)(x) for all x∈M∗. □ Let \(\mathcal {F}\) be an M-ultrafilter on a nonempty mset M. Then, for each N⊆M, either N or \(N^{c}\in \mathcal {F}\). Let \(\mathcal {F}\) be an M-ultrafilter on M and N⊆M. If there exists \(F\in \mathcal {F}\) such that C(F∩N)(x)=0 for all x∈M∗, then Theorem 2 part (1) implies \(C_{F}(x)\leq C_{N^{c}}(x)\) for all x∈M∗. Thus, \(N^{c}\in \mathcal {F}\). Otherwise, F∩N≠ϕ for all \(F\in \mathcal {F}\), and Theorem 8 implies \(N\in \mathcal {F}\); hence the result. □ The following example shows that the converse of Theorem 9 does not hold in general. Let M={2/a,3/b}. Then, \(\mathcal {F}=\{M, \{2/b\}, \{3/b\}, \{1/a, 2/b\}, \{1/a, 3/b\}, \{2/a, 2/b\}, \{2/a, 1/b\}\}\) is an M-filter on M. Although for each N⊆M, either N or \(N^{c}\in \mathcal {F}\), \(\mathcal {F}\) is not an M-ultrafilter, as \(\mathcal {F}^{*}=\{M, \{1/b\},\{2/b\}, \{3/b\}, \{1/a, 1/b\}, \{1/a, 2/b\}, \{1/a, 3/b\}, \{2/a, 1/b\}, \{2/a, 2/b\},\{2/a, 3/b\}\}\) is finer than \(\mathcal {F}\). Let \(\mathcal {F}\) be an M-ultrafilter on a nonempty mset M. Then, for each two nonempty submsets N1,N2 of M such that \(N_{1}\cup N_{2}\in \mathcal {F}\), either \(N_{1}\in \mathcal {F}\) or \(N_{2}\in \mathcal {F}\). Assume \(N_{1}\cup N_{2}\in \mathcal {F}\) but \(N_{1}\not \in \mathcal {F}\) and \(N_{2}\not \in \mathcal {F}\). Define \(\mathcal {F}^{*}=\{G\subseteq M: G\cup N_{2}\in \mathcal {F}\}\). We now prove that \(\mathcal {F}^{*}\) is an M-filter on M. 
Since \(N_{1}\cup N_{2}\in \mathcal {F}\), it follows that \(N_{1}\in \mathcal {F}^{*}\). Hence, \(\mathcal {F}^{*}\neq \phi \). For \(({\mathcal {M}\mathcal {F}}_{1})\), since \(\phi \cup N_{2}=N_{2}\not \in \mathcal {F}\), it follows that \(\phi \not \in \mathcal {F}^{*}\). To prove that \(\mathcal {F}^{*}\) satisfies \(({\mathcal {M}\mathcal {F}}_{2})\), let \(G_{1}, G_{2}\in \mathcal {F}^{*}\). Hence, \(G_{1}\cup N_{2}\in \mathcal {F}\) and \(G_{2}\cup N_{2}\in \mathcal {F}\). Thus, \((G_{1}\cup N_{2})\cap (G_{2}\cup N_{2})\in \mathcal {F}\). Therefore, \((G_{1}\cap G_{2})\cup N_{2}\in \mathcal {F}\). Hence, \(G_{1}\cap G_{2}\in \mathcal {F}^{*}\). For \(({\mathcal {M}\mathcal {F}}_{3})\), let \(G_{1}\in \mathcal {F}^{*}\) and \(C_{G_{1}}(x)\leq C_{G_{2}}(x)\) for all x∈M∗. Hence, \(G_{1}\cup N_{2}\in \mathcal {F}\) and \(C_{(G_{1}\cup N_{2})}(x)\leq C_{(G_{2}\cup N_{2})}(x)\) for all x∈M∗. Thus, \(G_{2}\cup N_{2}\in \mathcal {F}\). Hence, \(G_{2}\in \mathcal {F}^{*}\). Consequently, \(\mathcal {F}^{*}\) is an M-filter on M. Let \(F\in \mathcal {F}\). Since \(C_{F}(x)\leq C_{(F\cup N_{2})}(x)\) for all x∈M∗, \(({\mathcal {M}\mathcal {F}}_{3})\) implies \(F\cup N_{2}\in \mathcal {F}\). Therefore, \(F\in \mathcal {F}^{*}\); that is, \(\mathcal {F}\leq \mathcal {F}^{*}\). But \(N_{1}\in \mathcal {F}^{*}\setminus \mathcal {F}\), so \(\mathcal {F}^{*}\) is strictly finer than the M-ultrafilter \(\mathcal {F}\), a contradiction. Therefore, \(N_{1}\in \mathcal {F}\) or \(N_{2}\in \mathcal {F}\). □ The following example shows that the converse of Theorem 10 does not hold in general. Let M={3/a,4/b} and \(\mathcal {F}=\{M, \{3/a\}, \{3/a, 1/b\}, \{3/a, 2/b\}, \{3/a, 3/b\}\}\) be an M-filter on M. Although for all N1,N2⊆M such that \(N_{1}\cup N_{2}\in \mathcal {F}\), either \(N_{1}\in \mathcal {F}\) or \(N_{2}\in \mathcal {F}\), \(\mathcal {F}\) is not an M-ultrafilter. 
Indeed, \(\mathcal {F}^{*}=\{M, \{3/a\}, \{4/b\}, \{3/a, 1/b\}, \{3/a, 2/b\}, \{3/a, 3/b\}, \{1/a, 4/b\}, \{2/a, 4/b\}\}\) is finer than \(\mathcal {F}\). Convergence of multiset filters Let (M,τ) be an M-topological space and \(\mathcal {F}\) be an M-filter on M. \(\mathcal {F}\) is said to τ-converge to k/x (written \(\mathcal {F}\overset {\tau }{\longrightarrow }k/x\)) if \(\mathcal {N}_{k/x}\subseteq \mathcal {F}\); that is, if \(\mathcal {F}\geq \mathcal {N}_{k/x}\). For each mpoint k/x, \(\mathcal {N}_{k/x}\) converges to k/x. Let τ be the cofinite M-topology on M and \(\mathcal {F}\) be the cofinite M-filter. Then, \(\mathcal {F}\) converges to each mpoint. Let (M,τ) be the indiscrete M-topological space and \(\mathcal {F}\) be any M-filter on M; then, \(\mathcal {F}\) converges to each mpoint. Let (M,τ) be an M-topological space and \(\mathcal {F}\) and \(\mathcal {F}^{*}\) be M-filters on M such that \(\mathcal {F}^{*}\geq \mathcal {F}\). If \(\mathcal {F}\overset {\tau }{\longrightarrow }k/x\), then \(\mathcal {F}^{*}\overset {\tau }{\longrightarrow }k/x\). Since \(\mathcal {F}\overset {\tau }{\longrightarrow }k/x\) and \(\mathcal {F}^{*}\geq \mathcal {F}\), it follows that \(\mathcal {F}^{*}\geq \mathcal {F}\geq \mathcal {N}_{k/x}\). Hence, Definition 26 implies that \(\mathcal {F}^{*}\overset {\tau }{\longrightarrow }k/x\). □ Let (M,τ1) and (M,τ2) be two M-topological spaces such that τ2≤τ1 and \(\mathcal {F}\) be an M-filter on M such that \(\mathcal {F}\overset {\tau _{1}}{\longrightarrow }k/x\). Then, \(\mathcal {F}\overset {\tau _{2}}{\longrightarrow }k/x\). Since τ2≤τ1 and \(\mathcal {F}\overset {\tau _{1}}{\longrightarrow }k/x\), it follows that \(\mathcal {N}^{\tau _{2}}_{k/x}\leq \mathcal {N}^{\tau _{1}}_{k/x}\) and \(\mathcal {N}^{\tau _{1}}_{k/x}\leq \mathcal {F}\). Thus, \(\mathcal {N}^{\tau _{2}}_{k/x}\leq \mathcal {N}^{\tau _{1}}_{k/x}\leq \mathcal {F}\). 
Hence, Definition 26 implies \(\mathcal {F}\overset {\tau _{2}}{\longrightarrow }k/x\), and the result follows. □ Let (M,τ) be an M-topological space and \(\mathcal {F}\) be an M-filter on M; then, the following assertions are equivalent: (1) \(\mathcal {F}\overset {\tau }{\longrightarrow }k/x\); (2) every M-ultrafilter containing \(\mathcal {F}\) converges to k/x. On the one hand, let \(\mathcal {F}^{*}\) be an M-ultrafilter containing \(\mathcal {F}\); that is, \(\mathcal {F}\leq \mathcal {F}^{*}\). This, combined with assertion (1), implies \(\mathcal {N}_{k/x}\leq \mathcal {F}\leq \mathcal {F}^{*}\). Thus, Definition 26 implies \(\mathcal {F}^{*}\overset {\tau }{\longrightarrow }k/x\). Hence, (1) implies (2). On the other hand, (2) implies that \(\mathcal {N}_{k/x}\) is contained in every M-ultrafilter containing \(\mathcal {F}\). Hence, \(\mathcal {N}_{k/x}\) is contained in the intersection of all M-ultrafilters containing \(\mathcal {F}\). This, combined with the fact that \(\mathcal {F}\) is the intersection of all M-ultrafilters containing \(\mathcal {F}\), implies \(\mathcal {N}_{k/x}\leq \mathcal {F}\). Then, \(\mathcal {F}\overset {\tau }{\longrightarrow }k/x\). Hence, (2) implies (1). □ Let (M,τ) be an M-topological space and N be a nonempty submset of M; then, the following assertions are equivalent: (1) N∈τ; (2) if \(\mathcal {F}\overset {\tau }{\longrightarrow }k/x\) with x∈kN, then \(N\in \mathcal {F}\). The first direction is a direct consequence of Definition 26 and assertion (1). For the converse, let x∈mN. Then, Example 24 shows that \(\mathcal {N}_{m/x}\overset {\tau }{\longrightarrow }m/x\). Thus, assertion (2) implies that \(N\in \mathcal {N}_{m/x}\); that is, N is a neighborhood of m/x. Hence, N is a neighborhood for every x∈kN. Thus, N∈τ; that is, (2) implies (1). □ 
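The ultrafilter dichotomy proved above (for each N ⊆ M, either N or N^c belongs to the ultrafilter) can be verified exhaustively on the small mset M = {2/a, 3/b} from the earlier example. The code below is an illustrative sketch, with msets again represented as dicts of counts.

```python
from itertools import product

M = {'a': 2, 'b': 3}

def norm(mset):
    """Drop zero counts so msets compare by value."""
    return {x: c for x, c in mset.items() if c > 0}

def complement(n, whole=M):
    """C_{N^c}(x) = C_M(x) - C_N(x)."""
    return norm({x: whole[x] - n.get(x, 0) for x in whole})

# The M-ultrafilter F1 on M from the example above.
F1 = [norm(f) for f in (
    {'a': 2, 'b': 3}, {'a': 1}, {'a': 2}, {'a': 1, 'b': 1}, {'a': 1, 'b': 2},
    {'a': 1, 'b': 3}, {'a': 2, 'b': 1}, {'a': 2, 'b': 2})]

# Theorem 9: every submset N of M has N in F1 or N^c in F1.
submsets = [norm({'a': i, 'b': j}) for i, j in product(range(3), range(4))]
print(all(n in F1 or complement(n) in F1 for n in submsets))  # prints: True
```

For instance, {3/b} is not in F1, but its complement {2/a} is, so the dichotomy holds for that submset.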
Doctoral School of Mathematical and Computational Sciences, University of Debrecen, Pf. 400, Debrecen, H-4002, Hungary Amr Zakaria Department of Mathematics, Faculty of Education, Ain Shams University, Cairo, 11341, Egypt Department of Mathematics, National Institute of Technology Calicut, Calicut, 673 601, Kerala, India Sunil Jacob John Department of Mathematics, MES Kalladi College Mannarkkad, Palakkad, 678 583, Kerala, India K. P. Girish All authors jointly worked on the results, and they read and approved the final manuscript Correspondence to Amr Zakaria. Zakaria, A., John, S.J. & Girish, K.P. Multiset filters. J Egypt Math Soc 27, 51 (2019). https://doi.org/10.1186/s42787-019-0056-3 Multiset topology Mathematics Subject Classification (2010) 54A05; 03E70
pp. A21-A33 • https://doi.org/10.1364/JOCN.402187 TAPI-enabled SDN control for partially disaggregated multi-domain (OLS) and multi-layer (WDM over SDM) optical networks [Invited] C. Manso,1 R. Muñoz,1,* N. Yoshikane,2 R. Casellas,1 R. Vilalta,1 R. Martínez,1 T. Tsuritani,2 and I. Morita2 1Optical Networks and Systems Department, Centre Tecnològic de Telecomunicacions de Catalunya (CTTC/CERCA), 08860 Castelldefels, Barcelona, Spain 2KDDI Research, Inc., 2-1-15 Ohara, Fujimino-shi, Saitama 356-8502, Japan *Corresponding author: [email protected] R. Muñoz https://orcid.org/0000-0003-4651-4499 R. Casellas https://orcid.org/0000-0002-2663-6571 R. Vilalta https://orcid.org/0000-0003-0391-9728 R. Martínez https://orcid.org/0000-0003-3097-5485 C. Manso, R. Muñoz, N. Yoshikane, R. Casellas, R. Vilalta, R. Martínez, T. Tsuritani, and I. Morita, "TAPI-enabled SDN control for partially disaggregated multi-domain (OLS) and multi-layer (WDM over SDM) optical networks [Invited]," J. Opt. Commun. Netw. 13, A21-A33 (2021) Original Manuscript: July 8, 2020 Journal of Optical Communications and Networking OFC 2020 (2021) Network operators are facing a critical issue on their optical transport networks as they deploy 5G$+$ and Internet of Things services: they need to address a tenfold capacity increase while keeping a similar cost per user. Over the past years, network operators have taken great interest in the disaggregated optical approach as a way to achieve the required efficiency and cost reduction. 
In particular, partially disaggregated optical networks make it possible to decouple the transponders from the transport system (known as an open line system), so that the two can be provided by different vendors. Meanwhile, space division multiplexing (SDM) has been proposed as the key technology to overcome the capacity crunch that standard single-mode optical fibers face in supporting the forecasted $10 \times$ growth. Spatial core switching is gaining interest because it makes it possible to deploy SDM networks that bypass the overloaded wavelength division multiplexing (WDM) networks by provisioning spatial media channels between WDM nodes. This paper presents, to the best of our knowledge, the first experimental demonstration of a transport-application-programming-interface-enabled software-defined networking control architecture for partially disaggregated multi-domain and multi-layer (WDM over SDM) optical networks. 
Listing 1. OLS (WDM) SIP Showing the MC Pool
Listing 2. Example of an OLS (WDM) Create-Connectivity Request Payload
August 2013, 7(3): 697-716. doi: 10.3934/ipi.2013.7.697 Non-Gaussian dynamics of a tumor growth system with immunization Mengli Hao,1 Ting Gao,2 Jinqiao Duan,3 and Wei Xu1 1Department of Applied Mathematics, Northwestern Polytechnical University, Xi'an, 710129, China 2Institute for Pure and Applied Mathematics, University of California, Los Angeles, Los Angeles, CA 90095, United States 3Department of Applied Mathematics, Illinois Institute of Technology, Chicago, IL 60616 Received June 2012 Revised March 2013 Published September 2013 This paper is devoted to exploring the effects of non-Gaussian fluctuations on the dynamical evolution of a tumor growth model with immunization, subject to non-Gaussian $\alpha$-stable type Lévy noise. The corresponding deterministic model has two meaningful states, which represent the state of tumor extinction and the state of stable tumor, respectively. To characterize the time that different initial densities of tumor cells stay in the domain between these two states, and the likelihood of crossing this domain, the mean exit time and the escape probability are quantified by numerically solving differential-integral equations with appropriate exterior boundary conditions. The relationships between the dynamical properties and the noise parameters are examined. It is found that in different stages of tumor growth, the noise parameters have different influences on the time to, and the likelihood of, tumor extinction. These results are relevant for determining efficient therapeutic regimes to induce the extinction of tumor cells. Keywords: Tumor growth with immunization, stochastic dynamics of tumor growth, non-Gaussian Lévy process, mean exit time, escape probability, Lévy jump measure. Mathematics Subject Classification: 60H15, 62P10, 65C50, 92C4. 
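The mean exit time that the paper computes by solving nonlocal differential-integral equations can also be approximated by direct Monte Carlo simulation of the Lévy-driven dynamics. The sketch below uses the Chambers-Mallows-Stuck generator for symmetric α-stable increments; the logistic-type drift, the domain, and all parameter values are illustrative assumptions of ours, not the paper's actual tumor model.

```python
import numpy as np

rng = np.random.default_rng(0)

def alpha_stable(alpha, size):
    """Symmetric alpha-stable samples via the Chambers-Mallows-Stuck method."""
    v = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    return (np.sin(alpha * v) / np.cos(v) ** (1 / alpha)
            * (np.cos(v - alpha * v) / w) ** ((1 - alpha) / alpha))

def mean_exit_time(x0, alpha=1.5, eps=1.0, dt=1e-3, domain=(0.0, 2.0),
                   n_paths=500, max_steps=20_000):
    """Average first time a path started at x0 leaves the open interval `domain`.
    Drift is a stand-in logistic term x(1 - x); noise scales as dt**(1/alpha)."""
    lo, hi = domain
    x = np.full(n_paths, float(x0))
    exit_time = np.full(n_paths, np.nan)
    for step in range(1, max_steps + 1):
        alive = np.isnan(exit_time)          # paths still inside the domain
        if not alive.any():
            break
        xi = x[alive]
        xi = xi + xi * (1 - xi) * dt + eps * dt ** (1 / alpha) * alpha_stable(alpha, xi.size)
        x[alive] = xi
        exited = alive.copy()
        exited[alive] = (xi <= lo) | (xi >= hi)
        exit_time[exited] = step * dt        # record first exit times
    return np.nanmean(exit_time)

print(mean_exit_time(0.5))
```

Sweeping `alpha` and `eps` in such a simulation gives a rough empirical counterpart to the parameter studies reported in the paper, at the cost of Monte Carlo error and time discretization bias.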
Citation: Mengli Hao, Ting Gao, Jinqiao Duan, Wei Xu. Non-Gaussian dynamics of a tumor growth system with immunization. Inverse Problems & Imaging, 2013, 7 (3) : 697-716. doi: 10.3934/ipi.2013.7.697 
Volume 12 Supplement 9 Genetic Analysis Workshop 20: envisioning the future of statistical genetics by exploring methods for epigenetic and pharmacogenomic data Proceedings | Open | Published: 17 September 2018 Gene-methylation epistatic analyses via the W-test identifies enriched signals of neuronal genes in patients undergoing lipid-control treatment Rui Sun,1,2 Haoyi Weng,1,2 Ruoting Men,1,2 Xiaoxuan Xia,1,2 Ka Chun Chong,1,2 William K. K. Wu,3 Benny Chung-Ying Zee1,2 & Maggie Haitian Wang1,2 BMC Proceedings volume 12, Article number: 53 (2018) An increasing number of studies are focused on the epigenetic regulation of DNA to affect gene expression without modifications to the DNA sequence. Methylation plays an important role in shaping disease traits; however, previous studies have been mainly experiment based, resulting in few reports that measured gene–methylation interaction effects via statistical means. In this study, we applied the data set adaptive W-test to measure gene–methylation interactions. Performance was evaluated by the ability to detect a given set of causal markers in the data set obtained from GAW20. Results from simulation data analyses showed that the W-test was able to detect most markers. The method was also applied to chromosome 11 of the experimental data set and identified clusters of genes with neuronal and retinal functions, including MPPED2, GUCY2E, NAV2, and ZBTB16. Genes from the TRIM family were also identified; these genes are potentially related to the regulation of triglyceride levels. Our results suggest that the W-test could be an efficient and effective method to detect gene–methylation interactions. Furthermore, the identified genes suggest an interesting relationship between lipid levels and the etiology of neurological disorders. Genetic variants, such as single-nucleotide polymorphisms (SNPs), have been found to influence risk for human diseases. 
Recent studies show that epigenetics affects SNPs in genes and subsequently influences gene function and disease traits [1]. Epigenetic mechanisms consist of DNA methylation, histone modifications, and noncoding RNAs, all of which represent patterns of chemical and structural modification of DNA [2]. An increasing number of laboratory experiments provide evidence of DNA methylation regulating gene expression [3,4,5]. Only a few studies, however, have evaluated genome–epigenome interactions through statistical means, which may potentially provide novel findings on the joint effects of SNPs and cytosine-phosphate-guanine (CpG) sites [6,7,8,9]. The search for SNP-CpG epistasis is usually conducted through multistage or integrated analyses, where the genome and methylation data are first analyzed separately and the results then combined [10, 11]. Some studies apply existing interaction-effect methods, such as regressions, to perform the joint analysis of methylation and genome data. The advantages of the W-test method are its data-set-adaptive probability distribution and its robustness to complicated genetic architectures, such as moderate data sparsity and population stratification [12]. By applying the W-test to gene–methylation data directly, epistasis can be measured without a preselection of biomarkers, while relying less on significant main effects for detecting important CpG–SNP interactions. GAW20 provided an opportunity to study methylation and genome-wide association study (GWAS) data from participants who have undergone lipid-control treatment. The W-test was applied in the detection of gene–methylation interactions, resulting in interesting findings with biological implications. GAW20 experimental and simulated data sets GAW20 provided the study data. 
The study participants were patients with diabetes who had undergone lipid-control treatments with the drug fenofibrate and were recruited from the Genetics of Lipid Lowering Drugs and Diet Network (GOLDN) clinical trial project. The analyzed data consisted of simulated and experimental data sets. The triglyceride (TG) levels were collected at 4 clinical visits, with 2 measurements before treatment and 2 measurements after treatment. Age, sex, smoking status, and location were recorded. Genome-wide genotyping was performed with the Affymetrix Genome-wide Human SNP array 6.0, and DNA methylation profiling was performed with the Illumina Infinium Human Methylation 450K BeadChip array, using buffy coat harvested from blood samples collected at the second and fourth visits. The phenotype of the simulated data set was generated using experimental genetic data under a hypothetical model [13]. The TG levels were generated from 5 SNPs with major effects and 5 CpG sites in physical proximity. A set of 5 SNP-CpG pairs with relatively high heritability but not related to TG levels was given as noise for testing the statistical methods. The simulated data contained 680 subjects after excluding individuals with missing phenotypes. For simulated data, the 84th replicate was used as suggested by GAW20. In the experimental data set, a total of 523 participants had complete genomic and clinical measurements. Participants with missing values were removed during the quality-control process, resulting in a remaining sample size of 476. The method was applied to chromosome 11 of the experimental data. Defining drug response The TG levels can be used as a measure of drug response. Because common clinical standards regard a 30% decrease in TG levels as an effective control of lipids [14], we adopted the same criterion in this study.
First, the average pretreatment TG levels (TG_pre) were calculated by averaging the measurements from the first and second visits. The average posttreatment TG levels (TG_post) were calculated by averaging the measurements from the third and fourth visits. Next, a percentage of change was calculated as: ΔTG% = (TG_pre − TG_post)/TG_pre. If the percentage of change was greater than 30%, then the drug treatment was defined as effective; otherwise, treatment was defined as ineffective. The effectiveness of the drug response was the outcome variable for both the simulated and experimental data. The epistasis measure: The W-test The W-test measures the probability distributional differences for a set of biomarkers between 2 groups of participants, such as the 2 drug-response groups [12]. Under an additive genetic model, a SNP variable can be coded into 3 levels by the count of minor alleles. The quantitative CpG variable can be divided into high and low methylation levels by two-mean clustering. A SNP-CpG pair can form a genetic combination of 6 categories. The empirical distributions are compared through a sum of squared log odds ratios by: $$ W=h\sum \limits_{i=1}^k{\left[\log \frac{{\widehat{p}}_{1i}/\left(1-{\widehat{p}}_{1i}\right)}{{\widehat{p}}_{0i}/\left(1-{\widehat{p}}_{0i}\right)}/{SE}_i\right]}^2\sim {\chi}_f^2 $$ where \( {\widehat{p}}_{1i} \) and \( {\widehat{p}}_{0i} \) are the proportion of cases and controls in the ith category out of total cases or controls, respectively. SE_i is the standard error of the log odds ratio. The test statistic follows a chi-squared distribution with f degrees of freedom. Two parameters, h and f, are estimated using large-sample approximation by drawing smaller bootstrap samples under a null hypothesis. Consequently, the testing distribution is robust for complicated genetic architectures, as it adaptively adjusts to the data structure of the working data [12].
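As a concrete reading of the statistic above, here is a minimal sketch in Python. It is not the authors' implementation: the standard-error form for the log odds ratio and the placeholder h = 1 are illustrative assumptions, and the bootstrap estimation of h and f is omitted.

```python
import math

def w_test_stat(case_counts, ctrl_counts, h=1.0):
    """Sketch of W = h * sum_i [log(odds ratio_i) / SE_i]^2 over the k
    SNP-CpG categories. In the real method h and the degrees of freedom f
    are estimated by bootstrap; h = 1 here is an illustrative placeholder."""
    n1, n0 = sum(case_counts), sum(ctrl_counts)
    w = 0.0
    for a, c in zip(case_counts, ctrl_counts):
        b, d = n1 - a, n0 - c              # cases/controls outside category i
        if min(a, b, c, d) == 0:           # skip empty cells in this sketch
            continue
        log_or = math.log((a / b) / (c / d))     # log odds ratio for category i
        se = math.sqrt(1/a + 1/b + 1/c + 1/d)    # assumed SE of the log odds ratio
        w += (log_or / se) ** 2
    return h * w
```

With identical case and control distributions every per-category odds ratio is 1, so the statistic is zero; any separation between the groups makes it positive.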
For detecting the cis-regulation patterns in the gene-methylation data, the SNPs and CpG sites located within a 10-kb genome distance on chromosome 11 were evaluated exhaustively [1]. Two types of logistic-regression models were applied as accompanying benchmarks to the W-test. The first logistic-regression model, LR-m1, considered the CpG site as a binary variable like the W-test, and the second logistic-regression model, LR-m2, included the CpG sites as a continuous variable using the original methylation values. Both logistic-regression models incorporated the main and interaction effects of SNP and CpG sites. In short, we denote: LR-m1: Y = SNP + CpG + SNP × CpG, where CpG is a binary variable; LR-m2: Y = SNP + CpG + SNP × CpG, where CpG is a continuous variable. The type I error rate is an average false-positive proportion using a permuted phenotype on a pair of gene–methylation markers in 2000 replicates. A total of 140,501 epistatic pairs were tested, and a Bonferroni correction resulted in a significance level of 3.56E-7 at a family-wise error rate of 5%. Performance of the W-test with simulated data In the simulated data set, the W-test, LR-m1, and LR-m2 were applied to the given causal and noise pairs. Table 1 displays the p values obtained from alternative methods. Generally, the W-test gave smaller p values than LR-m1 in most answer pairs, and also had comparable p values to LR-m2. The top 3 answer pairs were all identified to be significant by the 3 methods. The W-test also found the fourth answer pair (cg00045910, rs10828412) with a p value = 0.0475, which was slightly smaller than the p values from the LR-m1 (p value = 0.0532) and LR-m2 (p value = 0.0597). The results suggested that the W-test could be sensitive to small signals with lower heritability. In terms of the performance for the noise pairs, all methods yielded noise p values greater than 0.05. The Type I error rate of the W-test was 2.95%, less than the family-wise error rate of 5%. 
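The significance level quoted above follows from a one-line Bonferroni calculation over the 140,501 tested pairs:

```python
# Bonferroni correction: per-test threshold = family-wise error rate / number of tests
n_tests = 140_501
alpha_fw = 0.05
threshold = alpha_fw / n_tests   # approximately 3.56E-7, as quoted in the text
```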
Meanwhile, the Type I error rates were 5.40% for LR-m1 and 5.43% for LR-m2. The results showed that the W-test was able to distinguish between signal and noise in the simulated data set. Table 1 p Values of 5 answers and 5 noises by the W-test and the logistic regression models LR-m1 and LR-m2 in simulated data Computing time Computing time was calculated on a laptop computer with a 1.6-GHz processor and 4 GB of random access memory using 2000 replicates on 1 pair of markers. The W-test was 4 times faster than logistic regression on a general laptop (2.28 s by the W-test, 10.12 s by LR-m1, and 9.37 s by LR-m2). Identification of gene–methylation interaction in experimental data The W-test was applied to test the gene–methylation interactions for GAW20 experimental data on chromosome 11. No significant interaction pair passed the Bonferroni correction significance level of 3.56E-07 (Table 2). We checked the functions of the top 15 identified epistatic pairs and found interesting biological implications. The top 3 SNP-CpG pairs all resided in the gene MPPED2 (11p14.1; p value = 8E-06), which encodes a metallophosphoesterase and has been reported to be related to neuronal function [15]. Previous genome-wide association studies and biomedical experiments reported that MPPED2 was associated with chronic kidney disease, and knockdown of this gene in zebrafish embryos suggested a role for it in renal function [16]. GUCY2E was ranked fourth and has been reported to function in the central nervous system and the retina [17, 18]. NAV2 (ranked 6th; p value = 1.78E-05) is a neuron navigator that induces neurite outgrowth in response to all-trans retinoic acid, and plays an essential role in the development of the cranial nerve and the regulation of blood pressure in humans [19]. ZBTB16 at 11q23.2 (ranked 15th; p value = 7.04E-05) also has been reported as an inhibitor of neurite outgrowth in the adult central nervous system [20].
Other genes in the top 15 identified pairs include TRIM5, TRIM6-TRIM34, and TRIM3 (smallest p value = 5.22E-05), which were highly correlated with TG levels in mice [21]. The quantile–quantile (Q-Q) plot of the gene–methylation tests showed no inflation of spurious relations for the experimental data (Fig. 1). Table 2 Top 15 gene–methylation pairs identified by the W-test in experimental data Q-Q plot of gene–methylation interaction using experimental data Discussion and conclusions There has been increasing evidence for the contribution of epigenetics in regulating gene expression implicated in diseases. Previous studies were mainly focused on experimentally studying gene–methylation interactions. In this study, we demonstrated that the W-test can be used as an effective method to identify the epistatic interactions between SNPs and CpG sites in the GAW20 simulated and experimental data sets. One common obstacle in the analysis of epistasis in the genome and epigenome is the large number of pairwise tests, the volume of which is determined by the size of the cis-regulatory region. Existing methods solve the challenge by using stage-wise and integrated analyses, in which the SNPs are separately selected and then the epistatic interactions with CpG sites are jointly evaluated in regression-based approaches [10, 11]. The stage-wise analysis may potentially miss the markers that have weak main effects but strong epistasis effects. Previous studies also made a linear assumption about the relationship between the epistatic pairs and a transformed form of the response variable, while having the advantage of covariate and population-structure control. Some nonparametric methods, such as the Mann-Whitney U-test, have been applied for the analysis of methylation data [22]. However, these nonparametric tests cannot handle the potential complicated genetic architectures such as sparse data or population stratification.
The W-test has the advantage of being model-free and does not assume any form of interaction effect. It also follows a chi-squared distribution in which the degrees of freedom are estimated from the working data by bootstrapped sampling. In this way, the W-test is able to correct potential bias of the probability distribution caused by complicated data structures. This method is very efficient such that it can be applied directly on SNP-CpG evaluations without prior filtering with the main effect. Application of this method on the experimental data from patients who had undergone treatment for managing TG levels via fenofibrate identified genes that played roles in renal function, the central nervous system, and retinal functions. The enriched signals found in neuronal-related genes suggest that the blood lipid levels could be related to the neurological dysfunction in the brain, which is the most cholesterol-rich organ in the body. By performing an epistatic evaluation between SNPs and CpG sites, we identified MPPED2, GUCY2E, NAV2, and ZBTB16 as associated with hyperlipidemia. Among these 4 genes, MPPED2 was the most significant; it plays a role in neural development, and genetic variations in this gene are reported to be related to migraines, a common disease of the neural system [23]. Furthermore, mutations of GUCY2E are reported to be related to retinal disorders [24, 25]. ZBTB16 encodes a protein that is highly expressed in the brain, and polymorphisms in this gene are used as a marker for attention deficit hyperactivity disorder, a neuropsychiatric condition [26]. It is intriguing to note that the enriched signals in neuronal and retinal genes are identified through epistasis evaluation between SNPs and CpG sites, but not through separate analysis of the main effect in those data sets.
This sheds light on the importance of integrated analysis of omics data: considering multiple facets or measurements of a common object may improve the chance of catching the underlying signal. Further studies on these threads are necessary to discover the underlying biological mechanism. Zhi D, Aslibekyan S, Irvin MR, Claas SA, Borecki IB, Ordovas JM, Absher DM, Arnett DK. SNPs located at CpG sites modulate genome-epigenome interaction. Epigenetics. 2013;8(8):802–6. Voisin S, Almén MS, Zheleznyakova GY, Lundberg L, Zarei S, Castillo S, Eriksson FE, Nilsson EK, Blüher M, Böttcher Y, et al. Many obesity-associated SNPs strongly associate with DNA methylation changes at proximal promoters and enhancers. Genome Med. 2015;7(1):103. Jaenisch R, Bird A. Epigenetic regulation of gene expression: how the genome integrates intrinsic and environmental signals. Nat Genet. 2003;33(Suppl):245–54. Osborn TC, Pires JC, Birchler JA, Auger DL, Chen ZJ, Lee HS, Comai L, Madlung A, Doerge RW, Colot V, et al. Understanding mechanisms of novel gene expression in polyploids. Trends Genet. 2003;19(3):141–7. Polansky JK, Kretschmer K, Freyer J, Floess S, Garbe A, Baron U, Olek S, Hamann A, von Boehmer H, Huehn J. DNA methylation controls Foxp3 gene expression. Eur J Immunol. 2008;38(6):1654–63. Gutierrez-Arcelus M, Lappalainen T, Montgomery SB, Buil A, Ongen H, Yurovsky A, Bryois J, Giger T, Romano L, Planchon A, et al. Passive and active DNA methylation and the interplay with genetic variation in gene regulation. Elife. 2013;2:e00523. Soto-Ramírez N, Arshad SH, Holloway JW, Zhang H, Schauberger E, Ewart S, Patil V, Karmaus W. The interaction of genetic variants and DNA methylation of the interleukin-4 receptor gene increase the risk of asthma at age 18 years. Clin Epigenetics. 2013;5(1):1. Bell AF, Carter CS, Steer CD, Golding J, Davis JM, Steffen AD, Rubin LH, Lillard TS, Gregory SP, Harris JC, et al.
Interaction between oxytocin receptor DNA methylation and genotype is associated with risk of postpartum depression in women without depression in pregnancy. Front Genet. 2015;6:243. Bani-Fatemi A, Howe AS, Matmari M, Koga A, Zai C, Strauss J, De Luca V. Interaction between methylation and CpG single-nucleotide polymorphisms in the HTR2A gene: association analysis with suicide attempt in schizophrenia. Neuropsychobiology. 2016;73(1):10–5. Hannon E, Dempster E, Viana J, Burrage J, Smith AR, Macdonald R, St Clair D, Mustard C, Breen G, Therman S, et al. An integrated genetic-epigenetic analysis of schizophrenia: evidence for co-localization of genetic associations and differential DNA methylation. Genome Biol. 2016;17(1):176. Li R, Kim D, Dudek SM, Ritchie MD. An integrated analysis of genome-wide DNA methylation and genetic variants underlying etoposide-induced cytotoxicity in European and African populations. In: European Conference on the Applications of Evolutionary Computation. Berlin: Springer; 2014. p. 928–38. Wang MH, Sun R, Guo J, Weng H, Lee J, Hu I, Sham PC, Zee BC. A fast and powerful W-test for pairwise epistasis testing. Nucleic Acids Res. 2016;44(12):e115. Irvin MR, Zhi D, Joehanes R, Mendelson M, Aslibekyan S, Claas SA, Thibeault KS, Patel N, Day K, Jones LW, et al. Epigenome-wide association study of fasting blood lipids in the genetics of lipid-lowering drugs and diet network study. Circulation. 2014;130(7):565–72. Balfour JA, McTavish D, Heel RC. Fenofibrate. Drugs. 1990;40(2):260–90. Pattaro C, Köttgen A, Teumer A, Garnaas M, Böger CA, Fuchsberger C, Olden M, Chen MH, Tin A, Taliun D, et al. Genome-wide association and functional follow-up reveals new loci for kidney function. PLoS Genet. 2012;8:e1002584. Witasp A, Ekström TJ, Schalling M, Lindholm B, Stenvinkel P, Nordfors L. How can genetics and epigenetics help the nephrologist improve the diagnosis and treatment of chronic kidney disease patients? Nephrol Dial Transplant. 2014;29(5):970–80. 
DuVal MG, Allison WT. Impacts of the retinal environment and photoreceptor type on functional regeneration. Neural Regen Res. 2017;12(3):376. Boye SL, Peterson JJ, Choudhury S, Min SH, Ruan Q, McCullough KT, Zhang Z, Olshevskaya EV, Peshenko IV, Hauswirth WW, et al. Gene therapy fully restores vision to the all-cone Nrl(−/−) Gucy2e(−/−) mouse model of Leber congenital amaurosis-1. Hum Gene Ther. 2015;26(9):575–92. Marzinke MA, Mavencamp T, Duratinsky J, Clagett-Dame M. 14-3-3ε and NAV2 interact to regulate neurite outgrowth and axon elongation. Arch Biochem Biophys. 2013;540(1):94–100. Simpson MT, Venkatesh I, Callif BL, Thiel LK, Coley DM, Winsor KN, Wang Z, Kramer AA, Lerch JK, Blackmore MG. The tumor suppressor HHEX inhibits axon growth when prematurely expressed in developing central nervous system neurons. Mol Cell Neurosci. 2015;68:272–83. Orozco LD, Cokus SJ, Ghazalpour A, Ingram-Drake L, Wang S, van Nas A, Che N, Araujo JA, Pellegrini M, Lusis AJ. Copy number variation influences gene expression and metabolic traits in mice. Hum Mol Genet. 2009;18(21):4118–29. Fleischer T, Edvardsen H, Solvang HK, Daviaud C, Naume B, Børresen-Dale AL, Kristensen VN, Tost J. Integrated analysis of high-resolution DNA methylation profiles, gene expression, germline genotypes and clinical end points in breast cancer patients. Int J Cancer. 2014;134(11):2615–25. Gormley P, Anttila V, Winsvold BS, Palta P, Esko T, Pers TH, Farh K-H, Cuenca-Leon E, Muona M, Furlotte NA. Meta-analysis of 375,000 individuals identifies 38 susceptibility loci for migraine. Nat Genet. 2016;48(8):856–66. Semple-Rowland SL, Lee NR, Van Hooser JP, Palczewski K, Baehr W. A null mutation in the photoreceptor guanylate cyclase gene causes the retinal degeneration chicken phenotype. Proc Natl Acad Sci U S A. 1998;95(3):1271–6. Perrault I, Rozet JM, Calvas P, Gerber S, Camuzat A, Dollfus H, Châtelin S, Souied E, Ghazi I, Leowski C. 
Retinal-specific guanylate cyclase gene mutations in Leber's congenital amaurosis. Nat Genet. 1996;14(4):461–4. Zayats T, Athanasiu L, Sonderby I, Djurovic S, Westlye LT, Tamnes CK, Fladby T, Aase H, Zeiner P, Reichborn-Kjennerud T. Genome-wide analysis of attention deficit hyperactivity disorder in Norway. PLoS One. 2015;10(4):e0122501. We would like to thank GAW20 for providing the epigenome data, and both reviewers for their insightful comments. Publication of this article was supported by NIH R01 GM031575. This work is supported by the National Science Foundation of China (81473035, 31401124 to MHW). The data that support the findings of this study are available from the Genetic Analysis Workshop (GAW), but restrictions apply to the availability of these data, which were used under license for the current study. Qualified researchers may request these data directly from GAW. About this supplement This article has been published as part of BMC Proceedings Volume 12 Supplement 9, 2018: Genetic Analysis Workshop 20: envisioning the future of statistical genetics by exploring methods for epigenetic and pharmacogenomic data. The full contents of the supplement are available online at https://bmcproc.biomedcentral.com/articles/supplements/volume-12-supplement-9. Rui Sun and Haoyi Weng contributed equally to this work. Division of Biostatistics, Centre for Clinical Research and Biostatistics, JC School of Public Health and Primary Care, the Chinese University of Hong Kong, Shatin, N.T., Hong Kong, Hong Kong, Special Administrative Region of China Rui Sun , Haoyi Weng , Ruoting Men , Xiaoxuan Xia , Ka Chun Chong , Benny Chung-Ying Zee & Maggie Haitian Wang The Chinese University of Hong Kong Shenzhen Research Institute, Shenzhen, China Department of Anaesthesia and Intensive Care, the Chinese University of Hong Kong, Shatin, N.T., Hong Kong, Hong Kong, Special Administrative Region of China William K. K. 
Wu. RS designed the study and performed the analysis. RS, HW, RM, and MHW wrote the manuscript. MHW contributed to the study design. XXX assisted in the data analysis. BZ coordinated and approved the study. All authors read and approved the final manuscript. Correspondence to Maggie Haitian Wang.
Homeomorphisms of $B^k \times T^n$ by Terry Lawson An elementary proof is given that a homeomorphism of ${B^k} \times {T^n} \operatorname{rel} \partial$ that is homotopic to the identity is pseudoisotopic to the identity for all $k, n$. The key step is to show that each homeomorphism is pseudoisotopic to any standard finite cover. MSC: Primary 57D60; Secondary 57D80
System Of Differential Equations Examples The notes begin with a study of well-posedness of initial value problems for first-order differential equations and systems of such equations. For example: the equation 2x − 16 = −10 has a solution x = 3, as 2·3 − 16 = 6 − 16 = −10. A differential equation which does not depend on the independent variable, say x, is known as an autonomous differential equation. An integrating factor, I(x), is found for the linear differential equation (1 + x²) dy/dx + xy = 0, and the equation is rewritten as d/dx (I(x) y) = 0. Introduction. However, since we are beginners, we will mainly limit ourselves to 2×2 systems. The negative eigenenergies of the Hamiltonian are sought as a solution, because these represent the bound states of the atom. SYSTEMS OF ORDINARY DIFFERENTIAL EQUATIONS Example 4. It is therefore important to learn the theory of ordinary differential equations, an important tool for mathematical modeling and a basic language of science. As described above, we look for a solution to (2.3). The ultimate test is this: does it satisfy the equation? Partial Differential Equations For Scientists And Engineers This book list is for those who are looking to read and enjoy Partial Differential Equations For Scientists And Engineers; you can read or download PDF/ePub books, and don't forget to give credit to the trailblazing authors. More Examples Here are more examples of how to solve systems of equations in Algebra Calculator. Solve the following system of non-linear first-order Lotka–Volterra equations with initial conditions x0 = 10, y0 = 5. In partial differential equations, the unknown functions may depend on more than one variable. Reduce Differential Order of DAE System.
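The integrating-factor example above, (1 + x²) dy/dx + xy = 0, works out as follows: in standard form y′ + [x/(1 + x²)]y = 0, the integrating factor is I(x) = exp(∫ x/(1 + x²) dx) = √(1 + x²), giving y = C/√(1 + x²). A quick numerical residual check (a sketch, not from the source):

```python
import math

def I(x):
    """Integrating factor: exp of the integral of x/(1+x^2), i.e. sqrt(1+x^2)."""
    return math.sqrt(1 + x*x)

def y(x, C=1.0):
    """Resulting solution y = C / sqrt(1+x^2)."""
    return C / math.sqrt(1 + x*x)

def residual(x, h=1e-6):
    """Check (1+x^2) y' + x y ~ 0 using a central difference for y'."""
    dy = (y(x + h) - y(x - h)) / (2 * h)
    return (1 + x*x) * dy + x * y(x)
```

The residual should vanish (up to finite-difference error) at any x, which is "the ultimate test" the text describes.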
Partial Differential Equations For Scientists And Engineers. In the past 30 years, however, macroeconomics has seen. of a System of ODEs. Systems of Differential Equations Matrix Methods Characteristic Equation Cayley-Hamilton - Cayley-Hamilton Theorem - An Example - The Cayley-Hamilton-Ziebur Method for u′ = Au - A Working Rule for Solving u′ = Au - Solving 2×2 u′ = Au - Finding d1 and d2 - A Matrix Method for Finding d1 and d2 - Other Representations of the. In addition, we investigate our model in more depth. Bus Suspension System: An Example to Show How to Reduce Coupled Differential Equations to a Set of First-Order Differential Equations. Example 13: System of non-linear first-order differential equations. Differential equations are a special type of integration problem. such equations as an equivalent system of first-order differential equations in terms of a vector y and its first derivative. Differential Equations Calculator. From the digital control schematic, we can see that a difference equation shows the relationship between an input signal e(k) and an output signal u(k) at discrete intervals of time, where k represents the index of the sample. A stochastic differential equation (SDE) is an equation in which the unknown quantity is a stochastic process and the equation involves some known stochastic processes, for example, the Wiener process in the case of diffusion equations. The Summer Program will consider the consequences of overdeterminacy and partial differential equations of finite type. 1 by taking h = 0. A differential operator is an operator defined as a function of the differentiation operator. The ideas rely on computing the eigenvalues and eigenvectors of the coefficient matrix. 1 (Modelling with differential equations). The differential equation is linear. x = location along the beam (in), E = Young's modulus of elasticity of the beam (psi), I = second moment of area (in⁴), q = uniform loading intensity (lb/in).
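The Lotka–Volterra exercise mentioned earlier gives only the initial values x0 = 10, y0 = 5, so the rate constants a, b, c, d below are illustrative placeholders, not values from the source. The sketch integrates the predator–prey system x′ = ax − bxy, y′ = −cy + dxy with a classical fourth-order Runge–Kutta step:

```python
def lotka_volterra_rk4(x0=10.0, y0=5.0, a=1.0, b=0.1, c=1.5, d=0.075,
                       h=0.01, steps=1000):
    """RK4 integration of x' = a*x - b*x*y, y' = -c*y + d*x*y.
    x0, y0 come from the text; a, b, c, d are assumed placeholders."""
    def f(x, y):
        return a*x - b*x*y, -c*y + d*x*y
    x, y = x0, y0
    for _ in range(steps):
        k1 = f(x, y)
        k2 = f(x + h/2*k1[0], y + h/2*k1[1])
        k3 = f(x + h/2*k2[0], y + h/2*k2[1])
        k4 = f(x + h*k3[0], y + h*k3[1])
        x += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        y += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return x, y
```

For these parameters the trajectory orbits the equilibrium (c/d, a/b) = (20, 10), and both populations stay positive along the way.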
We include a review of fundamental concepts, a description of elementary numerical methods, and the concepts of convergence and order for stochastic differential equation solvers. To use TEMATH's System of Differential Equations Solver, select System Diff Eq from the Graph menu. Consider, for example, the system of linear differential equations. 1 A simple example system Here's a simple example of a system of differential equations: solve the coupled equations dy1/dt = −2y1 + y2, dy2/dt = y1 − 2y2 (1) for y1(t) and y2(t), given some initial values y1(0) and y2(0). 4), we should only use equation (and no other environment) to produce a single equation. Pick one of our Differential Equations practice tests now and begin! Since the functions f(x, y) and g(x, y) do not depend on the variable t, changes in the initial value t0 only have the effect of horizontally shifting the graphs. Example (initial value problem). This is an example of an initial value problem, where the initial position and the initial velocity are used to determine the solution. Sketch Question: Give an example of a system of differential equations for which (t, 1) is a solution. Context: System of 2×2 homogeneous differential equations: x' = Ax (A is a 2×2 matrix with real elements). 5.2 Equilibria of first-order equations. Introduction and Motivation; Second Order Equations and Systems; Euler's Method for Systems; Qualitative Analysis; Linear Systems. 5.3 A nonlinear pendulum. In mathematics, an ordinary differential equation (ODE) is a differential equation containing one or more functions of one independent variable and the derivatives of those functions. Once you represent the equation in this way, you can code it as an ODE M-file. Solve a System of Differential Equations Solve a system of several ordinary differential equations in several variables by using the dsolve function, with or without initial conditions. They can be divided into several types.
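The coupled example system (1) above, dy1/dt = −2y1 + y2, dy2/dt = y1 − 2y2, can be solved by the eigenvalue method: the coefficient matrix A = [[−2, 1], [1, −2]] has eigenpairs λ1 = −1 with v1 = (1, 1) and λ2 = −3 with v2 = (1, −1) (worked by hand here), so the general solution is y(t) = c1·e^(−t)·v1 + c2·e^(−3t)·v2. A minimal check that the eigenpairs and the closed form are consistent:

```python
import math

A = [[-2.0, 1.0], [1.0, -2.0]]   # coefficient matrix of system (1)

def matvec(M, v):
    """2x2 matrix-vector product."""
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

def y(t, c1=1.0, c2=1.0):
    """General solution y(t) = c1*e^{-t}*(1,1) + c2*e^{-3t}*(1,-1)."""
    e1, e3 = math.exp(-t), math.exp(-3.0*t)
    return [c1*e1 + c2*e3, c1*e1 - c2*e3]
```

One can confirm A·v1 = −v1 and A·v2 = −3·v2, and that a numerical derivative of y(t) matches A·y(t).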
DSolve can solve ordinary differential equations (ODEs), partial differential equations (PDEs), differential algebraic equations (DAEs), delay differential equations (DDEs), integral equations, integro-differential equations, and hybrid differential equations. Solve the system of ODEs. 4), we should only use equation (and no other environment) to produce a single equation. The function desolve solves systems of linear ordinary differential equations using the Laplace transform. In the above six examples eqn 6. The particular solution functions x(t) and y(t) to the system of differential equations satisfying the given initial values will be graphed in blue (for x(t)) and green (for y(t)). Context: System of 2×2 homogeneous differential equations: x' = Ax (A is a 2×2 matrix with real elements). Differential Equations Calculator. First, some may ask why we would care that we can convert a 3rd-order or higher ODE into a system of equations? Well, there are quite a few reasons. Solve a System of Differential Equations Solve a system of several ordinary differential equations in several variables by using the dsolve function, with or without initial conditions. differentiable" N×N autonomous system of differential equations. The RLC Circuit The RLC circuit is the electrical circuit consisting of a resistor of resistance R, a coil of inductance L, a capacitor of capacitance C, and a voltage source arranged in series. Analysis of a System of Linear Delay Differential Equations A new analytic approach to obtain the complete solution for systems of delay differential equations (DDE) based on the concept of Lambert functions is presented. When this is not the case, the system is commonly known as being differential algebraic (this may be subject to debate, since the non-autonomous case can have special features). A differential story — Peter D Lax wins the 2005 Abel Prize for his work on differential equations.
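One reason the conversion to first-order form matters is that most system solvers only accept first-order form. The reduction itself is mechanical: y‴ = f(x, y, y′, y″) becomes u1′ = u2, u2′ = u3, u3′ = f(x, u1, u2, u3) with u = (y, y′, y″). A sketch (the right-hand side x − y used in the example is an illustrative choice, not from the source):

```python
def to_first_order(f):
    """Turn y''' = f(x, y, y', y'') into a first-order system u' = F(x, u)
    with u = (y, y', y''); F returns (u2, u3, f(x, u1, u2, u3))."""
    def F(x, u):
        u1, u2, u3 = u
        return [u2, u3, f(x, u1, u2, u3)]
    return F

# example: the third-order equation y''' = x - y as a first-order system
F = to_first_order(lambda x, y, yp, ypp: x - y)
```

The resulting F can be handed to any first-order system integrator.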
Differential Equations and Separation of Variables A differential equation is basically any equation that has a. Parabolic equations: exemplified by solutions of the diffusion equation. This might introduce extra solutions. 3, the initial condition y0 = 5 and the following differential equation. Substitution method. concentration of species A) with respect to an independent variable (e. This is a constant time factor so it's not the biggest deal, but I feel that we can improve some applications by reducing common latency here. 5.1 Matrices and Linear Systems. Remember also that the derivative term y'(t) describes the rate of change in y(t). In particular, phase portraits for such systems can be classified according to the types of eigenvalues which appear (the sign of the real part, zero or nonzero imaginary parts) and the dimensions of the generalized eigenspaces. Consider the system. pdf - Example 1: Solve the following third-order differential equation in d³y/dx³, d²y/dx², and dy/dx with initial conditions y(1) = 4, y′(1) = 3, y″(1) = 2, using the rev_ivp. In general, the number of equations will be equal to the number of dependent variables i. A couple of examples would be Example 1: dx1 dt = 0. I have solved such a system once before, but that was using an adiabatic approximation, e. ME 563 Mechanical Vibrations Fall 2010 Introduction to Mechanical Vibrations. Under reasonable conditions on φ, such an equation has a solution and the corresponding initial value problem has a unique solution. We know dy/dx means the slope of y with respect to x.
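Euler's method for systems, listed in the section outline above, applies the scalar update u_{n+1} = u_n + h·f(t_n, u_n) componentwise. A minimal sketch (the test problem u′ = u below is chosen for illustration, not taken from the source):

```python
def euler_system(f, t0, u0, h, steps):
    """Forward Euler for a first-order system u' = f(t, u):
    u_{n+1} = u_n + h * f(t_n, u_n), applied componentwise."""
    t, u = t0, list(u0)
    for _ in range(steps):
        du = f(t, u)
        u = [ui + h * dui for ui, dui in zip(u, du)]
        t += h
    return t, u
```

Applying it to u′ = u with u(0) = 1, h = 0.001, and 1000 steps gives u(1) ≈ 2.717, close to e, with the O(h) error characteristic of the method.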
MTH 244 - Matrix Method for ODE MTH 244 - Additional Information for Chapter 3, Section 1 (Merino) and Section 3 (Dobrushkin) - March 2003. 1 Linear Systems of Differential Equations of Order One. 5.2 Crystal growth: a case study. Systems of Equations: Graphical Method In these lessons, we will learn how to solve systems of equations or simultaneous equations by graphing. The following are examples of ordinary differential equations: In these, y stands for the function, and either t or x is the independent variable. differential equation (1). Cramer's rule says that if the determinant of a coefficient matrix |A| is not 0, then the solutions to a system of linear equations can. System of differential equations, ex1: differential operator notation, system of linear differential equations, solve a system of differential equations by elimination. Differential equations involve the derivatives of a function or a set of functions. The differential equation is linear. A calculator for solving differential equations. Eqn 6.6 is non-homogeneous whereas the first five equations are homogeneous. discusses two-point boundary value problems: one-dimensional systems of differential equations in which the solution is a function of a single variable and the value of the solution is known at two points. 6 Linearization of Nonlinear Systems In this section we show how to perform linearization of systems described by nonlinear differential equations. Example 4: a mixture problem. A tank contains 50 gallons of a solution. But if the homogeneous part of the solution has the same root, you would try multiplying it by t to get a linearly independent set. In this case, the Microsoft Excel 5.
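Cramer's rule, as quoted above, gives the solutions when det(A) ≠ 0: x_i = det(A_i)/det(A), where A_i is A with column i replaced by the right-hand side b. A minimal sketch for the 2×2 case:

```python
def cramer_2x2(A, b):
    """Cramer's rule for a 2x2 system Ax = b: x_i = det(A_i)/det(A),
    where A_i is A with column i replaced by b; requires det(A) != 0."""
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    if det == 0:
        raise ValueError("det(A) = 0: Cramer's rule does not apply")
    x1 = (b[0]*A[1][1] - A[0][1]*b[1]) / det   # replace column 1 by b
    x2 = (A[0][0]*b[1] - b[0]*A[1][0]) / det   # replace column 2 by b
    return x1, x2
```

For example, 2x + y = 5, x + 3y = 10 has det(A) = 5 and solution (1, 3); a singular matrix is rejected, matching the "determinant not 0" condition in the text.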
I give only one example, which shows how the trigonometric functions may emerge in the solution of a system of two simultaneous linear equations, which, as we saw above, is equivalent to a second-order equation. Desjardins and Rémi Vaillancourt, Notes for the course MAT 2384 3X, Spring 2011, Département de mathématiques et de statistique / Department of Mathematics and Statistics, Université d'Ottawa / University of Ottawa, Ottawa, ON, Canada K1N 6N5. As in the above example, the solution of a system of linear equations can be a single ordered pair. Let's see some examples of first-order, first-degree DEs. Consider, for example, a system of linear differential equations whose solution is given by the equations x1(t) = c1 + (c2 − 2c3) e^(−t/3) cos(t/6) + (2c2 + c3) e^(−t/3) sin(t/6) and x2(t) = (1/2) c1 + (−2c2 − c3) e^(−t/3) cos(t/6) + (c2 − 2c3) e^(−t/3) sin(t/6). The equations are said to be "coupled" if output variables (e.g., position or voltage) appear in more than one equation. Solve a system of several ordinary differential equations in several variables by using the dsolve function, with or without initial conditions. A differential equation is an equation with a function and one or more of its derivatives; for example, an equation with the function y and its derivative dy/dx. The laws of the natural and physical world are usually written and modeled in the form of differential equations. Capable of finding both exact solutions and numerical approximations, Maple can solve ordinary differential equations (ODEs), boundary value problems (BVPs), and even differential algebraic equations (DAEs).
Solving Systems of Differential Equations. In this course I will mainly focus on, but not be limited to, two important classes of mathematical models built from ordinary differential equations, such as population dynamics in biology. A system of differential equations is a set of two or more equations where there exists coupling between the equations. Ordinary differential equation examples by Duane Q. Nykamp. Using Mathcad to Solve Systems of Differential Equations (Charles Nippert): systems of differential equations are quite common in dynamic simulations. It is the same concept when solving differential equations: find the general solution first, then substitute the given numbers to find particular solutions. I use the MATLAB commands ode23 and ode45 for solving systems of differential equations. We emphasize that the system of equations is a model of the physical behavior of the objects of the simulation. We just saw that there is a general method to solve any linear first-order ODE. That is the main idea behind solving this system using the model in Figure 1. This example shows you how to convert a second-order differential equation into a system of differential equations that can be solved using the numerical solver ode45 of MATLAB. The differential equation with input f(t) and output y(t) can represent many different systems. Calculus is required, as specialized advanced topics not usually found in elementary differential equations courses are included, such as exploring the world of discrete dynamical systems and describing chaotic behavior. The differential equations we consider in most of the book are of the form Y'(t) = f(t, Y(t)), where Y(t) is an unknown function that is being sought. A differential equation is an equation that relates a function with one or more of its derivatives.
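The conversion described above, a second-order equation rewritten as a first-order system, can be sketched in Python with SciPy's solve_ivp, the analogue of MATLAB's ode45; the damped-oscillator coefficients below are assumed for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Rewrite the assumed second-order equation y'' + 2y' + 5y = 0 as the
# first-order system u1' = u2, u2' = -5*u1 - 2*u2, then integrate it.
def rhs(t, u):
    y, yp = u
    return [yp, -5.0 * y - 2.0 * yp]

sol = solve_ivp(rhs, (0.0, 1.0), [1.0, 0.0], rtol=1e-8, atol=1e-10)

# For y(0) = 1, y'(0) = 0 the exact solution is e^(-t)(cos 2t + 0.5 sin 2t).
t_end = sol.t[-1]
exact = np.exp(-t_end) * (np.cos(2 * t_end) + 0.5 * np.sin(2 * t_end))
```

The numerical endpoint agrees with the closed-form solution to within the requested tolerances.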
The ultimate test is this: does it satisfy the equation? $$\frac{dy(t)}{dt} = -k \; y(t)$$ The Python code first imports the needed Numpy, Scipy, and Matplotlib packages. Example: t y'' + 4y' = t². The standard form is y'' + (4/t) y' = t. The recycled cascade is modeled by the non-triangular system x1' = −(1/6)x1 + (1/6)x3, x2' = (1/6)x1 − (1/3)x2, x3' = (1/3)x2 − (1/6)x3. The examples ddex1, ddex2, ddex3, ddex4, and ddex5 form a mini tutorial on using these solvers. I made up the third equation to be able to get a solution. As a consequence, the analysis of nonlinear systems of differential equations is much more accessible than it once was. To solve a single differential equation, see Solve Differential Equation. In other words, p(m) is obtained from p(D) by replacing D by m. The solutions of such systems require much linear algebra (Math 220). Modeling a first-order differential equation: let us understand how to simulate an ordinary differential equation (a continuous-time system) in Simulink through the following example from chemical engineering: a mass balance for a chemical in a completely mixed reactor can be mathematically modeled as a first-order differential equation. The effects of $\xi, \epsilon, \theta_1$ and $\theta_2$ rates on the devices that moved from latent to recovered nodes are investigated. For this linear differential equation system, the origin is a stable node because any trajectory proceeds to the origin over time. A differential equation which does not depend on the independent variable, say x, is known as an autonomous differential equation. Simultaneous equations can help us solve many real-world problems.
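A minimal sketch of the decay model dy/dt = −k y described above, with assumed values for k and y(0); the Matplotlib plotting step is omitted here.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Exponential decay dy/dt = -k*y; k and y0 are assumed values.
k = 0.3
y0 = 5.0

sol = solve_ivp(lambda t, y: -k * y, (0.0, 10.0), [y0],
                t_eval=np.linspace(0.0, 10.0, 11), rtol=1e-8, atol=1e-10)

# Closed-form solution y(t) = y0 * exp(-k t) for comparison.
analytic = y0 * np.exp(-k * sol.t)
```

The ultimate test mentioned above is passed: the numerical values track the analytic solution.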
A system of two first-order linear differential equations is two-dimensional because the state space of the solutions is a two-dimensional affine vector space. The ddex1 example shows how to solve a system of delay differential equations. The functional dependence of x_1, ..., x_n on an independent variable, for instance x, must be explicitly indicated in the variables and their derivatives. The particular solution functions x(t) and y(t) to the system of differential equations satisfying the given initial values will be graphed in blue (for x(t)) and green (for y(t)). Example 1 (modelling with differential equations): a basic example showing how to solve systems of differential equations. Here, you can see both approaches to solving differential equations. The theory of systems of linear differential equations resembles the theory of higher-order differential equations. You are welcome; you have two systems of ODEs with three unknown quantities (I1, I2, and v). We show by a number of examples how they may be solved. This article assumes that the reader understands basic calculus, single differential equations, and linear algebra. It's possible that a differential equation has no solutions. Ordinary differential equations are, that is, those differential equations that have only one independent variable. An example of modelling a real-world problem using differential equations is the determination of the velocity of a ball falling through the air, considering only gravity and air resistance. Here we will consider a few variations on this classic. We will also learn about systems of linear differential equations, including the very important normal modes problem.
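The recycled brine-tank cascade shown earlier is linear, x' = A x, and each column of its coefficient matrix sums to zero, so the total amount of salt is conserved; a sketch with assumed initial amounts:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Recycled cascade x1' = -x1/6 + x3/6, x2' = x1/6 - x2/3, x3' = x2/3 - x3/6.
A = np.array([[-1/6, 0.0, 1/6],
              [1/6, -1/3, 0.0],
              [0.0, 1/3, -1/6]])

x0 = np.array([40.0, 60.0, 0.0])  # pounds of salt per tank (assumed values)
sol = solve_ivp(lambda t, x: A @ x, (0.0, 30.0), x0, rtol=1e-9, atol=1e-12)

# Because the columns of A sum to zero, x1 + x2 + x3 stays constant.
total = sol.y.sum(axis=0)
```

Salt only moves between tanks; none enters or leaves the closed loop.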
An application: linear systems of differential equations. We use the eigenvalues and diagonalization of the coefficient matrix of a linear system of differential equations to solve it. Introduction: differential equations are a convenient way to express mathematically a change of a dependent variable (e.g., the concentration of species A) with respect to an independent variable (e.g., time). Solving second-order ordinary differential equations is much more complex than solving first-order ODEs. I am trying to find the equilibrium points by hand, but it seems like it is not possible without the help of a numerical method. Once you represent the equation in this way, you can code it as an ODE M-file. Find the particular solution given that y(0) = 3. Maple is the world leader when it comes to solving differential equations, finding closed-form solutions to problems no other system can handle. Mathcad Professional includes a variety of additional, more specialized functions for solving differential equations. Solving differential equations using Simulink: set the Gain value to 4. For example, the equation 2x − 16 = −10 has the solution x = 3, since 2·3 − 16 = 6 − 16 = −10. This second edition of Noonburg's best-selling textbook includes two new chapters on partial differential equations, making the book usable for a two-semester sequence in differential equations. Basically, one simply replaces the higher-order terms with new variables and includes the equations that define the new variables to form a set of first-order simultaneous differential equations. This yields a system of difference equations x((m+1)T+) = g(x(mT+)). Here τ is the time constant; in this case we want to pass a and τ as parameters, to make it easy to change their values. We set a = 1, the initial condition x(0) = 1, and τ = 5.
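The eigenvalue and diagonalization recipe above can be sketched with NumPy: if A = V D V⁻¹, then x(t) = V e^(Dt) V⁻¹ x(0) solves x' = A x. The matrix, initial condition, and time are assumed values, checked against SciPy's matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

# Solve x' = A x by diagonalizing the coefficient matrix (assumed example).
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
x0 = np.array([1.0, 0.0])
t = 0.7

lam, V = np.linalg.eig(A)                       # A = V diag(lam) V^{-1}
x_t = V @ np.diag(np.exp(lam * t)) @ np.linalg.inv(V) @ x0

x_ref = expm(A * t) @ x0                        # matrix-exponential check
```

For this symmetric matrix the eigenvalues are 3 and −1, and both routes give the same state at time t.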
In this example, I will show you the process of converting two higher-order linear differential equations into a single matrix equation. Elliptic equations: weak and strong minimum and maximum principles; Green's functions. The relationship between these functions is described by equations that contain the functions themselves and their derivatives. Systems of differential equations: find the general solution to the system x1'(t) = x1(t) − x2(t) + 3x3(t), x2'(t) = x1(t) + x2(t) − x3(t), x3'(t) = x1(t) − x2(t) + 3x3(t). First rewrite the system in matrix form, x' = Ax, where x = (x1(t), x2(t), x3(t))ᵀ and A = [[1, −1, 3], [1, 1, −1], [1, −1, 3]]. An integrating factor, I(x), is found for the linear differential equation (1 + x²) dy/dx + x y = 0, and the equation is rewritten as d/dx (I(x) y) = 0. Denoting this known solution by y1, substitute y = y1·v = xv into the given differential equation and solve for v. Ordinary Differential Equations (ODEs): there are many situations in science and engineering in which one encounters ordinary differential equations. Systems of Differential Equations Handout, Peyam Tabrizian, Friday, November 18th, 2011: this handout is meant to give you a couple more examples of all the techniques discussed in Chapter 9, to counterbalance all the dry theory and complicated applications in the differential equations book. Enjoy! Note: make sure to read this carefully. In our study of chaos, we will need to expand the definitions of linear and nonlinear to include differential equations. To solve a system with higher-order derivatives, you will first write a cascading system of simple first-order equations, then use them in your differential function. We'll start by attempting to solve a couple of very simple equations of such type.
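The integrating-factor example above, (1 + x²) dy/dx + x y = 0, can be worked symbolically with SymPy; the integrating factor is I(x) = sqrt(1 + x²), which gives y = C / sqrt(1 + x²).

```python
import sympy as sp

# Solve (1 + x^2) y' + x y = 0 symbolically and verify a candidate solution.
x = sp.symbols('x')
y = sp.Function('y')

ode = sp.Eq((1 + x**2) * y(x).diff(x) + x * y(x), 0)
sol = sp.dsolve(ode, y(x))  # general solution ~ C / sqrt(1 + x^2)

# Verify by direct substitution that y = 1/sqrt(1 + x^2) satisfies the ODE.
candidate = 1 / sp.sqrt(1 + x**2)
residual = sp.simplify((1 + x**2) * candidate.diff(x) + x * candidate)
```

The residual simplifies to zero, confirming the hand computation with the integrating factor.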
An example: dx1/dt = 2x1x2 + x2, dx2/dt = x1 − t²x2. For example, much can be said about equations of the form ẏ = φ(t, y), where φ is a function of the two variables t and y. Another example: x'(t) = x² + 1, y'(t) = x(y − 1). The first tank starts with 40 pounds of salt dissolved in it, and the second tank starts with 60 pounds of salt. The following graphic outlines the method of solution. McKinley, October 22, 2013: in these notes, which replace the material in your textbook, we will learn a modern view of analyzing systems of differential equations. Then, using the Sum component, these terms are added, or subtracted, and fed into the integrator. It is not possible to solve for three variables given only two equations. In most applications, the functions represent physical quantities and the derivatives represent their rates of change. Homogeneous linear differential equations produce exponential solutions. Find a solution of the differential equation from the previous example that satisfies the condition y(0) = 2. NDSolve solves a wide range of ordinary differential equations as well as many partial differential equations. Linear ordinary differential equations: if a differential equation can be written as a linear combination of the derivatives of y, then it is known as a linear ordinary differential equation. As a matter of fact, an explicit solution method does not exist for the general class of linear equations with variable coefficients. Such systems arise when a model involves two or more variables.
One can rewrite such equations as an equivalent system of first-order differential equations in terms of a vector y and its first derivative. Over the last hundred years, many techniques have been developed for the solution of ordinary differential equations and partial differential equations. It provides qualitative physical explanation of mathematical results. Since they feature homogeneous functions in one form or another, it is crucial that we first understand what homogeneous functions are. Laplace Transforms for Systems of Differential Equations, Bernd Schröder, Louisiana Tech University, College of Engineering and Science. Then, in the five sections that follow, we learn how to solve linear higher-order differential equations. These worked examples begin with two basic separable differential equations. After introducing each class of differential equations, we consider finite difference methods for the numerical solution of equations in the class. Typically a complex system will have several differential equations. Parameter Estimation for Differential Equations: A Generalized Smoothing Approach, J. O. Ramsay et al. When working with differential equations, MATLAB provides two different approaches: numerical and symbolic.
Solving ordinary differential equations on the GPU. Solve the differential equation for the spring, d²y/dt² = −(k/m) y, if the mass were displaced by a distance y0 and then released. Differential equations are a special type of integration problem. So the equation is called an ordinary differential equation (ODE); when the unknown function y depends on several independent variables r, s, t, etc., it is instead a partial differential equation. dsolve can't solve this system. In this section we will examine some of the underlying theory of linear DEs. The research was supported by Grant 320 from the Natural Sciences and Engineering Research Council. Here follows the continuation of a collection of examples from Calculus 4c-1, Systems of Differential Equations. Liberal use of examples and homework problems aids the student in the study of the topics presented and in applying them to numerous applications in the real scientific world. The contents of the tank are kept well mixed. Consider a "differentiable" N × N autonomous system of differential equations. This is a system of differential equations which describes the changing positions of n bodies with mass interacting with each other under the influence of gravity. Elementary Differential Equations with Boundary Value Problems is written for students in science, engineering, and mathematics who have completed calculus through partial differentiation. Solve a system of several ordinary differential equations in several variables by using the dsolve function, with or without initial conditions.
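The spring equation above, y'' = −(k/m) y with the mass released from rest at displacement y0, has the closed-form solution y(t) = y0 cos(ωt) with ω = sqrt(k/m); a sketch comparing it with a numerical solution, where k, m, and y0 are assumed values:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Undamped spring: y'' = -(k/m) y, y(0) = y0, y'(0) = 0 (released from rest).
k, m, y0 = 4.0, 1.0, 0.5            # assumed parameter values
omega = np.sqrt(k / m)

sol = solve_ivp(lambda t, u: [u[1], -(k / m) * u[0]],
                (0.0, 5.0), [y0, 0.0], rtol=1e-9, atol=1e-12)

# Closed-form solution for comparison: simple harmonic motion.
exact = y0 * np.cos(omega * sol.t)
```

The numerical trajectory oscillates with amplitude y0 and angular frequency ω, as the closed form predicts.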
Find the general solution for the differential equation dy + 7x dx = 0. Do the trajectories approach the origin or are they repelled from it? We can graph the system by plotting direction arrows. From the way these forms were constructed, it is clear that a three-dimensional surface in the seven-dimensional space with coordinates x, y, t, a, b, c, u which solves Pfaff's problem and can be parameterized by x, y, t corresponds to the graph of a solution to the system of differential equations, and hence to a solution of the wave equation. Clearly the trivial solution (x = 0 and y = 0) is a solution, which is called a node for this system. Differential equations have a remarkable ability to predict the world around us. That is, we can solve the equation for x separately from the equation for u. The theory is very deep, and so we will only be able to scratch the surface. This characterises these differential equations as so-called dynamical systems. Tools for solving systems of differential equations and viewing the results graphically are widely available.
If you are solving several similar systems of ordinary differential equations in matrix form, create your own solver for these systems, and then use it as a shortcut. Chapter 1, Differential and Difference Equations: in this chapter we give a brief introduction to PDEs. For generality, let us consider a partial differential equation of the form given in [Sneddon, 1957] in a two-dimensional domain. If the highest-order derivative present in a differential equation is the first derivative, the equation is a first-order differential equation. The equations of a system are dependent if ALL the solutions of one equation are also solutions of the other equation. The same equations describe a variety of mechanical and electrical systems. Modeling with ordinary differential equations (ODEs): simple examples of solving a system of ODEs. To run a fit, your system has to be written as a definition. There are many areas where differential equations are used as a model for the problem at hand. This is just an overview of the techniques; MATLAB provides a rich set of functions to work with differential equations. Homogeneous differential equations are of prime importance in physical applications of mathematics due to their simple structure and useful solutions. It is first-order since only the first derivative of x appears in the equation.
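The "create your own solver" suggestion above can be sketched as a small reusable helper for linear systems x' = A x; the diagonal test matrix is an assumed example.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Reusable shortcut for repeatedly solving linear systems x' = A x.
def solve_linear_system(A, x0, t_span, n=50):
    A = np.asarray(A, dtype=float)
    sol = solve_ivp(lambda t, x: A @ x, t_span, x0,
                    t_eval=np.linspace(*t_span, n), rtol=1e-9, atol=1e-12)
    return sol.t, sol.y

# Decoupled test system: x1' = -x1, x2' = -2*x2 (assumed example).
t, y = solve_linear_system([[-1.0, 0.0], [0.0, -2.0]], [1.0, 1.0], (0.0, 1.0))
```

For this diagonal matrix the components decay as e^(−t) and e^(−2t), which makes the helper easy to sanity-check.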
Here is a simple example of a real-world problem modeled by a differential equation involving a parameter (the constant rate H). Flexural vibration of beams and heat conduction are studied as examples of application. A second-order ordinary differential equation (ODE) integrated in Xcos: as you can see, both methods give the same results. We focus in particular on linear differential equations of second order with variable coefficients, although the set of examples is far from exhaustive. At the end of these lessons, we have a systems-of-equations calculator that can solve systems of equations graphically and algebraically. Now we have two differential equations for the two masses (components of the system), and we can combine the two equations into a system of simultaneous equations as shown below. In addition to analytic and numerical methods, graphic methods are also used for the approximate solution of differential equations. Nonlinear Differential Equations and the Beauty of Chaos. Examples of nonlinear equations: the simple harmonic oscillator m d²x(t)/dt² = −k x(t) is a linear ODE, while more complicated motion such as m d²x(t)/dt² = −k x(t)(1 − α x(t)) gives a nonlinear ODE. Other examples include weather patterns and the turbulent motion of fluids; most natural phenomena are essentially nonlinear.
In contrast, a set of m equations with l unknown functions is called a system of m equations. A simple example system: solve the coupled equations dy1/dt = −2y1 + y2, dy2/dt = y1 − 2y2 for y1(t) and y2(t), given some initial values y1(0) and y2(0). Example: solve the system of equations by the substitution method. The diagram represents the classical brine tank problem of Figure 1. The Navier-Stokes equations are a set of coupled, non-linear partial differential equations. A system (1) is called a strictly hyperbolic system if all roots of the characteristic equation are distinct for any non-zero vector. You can also plot slope and direction fields with interactive implementations of Euler and Runge-Kutta methods. Use * for multiplication: a^2 is a². Such systems occur as the general form of (systems of) differential equations for vector-valued functions x in one independent variable t. In this case, we speak of systems of differential equations. The equations of a system are independent if they do not share ALL solutions.
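The simple coupled pair above, dy1/dt = −2y1 + y2 and dy2/dt = y1 − 2y2, has eigenvalues −1 and −3 with modes (1, 1) and (1, −1); a sketch checking a numerical solution against the exact one for the assumed initial values y1(0) = 2, y2(0) = 0:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Coupled pair y1' = -2*y1 + y2, y2' = y1 - 2*y2. With y(0) = (2, 0),
# the initial vector splits as (1,1) + (1,-1), so the exact solution is
#   y1 = e^{-t} + e^{-3t},   y2 = e^{-t} - e^{-3t}.
def rhs(t, y):
    return [-2 * y[0] + y[1], y[0] - 2 * y[1]]

sol = solve_ivp(rhs, (0.0, 2.0), [2.0, 0.0], rtol=1e-9, atol=1e-12)

t = sol.t
exact = np.vstack([np.exp(-t) + np.exp(-3 * t),
                   np.exp(-t) - np.exp(-3 * t)])
```

Both components decay to zero, with the fast mode e^(−3t) dying off first.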
Count-abel even if not solve-able: the 2004 Abel Prize goes to Sir Michael Atiyah and Isadore Singer for their work on how to solve systems of equations. Mathematica 9 leverages the extensive numerical differential equation solving capabilities of Mathematica to provide functions that make working with parametric differential equations conceptually simple. This work by Nykamp is licensed under a Creative Commons Attribution-Noncommercial-ShareAlike 4.0 License. We solve the equation by introducing the characteristic equations dx/ds = a, dt/ds = 1, dz/ds = 0. In particular, MATLAB offers several solvers to handle ordinary differential equations of first order. A single differential equation of second or higher order can also be converted into a system of first-order differential equations.
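The characteristic equations quoted above (dx/ds = a, dt/ds = 1, dz/ds = 0) say that solutions of the advection equation u_t + a u_x = 0 are constant along the lines x − a t = const, so u(x, t) = f(x − a t); a sketch with an assumed initial profile:

```python
import numpy as np

# Method of characteristics for u_t + a*u_x = 0: u is constant along
# characteristics x - a*t = const, so u(x, t) = f(x - a*t).
a = 2.0
f = lambda x: np.exp(-x**2)   # assumed initial profile u(x, 0) = f(x)

def u(x, t):
    return f(x - a * t)

x = np.linspace(-5.0, 5.0, 201)
shifted = u(x, 1.5)  # the initial profile shifted right by a*t = 3
```

The solution at time t is just the initial profile translated by a·t, with no change of shape.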
Fast-decaying plant litter enhances soil carbon in temperate forests but not through microbial physiological traits

Matthew E. Craig, Kevin M. Geyer, Katilyn V. Beidler, Edward R. Brzostek, Serita D. Frey, A. Stuart Grandy, Chao Liang & Richard P.
Phillips

Nature Communications volume 13, Article number: 1229 (2022)

Ecosystem ecology

Conceptual and empirical advances in soil biogeochemistry have challenged long-held assumptions about the role of soil micro-organisms in soil organic carbon (SOC) dynamics; yet, rigorous tests of emerging concepts remain sparse. Recent hypotheses suggest that microbial necromass production links plant inputs to SOC accumulation, with high-quality (i.e., rapidly decomposing) plant litter promoting microbial carbon use efficiency, growth, and turnover leading to more mineral stabilization of necromass. We test this hypothesis experimentally and with observations across six eastern US forests, using stable isotopes to measure microbial traits and SOC dynamics. Here we show, in both studies, that microbial growth, efficiency, and turnover are negatively (not positively) related to mineral-associated SOC. In the experiment, stimulation of microbial growth by high-quality litter enhances SOC decomposition, offsetting the positive effect of litter quality on SOC stabilization. We suggest that microbial necromass production is not the primary driver of SOC persistence in temperate forests. Factors such as microbial necromass origin, alternative SOC formation pathways, priming effects, and soil abiotic properties can strongly decouple microbial growth, efficiency, and turnover from mineral-associated SOC. The conversion of plant inputs into stable soil organic carbon (SOC) represents a critical yet poorly understood process, with consequences for soil C storage, nutrient availability, net primary production, and ecosystem sensitivity to global change1,2,3,4. The largest and slowest-cycling SOC pool is largely composed of microbial products stabilized by associations with soil minerals5,6,7, suggesting that microbial production mediates the transfer of plant inputs into the mineral-associated SOC pool.
Microbial necromass (dead microbial cells and their degradates) accounts for a large portion of SOC in many systems8,9 and necromass production is controlled by three physiological traits: microbial growth rate (MGR), microbial carbon use efficiency (CUE)—the proportion of assimilated microbial C allocated to the production of new biomass—and microbial biomass turnover rate (MTR). Though these traits may be central to both SOC formation and decomposition10,11,12, the controls on microbial physiological traits and their consequences for SOC across environmental gradients are poorly elucidated, hindering predictions about how SOC pools will respond to environmental changes. Contemporary SOC theory predicts that microbial physiological traits mediate SOC stabilization, with greater microbial growth, efficiency, or turnover leading to greater mineral-associated SOC10 (hereafter the 'necromass stabilization hypothesis'; Fig. 1). Multiple isotope tracing studies support that high-quality, fast-decomposing plant inputs are more rapidly or efficiently transferred into mineral-associated SOC1,13,14,15,16,17,18, but see19. This effect is commonly attributed to the necromass stabilization hypothesis, assuming that higher input quality promotes faster or more efficient microbial growth10,20,21. The necromass stabilization hypothesis is represented implicitly in conventional first-order decay models22, and is increasingly represented in more mechanistic microbially explicit SOC models, which often treat microbial necromass as a primary source of mineral-associated SOC11,12,22,23,24,25,26,27,28. Yet, despite its importance in conceptual and mathematical models, this hypothesis has only limited empirical support (e.g., from artificial laboratory microcosms29 or single-site field observations30). 
Moreover, other mechanisms could lessen the importance of the necromass stabilization pathway—offsetting necromass buildup via enhanced SOC turnover (e.g., priming effects)31,32,33 or circumventing microbial physiological traits via non-necromass SOC formation pathways (e.g., direct sorption of plant compounds34,35 or microbial extracellular products36,37; Fig. 1a). Such mechanisms are rarely considered alongside the necromass stabilization pathway in empirical studies. Fig. 1: Conceptual model and map of study sites. a Conceptual model showing microbe-mediated mechanisms of mineral-associated soil organic carbon (SOC) formation and decay. Pathway 1 represents the necromass stabilization hypothesis and we note that different types of necromass may be differentially susceptible to mineral stabilization. Other numbers represent (2) mineral stabilization of plant inputs without assimilation by microbes, (3) mineral stabilization of microbial extracellular compounds, (4) stimulation of microbe-mediated mineral-associated SOC decay, (5) and the role of soil properties in governing mineral-associated SOC accumulation. b Map of study sites including Wabikon Lake Forest (WLF), Harvard Forest (HF), Lilly-Dickey Woods (LDW), Smithsonian Conservation Biology Institute (SCBI), Smithsonian Environmental Research Center (SERC), and Tyson Research Center (TRC). Here, we tested a hypothesis that is central to current SOC models—that microbial physiological traits link litter quality to SOC stabilization (H1; Fig. 1a). We used both experimental microcosms and a multi-site field study to ask whether microbial physiological traits (1) predict differences in mineral-associated SOC and (2) represent the key link between plant litter decomposition and SOC stabilization. We evaluated competing hypotheses (Fig. 
1a) that direct interactions between plant compounds and mineral surfaces could explain litter quality effects on mineral-associated SOC (H2), microbial extracellular production could decouple microbial necromass production from mineral-associated SOC formation (H3), and microbial growth and efficiency stimulation could promote decomposition and reduce net SOC gains from litter-enhanced SOC formation (H4). In the microcosm experiment, we incubated 16 different temperate tree leaf litters with isotopically distinct soil and measured microbial physiological traits and the flow of litter-derived C into the mineral-associated SOC pool. Given that the characteristics of the soil matrix (e.g., soil texture, pH, metal-oxides) may exert more control over mineral-associated SOC retention than the source or characteristics of the organic matter itself10,38,39 (H5), we also tested hypotheses about the relationship between plant litter, microbes, and edaphic factors (Fig. 1a) across six eastern US temperate forests (Fig. 1b). We quantified SOC pools (mineral-associated and necromass-derived) along with microbial physiological traits and other potential biotic and abiotic drivers of mineral-associated SOC. We find that alternative SOC formation pathways and priming effects act to decouple microbial physiological traits from SOC formation in temperate forests.

Microcosm experiment

We monitored litter decomposition by measuring the rate and isotopic signature of respiration during 185-day incubations of litter-soil mixtures. Litter treatments varied markedly in chemical traits and decomposition parameters (Table S1). A litter quality index, obtained via a principal components analysis (Fig. S1A), explained 47% of this variation and correlated highly with acid unhydrolyzable residue (AUR) content—an indicator of litter carbon quality40.
We used a substrate-nonspecific technique (18O incorporation into DNA) to quantify microbial physiological traits during early (15 days) and intermediate (100 days) stages of decomposition (Table S2). These time intervals correspond to the midpoint of parallel early and intermediate carbon budget microcosms (described below). Microbial growth, CUE, and turnover were highly correlated with one another, allowing us to collapse these traits into a single microbial physiological trait index explaining 90% of the variation (Fig. S1B). High-quality litters increased microbial growth, CUE, and turnover (Fig. 2a). This effect was especially pronounced during intermediate stages of decomposition, suggesting a depletion of the smaller pool of labile substrates in low-quality litters. Microbial physiological traits were better predicted by indicators of litter C quality than by litter nitrogen (N) content (Fig. S2), which is notable given that N availability and substrate stoichiometry are considered primary drivers of microbial physiological traits41,42. Nonetheless, the positive effect of litter quality on microbial physiological traits supports the common hypothesis that high-quality or fast-decomposing litters should promote rapid and efficient microbial growth10. Whereas this hypothesis has been previously evaluated indirectly by tracing microbial use of specific C compounds10,22, these substrate non-specific measurements provide direct evidence from microbial communities growing on natural leaf litter substrates.

Fig. 2: Microbial physiological traits and mineral-associated SOC (MA-SOC) formation in the microcosm experiment. a Linear relationship (±SE; n = 16) between the litter quality index (PC1 in Fig. S1A) and the microbial physiological trait index (PC1 in Fig.
S1B) after 15 days (Early stage: R2 = 0.14; P = 0.16) and 100 days (Intermediate stage: R2 = 0.38; P = 0.01) of decomposition, (b) lack of a relationship (P > 0.96) between the microbial physiological trait index and litter-derived MA-SOC after 30 (Early) and 185 days (Intermediate), and (c) path analysis showing the direct and indirect effects of the litter quality index (Litter quality) on the percentage of added litter C recovered in mineral-associated SOC (Litter-derived MA-SOC). Indirect effects of litter quality are mediated through the microbial physiological trait index. Numbers above and below paths represent standardized coefficients during early- and intermediate-stage decomposition, respectively, with significance levels indicated (*p < 0.1, **p < 0.05, and ***p < 0.01). Thickness and color of lines correspond to coefficient magnitude and direction, respectively. Total and indirect effects of litter quality on soil C formation are also summarized with standardized coefficients.

Contrary to the predictions of the necromass stabilization hypothesis, microbial physiological traits were not positively related to the amount of litter C recovered in the mineral-associated SOC pool during early (30 days) or intermediate (185 days) stage decomposition (Fig. 2b). This is surprising given that high-quality litters led to faster and more efficient mineral-associated SOC formation (Fig. S3), a common finding that is often attributed to the necromass stabilization hypothesis. Instead, path analysis revealed a negative relationship between microbial physiological traits and both the rate and efficiency of mineral-associated SOC formation (Fig. 2c; Fig. S4). This pattern was most pronounced in intermediate stages when the effects of litter quality on microbial physiological traits were strongest.
The overall relationship between litter quality and mineral-associated SOC formation was explained by a direct linkage rather than by an indirect effect—that is, it was not mediated through microbial physiological traits. In fact, the indirect microbial physiological trait effect counteracted and weakened the overall positive effect of litter quality on mineral-associated SOC formation in both early and, especially, intermediate stages of decomposition. These findings point toward a few explanations with important implications for conceptual and mathematical SOC models. The inability of microbial physiological traits to explain litter quality effects on mineral-associated SOC formation suggests that the production of microbial necromass is not a sufficient explanation for SOC stabilization in temperate systems. This result highlights the critical importance of non-necromass pathways of mineral-associated SOC formation. The direct linkage between litter quality and mineral-associated SOC formation in our path analysis could indicate direct sorption of plant-derived decomposition products14,31,43 (Fig. 1a). Indeed, high-quality litters in this experiment had more soluble material and decomposed faster, likely enhancing dissolved SOC44 and producing oxidized intermediates with a high sorption affinity35. These effects of litter quality on dissolved SOC are known to enhance mineral-associated SOC formation45. In addition, microbial extracellular products could be important (Fig. 1a), but are overlooked by an emphasis on microbial physiological traits36,46. Extracellular polymeric substances, stress compounds, and similar products account for a small proportion of microbial production47, but are likely to disproportionately affect mineral-associated SOC given their susceptibility to stabilization on mineral surfaces37. 
In direct contrast with predictions derived from the necromass hypothesis, litter quality stimulation of microbial physiological traits may also mediate priming effects (i.e., stimulation of SOC decay by fresh inputs) which act to reduce, rather than promote, mineral-associated SOC formation. Higher litter quality promoted faster and more efficient microbial growth, leading to increased microbial biomass (Fig. S5) which is likely to promote SOC decomposition via increased microbial activity48. Though mineral-associated SOC is often assumed to resist microbial decomposition, a large fraction of this pool is likely susceptible to microbe-mediated desorption2. Microbes can access mineral-associated organic matter as a source of N and microbial activity can promote desorption by altering redox states or diffusion gradients, or by producing extracellular enzymes49. Alternatively, newly produced necromass could be recycled as a source of N prior to mineral stabilization50. Thus, the negative relationship between microbial physiological traits and mineral-associated SOC formation in the path analysis indicates that decomposition of newly formed mineral-associated SOC or mineral-associated SOC precursors outweighed necromass-driven SOC formation. Though rarely assessed32,51, the balance of these two processes is likely to vary based on the strength of mineral-organic associations and magnitude of litter-induced priming effects. This balance should determine whether the necromass pathway is a dominant driver of mineral-associated SOC formation in a given system. We observed a negative effect of microbial physiological traits on soil-derived (i.e., pre-existing) mineral-associated SOC (Fig. 3a) and particulate SOC (Fig. S6), supporting the hypothesis that litter quality stimulation of microbial physiological traits led to priming effects. 
By the time intermediate-stage decomposition was reached, this led to a significant negative indirect effect of litter quality on soil-derived SOC. A litter-induced priming effect was also evident in cumulative soil-derived respiration data, but we note that this was better predicted by the second axis of the litter quality index—which reflected litter N content (Fig. 3b)—potentially indicating that SOC formation and decomposition were differentially affected by litter C quality vs. N availability. In summary, our microcosm experiment suggests that the positive effect of litter quality on mineral-associated SOC formation is attributable to the sorption of litter-derived or microbial extracellular products rather than microbial necromass. To the contrary, elevated microbial growth, efficiency, and turnover may drive the decomposition of new and old SOC.

Fig. 3: Soil-derived (i.e., pre-existing) carbon losses in the microcosm experiment. a Path analysis showing the direct and indirect effects of the litter quality index (Litter quality) on soil-derived mineral-associated SOC (MA-SOC). Indirect effects of litter quality are mediated through the microbial physiological trait index. Numbers above and below paths represent standardized coefficients during early- and intermediate-stage decomposition, respectively, with significance levels indicated (*p < 0.1, **p < 0.05, and ***p < 0.01). Thickness and color of lines correspond to coefficient magnitude and direction, respectively. Total and indirect effects of litter quality on soil C formation are also summarized with standardized coefficients. b Linear relationship (±SE; n = 16) between the litter nitrogen index—i.e., the second axis of the litter quality PCA which correlated negatively with litter C:N and AUR:N (Fig. S1A)—and cumulative soil-derived respiration (CO2-C) after 30 days (Early: R2 = 0.59; P < 0.01) and 185 days of decomposition (Intermediate: R2 = 0.32; P = 0.02).
To evaluate whether the mechanisms implied by our experimental results are relevant drivers of mineral-associated SOC in natural forests, we quantified microbial physiological traits along with mineral-associated and necromass-derived SOM pools across wide environmental gradients nested within six US temperate forests. These gradients were identified based on the mycorrhizal associations of dominant trees52,53,54 (see methods) and reflected variation in litter quality and other factors hypothesized to control microbial physiological traits (e.g., soil C:N, pH, and N availability; Table S3). As in the microcosm experiment, we derived a litter quality index and a microbial physiological trait index explaining 45% of the variation in litter chemical traits and 79% of the variation in microbial physiological traits, respectively (Fig. S1C, D). In contrast with the microcosm experiment, we detected no relationship between litter quality and microbial physiological traits in the field (Fig. S7), which is perhaps unsurprising given field variation in other factors expected to control litter decomposition and microbial functioning (e.g., soil nutrient availability). Microbial physiological traits were instead positively predicted by soil C:N (Fig. S7), meaning microbial growth, efficiency, and turnover tended to be greater in soils with more carbon per unit nitrogen. This indicates that our plots, as well as the soil used in our experiment (C:N = 12.0), fell below the stoichiometric boundary (i.e., threshold element ratio) between C vs. N limitation55, which is thought to range from a C:N of 20–25 for terrestrial microbial communities56. Indeed, bivariate plots suggest a hump-shaped relationship where microbial physiological traits no longer increase with soil C:N at a soil C:N of ~20 (Fig. S8), suggesting that N limitation might drive microbial physiological traits in soils with a higher C:N ratio. 
Patterns in mineral-associated SOC, however, were consistent with the microcosm experiment. Litter quality was positively associated with mineral-associated SOC, but this relationship was not explained by microbial physiological traits (Fig. 4a; Fig. S9), which were unrelated to total mineral-associated SOC (Fig. S9). Consistent with the microcosm experiment, we observed a negative relationship between microbial physiological traits and the proportion of soil C stored in mineral-associated SOC (Fig. 4a). This pattern may have been driven by alternative stabilization pathways or priming of mineral-associated SOC, as documented in the microcosm experiment, but our observations reveal additional mechanisms by which necromass production could be decoupled from mineral-associated SOC along natural gradients.

Fig. 4: Results from the field study. Linear mixed model coefficients relating the percentage of soil C stored in mineral-associated SOC (a) and total microbial necromass (b) to the litter quality index (Litter quality; PC1 in Fig. S1C), the microbial physiological trait index (Mic. phys. traits; PC1 in Fig. S1D), fine root biomass, soil pH, oxalate-extractable iron (Fe-ox), and soil clay content. Plot shows standard error (n = 54; inner bold lines) and 95% confidence intervals (outer lines). Coefficients were centered and standardized to show the relative importance of each predictor despite the different scales on which the variables were measured. Black points indicate p < 0.1.

Differences in necromass stabilization could weaken the relationship between necromass production and accumulation across field plots. We found no significant relationship between microbial physiological traits and the size of the soil microbial necromass pool obtained via soil amino sugar analyses of the field soils (Fig. 4b), suggesting that necromass production was not a sufficient predictor of necromass retention.
We observed only a weak positive relationship of total necromass with oxalate-extractable iron (Feox) and, curiously, a negative relationship with soil clay content, whereas the necromass stabilization hypothesis assumes that microbial necromass is highly susceptible to interactions with soil minerals. Recent work, however, suggests necromass retention may be more influenced by substrate chemistry than by mineral mechanisms in fungal-dominated forest topsoils57,58, potentially leading to the buildup of necromass in organic soils or particulate organic matter fractions59. After resolving necromass into fungal and bacterial pools, it is clear that fungal necromass drove the negative association with clay content—likely reflecting fungal dominance in coarse-textured soils60—while bacterial necromass was strongly associated with Feox (Fig. S10). This suggests, in agreement with previous work58,61,62,63,64, that bacterial cellular products were more prone to interactions with soil minerals and therefore a more important source of mineral-associated SOC. We suggest that the origin of microbial necromass influences its stabilization potential and therefore the extent to which necromass production should drive mineral-associated SOC formation (Fig. 1a). Variation in abiotic soil properties could also directly or indirectly override the effect of microbial physiological traits on mineral-associated SOC. Across our plots, clay content was strongly associated with the proportion of soil C stored in mineral-associated SOC, and Feox—an indicator of poorly crystalline Fe oxides—was the strongest predictor of total mineral-associated SOC (Fig. 4a; Fig. S7). In addition to these direct relationships, abiotic soil properties could indirectly decouple microbial physiological traits from mineral-associated SOC through their association with other SOM and biotic properties.
For example, while clay content was positively associated with mineral-associated SOC, clayey soils can have a lower root density65. This pattern could partially explain the negative relationships between mineral-associated SOC and root biomass or microbial physiological traits (Fig. 4a)—because roots can promote microbial activity—or the negative relationship between clay content and microbial necromass (Fig. 4b). Indeed, we found that greater fine root biomass was associated with higher microbial biomass (mixed model: P = 0.09), necromass (Fig. 4b), and a tendency toward higher microbial physiological traits (Fig. S7). Roots and root-associated microbes are likely to drive patterns in SOC dynamics. Rhizospheres are important zones of both SOC formation43 and decay66. Our finding that fine root biomass was associated with greater microbial activity and lower mineral-associated SOC could reflect root-driven SOC priming67. Yet our study targeted aboveground inputs and upper topsoil; a different role of roots may be apparent at greater depths where belowground inputs dominate. Indeed, root-derived carbon was recently found to account for a substantial and rapid accumulation of mineral-associated SOC in ingrowth cores deployed in these same plots68, indicating a role of living root or mycorrhizal inputs69. Measurements of in situ interactions among roots, microbes, and SOC would be valuable in further improving our understanding of pathways of SOC formation and decomposition. While we found that microbial growth, efficiency, and turnover are not adequate predictors of mineral-associated SOC formation in surface soils of temperate forests, necromass stabilization on mineral surfaces could still be an important mechanism of SOC storage. This mechanism may explain more variation in other systems (e.g., croplands or grasslands) or in deeper soils, where necromass accounts for a larger proportion of total SOC9. 
For example, agricultural soils tend to be more bacterially dominated9, potentially leading to stronger interactions between microbial necromass and soil minerals (as described above). This may lead to a stronger association between microbial physiological traits and mineral-associated SOC in cropped systems30. Future sampling campaigns are needed to understand how SOC formation pathways vary across different systems. Contemporary SOC concepts posit that microbial physiological traits are key mediators in the mineral stabilization of plant inputs, leading to the hypothesis that high-quality plant inputs promote mineral-associated SOC by enhancing microbial growth and turnover. We found that plant litter quality was positively associated with mineral-associated SOC in both a microcosm experiment and across six temperate forests. This pattern, however, was not explained by microbial physiological traits in either the experiment or the field study. Though litter quality was positively associated with microbial physiological traits in the microcosm experiment, field variation in microbial growth, efficiency, and turnover was better explained by soil C:N ratios. More importantly, microbial physiological traits were negatively related to both mineral-associated SOC formation and the proportion of soil C stored in mineral-associated SOC in the experiment and field study, respectively. Thus, while acknowledging that microbial necromass represents an important pool of SOC, we conclude that the necromass stabilization hypothesis is insufficient as a general explanation linking plant input quality to mineral-associated SOC formation. Our results highlight four factors which act to decouple microbial necromass production from mineral-associated SOC (Fig. 
1a): differences in the mineral stabilization potential of necromass produced by different microbial communities (1); alternative pathways linking plant inputs to mineral-associated SOC (i.e., direct stabilization of plant inputs or the stabilization of microbial extracellular products) (2 and 3); priming of newly formed or pre-existing mineral-associated SOC driven by higher litter quality and greater microbial activity (4); and the overriding effects of soil abiotic properties (5). The balance of these factors is likely context-dependent, but is critical to our predictive understanding of mineral-associated SOC dynamics and therefore atmosphere-soil C feedbacks. As models and experiments probe these alternative hypotheses, a broadened view of how microbes mediate mineral-associated SOC formation should ultimately lead to greater confidence in soil C projections.

Microcosm preparation and incubation

Leaf litters were collected from Lilly-Dickey Woods, a mature eastern US temperate broadleaf forest located in South-Central Indiana (39°14′N, 86°13′W) using litter baskets and surveys for freshly senesced litter as described in Craig et al.52. Of the 19 species collected in Craig et al. (2018), we selected litter from 16 tree species with the goal of maximizing variation in litter chemical traits (Table S1). Litters were air-dried and then homogenized and fragmented such that all litter fragments passed a 4000 µm, but not a 250 µm mesh. Whereas leaf litters had a distinctly C3 δ13C signature of −30.1 ± 1.5 (mean, standard deviation), we used a 13C-rich (δ13C = −12.6 ± 0.4) soil obtained from the A horizon of a 35-yr continuous corn field at the Purdue University Agronomy Center for Research and Education near West Lafayette, Indiana (40°4′N, 86°56′W). The soil is classified as Chalmers silty clay loam (a fine-silty, mixed, superactive, mesic Typic Endoaquoll).
Prior to use in the incubation, soils were sieved (2 mm) and remaining recognizable plant residues were thoroughly picked out. Soils were mixed with acid-washed sand—30% by mass—to facilitate litter mixing (see below) and to increase the soil volume for post-incubation processing. The resulting soil had a pH of 6.7 and a C:N ratio of 12.0. We constructed the experimental microcosms by mixing the 16 litter species with the 13C-enriched soil. Each litter treatment was replicated four times in four batches (i.e., 16 microcosms per species, 272 total microcosms including 16 soil-only controls). Two batches (C budget microcosms) were used to monitor CO2 efflux and to track litter-derived C into SOM pools, and two batches were used to quantify microbial biomass dynamics. Incubations were carried out in 50 mL centrifuge tubes modified with an O-ring to prevent leakage and a rubber septum to facilitate headspace sampling. To each microcosm, we added 5 g dry soil, adjusted moisture to 65% water-holding capacity, and pre-incubated for 24 h in the dark at 24 °C. Using a dissecting needle, 300 mg of leaf litter were carefully mixed into treatment microcosms and controls were similarly agitated. This corresponds to an average C addition rate of 27.1 ± 1.1 g C kg−1 dry soil among the 16 species. During incubation, microcosms were loosely capped to retain moisture while allowing gas exchange, and were maintained at 65% water-holding capacity by adding deionized water every week.

Carbon budget in microcosms

Respiration was quantified with an infrared gas analyzer (LiCOR 6262, Lincoln, NE, USA) coupled to a sample injection system. Our first measurement was taken about 12 h after litter addition (day 1) and subsequent measurements were taken on days 2, 4, 11, 19, and 30 for both batches and on days 46, 64, 79, 92, 109, 128, 149, and 185 for the second batch.
Prior to each measurement, microcosms were capped, flushed with CO2-free air, and incubated for 1–8 h depending on the expected efflux rate. Headspace was sampled with a gas-tight syringe and the CO2-C concentration was converted to a respiration rate (µg CO2-C day−1). Total cumulative CO2-C loss was derived from point measurements by numerical integration (i.e., the trapezoid method). To evaluate soil-derived CO2-C efflux, we measured δ13C in two gas samples per litter type or control on a ThermoFinnigan DELTA Plus XP isotope ratio mass spectrometer (IRMS) with a GasBench interface (Thermo Fisher Scientific, San Jose, CA). Isotopes were measured on days 1, 4, 11, 30, 64, 109, and 185. On each of these days, a two-source mixing model70 was applied to determine the fraction of total CO2-C derived from soil organic matter vs. litter: $$\frac{F^{l}(t)}{F(t)}=\frac{\delta F(t)-\delta F^{c}(t)}{\delta C_{l}-\delta F^{c}(t)}$$ where \(F^{l}(t)/F(t)\) is the fraction of total CO2-C efflux \(F(t)\) derived from litter \(F^{l}(t)\) at time \(t\), \(\delta F(t)\) is the δ13C of the CO2 respired by each litter-soil combination, \(\delta F^{c}(t)\) is the average δ13C of the CO2 respired by the control soil, and \(\delta C_{l}\) is the δ13C of each litter type. These data were used to calculate cumulative soil-derived C efflux via numerical integration and, for each litter type, average soil-derived C efflux was subtracted from total cumulative CO2-C loss to determine cumulative litter-derived CO2-C loss. Carbon budget microcosms were harvested on days 30 and 185 to track litter-derived C into mineral-associated SOC at an early and intermediate stage of decomposition. To do this, we used a size fractionation procedure71,72 modified to minimize the recovery of soluble leaf litter compounds or dissolved organic matter in the mineral-associated SOC fraction.
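The two-source partitioning and trapezoid integration described above can be sketched numerically. The study's analyses were run in R; the Python below is an illustrative translation, and all flux values and δ13C numbers are hypothetical stand-ins for the gas-analyzer and IRMS measurements.

```python
import numpy as np

def litter_fraction(d_mix, d_control, d_litter):
    """Two-source mixing model: fraction of total CO2-C efflux derived from
    litter, given delta-13C of the mixed efflux, the soil-only control,
    and the litter source."""
    return (d_mix - d_control) / (d_litter - d_control)

# Hypothetical endmembers: C4-derived soil (-12.6 permil) and C3 litter
# (-30.1 permil); a mixed efflux of -21.35 permil is half litter-derived.
f_litter = litter_fraction(-21.35, -12.6, -30.1)

# Cumulative CO2-C loss from point measurements via the trapezoid method.
days = np.array([1.0, 4.0, 11.0, 30.0])    # measurement days
rate = np.array([50.0, 40.0, 25.0, 10.0])  # ug CO2-C per day (hypothetical)
cumulative_c = float(np.sum((days[1:] - days[:-1]) * (rate[1:] + rate[:-1]) / 2.0))
soil_derived_c = cumulative_c * (1.0 - f_litter)  # single fraction for brevity
```

In the actual workflow the litter-derived fraction varies over time, so the soil-derived rate would be computed at each isotope measurement day before integrating; a constant fraction is used here only to keep the sketch short.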
For each sample, we first added 30 mL deionized water, gently shook by hand to suspend all particles, and then centrifuged (2500 rpm) for 10 min. Floating leaf litter was carefully removed, dried for 48 h at 60 °C, and weighed; and the clear supernatant was discarded to remove the dissolved organic matter. The remaining sample was dispersed in 5% (w/v) sodium hexametaphosphate for 20 h on a reciprocal shaker and then washed through a 53 µm sieve. The fraction retained on the sieve was added to the floating leaf litter sample and collectively referred to as particulate SOC, while the fraction that passed through the sieve was considered the mineral-associated SOC. Both fractions were dried, ground, weighed, and analyzed for C concentrations and δ13C values on an elemental combustion system (Costech ECS 4010, Costech Analytical Technologies, Valencia, CA, USA) as an inlet to an IRMS. As above, litter-derived C in the particulate and mineral-associated SOC was determined as follows: $$\frac{C_{s}^{l}(t)}{C_{s}(t)}=\frac{\delta C_{s}(t)-\delta C_{c}(t)}{\delta C_{l}-\delta C_{c}(t)}$$ where \(C_{s}(t)\) is the total particulate or mineral-associated SOC content in the sample at time \(t\), \(C_{s}^{l}(t)\) is the litter-derived C in the soil, \(\delta C_{s}(t)\) is the measured δ13C value for each soil fraction, \(\delta C_{c}(t)\) is the average δ13C for each fraction in control samples, and \(\delta C_{l}\) is the δ13C of each litter type. In a few cases, mineral-associated δ13C was slightly less negative in the treatment than in the control soil. In these cases, litter-derived mineral-associated SOC was considered zero. Total litter-derived SOC at each harvest date was calculated by subtracting the cumulative litter CO2-C from initial added litter C.
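The fraction-level mixing model, including the zero clamp applied when a treatment fraction is isotopically less negative than its control, can be sketched as follows; every mass and δ13C value here is hypothetical.

```python
def litter_derived_c(c_total, d_sample, d_control, d_litter):
    """Litter-derived C (mg) in a soil fraction from the delta-13C mixing
    model, clamped at zero when the treatment fraction is less negative
    than the control."""
    frac = (d_sample - d_control) / (d_litter - d_control)
    return c_total * max(frac, 0.0)

# Hypothetical fractions from one microcosm (control -12.6, litter -30.1):
pom_c = litter_derived_c(80.0, -25.725, -12.6, -30.1)    # particulate SOC
maoc_c = litter_derived_c(100.0, -13.475, -12.6, -30.1)  # mineral-associated SOC
clamped = litter_derived_c(100.0, -12.0, -12.6, -30.1)   # less negative -> 0

# Total litter-derived SOC: added litter C minus cumulative litter CO2-C
total_litter_soc = 135.0 - 60.0   # mg; hypothetical budget terms
```
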
The difference between this value and the sum of litter-derived particulate and mineral-associated SOC was considered the residual pool, which we assume mostly represents water-extractable dissolved organic matter.

Microbial biomass dynamics during incubation

Sample batches were harvested at days 15 and 100 to capture early- and intermediate-term microbial biomass responses to litter treatments. These times were selected to correspond with the middle of early and intermediate C budget microcosm incubations. We quantified microbial biomass as well as MGR, CUE, and MTR using 18O-labeled water73,74 as in Geyer et al.75. Microbial biomass C (MBC) was determined on two ~2 g subsamples using a standard chloroform fumigation extraction76. One subsample was immediately extracted in 0.5 M K2SO4 and one was fumigated for 72 h before extraction. After shaking for 1 h, extracts were gravity filtered through a Whatman No. 40 filter paper, and filtrates were analyzed for total organic C using the method of Bartlett and Ross77 as adapted by Giasson et al.78. The difference between total organic C in the fumigated and unfumigated subsamples was used to calculate MBC (extraction efficiency KEC = 0.45). To determine MGR, CUE, and MTR, we first pre-incubated two 0.5 g soil subsamples (one treatment and one control) for 2 d at 24 °C. Prior to this pre-incubation, samples were allowed to evaporate down to 53 ± 6% (mean, sd) water-holding capacity. After the pre-incubation, water was injected with a 25 µL syringe to bring each sample to 65% water-holding capacity. For one subsample, we used unlabeled deionized water. For the second subsample, enriched 18O-water (98.1 at%; ICON Isotopes) was mixed with unlabeled deionized H2O to achieve approximately 20 at% of 18O in the final soil water. Each sample was placed in a centrifuge tube (modified for gas sampling), flushed with CO2-free air, and incubated for 24 h.
Headspace CO2 concentration was then measured, and samples were flash frozen in liquid N2 and stored at −80 °C until DNA extraction. DNA was extracted from each sample using a DNA extraction kit (Qiagen DNeasy PowerSoil Kit, Venlo, Netherlands) following the protocol described in Geyer et al. (2019) which sought to maximize the recovery of DNA. The DNA concentration was determined fluorometrically using a Quant-iT PicoGreen dsDNA Assay Kit (Invitrogen). DNA extracts (80 µL) were dried at 60 °C in silver capsules spiked with 100 µL of salmon sperm DNA (42.5 ng µL−1), to reach the oxygen detection limit, and sent to the UC Davis Stable Isotope Facility for quantification of δ18O and total O content. Microbial growth rate (MGR) was calculated following Geyer et al. (2019). Specifically, atom % of soil DNA O (at% ODNA) was determined using the two-pool mixing model: $$\text{at}\%\,O_{DNA}=\frac{\left(\text{at}\%\,O_{DNA+ss}\times O_{DNA+ss}\right)-\left(\text{at}\%\,O_{ss}\times O_{ss}\right)}{O_{DNA}}$$ where at% is the atom % 18O and ODNA+ss, ODNA, and Oss are the concentrations of O in the whole sample, soil DNA, and salmon sperm, respectively. Atom percent excess of soil DNA oxygen (APE Osoil) was calculated as the difference between at% ODNA in the treatment and control samples. Total microbial growth in terms of O (Total O; µg) was estimated as: $$\text{Total}\,O=\frac{O_{soil}\times \text{APE}\,O_{soil}}{\text{at}\%\,\text{soil water}}$$ where at% soil water is the atom % 18O in the soil water. MGR in terms of C (µg C g−1 soil d−1) was calculated by applying conversion mass ratios of oxygen:DNA (0.31) and MBC:DNA for each sample, dividing by the soil mass, and scaling by the incubation time. Assuming uptake rate (Uptake) is equal to the sum of MGR and respiration, CUE and MTR were calculated by the following equations.
$$CUE=\frac{MGR}{Uptake}$$ $$MTR=\frac{MGR}{MBC}$$

Data analysis for microcosm experiment

Litter decay constants were calculated for each species using litter-derived CO2-C values to estimate litter mass loss over time. After it was determined that a single exponential decay model provided a poor fit, we fit litter decomposition data using the double exponential decay model: $$y=se^{-k_{1}t}+(1-s)e^{-k_{2}t}$$ where s represents the labile or early stage decomposition fraction that decomposes at rate k1, and k2 is the decay constant for the remaining late stage decomposition fraction. To reduce the dimensionality of litter quality and microbial indicators, indices were derived by principal component analysis (PCA; Fig. S1A, B) using the 'prcomp' function in R. The first axis of a PCA of decomposition parameters (s, k1, and k2) and litter chemical properties (soluble and AUR contents; AUR-to-N and C-to-N ratios; and the lignocellulose index [LCI]) was taken as a litter quality index. Whereas this index highly correlated with indicators of C quality (AUR, soluble content, and LCI), the second axis of this PCA correlated with C:N and AUR:N and was therefore taken as a second litter quality index representing variation in N concentration. The first axis of a PCA of MGR, CUE, and MTR was taken as a microbial physiological trait index. Bivariate relationships were examined using simple linear regressions on average species values at each harvest (n = 16). To examine relationships between microbial physiological traits and mineral-associated SOC, data from the early-term (day 15) and intermediate-term (day 100) microbial harvest were matched with early-term (day 30) and intermediate-term (day 185) C budget microcosms, respectively.
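To make the trait calculations and the decay-model fit concrete, here is a numerical sketch in Python (the study itself used R). Every input value (O masses, 18O enrichments, the MBC:DNA ratio, respiration, MBC, and the synthetic decomposition data) is hypothetical; only the equation forms and the O:DNA mass ratio of 0.31 come from the text.

```python
import numpy as np
from scipy.optimize import curve_fit

# --- 18O-DNA trait chain (all inputs hypothetical) ---
def at_percent_o_dna(at_mix, o_mix, at_ss, o_ss, o_dna):
    """Two-pool mixing model: at% 18O of soil DNA O after removing the
    salmon sperm (ss) carrier DNA contribution."""
    return ((at_mix * o_mix) - (at_ss * o_ss)) / o_dna

at_treat = at_percent_o_dna(0.564, 10.0, 0.20, 8.0, 2.0)  # at%; O in ug
ape = at_treat - 0.20             # atom % excess vs. the unlabeled control
total_o = 2.0 * ape / 20.0        # ug new DNA-O; 2 ug soil DNA O, 20 at% water
new_dna = total_o / 0.31          # O:DNA mass ratio of 0.31 (from the text)
mgr = new_dna * 40.0 / 0.5 / 1.0  # ug C g-1 d-1; MBC:DNA = 40, 0.5 g soil, 1 d

respiration = 30.0                # ug C g-1 soil d-1 (hypothetical)
cue = mgr / (mgr + respiration)   # uptake assumed equal to growth + respiration
mtr = mgr / 400.0                 # d-1, for MBC = 400 ug C g-1 soil

# --- double exponential decay fit on noiseless synthetic mass-loss data ---
def double_exp(t, s, k1, k2):
    return s * np.exp(-k1 * t) + (1.0 - s) * np.exp(-k2 * t)

t = np.linspace(0.0, 185.0, 38)
y = double_exp(t, 0.6, 0.05, 0.002)   # "true" parameters to recover
(s_fit, k1_fit, k2_fit), _ = curve_fit(
    double_exp, t, y, p0=[0.5, 0.1, 0.001], bounds=(0.0, [1.0, 1.0, 1.0]))
```

With noiseless data the fit recovers the generating parameters; with real respiration-derived mass-loss curves, starting values and bounds like these keep s within [0, 1] and both rate constants positive.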
In addition to examining total mineral-associated SOC formation, we also estimated the efficiency of litter C transfer into the mineral-associated SOC pool as the fraction of lost litter C (i.e., litter C lost as CO2, recovered in the mineral-associated SOC fraction, or in the residual pool) retained in the mineral-associated SOC. Path analyses were used to evaluate the hypothesis that microbial physiological traits mediate the effect of litter quality on mineral-associated SOC formation and mineral-associated and particulate SOC decay. We hypothesized that the litter quality index would be positively associated with the microbial physiological trait index (representing faster and more efficient microbial growth) and microbial physiological traits would, in turn, be positively associated with the rate and efficiency of mineral-associated SOC formation. We expected that this mediating pathway would reduce the direct relationship between litter quality and SOC. This analysis was conducted using the LAVAAN package79 to run path analyses for total litter-derived mineral-associated SOC, mineral-associated SOC formation efficiency, and soil-derived mineral-associated and particulate SOC for both early and intermediate stage harvests. All analyses were performed using R version 3.5.2. Field study design and soil sampling We worked in the Smithsonian's Forest Global Earth Observatory (ForestGEO) network80 in six mature U.S. temperate forests varying in climate, soil properties, and tree community composition (Fig. 
1a): Harvard forest (HF; 42°32′N, 72°11′W) in North-Central Massachusetts, Lilly-Dickey Woods (LDW; 39°14'N, 86°13'W) in South-Central Indiana, the Smithsonian Conservation Biology Institute (SCBI; 38°54′N, 78°9′W) in Northern Virginia, the Smithsonian Environmental Research Center (SERC; 38°53′N, 76°34′W) on the Chesapeake Bay in Maryland, Tyson Research Center (TRC; 38°31′N, 90°33′W) in Eastern Missouri, and Wabikon Lake Forest (WLF; 45°33′N, 88°48′W) in Northern Wisconsin, USA. Land use history across the six sites consisted mostly of timber harvesting which ceased in the early 1900s. Soils are mostly Oxyaquic Dystrudepts at HF, Typic Dystrudepts and Typic Hapludults at LDW, Typic Hapludalfs at SCBI, Typic or Aquic Hapludults at SERC, Typic Hapludalfs and Typic Paleudalfs at TRC, and Typic and Alfic Haplorthods at WLF. Further site details are reported in Table S5. Each site contains a rich assemblage of co-occurring arbuscular mycorrhizal (AM)- and ectomycorrhizal (ECM)-associated trees (Table S6), which we leveraged to generate environmental gradients in factors hypothesized to predict microbial physiological traits within each site. Specifically, the dominance of AM vs. ECM trees within a temperate forest plot has been shown to be a strong predictor of soil pH, C:N, inorganic N availability, and litter quality52,53,54. We established nine 20 × 20 m plots in each of our six sites in Fall 2016 (n = 54) distributed along a gradient of AM- to ECM-associated tree dominance. Plots were selected to avoid obvious confounding environmental factors. Where possible, we established our nine-plot gradient in three blocks (<1 ha), each containing an AM, ECM, and mixed dominance plot. Plots were generally located on upland areas except for TRC where all plots were located on toeslopes due to a lack of AM trees in upland areas. At HF, we established six of the nine plots outside the boundaries of the ForestGEO plot due to a lack of appropriate AM-ECM mixtures. 
At LDW, we used a random subset of previously established 15 × 15 m plots81. Basal area of living trees was determined using the most recent ForestGEO census (within the last 5 years) or by conducting our own inventory. ECM-dominance was quantified as the percentage of ECM-associated basal area relative to the total plot basal area. Four soil cores (5 cm-diameter) were collected in July 2017 from each plot to a depth of 5 cm. A thin O horizon (Oe and Oa) was sometimes present and was collected as part of the 5 cm soil core. However, there was often a thick O horizon (>5 cm) at HF, which was removed before coring. Samples were also collected at 5–15 cm depth for soil texture analysis. We sampled from an inner 10 × 10 m square in each plot to avoid edge effects. All samples from the same plot were composited, sieved (2 mm), picked free of roots, subsampled for gravimetric moisture (105 °C), and air-dried or refrigerated (4 °C) until analysis for microbial physiological variables and N availability. Soil properties We determined several physicochemical properties known to predict mineral-associated SOC. We measured soil pH (8:1 ml 0.01 M CaCl2:g soil) and soil texture using a benchtop pH meter and a standard hydrometer procedure82, respectively. Organic matter content was high in some upper surface soils, so plot-level soil texture was determined from 5 to 15 cm depth samples. We quantified oxalate-extractable Al and Fe pools (Alox and Feox) in all soil samples as an index of poorly crystalline Al- and Fe-oxides83, which is one of the strongest predictors of SOM content in temperate forests84. Briefly, we extracted 0.40 g dried, ground soil in 40 mL 0.2 M NH4-oxalate at pH 3.0 in the dark for 4 h before gravity filtering and refrigerating until analysis (within 2 weeks) on an atomic absorption spectrometer (AAnalyst 800, Perkin Elmer, Waltham, MA, USA), using an acetylene flame and a graphite furnace for the atomization of Fe and Al, respectively. 
We quantified potential net N mineralization rates as an index of soil N availability. One 5 g subsample per plot was extracted immediately after processing by adding 10 mL 2 M KCl, shaking for 1 h, and filtering through a Whatman No. 1 filter paper after centrifugation at 3000 rpm. A second subsample from each plot was incubated under aerobic conditions at field moisture and 23 °C for 14 d before extraction. Extracts were frozen (−20 °C) until analysis for NH4+-N using the salicylate method and for NO3−-N plus NO2−-N after a cadmium column reduction on a Lachat QuikChem 8000 flow Injection Analyzer (Lachat Instruments, Loveland, CO, USA). Potential net N mineralization rates (mg N g dry soil−1 d−1) were calculated as the difference between pre- and post-incubation inorganic N concentrations. Microbial biomass dynamics in field plots Microbial biomass carbon and microbial physiological traits were quantified within 10 days of collection as described above, with four minor differences. First, 30 g soil subsamples were covered with parafilm and pre-incubated for 2 d near the field soil temperature measured at the time of sampling (16.5 °C for WLF and HF, and 21.5 °C for LDW, TRC, SCBI, and SERC). Second, for CO2 analysis, samples were placed in a 61 mL serum vial crimped with a rubber septum. Third, DNA concentrations were determined using a Qubit dsDNA BR Assay Kit (Life Technologies) and a Qubit 3.0 fluorometer (Life Technologies). Fourth, 14.5 g subsamples were used for microbial biomass analysis. Soil organic matter characterization in field plots Mineral-associated SOC was quantified as in the microcosm experiment, but without a pre-fractionation leachate removal step. We additionally measured soil amino sugar concentrations to estimate microbial necromass contributions to SOM. Amino sugars are useful microbial biomarkers because they are found in abundance in microbial cell walls, but are not produced by higher plants and soil animals19. 
Moreover, amino sugars can provide information on the microbial source of necromass. For example, glucosamine (GluN) is produced mostly by fungi whereas muramic acid (MurA) is produced almost exclusively by bacteria61,85. Amino sugars were extracted, purified, converted to aldononitrile acetates, and quantified against a myo-inositol internal standard as in Liang et al.86. We used the concentrations of GluN and MurA to estimate total, fungal, and bacterial necromass soil C using the empirical relationships reported in Liang et al.8. $${Bacterial}\,{necromass}\,C\,=\,{MurA}\,\times \,45$$ $${Fungal}\,{necromass}\,C\,=\,({mmol}\,{GluN}\,{-}\,2\,\times \,{mmol}\,{MurA})\times \,179.17\,\times \,9$$ Leaf litter and fine roots in field plots In Fall 2017, we collected leaf litter on two sample dates from four baskets deployed in the inner 10 × 10 m of each plot. Litter was composited by plot, dried (60 °C), sorted by species, weighed, and ground. We performed leaf litter analyses on at least three samples of each species at each site (unless a species was only present in one or two plots) to get a site-specific mean for each species. Some non-dominant species were not included in these analyses because an insufficient amount of material was collected. Fine roots (<2 mm) were collected at the time of soil sampling (at LDW, TRC, and WLF) or the following summer (HF, SERC, SCBI) due to issues with sample processing. Leaf litter and bulk root samples were analyzed for C and N content. For leaf litter, we conducted a sequential extraction as above (sensu Moorhead & Reynolds87) to determine the soluble fraction (extracted with hot water and ethanol) and the AUR, the insoluble ash-free fraction that resisted degradation by a strong acid; the AUR contains lignin and is commonly found to inhibit leaf litter decay40. The LCI was estimated by dividing the AUR fraction by the total non-soluble ash-free fraction. 
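The empirical necromass conversions above can be sketched as follows; this is a minimal illustration assuming inputs are supplied in the units used by Liang et al. (a mass basis for the bacterial factor of 45, mmol for the fungal correction), and the function names are not the authors':

```python
def bacterial_necromass_c(mura):
    """Bacterial necromass C from muramic acid (MurA), using the
    empirical conversion factor of 45 (Liang et al. 2019)."""
    return mura * 45

def fungal_necromass_c(mmol_glun, mmol_mura):
    """Fungal necromass C from glucosamine (GluN), after subtracting the
    bacterial GluN contribution (2 mmol GluN per mmol MurA); 179.17 is
    the molecular weight of GluN and 9 the GluN-to-fungal-C factor."""
    return (mmol_glun - 2 * mmol_mura) * 179.17 * 9
```

Total necromass C is then the sum of the bacterial and fungal estimates.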
Leaf litter trait values were calculated for each plot as the average of site-specific mean values for each species in the plot, weighted by the proportion of total plot basal area. Data analysis for field study As above, a litter quality index and a microbial physiological trait index were calculated via PCA for each plot (Fig. S1C, D). We note, however, that decomposition parameters were not available for the litter quality PCA in the field study. We evaluated the extent to which microbial physiological variables were predicted by litter quality, N availability, stoichiometry, C availability, pH, or root biomass. We first examined variability in indicators of these biogeochemical drivers (i.e., the litter quality index, net N mineralization, soil C:N, dissolved organic carbon (DOC), soil pH, and fine root biomass) and evaluated whether they correlated with our mycorrhizal gradients. Linear mixed models were fit to the microbial physiological trait index using site as a random factor and the above indicators as fixed factors. We also used linear mixed models to evaluate whether microbial physiological variables predicted significant variation in mineral-associated or necromass-derived C in the context of other hypothesized or known controls on mineral-associated SOC. In addition to the litter quality and microbial physiological trait indices, we included clay content, pH, Feox, Alox, and fine root biomass as fixed factors. We fit models to mineral-associated SOC, both on a total basis (g soil−1) and as a proportion of soil C (mineral-associated C/total soil C), as well as to total, fungal, and bacterial necromass. Models were selected to avoid highly correlated predictors (i.e., r > 0.5). Feox and Alox were correlated above this threshold, and final models were selected to contain only Feox based on AIC. Residuals were screened for normality (Shapiro-Wilk), heteroscedasticity (visual assessment of residual plots), and influential observations (Cook's D). 
Based on this, MGR, MTR, and mineral-associated SOC were natural log transformed. For all mixed models, we centered and standardized predictors (i.e., z-transformation) so that the slopes and significance levels of different predictors could be compared to one another on the same axis88. The data generated in this study have been deposited in the ESS-DIVE archive89 (https://data.ess-dive.lbl.gov/view/; https://doi.org/10.15485/1835182). Analysis code is available as part of the ESS-DIVE data package89 (https://data.ess-dive.lbl.gov/view/; https://doi.org/10.15485/1835182). Cotrufo, M. F. et al. Formation of soil organic matter via biochemical and physical pathways of litter mass loss. Nat. Geosci. 8, 776–779 (2015). Jilling, A. et al. Minerals in the rhizosphere: overlooked mediators of soil nitrogen availability to plants and microbes. Biogeochemistry 139, 103–122 (2018). Lal, R. Soil Carbon Sequestration Impacts on Global Climate Change and Food Security. Science 304, 1623–1627 (2004). Lavallee, J. M., Soong, J. L. & Cotrufo, M. F. Conceptualizing soil organic matter into particulate and mineral-associated forms to address global change in the 21st century. Glob. Change Biol. 26, 261–273 (2020). Grandy, A. S. & Neff, J. C. Molecular C dynamics downstream: the biochemical decomposition sequence and its impact on soil organic matter structure and function. Sci. Total Environ. 404, 297–307 (2008). Miltner, A., Bombach, P., Schmidt-Brücken, B. & Kästner, M. SOM genesis: microbial biomass as a significant source. Biogeochemistry 111, 41–55 (2012). Bradford, M. A., Keiser, A. D., Davies, C. A., Mersmann, C. A. & Strickland, M. S. Empirical evidence that soil carbon formation from plant inputs is positively related to microbial growth. Biogeochemistry 113, 271–281 (2013). Liang, C., Amelung, W., Lehmann, J. & Kästner, M. Quantitative assessment of microbial necromass contribution to soil organic matter. Glob. Change Biol. 
25, 3578–3590 (2019). Wang, B., An, S., Liang, C., Liu, Y. & Kuzyakov, Y. Microbial necromass as the source of soil organic carbon in global ecosystems. Soil Biol. Biochem. 162, 108422 (2021). Cotrufo, M. F., Wallenstein, M. D., Boot, C. M., Denef, K. & Paul, E. The Microbial Efficiency-Matrix Stabilization (MEMS) framework integrates plant litter decomposition with soil organic matter stabilization: do labile plant inputs form stable soil organic matter? Glob. Change Biol. 19, 988–995 (2013). Li, J., Wang, G., Allison, S. D., Mayes, M. A. & Luo, Y. Soil carbon sensitivity to temperature and carbon use efficiency compared across microbial-ecosystem models of varying complexity. Biogeochemistry 119, 67–84 (2014). Wieder, W. R., Grandy, A. S., Kallenbach, C. M. & Bonan, G. B. Integrating microbial physiology and physio-chemical principles in soils with the MIcrobial-MIneral Carbon Stabilization (MIMICS) model. Biogeosciences 11, 3899–3917 (2014). Bird, J. A., Kleber, M. & Torn, M. S. 13C and 15N stabilization dynamics in soil organic matter fractions during needle and fine root decomposition. Org. Geochem. 39, 465–477 (2008). Córdova, S. C. et al. Plant litter quality affects the accumulation rate, composition, and stability of mineral-associated soil organic matter. Soil Biol. Biochem. 125, 115–124 (2018). Fulton‐Smith, S. & Cotrufo, M. F. Pathways of soil organic matter formation from above and belowground inputs in a Sorghum bicolor bioenergy crop. GCB Bioenergy 11, 971–987 (2019). Haddix, M. L., Paul, E. A. & Cotrufo, M. F. Dual, differential isotope labeling shows the preferential movement of labile plant constituents into mineral-bonded soil organic matter. Glob. Change Biol. 22, 2301–2312 (2016). Hatton, P., Castanha, C., Torn, M. S. & Bird, J. A. Litter type control on soil C and N stabilization dynamics in a temperate forest. Glob. Change Biol. 21, 1358–1367 (2015). Lavallee, J. M., Conant, R. T., Paul, E. A. & Cotrufo, M. F. 
Incorporation of shoot versus root-derived 13C and 15N into mineral-associated organic matter fractions: results of a soil slurry incubation with dual-labelled plant material. Biogeochemistry 137, 379–393 (2018). Castellano, M. J., Mueller, K. E., Olk, D. C., Sawyer, J. E. & Six, J. Integrating plant litter quality, soil organic matter stabilization, and the carbon saturation concept. Glob. Change Biol. 21, 3200–3209 (2015). Crowther, T. W. et al. Environmental stress response limits microbial necromass contributions to soil organic carbon. Soil Biol. Biochem. 85, 153–161 (2015). Sokol, N. W. & Bradford, M. A. Microbial formation of stable soil carbon is more efficient from belowground than aboveground input. Nat. Geosci. 12, 46–53 (2019). Frey, S. D., Lee, J., Melillo, J. M. & Six, J. The temperature response of soil microbial efficiency and its feedback to climate. Nat. Clim. Change 3, 395–398 (2013). Abramoff, R. et al. The Millennial model: in search of measurable pools and transformations for modeling soil carbon in the new century. Biogeochemistry 137, 51–71 (2018). Allison, S. D., Wallenstein, M. D. & Bradford, M. A. Soil-carbon response to warming dependent on microbial physiology. Nat. Geosci. 3, 336–340 (2010). Hassink, J. & Whitmore, A. A model of the physical protection of organic matter in soils. Soil Sci. Soc. Am. J. 61, 131–139 (1997). Robertson, A. D. et al. Unifying soil organic matter formation and persistence frameworks: the MEMS model. Biogeosciences 16, 1225–1248 (2019). Sulman, B. N., Phillips, R. P., Oishi, A. C., Shevliakova, E. & Pacala, S. W. Microbe-driven turnover offsets mineral-mediated storage of soil carbon under elevated CO2. Nat. Clim. Change 4, 1099–1102 (2014). Wieder, W. R., Bonan, G. B. & Allison, S. D. Global soil carbon projections are improved by modelling microbial processes. Nat. Clim. Change 3, 909–912 (2013). Kallenbach, C. M., Frey, S. D. & Grandy, A. S. 
Direct evidence for microbial-derived soil organic matter formation and its ecophysiological controls. Nat. Commun. 7, 13630 (2016). Kallenbach, C. M., Grandy, A. S., Frey, S. D. & Diefendorf, A. F. Microbial physiology and necromass regulate agricultural soil carbon accumulation. Soil Biol. Biochem. 91, 279–290 (2015). Liang, C., Schimel, J. P. & Jastrow, J. D. The importance of anabolism in microbial control over soil carbon storage. Nat. Microbiol 2, 1–6 (2017). Liang, J. et al. More replenishment than priming loss of soil organic carbon with additional carbon input. Nat. Commun. 9, 3175 (2018). Zhu, X. et al. Microbial trade-off in soil organic carbon storage in a no-till continuous corn agroecosystem. Eur. J. Soil Biol. 96, 103146 (2020). Kalbitz, K., Kaiser, K., Bargholz, J. & Dardenne, P. Lignin degradation controls the production of dissolved organic matter in decomposing foliar litter: Lignin degradation and dissolved organic matter. Eur. J. Soil Sci. 57, 504–516 (2006). Kramer, M. G., Sanderman, J., Chadwick, O. A., Chorover, J. & Vitousek, P. M. Long-term carbon storage through retention of dissolved aromatic acids by reactive particles in soil. Glob. Change Biol. 18, 2594–2605 (2012). Geyer, K., Schnecker, J., Grandy, A. S., Richter, A. & Frey, S. Assessing microbial residues in soil as a potential carbon sink and moderator of carbon use efficiency. Biogeochemistry 151, 237–249 (2020). Mikutta, R., Zang, U., Chorover, J., Haumaier, L. & Kalbitz, K. Stabilization of extracellular polymeric substances (Bacillus subtilis) by adsorption to and coprecipitation with Al forms. Geochimica et. Cosmochimica Acta 75, 3135–3154 (2011). Kaiser, K. & Guggenberger, G. Mineral surfaces and soil organic matter. Eur. J. Soil Sci. 54, 219–236 (2003). Sollins, P. et al. Organic C and N stabilization in a forest soil: Evidence from sequential density fractionation. 
Soil Biol. Biochem. 38, 3313–3324 (2006). Prescott, C. E. Litter decomposition: what controls it and how can we alter it to sequester more carbon in forest soils? Biogeochemistry 101, 133–149 (2010). Manzoni, S., Taylor, P., Richter, A., Porporato, A. & Ågren, G. I. Environmental and stoichiometric controls on microbial carbon-use efficiency in soils: Research review. N. Phytologist 196, 79–91 (2012). Sinsabaugh, R. L., Manzoni, S., Moorhead, D. L. & Richter, A. Carbon use efficiency of microbial communities: stoichiometry, methodology and modelling. Ecol. Lett. 16, 930–939 (2013). Sokol, N. W., Sanderman, J. & Bradford, M. A. Pathways of mineral‐associated soil organic matter formation: Integrating the role of plant carbon source, chemistry, and point of entry. Glob. Change Biol. 25, 12–24 (2019). Soong, J. L., Parton, W. J., Calderon, F., Campbell, E. E. & Cotrufo, M. F. A new conceptual model on the fate and controls of fresh and pyrolized plant litter decomposition. Biogeochemistry 124, 27–44 (2015). Kleber, M., Eusterhues, K., Mikutta, C., Mikutta, R. & Nico, P. S. Chapter One - Mineral–Organic Associations: Formation, Properties, and Relevance in Soil Environments. In Advances in Agronomy (ed. Sparks, D. L.) vol. 130 1–140 (Academic Press, 2015). Schimel, J. & Schaeffer, S. M. Microbial control over carbon cycling in soil. Front. Microbiol. 3, 1–11 (2012). Šantrůčková, H., Picek, T., Tykva, R., Šimek, M. & Pavlů, B. Short-term partitioning of 14C-[U]-glucose in the soil microbial pool under varied aeration status. Biol. Fertil. Soils 40, 386–392 (2004). Wei, H., Chen, X., He, J., Huang, L. & Shen, W. Warming but not nitrogen addition alters the linear relationship between microbial respiration and biomass. Front. Microbiol. 10, 1–9 (2019). Daly, A. B. et al. A holistic framework integrating plant-microbe-mineral regulation of soil bioavailable nitrogen. Biogeochemistry, https://doi.org/10.1007/s10533-021-00793-9 (2021). Cui, J. et al. 
Carbon and nitrogen recycling from microbial necromass to cope with C:N stoichiometric imbalance by priming. Soil Biol. Biochem. 142, 107720 (2020). Sauvadet, M. High carbon use efficiency and low priming effect promote soil C stabilization under reduced tillage. Soil Biol. Biochem. 123, 64–73 (2018). Craig, M. E. et al. Tree mycorrhizal type predicts within-site variability in the storage and distribution of soil organic matter. Glob. Change Biol. 24, 3317–3330 (2018). Jo, I., Potter, K. M., Domke, G. M. & Fei, S. Dominant forest tree mycorrhizal type mediates understory plant invasions. Ecol. Lett. 21, 217–224 (2018). Phillips, R. P., Brzostek, E. & Midgley, M. G. The mycorrhizal-associated nutrient economy: a new framework for predicting carbon–nutrient couplings in temperate forests. N. Phytologist 199, 41–51 (2013). Allen, A. P. & Gillooly, J. F. Towards an integration of ecological stoichiometry and the metabolic theory of ecology to better understand nutrient cycling. Ecol. Lett. 12, 369–384 (2009). Berg, B. & McClaugherty, C. Plant litter: Decomposition, humus formation, carbon sequestration (Springer, 2003). Buckeridge, K. M. et al. Sticky dead microbes: rapid abiotic retention of microbial necromass in soil. Soil Biol. Biochem. 149, 107929 (2020). Ni, X. et al. The vertical distribution and control of microbial necromass carbon in forest soils. Glob. Ecol. Biogeogr. 29, 1829–1839 (2020). Prommer, J. et al. Increased microbial growth, biomass, and turnover drive soil organic carbon accumulation at higher plant diversity. Glob. Change Biol. 26, 669–681 (2020). Hemkemeyer, M., Christensen, B. T., Martens, R. & Tebbe, C. C. Soil particle size fractions harbour distinct microbial communities and differ in potential for microbial mineralisation of organic pollutants. Soil Biol. Biochem. 90, 255–265 (2015). Guggenberger, G., Frey, S. D., Six, J., Paustian, K. & Elliott, E. T. Bacterial and Fungal Cell-Wall Residues in Conventional and No-Tillage Agroecosystems. 
Soil Sci. Soc. Am. J. 63, 1188–1198 (1999). Pronk, G. J., Heister, K. & Kögel-Knabner, I. Amino sugars reflect microbial residues as affected by clay mineral composition of artificial soils. Org. Geochem. 83–84, 109–113 (2015). Six, J., Frey, S. D., Thiet, R. K. & Batten, K. M. Bacterial and Fungal Contributions to Carbon Sequestration in Agroecosystems. Soil Sci. Soc. Am. J. 70, 555–569 (2006). Zhang, X., Amelung, W., Yuan, Y. & Zech, W. Amino sugar signature of particle-size fractions in soils of the native prairie as affected by climate. Soil Sci. 163, 220–229 (1998). Silver, W. L. et al. Effects of Soil Texture on Belowground Carbon and Nutrient Storage in a Lowland Amazonian Forest Ecosystem. Ecosystems 3, 193–209 (2000). Keiluweit, M. et al. Mineral protection of soil carbon counteracted by root exudates. Nat. Clim. Change 5, 588–595 (2015). Mobley, M. L. et al. Surficial gains and subsoil losses of soil carbon and nitrogen during secondary forest development. Glob. Change Biol. 21, 986–996 (2015). Keller, A. B., Brzostek, E. R., Craig, M. E., Fisher, J. B. & Phillips, R. P. Root-derived inputs are major contributors to soil carbon in temperate forests, but vary by mycorrhizal type. Ecol. Lett. 24, 626–635 (2021). Sokol, N. W., Kuebbing, S. E., Karlsen‐Ayala, E. & Bradford, M. A. Evidence for the primacy of living root inputs, not root or shoot litter, in forming soil organic carbon. N. Phytologist 221, 233–246 (2019). Balesdent, J., Mariotti, A. & Guillet, B. Natural 13C abundance as a tracer for studies of soil organic matter dynamics. Soil Biol. Biochem. 19, 25–30 (1987). Cambardella, C. A. & Elliott, E. T. Particulate Soil Organic-Matter Changes across a Grassland Cultivation Sequence. Soil Sci. Soc. Am. J. 56, 777–783 (1992). Bradford, M. A., Fierer, N. & Reynolds, J. F. Soil carbon stocks in experimental mesocosms are dependent on the rate of labile carbon, nitrogen and phosphorus inputs to soils. Funct. Ecol. 22, 964–974 (2008). Blazewicz, S. J. 
& Schwartz, E. Dynamics of 18O Incorporation from H218O into Soil Microbial DNA. Micro. Ecol. 61, 911–916 (2011). Spohn, M., Klaus, K., Wanek, W. & Richter, A. Microbial carbon use efficiency and biomass turnover times depending on soil depth – Implications for carbon cycling. Soil Biol. Biochem. 96, 74–81 (2016). Geyer, K. M., Dijkstra, P., Sinsabaugh, R. & Frey, S. D. Clarifying the interpretation of carbon use efficiency in soil through methods comparison. Soil Biol. Biochem. 128, 79–88 (2019). Vance, E. D., Brookes, P. C. & Jenkinson, D. S. An extraction method for measuring soil microbial biomass C. Soil Biol. Biochem. 19, 703–707 (1987). Bartlett, R. J. & Ross, D. S. Colorimetric Determination of Oxidizable Carbon in Acid Soil Solutions. Soil Sci. Soc. Am. J. 52, 1191–1192 (1988). Giasson, M.-A., Averill, C. & Finzi, A. C. Correction factors for dissolved organic carbon extracted from soil, measured using the Mn(III)-pyrophosphate colorimetric method adapted for a microplate reader. Soil Biol. Biochem. 78, 284–287 (2014). Rosseel, Y. lavaan: An R package for structural equation modeling. J. Stat. Softw. 48, 1–36 (2012). Anderson-Teixeira, K. J. et al. CTFS-ForestGEO: a worldwide network monitoring forests in an era of global change. Glob. Change Biol. 21, 528–549 (2015). Midgley, M. G., Brzostek, E. & Phillips, R. P. Decay rates of leaf litters from arbuscular mycorrhizal trees are more sensitive to soil effects than litters from ectomycorrhizal trees. J. Ecol. 103, 1454–1463 (2015). Gee, G. W. & Bauder, J. W. Particle Size Analysis by Hydrometer: a Simplified Method for Routine Textural Analysis and a Sensitivity Test of Measurement Parameters. Soil Sci. Soc. Am. J. 43, 1004–1007 (1979). Schwertmann, U. Use of oxalate for fe extraction from soils. Can. J. Soil. Sci. 53, 244–246 (1973). Rasmussen, C. et al. Beyond clay: towards an improved set of variables for predicting soil organic matter content. Biogeochemistry 137, 297–306 (2018). Amelung, W. 
Methods using amino sugars as markers for microbial residues in soil. in Assessment Methods for Soil Carbon 159–196 (CRC Press, 2001). Liang, C., Read, H. W. & Balser, T. C. GC-based detection of aldononitrile acetate derivatized glucosamine and muramic acid for microbial residue determination in soil. J. Vis. Exp. e3767, https://doi.org/10.3791/3767 (2012). Moorhead, D. L. & Reynolds, J. F. Changing Carbon Chemistry of Buried Creosote Bush Litter during Decomposition in the Northern Chihuahuan Desert. Am. Midl. Naturalist 130, 83–89 (1993). Schielzeth, H. Simple means to improve the interpretability of regression coefficients. Methods Ecol. Evol. 1, 103–113 (2010). Craig, M., Brzostek, E., Geyer, K., Liang, C. & Phillips, R. Data for "Fast-decaying plant litter enhances soil carbon in temperate forests, but not through microbial physiological traits". ESS-DIVE Repository, https://data.ess-dive.lbl.gov/view/; https://doi.org/10.15485/1835182 (2021). This work was supported by a National Science Foundation Doctoral Dissertation Improvement Grant (DEB-1701652; M.E.C. and R.P.P.), the U.S. Department of Energy Office of Biological and Environmental Research (DOE-BER), Terrestrial Ecosystem Science Program (DESC0016188; E.R.B. and R.P.P.), an Indiana University Research and Teaching Preserve (IURTP) Student Grant (M.E.C.), the Smithsonian Center for Tropical Forest Science—ForestGEO, the Oak Ridge National Laboratory (ORNL) Terrestrial Ecosystem Science, Science Focus Area, funded by DOE-BER (M.E.C.), and the National Natural Science Foundation of China (31930070; C.L.). We thank Elizabeth Huenupi and members of the Phillips, Brzostek, and Frey labs (Corben Andrews, Kelly Fox, Peyton Joachim, Naomi Reibold, Madison Barney, Rachel Zeunik, Mark Sheehan, Kara Allen, Joe Carrara, Nanette Raczka, and others) for assistance in the field and lab. 
We also thank Peter Sauer, Erica Elswick, Ryan Mushinski, and Brent Lemkuhl for assistance with sample analyses; Ron Turco for facilitating soil collection; individuals affiliated with the ForestGEO network for facilitating site access (Bill McShea, Dave Orwig, Sean McMahon, Michael Chitwood, Jonathan Myers, and Amy Wolf); and Adrienne Keller and Steve Kannenberg for feedback on earlier presentations of this work. LDW is part of the IURTP. ORNL is managed by UT-Battelle, LLC, for the U.S. DOE under contract DE-AC05-00OR22725. Department of Biology, Indiana University, Bloomington, IN, USA Matthew E. Craig, Katilyn V. Beidler & Richard P. Phillips Environmental Sciences Division and Climate Change Science Institute, Oak Ridge National Laboratory, Oak Ridge, TN, USA Matthew E. Craig Department of Natural Resources and the Environment, University of New Hampshire, Durham, NH, USA Kevin M. Geyer, Serita D. Frey & A. Stuart Grandy Department of Biology, Young Harris College, Young Harris, GA, USA Kevin M. Geyer Department of Biology, West Virginia University, Morgantown, WV, USA Edward R. Brzostek Key Laboratory of Forest Ecology and Management, Institute of Applied Ecology, Chinese Academy of Sciences, Shenyang, China Chao Liang Katilyn V. Beidler Serita D. Frey A. Stuart Grandy Richard P. Phillips M.E.C. and R.P.P. conceived the project with contributions from A.S.G. and E.R.B. on the lab and field studies, respectively. M.E.C. and K.V.B. carried out the lab experiment with input from K.M.G. M.E.C. and E.R.B. contributed to field sampling. M.E.C., K.M.G., and C.L. analyzed field samples. M.E.C. analyzed the data and wrote the first draft of the paper. All authors contributed to the conceptual development of the paper and provided feedback on the final draft. Correspondence to Matthew E. Craig. Nature Communications thanks the anonymous reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available. 
Craig, M.E., Geyer, K.M., Beidler, K.V. et al. Fast-decaying plant litter enhances soil carbon in temperate forests but not through microbial physiological traits. Nat Commun 13, 1229 (2022). https://doi.org/10.1038/s41467-022-28715-9
JIMO Home A modified strictly contractive peaceman-rachford splitting method for multi-block separable convex programming January 2018, 14(1): 413-425. doi: 10.3934/jimo.2017053 An iterative algorithm for periodic sylvester matrix equations Lingling Lv 1, , Zhe Zhang 1, , Lei Zhang 2,, and Weishu Wang 1, Institute of Electric power, North China University of Water Resources and Electric Power, Zhengzhou 450011, China Computer and Information Engineering College, Henan University, Kaifeng 475004, China Corresponding author: Lei Zhang Received March 2016 Revised September 2016 Published June 2017 Fund Project: This work is supported by the Programs of National Natural Science Foundation of China (Nos. 11501200, U1604148,61402149), Innovative Talents of Higher Learning Institutions of Henan (No. 17HASTIT023), China Postdoctoral Science Foundation (No. 2016M592285), and Innovative Research Team in University of Henan Province (No. 16IRTSTHN017) Full Text(HTML) Figure(1) The problem of solving periodic Sylvester matrix equations is discussed in this paper. A new kind of iterative algorithm is proposed for constructing the least square solution for the equations. The basic idea is to develop the solution matrices in the least square sense. Two numerical examples are presented to illustrate the convergence and performance of the iterative method. Keywords: Periodic Sylvester matrix equations, iterative algorithm, least square solutions. Mathematics Subject Classification: 15A24. Citation: Lingling Lv, Zhe Zhang, Lei Zhang, Weishu Wang. An iterative algorithm for periodic sylvester matrix equations. Journal of Industrial & Management Optimization, 2018, 14 (1) : 413-425. doi: 10.3934/jimo.2017053 P. Benner, M. S. Hossain and T. Stykel, Low-rank iterative methods for periodic projected Lyapunov equations and their application in model reduction of periodic descriptor systems, Springer Seminars in Immunopathology, 67 (2014), 669-690. doi: 10.1007/s11075-013-9816-6. 
Figure 1. The residuals for the iterative algorithm.
Is there a "special" inertial frame determined by the value of E and B at a point in an EM field?

Given that $(E^2 - B^2)$ and $(E\cdot B)$ are Lorentz invariants of the EM field, and that the energy density $(E^2 + B^2)$ is not invariant, it seems that at each point in an EM field there should be a unique inertial frame in which the field energy is minimum. Can that minimum value, and that inertial frame, be considered Lorentz invariants?

electromagnetism special-relativity inertial-frames lorentz-symmetry

S. McGrew

- The energy density is proportional to $E^2+B^2$. I don't see a general connection between this and the two invariants. – G. Smith Mar 23 at 4:34

Since $E^2-B^2$ is invariant under boosts, any boost must change both $E^2$ and $B^2$ by the same increment $\epsilon$, if it changes them at all. In other words, a boost either increases both $E^2$ and $B^2$ by the same amount or decreases them both by the same amount, if it changes them at all. There are two cases to consider: $E\cdot B=0$ and $E\cdot B\neq 0$. Throughout this answer, only a single point is considered, and "energy density" means the energy density at that point. First suppose $E\cdot B=0$. In this case, there is a frame in which either $E$ or $B$ is zero. (See the appendix for an outline of a proof.) This must be the frame in which the energy density is a minimum, because any boost away from that frame will increase both $E^2$ and $B^2$. (It can't decrease them, because then one of them would end up being negative, but $E^2$ and $B^2$ cannot be negative.) Now consider the case $E\cdot B\neq 0$. In this case, there is a frame in which the electric and magnetic field vectors are parallel to each other. (See the appendix for an outline of a proof.) Denote these fields by $E_0$ and $B_0$. In such a frame, we have $$ |E_0\cdot B_0| = \sqrt{E_0^2B_0^2}.
\tag{1} $$ After a boost, we have $$ E\cdot B = \sqrt{E^2B^2}\cos\theta = \sqrt{(E_0^2+\epsilon)(B_0^2+\epsilon)}\,\cos\theta, \tag{2} $$ and since $E\cdot B$ is invariant, this gives $$ \sqrt{E_0^2B_0^2} = \sqrt{(E_0^2+\epsilon)(B_0^2+\epsilon)}\,\cos\theta. \tag{3} $$ This shows that we must have $\epsilon\geq 0$. Therefore, if we start in a frame in which the vectors $E$ and $B$ are parallel, a boost cannot decrease the value of $E^2+B^2$, so it cannot decrease the energy density at the given point. This shows that the energy at any given point is minimized in any frame where $E$ and $B$ are parallel to each other. The frame that minimizes the energy density is not unique. For example, if we start in a frame where $E$ and $B$ are parallel to each other (or if one of them is zero) and then apply a parallel boost, the fields $E$ and $B$ are unchanged. Summary: There is always a frame in which (at the given point) either $E$ and $B$ are parallel to each other or one of them is zero. The frame that does this is not unique. Any such frame minimizes the energy density (at that point). The claim is that if $E\cdot B=0$, then there is a frame in which either $E$ or $B$ is zero; and if $E\cdot B\neq 0$, then there is a frame in which they are parallel. Here's an outline of the proof, using the Clifford-algebra approach that was described in my answer to Wigner Rotation of static E & M fields is dizzying Let $\gamma^0$, $\gamma^1$, $\gamma^2$, $\gamma^3$ be mutually orthogonal basis vectors, with $\gamma^0$ being timelike and the others being spacelike. In Clifford algebra, vectors can be multiplied, and the product is associative. I'll use $I$ to denote the identity element of the algebra, and I'll use $\Gamma$ to denote the special element $$ \Gamma\equiv \gamma^0\gamma^1\gamma^2\gamma^3. \tag{4} $$ Mutually orthogonal vectors anticommute with each other, and the product of parallel vectors is proportional to $I$. 
The electric and magnetic fields are the components of the Faraday tensor $F_{ab}$, which in turn are the components of a bivector $$ F = \sum_{a<b}\gamma^a\gamma^b F_{ab}. \tag{5} $$ In the frame defined by the given basis, the electric and magnetic parts of $F$, which I'll denote $F_E$ and $F_B$ (so $F=F_E+F_B$), are the parts that do and do not involve a factor of the timelike basis vector $\gamma^0$, respectively. The quantity $F$ satisfies $$ F^2=(F^2)_I + (F^2)_\Gamma \tag{6} $$ where subscripts $I$ and $\Gamma$ denote terms proportional to $I$ and $\Gamma$, respectively. Both parts, $(F^2)_I$ and $(F^2)_\Gamma$, are invariant under proper Lorentz transformations. The part $(F^2)_I$ is proportional to $E^2-B^2$, and the part $(F^2)_\Gamma$ is proportional to $E\cdot B$. The starting point for the proof is that in four-dimensional spacetime, any bivector may be written in the form $$ F = (\alpha+\beta\Gamma)uv \tag{7} $$ where $\alpha,\beta$ are scalars and where $u$ and $v$ are mutually orthogonal vectors. If neither vector is null (the null case will be treated last), then we can always find a frame in which they are both proportional to basis vectors in a canonical basis. The factor $\alpha+\beta\Gamma$ is the same in all frames (only $u$ and $v$ are affected by Lorentz transformations, because $\Gamma$ is invariant), so in the latter frame, $F_E$ is proportional to the product of two basis vectors (one of which is timelike), and $F_B$ is proportional to the product of the other two. After reverting to the traditional formulation in which the electric and magnetic fields are represented by vectors in 3-d space rather than by bivectors in 4-d spacetime, this is equivalent to the condition that $E$ and $B$ both be proportional to a single vector. If one of the coefficients $\alpha,\beta$ in (7) is zero (so that $E\cdot B=0$), then this proves the existence of a frame in which one of them is zero.
If both coefficients $\alpha,\beta$ are non-zero (so that $E\cdot B\neq 0$), then this proves the existence of a frame in which they are parallel. Finally, if one of the vectors $u,v$ in (7) is null, then $E$ and $B$ are already parallel to each other and have the same magnitude.

Chiral Anomaly

- By "factor", do you mean "increment"? That is, delta(E^2) = delta(B^2)? In that case, it seems that at some boost (keeping E and B parallel) B would go to zero, E^2 would be at a minimum, and E^2 + B^2 would also be at a minimum. – S. McGrew Mar 23 at 4:57
- In general there is no frame in which E and B are parallel. – my2cts Mar 23 at 8:14
- @S.McGrew I updated the answer to address both cases ($E\cdot B=0$ and $E\cdot B\neq 0$), and I added an appendix that outlines the proof of the assertion about the existence of a frame in which $E$ and $B$ are parallel (or in which one of them is zero). – Chiral Anomaly Mar 23 at 14:12
- OK, it appears that you've given a simple proof that a) the field energy density is minimum in any frame where E is parallel to B, and b) the field energy density is the same for all frames in which E is parallel to B. Right? If so, that says that the minimum field energy is an invariant -- but that it does not correspond to a unique inertial frame, because there are an infinite number of frames in which E is parallel to B. – S. McGrew Mar 23 at 14:21
- @S.McGrew Yes, that sums it up. Using the word "invariant" this way seems a little non-standard to me, but I think I get what you're saying. Given $E$ and $B$ in an arbitrary frame, we have enough information to determine the minimum possible value of $E^2+B^2$ among all frames. I'm assuming that's what you mean by "invariant" in this context. – Chiral Anomaly Mar 23 at 15:43
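The invariance claims in the question and answer are easy to check numerically. The sketch below (my illustration, not part of the original thread) uses the standard transformation of the fields under a boost along x, in units with c = 1; it shows that $E^2-B^2$ and $E\cdot B$ are unchanged by the boost while the energy density $E^2+B^2$ is not:

```python
import numpy as np

def boost_x(E, B, v):
    """Fields seen by an observer boosted with speed v along x (units with c = 1)."""
    g = 1.0 / np.sqrt(1.0 - v * v)
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    # Components parallel to the boost are unchanged; transverse ones mix.
    Ep = np.array([Ex, g * (Ey - v * Bz), g * (Ez + v * By)])
    Bp = np.array([Bx, g * (By + v * Ez), g * (Bz - v * Ey)])
    return Ep, Bp

E = np.array([1.0, 2.0, -0.5])
B = np.array([0.3, -1.0, 0.7])

for v in (0.1, 0.5, 0.9):
    Ep, Bp = boost_x(E, B, v)
    # The two invariants keep their unboosted values for every v ...
    print(Ep @ Ep - Bp @ Bp, Ep @ Bp)
    # ... while the energy density E^2 + B^2 depends on v.
    print(Ep @ Ep + Bp @ Bp)
```

With these sample fields the energy density first dips below its unboosted value and then grows for large v, while both printed invariants stay fixed, exactly as the answer's increment argument predicts.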
Rules of chess knight movement

Everyone knows that rooks dominate open space, but the knight is your cunning assassin: it can maneuver much more efficiently than you might expect and can easily sweep nearly an entire battlefield. The knight piece is shaped like a horse's head, and, typically the protector of the king in a medieval army, the knight performs the same duty in chess. It is also the most complicated unit for beginners. Many people have heard that chess is complicated and lose interest for that reason, but while the rules of chess may seem complicated at first, they're actually quite simple.

Chess is a two-player, zero-sum game of strategy, believed to have been invented more than 1500 years ago in India, whose object is to defeat the enemy King. It is played on a board of sixty-four squares whose colors alternate between light and dark, referred to as "light squares" and "dark squares"; one player has the light pieces and one the dark. Each player begins the game with sixteen pieces: one king, one queen, two rooks, two bishops, two knights and eight pawns. From the accompanying chart you can recognize the pieces and the squares they start from according to the chess rules; take care when setting up, because if this isn't done properly the king and queen will be mixed up. White goes first in chess, and after each move the players take turns. The king is the main chess piece: every move is ultimately based around the King, with the purpose of either protecting your own King or attacking your opponent's, and the player whose king is checkmated loses.

How the knight moves. Knights move in the shape of an "L": the knight moves two squares up, down, or sideways and then one square at a 90-degree angle to the first two, so the complete move looks like the letter "L" (for example, two squares up and one square to the right). Each jump lands on a square whose midpoint is sqrt(5) square-widths away, measured center to center. Equivalently, the knight can move to one of the nearest squares from the one it stands on, but not on the same file, rank or diagonal. The knight is the only piece that can jump over other pieces (both friend and enemy), and it captures any piece that it lands on. The Knight changes the colour of the square it stands on with each move, and it can move to one of up to eight positions on the board; the diagram on the left illustrates this, with the Knight able to move to any of the squares pointed to by a red dot. From its starting position the knight can reach the c2/c7 square in just a few moves. Since only the knights and pawns can move at the start of the game, we have to move pawns to bring the other pieces into play.

A note on notation: if two identical pieces can move to the same square, you add the file (or rank) of departure. For example, if there are two knights, one sitting on g5 and the other on d2, and both can move to f3, then moving the knight on d2 to f3 is written Ndf3 (not Nf3 as usual).

The other pieces, briefly. The queen has the combined moves of the rook and the bishop. The bishops move diagonally: in a position with Bishops on c3, c4 and f6, Rooks on c1, d2 and f7, and the two Kings (which are never actually captured) on a1 and g8, Bishop c3 can capture f6, and vice versa. Movement of a chess pawn is very simple: a pawn moves straight forward one square, and if it has not yet made its first move it also has the option of moving two squares straight forward, provided both squares are vacant; pawns capture only one square diagonally forward (see the diagram in the chess rules). A pawn on reaching the last rank can be exchanged for a Queen, Rook, Bishop or Knight as part of the same move; usually pawns are promoted to queens, but not always – the player moving the pawn chooses – and the effect of the promoted piece is immediate. Castling is the only move that allows two pieces, the king and a rook, to move at the same time.

Two special rules deserve a mention. En passant: if White's pawn advances two squares to d4, landing beside a black pawn on e4, Black's next move exd3 (e.p.) captures White's d4 pawn in passing while moving the black pawn to d3, as if the white pawn had moved only one square, to d3. The 50-move rule, by contrast, is rather obscure and rarely comes into play over the board or online.
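The description "one of the nearest squares, but not on the same file, rank or diagonal" is easy to turn into code. As a small illustration (mine, not from the original page), here is a function that lists a knight's legal destinations on an otherwise empty board:

```python
def knight_moves(square):
    """All squares a knight on `square` (algebraic, e.g. 'g1') can reach on an empty board."""
    f, r = ord(square[0]) - ord('a'), int(square[1]) - 1
    # The eight (file, rank) offsets of the knight's L-shaped move.
    deltas = [(1, 2), (2, 1), (2, -1), (1, -2), (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    return sorted(chr(f + df + ord('a')) + str(r + dr + 1)
                  for df, dr in deltas
                  if 0 <= f + df < 8 and 0 <= r + dr < 8)

print(knight_moves('d4'))  # eight squares from the centre
print(knight_moves('a1'))  # just two squares from the corner
```

The counts match the text: eight destinations from a central square like d4, only two from a corner like a1.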
The knight is the piece with the trickiest move in chess, and it is very different from all the other pieces. It is a diverse piece, the only one able to move over other pieces, which makes it difficult to pin down: the knight is not blocked by other pieces, it simply jumps to the new location. That makes it the most versatile of the chess pieces, being capable of movement in 8 directions as well as over other chess pieces. Especially in the center it works fine, attacking eight squares on the board. Winning the exchange (trading a knight or bishop for the opponent's rook) at intermediate level generally means winning the game.

General movement rules: a move consists of moving a single piece, in accordance with its rules of movement, to a square that is unoccupied or occupied by an enemy piece. Pieces cannot jump (the knight being the exception), and pawns cannot move backwards. The game is played on a chessboard consisting of 64 squares: eight rows and eight columns. By comparison, in the diagram on the left for the queen, the blue dots indicate to which squares that piece may move.

By far the best example of the knight's mobility is the Knight's Tour, a simple and light chess puzzle for your brain. The rules of the puzzle: you need to visit all the cells of the board according to the chess rules of the knight's move, without landing on any cell more than once. It may require some thought.
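The Knight's Tour mentioned above can also be solved by a short program. Here is a sketch of mine (not from the original page) using Warnsdorff's heuristic, i.e. always trying the square with the fewest onward moves first, with backtracking as a safety net:

```python
def knight_tour(start=(0, 0)):
    """Find a full knight's tour of an 8x8 board (squares as 0-based (file, rank))."""
    deltas = [(1, 2), (2, 1), (2, -1), (1, -2), (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

    def moves(sq, visited):
        x, y = sq
        return [(x + dx, y + dy) for dx, dy in deltas
                if 0 <= x + dx < 8 and 0 <= y + dy < 8 and (x + dx, y + dy) not in visited]

    path, visited = [start], {start}

    def extend():
        if len(path) == 64:
            return True
        # Warnsdorff ordering: most constrained squares first; backtracking is then rare.
        for nxt in sorted(moves(path[-1], visited), key=lambda s: len(moves(s, visited))):
            path.append(nxt)
            visited.add(nxt)
            if extend():
                return True
            path.pop()
            visited.discard(nxt)
        return False

    return path if extend() else None

tour = knight_tour()
print(len(tour), tour[:5])
```

Starting from a1 (coordinates (0, 0)) this finds a full 64-square tour essentially instantly; the heuristic alone usually succeeds, and the backtracking guarantees a tour is found even when it stalls.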
Before going further, some basics. Ranks and files: the horizontal rows are ranks, and going from left to right the vertical columns are files. The 16 chess pieces on each side are comprised of 1 king, 1 queen, 2 bishops, 2 knights, 2 rooks, and 8 pawns. The king is always the tallest piece on the chessboard, and the king chess piece will usually have a cross-like object on top. Only one piece can occupy a given square, a player may never move a piece onto a square already occupied by another of his or her own pieces, and you cannot move pieces on and off the board. The player with the White pieces starts first, chooses a piece and moves it according to the rules for that type of piece; during the game, players move their pieces over the board and capture their opponent's pieces along the way to the ultimate goal, the enemy king. Chess is a popular two-player board game originating from northern India, whose rules form the basis of the modern international game we know today.

For contrast with the knight, consider the other pieces. A pawn moves straight ahead but can only attack on an angle, one square at a time. Pawns have the most complex rules of movement: a pawn moves straight forward one square, if that square is vacant, and it can capture an enemy piece by moving one square forward diagonally; pawns are unusual since they move and capture in different ways. (Note: a pawn is often not counted as a "piece" in the narrow sense.) When a pawn reaches the other side of the chessboard (called the back rank), the rules allow it to be promoted to any other piece (queen, rook, bishop or knight, but not king!). En passant leaves people unaware of the rule absolutely flummoxed, and they may look at you like you've tried to pull off an illegal move while they weren't looking. The bishop can move to any square along the diagonal on which it stands. The queen, out in the open, can cover as many as 27 squares. The king does not move through or into check. Castling is the only move that allows two pieces to move during the same turn. With the exception of the knight, no piece may move over any pieces of either color: generally, a piece cannot pass through squares occupied by other pieces, but it can move to a square occupied by an opposing piece, which is then "captured" (removed from the board).

Back to the knight: it moves to one of the squares nearest to that on which it stands, but not on the same rank, file or diagonal. We can observe that a knight on a chessboard moves either (1) two squares horizontally and one square vertically, or (2) two squares vertically and one square horizontally; equivalently, it either moves two squares sideways and then one square up or down, or two squares up or down and then one square sideways. A knight on d4, for example, goes one, two, and then one to the side. Can the knight move when it does not jump over a piece? Yes: jumping is permitted, not required. The knight chess piece moves in a very mysterious way, and to place it right in the corner is just insane, as it attacks only two squares there! The piece is shaped like a horse head and has a unique movement; a typical Staunton wood knight piece is shown in the figure.

Some practical points. Open with either the e-pawn or the d-pawn: one of the key ideas of many chess openings is controlling the center, and neglecting the center is a classic bad chess move. The touch-move rule is frequently enforced in casual games too. Besides checkmate, the other way to lose is to run out of time; wins on time must be claimed by the player, and games are drawn if both flags fall. Draws are codified by various rules of chess, including stalemate (when the player to move has no legal move and is not in check), threefold repetition (when the same position occurs three times with the same player to move), and the fifty-move rule (when the last fifty successive moves made by both players contain no capture or pawn move); there is usually also a provision for a player to stop the clock and claim a draw when there is no way for the opponent to win. As an aside, in the Musketeer Chess variant, the extra pieces enter the game when the position right in front of them is freed by a move. The object of the game is to attack your opponent's King in such a way that he cannot prevent you from killing it (this is called checkmate), while at the same time preventing him from doing the same to you.
How to Move the Almighty King in Chess. The player with the White pieces always moves first. The Knight can move from one corner to the other of any 2x3 rectangle of squares. Rules of Chess: Knights This webpage gives the answers to some frequently asked questions about the official rules of chess regarding Knights. Knight – Knights move in an "L" shape, either two squares horizontally and V. There are some similarities Personally when I teach others about the knight movement I explain it The rule I teach kids which seems to work is that the knight can go to If you introduced a new chess piece, how would it look & move? The rules: in a standard game with two rooks with the same moves and two Description of chess pieces, their legal moves and relative values. If the piece in front of a musketeer piece is captured before it entered the game, the musketeer piece is also removed from the board. White always moves first, and the players move one piece at a time until one side captures the enemy's king. When making these moves the queen, rook or bishop cannot move over any intervening chess pieces. Two moves horizontal and one move vertical 2. It moves to a square that is two squares away horizontally and one square vertically, or two squares vertically and one square horizontally. The bishop moves in a straight diagonal line. Chess has six types of pieces: the Pawn, Rook, Knight, Bishop, Queen and King. Virtually nothing about chess has changed with Omega Chess. As noted above, the knight can capture any chess piece that stands at the end of his move. In the photo below, the knight can take either the queen or the bishop. The starting position is shown below. Unlike Rooks, Bishops or Queens, the Knight is limited in the number of squares it can move across. Simply, one asks whether the Knight, using its . Pick the most suitable square for a piece and develop it there once and for all. Moving away on two of the opposite color squares. Knight. 
The knight's move is unusual among chess pieces. The king is not in check. This capture is called 'checkmate'. The rules are in the section Laws of Chess of the FIDE Handbook. That's the case in both examples after Black's move. Chess is a board game played between two players, White and Black, who alternate turns. Learning how to play chess is not always easy, but once you do it can be an amazingly fun game of strategy. The most important chess piece is the king, because if your king is captured, you lose the game. The black knight may move to any of eight squares (black dots). According to the WBCA (World Blitz Chess Association) rules, a player who makes an illegal move loses the game instantly. Pieces cannot move through other pieces (though the knight can jump over them). Knight's Tour Problem: the knight is placed on any square of an empty board and, moving according to the rules of chess, must visit each square exactly once. Each variant has its own rules. Chess960: in Chess960 (Fischer Random), the initial position of the pieces is set at random. The King is limited in a similar way to the Queen: he moves in the same directions, but only one square at a time. The king can move one square in any direction that is not attacked by an opponent's piece. (The Knight is always a jumper.) Piece touché: in serious play, if a player having the move touches one of their pieces as if having the intention of moving it, then the player must move it if it can be legally moved. How to Play Advanced Chess. The King and the Rook move towards each other and swap places. The King, Queen, Bishop, Knight and Rook move and capture in the same way as in Standard Chess. The white knight in this case is limited to two squares (white dots). The bishop moves to any square along a diagonal on which it stands. Part 1: Learning Chess Terms. Friendly community of correspondence chess players of all levels and ages.
Can the knight jump over pieces of the same player? Yes. The rules of chess are governed by the World Chess Federation, which is known by the initials FIDE, meaning Fédération Internationale des Échecs. The touch-move rule in chess specifies that, if a player intentionally touches a piece on the board when it is his turn to move, then he must move or capture that piece if it is legal to do so. The Rules of Chess. Since obstructions are not a bar to its movement (unless there is a friendly piece on the destination square), the Knight is the piece with the trickiest move in chess. If a king is in check, then the player must make a move that eliminates the threat of capture and cannot leave the king in check. Checkmate happens when a king is placed in check and there is no legal move to escape. Chinese Chess has rooks, knights and pawns, and all of these move in the same or nearly the same way as their equivalents in International Chess. An example of a fork is if a knight attacks both the opposing king and queen at the same time. The Knight moves in an L shape in any direction. The Knight is the only piece that can jump over other pieces. In blitz chess, a move is completed only when the player starts the opponent's clock. After several more moves, Black captures White's Bishop on c1 with dxc1=Q. During castling a king moves two spaces towards the rook that it will castle with, and the rook jumps to the other side. Chess pieces can capture opposing chess pieces by moving onto the square they occupy. General Chess Rules.
Both have free rein to move in any direction they choose. These are called "chess variants". How the Chess Pieces Move: most pieces cannot move through other pieces; only the knight can jump over anything. Rules. The game of chess is played by two opponents by moving pieces on a square board. This makes him a significant piece in opening play. A Knight covered in a suit of armor and sitting on a strong, spirited horse, high above all the others on the battlefield. Ways to describe the knight move: two moves horizontal and one move vertical, or two moves vertical and one move horizontal. White chooses to move the knight at index 2 (0-based indexing) to the right. Even though cavalry are the fastest unit in the real world, it is a bit different on a chess board. Can you move to every square on the chess board, using only the moves of a knight? Can you visit every square in just 63 moves? You can see a solution by clicking here. The knight in the game is generally represented by a horse's torso. In chess, the knight is the only chessman capable of leaping over other pieces. Chess Pattern 3: Rook Double Attack. The Knight moves in an L-shape: either first two squares, then one to the left or right; or first one square, then two to the left or right. The Knights start one square from the corners. Pawns keep their normal initial position, but the rest of the pieces are arranged randomly. In the context of the Knight's tour problem, an item is a Knight's move. On an open board, the King would have 8 different moves at all times, as there are 8 different directions. Each player has sixteen pieces at the beginning of the game: one king, one queen, two rooks, two bishops, two knights, and eight pawns. In chess you have the ability to promote a pawn into a queen, rook, bishop or knight. Learn how to play and clarify any rules.
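The repeated verbal descriptions of the knight's move (two squares one way, then one at a right angle; only the destination square matters, since the knight jumps) can be condensed into a short sketch. The coordinate convention and function names below are my own, not taken from any of the sources quoted above:

```python
# All eight knight offsets: two squares in one direction,
# one square at a right angle.
KNIGHT_OFFSETS = [(2, 1), (2, -1), (-2, 1), (-2, -1),
                  (1, 2), (1, -2), (-1, 2), (-1, -2)]

def knight_moves(file, rank, size=8):
    """Squares a knight on (file, rank) can jump to on a size x size board.

    Coordinates are 0-based.  The knight jumps, so no check of intervening
    squares is needed; only the destination must be on the board.
    """
    return [(file + df, rank + dr)
            for df, dr in KNIGHT_OFFSETS
            if 0 <= file + df < size and 0 <= rank + dr < size]

centre = knight_moves(3, 3)   # 8 squares from the middle of the board
corner = knight_moves(0, 0)   # only 2 from a corner
print(len(centre), len(corner))  # 8 2

# The knight always changes square colour: (file + rank) parity flips.
assert all((f + r) % 2 != (3 + 3) % 2 for f, r in centre)
```

This also makes the edge-of-board weakness mentioned in the text quantitative: a central knight attacks eight squares, a corner knight only two.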
The game is played on a chessboard consisting of 64 squares: eight rows and eight columns. White just moved his pawn to d4. Standard chess rules apply for classic pieces. Knights move similarly to the knight in chess, but with more restrictions. If a player completes an illegal move by pressing the clock, then in addition to the usual obligation to make a legal move with the touched piece if possible, the standard penalty specified in rule 1C2a applies. Based on the way the knight moves, it is impossible for the knight to move onto another square of the same color as the one from which it originally moved. The Rules of Chess: chess is played on a square board of eight rows (called ranks and denoted with numbers 1 to 8) and eight columns (called files and denoted with letters a to h) of squares. Knights are the ONLY pieces in chess that can jump over other pieces. Paladin is also a reference to the virtues of a knight. It moves in an L-shape over the squares. THE RULES OF CHESS. Movement of Chess Pieces: each piece moves in a different way. Chess Corner. Chess For Dummies, 4th Edition. Rules of Chess: a lot of people have heard of the game chess but have never played it. (Always take a queen over a bishop.) Stay away from a4 and h4 pawn pushes. When you move your piece properly to a square occupied by an opponent's piece, you capture it. Each chess piece can move only a certain way. Forward 2, over 1 (in any direction). The modern rules first took form in Italy during the 13th century, giving more mobility to pieces that previously had more restricted movement (such as the queen). Pawns have the most complex rules of movement: a pawn moves straight forward one square, if that square is vacant. The knight is the only piece on the board that may jump over other pieces. So if the pawn is promoted to a Queen, the Queen, if it is in a position to do so, may check or checkmate the enemy King. Checkmate happens once the king is under attack, cannot move and cannot be helped by its own army of chessmen.
The Knight is considered a minor piece, and each player starts the game with two of them. Make one or two pawn moves in the opening, not more. Membership is free. The rook moves in a straight line, horizontally or vertically. The Knight changes the colour of the square it stands on with each move. One player, referred to as White, controls the white pieces, and the other player, Black, controls the black pieces. Besides the basic movement of the pieces, rules also govern the equipment used, the time control, the conduct and ethics of players, accommodations for physically challenged players, and the recording of moves using chess notation, as well as provide procedures for resolving irregularities which can occur during a game. Chess Rules for Moving: the King moves from its square to a neighboring square, the Rook in its line or row, the Bishop diagonally, the Queen may move like a Rook or a Bishop, the Knight jumps in making the shortest move that is not a straight one, and the Pawn moves one square straight ahead. Knights cannot move on the diagonal. This is the only situation in which you would move two of your own chess pieces in the same move. The chessboard is eight squares long by eight squares wide. Here, only the movement of a Knight on a Chessboard is important. So what? Well, according to the FIDE regulations a game of chess ends as soon as neither side is capable of giving mate. Sometimes, a Knight can be worth as much as a Queen! The Pawn is worth 1. Simply move the knight (in legal knight chess moves) to every square on the board in as few moves as possible. Capturing with the Knight in Chess. Silver generals excel at moving through crowded regions of the board. The Knight moves in an L shape in any direction. For example, a Rook cannot move 5 squares forward and then 3 squares to the side in the same turn. FIDE also gives rules and guidelines for chess tournaments. The king can castle to either side as long as the castling conditions are met.
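The "visit every square" puzzle mentioned above, the knight's tour, can be solved on small boards by plain backtracking. This is an illustrative sketch of my own, not code from any of the quoted sources; a full 8x8 tour is better attacked with a move-ordering heuristic such as Warnsdorff's rule:

```python
# Knight offsets: two squares in one direction, one at a right angle.
OFFSETS = [(2, 1), (2, -1), (-2, 1), (-2, -1),
           (1, 2), (1, -2), (-1, 2), (-1, -2)]

def knight_tour(size=5, start=(0, 0)):
    """Find an open knight's tour by depth-first backtracking.

    Returns the list of visited squares, or None if no tour exists
    from the given start square.
    """
    visited = {start}
    tour = [start]

    def extend():
        if len(tour) == size * size:
            return True           # every square visited exactly once
        f, r = tour[-1]
        for df, dr in OFFSETS:
            nxt = (f + df, r + dr)
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in visited):
                visited.add(nxt)
                tour.append(nxt)
                if extend():
                    return True
                visited.discard(nxt)  # dead end: undo and try next move
                tour.pop()
        return False

    return tour if extend() else None

tour = knight_tour()
print(len(tour))  # 25: every square of the 5x5 board, each exactly once
```

On a 5x5 board the naive search finishes quickly; for 8x8 the same code works in principle but the search tree explodes without a better move ordering.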
The chess board is positioned between the players so that the square in the corner to the right of each player is white. A Knight may move as though it were starting in one corner of a 2x3 or 3x2 rectangle and jumping to the opposite corner. The rules of Extinction Chess are detailed in Schmittberger's book, New Rules for … The movement of the knight is restricted to an "L" shape or right angle. The knight can attack any enemy piece that stands at the end of his move. In blitz chess, the rules are different. The king can move to any of the squares pointed to by an arrow in the diagram on the left. If it has not yet moved, a pawn also has the option of moving two squares straight forward, provided both squares are vacant. This is a rule of chess that is enforced in all games played in over-the-board competitions. Chess Knight: the knight is a chess piece on a chess board. It makes an effective guard and attacker. Unless the knight can be taken, the king is forced to move, as it is in check, and the queen can be taken at little to no expense. So we need some unique movement for the knight (not horse). For much of that time, the rules varied from area to area. The most common of the three special chess rules is called castling, a move that is normally used to improve the king's safety. In this position I moved my knight from e5 to f7, with a discovered attack on the rook on d6 from my bishop. Even if White has met the conditions of the first four rules, he cannot castle in the position above. Each side has 1 King, 1 Queen, 2 Rooks, 2 Bishops, 2 Knights and 8 pawns. The Knight. Notice the letter L: this L can be backwards, one-two-one in one direction, or it can be sideways or upside down. While most people play standard chess rules, some people like to play chess with changes to the rules. The knight is described as armed cavalry in the King's Army. Only a web browser is required to play. Basics. Piece Movements Part 3: The Pawn and the Knight.
Bishops: bishops can move any number of squares diagonally. This chess piece, sometimes called the "horse", has a quite mysterious way of moving on the board which can puzzle beginners who are just starting to learn the chess rules. This is the same as saying that it moves two squares straight, then one square to the side. The knight makes a move that consists of first one step in a straight direction and then one more step diagonally outward. The rules of chess have evolved much over the centuries, from the early chess-like games played in India in the 6th century. See the diagram below. And if the knight on g5 moves to f3, you write Ngf3. The Rules of Chess: Rook. As in standard Chess, a king is not allowed to make a move to a position where it would be in check. There is a significant difference between western Chess and Xiangqi, because a Xiangqi knight can be blocked by an intervening piece. Light pieces: two knights and two bishops. Eight pawns. Chess pieces can only move within the framework of certain rules. How the Knight moves in Chess.
For the full rules of chess, see another webpage. One player plays with the white pieces, and the other player plays with the black pieces. Wherever possible, make a good developing move which threatens something or adds to the pressure on the centre. Bishop. From the Laws of Chess it's clear that the game is over the moment there's no legal combination of moves that would make a mate possible. Your Move Chess and Games (1-800-645-4710) presents basic Chess Rules on how to play chess. The pawn can become a Queen, Bishop, Rook, or Knight. This is a healthy percentage of the board, 42 percent. This gives it a degree of flexibility that makes it a powerful piece, especially early in the game when the board is cluttered with pieces. If you've passed all the beginner stuff and want to move on to the advanced form of the game, start here. Your king must survive. To do this, move your King not one but two spaces towards the Rook you are castling with. How the Chess Knight moves. The king has not moved. The knight moves in the shape of an "L", moving two squares in one direction and then one more square at a 90° angle. Shake hands across the board before the game starts. A Brief Guide to the Rules of Chess. Chess Rules - The Knight. Except for the Knight, chess pieces cannot move by jumping over other chess pieces that stand in their way. White pawns are located on the 2nd row and Black's on the 7th row. Chess Rules. The exception is that on their first move pawns can move two squares, but only on their first move! A pawn is weak on its own, but it has a unique strength. Otherwise the rules of chess apply, with the following differences: the knight can capture the pawn safely, as it can move back to its original square. Another rule to keep in mind is that White makes the first move of the game. A pawn can capture an enemy piece one square diagonally forward. Movement. A few rules that people, many of them starters, aren't aware of.
A chess piece can move to another position or can capture one of the opponent's chess pieces by moving onto its square. The knight can move two squares in one direction and then one square at a 90 degree angle, just like the shape of an "L". When sitting across the board from another player, the rules of chess apply. White always moves first. But on the edge of the board the knight is weak, as it attacks only four squares. Hence, it would make sense for the King to move in such a manner in the game of chess. We use P to denote a chess pawn, but normally we don't mention it. The Knight piece can skip over any other pieces to reach its destination position. The Object. Imagine a Knight on a battlefield. If you want to be able to show your best chess in the middlegame, first you need to survive the opening by making sound moves. The chess board is made up of 64 equal squares, coloured alternately light (the "white" squares) and dark (the "black" squares). She can move as many squares as she desires and in any direction (barring any obstructions). Develop knights before bishops. How the Pieces Move; Setting Up a Chess Board; Basic Strategies; Special Rules. Each side starts out with 16 pieces, consisting of 8 pawns, 2 rooks, 2 knights, 2 bishops, a queen and a king. The Rules of Chess. The king has a special move done with the rook; it is called castling. Of course not! The king sits comfortably on his throne, while he has others do his bidding. The Chess Knight is also a very special chess piece. Knight Movement. A particle is allowed to move in the $\mathbb{Z}\times \mathbb{Z}$ grid by choosing either of two jumps: 1) move two units to the right and one unit up, or 2) move two units up and one unit to the right. Having this knowledge will give you more credibility as a chess player and provide you with more ample strategies for victory over your opponent. How to Play Chess for Beginners. The Chess Knight should always be placed close to where the action is.
The Knight is also the only piece that can jump over other chess pieces. Fine's rules for the opening. The most common way that this occurs is that one player doesn't notice that they are in check and makes a move that doesn't get out of check. This is twice as much as a rook. The knight (♘, ♞) is a piece in the game of chess, representing a knight (armored cavalry). Before you can play a game of chess, you need to know how to move the pieces. Knights: knights can move only in an L-shape, one square up and two over, or two squares up and one over. Some differences between shogi and international chess have been mentioned; the drop rule makes the two games very different in character, and the knight's move in shogi is much more restrictive than in chess. Checkmate ends the game, and the side whose king was checkmated loses. If it has not yet moved, a pawn also has the option of moving two squares straight forward. Pawns are the only pieces that capture differently from how they move. How the Pieces Move and Capture: all chess pieces (including the Omega Chess pieces) capture an opponent's piece by landing on the square occupied by the opponent's piece.
Why power and not amplitude? I'm reading 'The Scientist and Engineer's Guide to Digital Signal Processing'. It says for a noisy signal, the important parameter is not the deviation from the mean, but the power represented by the deviation from the mean. So, if I were to consider Amplitude it would just be voltage deviations from the mean, but for power it becomes Voltage^2 deviations. Then an average is taken and a square root to give the standard deviation [It would just be an average deviation in the case of the Amplitude]. I'm trying to understand why considering just the Amplitude is not good enough? Why is it that when I'm dealing with noisy signals, I need to consider power calculations to try and reduce the noise? JJT It's generally not true that the average deviation from the mean is not important or much less important than the power of the deviation. For sampled data, the mean deviation would be defined as $$\epsilon=\frac{1}{N}\sum_{n=0}^{N-1}|x_n-\mu_x|\tag{1}$$ where $\mu_x$ is the mean of the data. This quantity is definitely relevant. However, the absolute value in $(1)$ makes it much less convenient to use than the equally relevant standard deviation $$\sigma=\sqrt{\frac{1}{N}\sum_{n=0}^{N-1}|x_n-\mu_x|^2}\tag{2}$$ which is differentiable, and, consequently, easier to handle analytically (such as in least squares fitting, etc.). (Note that in Eq. $(2)$ you might as well use a factor $1/(N-1)$ instead of $1/N$, but that is not relevant here). Apart from the above argument based on mathematical convenience, it is true that in the case of uncorrelated noise sources, it is the noise variances (i.e., the squared standard deviations) that add up, and not the mean deviations. robert bristow-johnson $\begingroup$ You may like to add $|(.)|$ around the square term to make it applicable for $x_n\in \mathbb{C}$ as well. 
$\endgroup$ – Neeks Nov 24 '15 at 12:59 $\begingroup$ I'm still unable to understand why we use power with regard to noisy signals, unfortunately. If energy was getting converted into different forms, then I understand that using a common standard [i.e. power] to make comparisons is logical. But within a noisy electrical signal, since it's all electrical energy [flow of electrons], why not just use current, for example? Square the current instead of the voltage..? $\endgroup$ – JJT Nov 25 '15 at 10:52 $\begingroup$ @JJT: The point is that we're working with a squared quantity for the above mentioned reasons (easier to handle mathematically, and variances of independent noise sources simply add up). It doesn't matter if it's voltage, current, or square-feet. In signal processing any squared quantity is referred to as power, because it's proportional to power. $\endgroup$ – Matt L. Nov 25 '15 at 12:35 Independent noise sources add in the power domain, whereas in the voltage domain they add incoherently. "Signals" add in the voltage domain. You might think of Parseval's theorem as well. BTW: a good analogy is in linear algebra, where collinear vectors add directly, but vectors in different directions combine via a form of the Pythagorean theorem. rrogers
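Matt L.'s closing remark, that for uncorrelated noise sources it is the variances and not the mean deviations that add, is easy to check numerically. The sketch below is mine; the seed, sample size and source variances are arbitrary choices:

```python
import random

random.seed(1)
N = 200_000

# Two independent zero-mean Gaussian noise sources with known variances.
a = [random.gauss(0.0, 1.0) for _ in range(N)]   # variance 1
b = [random.gauss(0.0, 2.0) for _ in range(N)]   # variance 4
s = [x + y for x, y in zip(a, b)]                # their sum

def variance(x):
    mu = sum(x) / len(x)
    return sum((v - mu) ** 2 for v in x) / len(x)

def mean_deviation(x):
    mu = sum(x) / len(x)
    return sum(abs(v - mu) for v in x) / len(x)

# Variances add: Var(a+b) = Var(a) + Var(b), about 1 + 4 = 5 here.
print(variance(s))                               # close to 5

# Mean deviations do not add: for a Gaussian, E|x-mu| scales with sigma,
# and sigma_a + sigma_b = 3 is larger than sigma_s = sqrt(5).
print(mean_deviation(a) + mean_deviation(b))     # noticeably larger than
print(mean_deviation(s))                         # the sum's mean deviation
```

The first printed value sits near 5 up to sampling error, while the summed mean deviations overshoot the sum's actual mean deviation, which is the point of the answer.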
Works by Justin Moore (view other items matching `Justin Moore`, view all matches) Justin Tatch Moore [10] Justin Moore [2] Justin Hartley Moore [1] Set mapping reflection. Justin Tatch Moore - 2005 - Journal of Mathematical Logic 5 (1):87-97. In this note we will discuss a new reflection principle which follows from the Proper Forcing Axiom. The immediate purpose will be to prove that the bounded form of the Proper Forcing Axiom implies both that $2^\omega = \omega_2$ and that [Formula: see text] satisfies the Axiom of Choice. It will also be demonstrated that this reflection principle implies that □ fails for all regular κ > ω1. Baumgartner's isomorphism problem for $\aleph_2$-dense suborders of $\mathbb{R}$. Justin Tatch Moore & Stevo Todorcevic - 2017 - Archive for Mathematical Logic 56 (7-8):1105-1114. In this paper we will analyze Baumgartner's problem asking whether it is consistent that [Formula: see text] and every pair of $\aleph_2$-dense subsets of $\mathbb{R}$ are isomorphic as linear orders. The main result is the isolation of a combinatorial principle [Formula: see text] which is immune to c.c.c. forcing and which in the presence of [Formula: see text] implies that two $\aleph_2$-dense sets of reals can be forced to be isomorphic via a c.c.c. poset. Also, it will be shown that it is relatively consistent with ZFC that there exists an $\aleph_2$-dense suborder X of $\mathbb{R}$ which cannot be embedded into [Formula: see text] in any outer model with the same [Formula: see text]. The proper forcing axiom, Prikry forcing, and the singular cardinals hypothesis. Justin Tatch Moore - 2006 - Annals of Pure and Applied Logic 140 (1):128-132. The purpose of this paper is to present some results which suggest that the Singular Cardinals Hypothesis follows from the Proper Forcing Axiom. What will be proved is that a form of simultaneous reflection follows from the Set Mapping Reflection Principle, a consequence of PFA. 
While the results fall short of showing that MRP implies SCH, it will be shown that MRP implies that if SCH fails first at κ then every stationary subset of $S^{\kappa^+}_\omega$ reflects. It will also be demonstrated that MRP always fails in a generic extension by Prikry forcing. A Gδ ideal of compact sets strictly above the nowhere dense ideal in the Tukey order. Justin Tatch Moore & Sławomir Solecki - 2008 - Annals of Pure and Applied Logic 156 (2):270-273. We prove that there is a Gδ ideal of compact sets which is strictly above the nowhere dense ideal in the Tukey order. Here the nowhere dense ideal is the collection of all compact nowhere dense subsets of the Cantor set. This answers a question of Louveau and Veličković asked in [Alain Louveau, Boban Veličković, Analytic ideals and cofinal types, Ann. Pure Appl. Logic 99 171–195]. Areas of Mathematics in Philosophy of Mathematics. Some remarks on the Open Coloring Axiom. Justin Tatch Moore - 2021 - Annals of Pure and Applied Logic 172 (5):102912. Mathematical Logic in Philosophy of Mathematics. Proper forcing, cardinal arithmetic, and uncountable linear orders. Justin Tatch Moore - 2005 - Bulletin of Symbolic Logic 11 (1):51-60. In this paper I will communicate some new consequences of the Proper Forcing Axiom. First, the Bounded Proper Forcing Axiom implies that there is a well ordering of R which is Σ1-definable in (H(ω2), ∈). Second, the Proper Forcing Axiom implies that the class of uncountable linear orders has a five element basis. The elements are X, ω1, ω1*, C, C*, where X is any suborder of the reals of size
ω1 and C is any Countryman line. Third, the Proper Forcing Axiom implies the Singular Cardinals Hypothesis at κ unless stationary subsets of $S^{\kappa^+}_\omega$ reflect. The techniques are expected to be applicable (…) Montréal, Québec, Canada, May 17–21, 2006. Jeremy Avigad, Sy Friedman, Akihiro Kanamori, Elisabeth Bouscaren, Philip Kremer, Claude Laflamme, Antonio Montalbán, Justin Moore & Helmut Schwichtenberg - 2007 - Bulletin of Symbolic Logic 13 (1). Aronszajn lines and the club filter. Justin Tatch Moore - 2008 - Journal of Symbolic Logic 73 (3):1029-1035. The purpose of this note is to demonstrate that a weak form of club guessing on ω1 implies the existence of an Aronszajn line with no Countryman suborders. An immediate consequence is that the existence of a five element basis for the uncountable linear orders does not follow from the forcing axiom for ω-proper forcings. Weak diamond and open colorings. Justin Tatch Moore - 2003 - Journal of Mathematical Logic 3 (01):119-125. The purpose of this article is to prove the relative consistency of certain statements about open colorings with $2^{\aleph_0} < 2^{\aleph_1}$. In particular both OCA and the statement that every 1–1 function of size ℵ1 is σ-monotonic are consistent with $2^{\aleph_0} < 2^{\aleph_1}$. As a corollary we have that $2^{\aleph_0} < 2^{\aleph_1}$ does not admit a $\mathbb{P}_{\max}$ variation. University of California at Berkeley, Berkeley, CA, USA, March 24–27, 2011. G. Aldo Antonelli, Laurent Bienvenu, Lou van den Dries, Deirdre Haskell, Justin Moore, Christian Rosendal (UIC), Neil Thapen & Simon Thomas - 2012 - Bulletin of Symbolic Logic 18 (2). Metrical Analysis of the Pāli Iti-vuttaka, a Collection of Discourses of Buddha. Justin Hartley Moore - 1907 - Journal of the American Oriental Society 28:317. Erratum: "Set mapping reflection". Justin Tatch Moore - 2005 - Journal of Mathematical Logic 5 (02):299-299. Stevo Todorcevic. Walks on ordinals and their characteristics. Progress in Mathematics, vol. 263. Birkhäuser Verlag, Basel, 2007, vi + 324 pp. 
[REVIEW] Justin Tatch Moore - 2011 - Bulletin of Symbolic Logic 17 (1):118-119.
Physics Stack Exchange is a question and answer site for active researchers, academics and students of physics. It only takes a minute to sign up. Is there a definition of "direction" in physics? Is there an actual definition of "direction" (that is, spatial direction) in physics, or is it just one of those terms that's left undefined? In physics textbooks it's always just taken for granted that the reader knows what it means (and it is true that just about everyone does indeed have an "intuitive" idea of what it is). But it would be more satisfying to have a concrete definition, if possible. spacetime definition geometry Jack $\begingroup$ Try having a look at Arnold's Mathematical Methods of Classical Mechanics. In the very first pages he gives a rigorous definition of Galilean space. But don't let the mathematics confuse you. $\endgroup$ – Giuseppe Negro $\begingroup$ It's looking more and more as though it would improve your question significantly to expand on what exactly you're looking for in a definition. Perhaps take a look at what the dictionary says and think about why it's unsatisfactory for you. $\endgroup$ – David Z In a slightly philosophical vein, direction acquires meaning only when you compare two objects. For example, when you attribute a "direction" to a vector, you are comparing it to the basis set of that vector space. By comparing I mean taking the inner product. I must emphasize that "direction" has meaning only as a representation. A representation is a list of coefficients from the field over which the vector space is defined. A representation is meaningless unless the basis set is specified. If somebody gives you a list of three numbers (a,b,c), there is not much you can say unless they give you the basis set $\{v_{i}\}_{i=1,3}$. Antillar Maximus $\begingroup$ I don't think that's how physicists view it. As I commented to Lucas, direction is not something that is relative to anything. 
$\endgroup$ – Jack $\begingroup$ Space is assumed isotropic, so you have to define a reference (frame, object) before the notion of direction makes sense. We deal with Vector Spaces extensively in physics, so I'm not sure why you say it is not a physics point of view? $\endgroup$ – Antillar Maximus $\begingroup$ Actually, speaking as a physicist(-in-training), I think Antillar is on to something. If there was only one direction, the concept of a direction wouldn't have any use because there would be nothing to compare it to. $\endgroup$ $\begingroup$ This sounds exactly like what I did in linear algebra, i.e. taking representations of vectors with respect to a certain basis. For example, in the matrix equation $y=Xc$, $c$ is the representation of $y$ with respect to $X$, aka $c=R_Xy$. The way physicists use it, $X$ is usually the identity matrix $I$, i.e. the three unit vectors $i,j,k$. $\endgroup$ – Greg I'll take this to a different level of abstraction, since you seem to want a more Philosophical approach. Given a measure of distance between two places $x$ and $y$, $d(x,y)$, a concept of direction can be formed by considering all the dimensionless ratios of distances between pairs of places from a set $P=\{x_1,x_2,...,x_n\}$. In some cases we might convert a dimensionless ratio $-1\le r\le 1$ into an "angle" $\arccos(r)$ (which is a purely algebraic assignment that will only be helpful in some cases). If there is a (not necessarily unique) subset $S$ of the dimensionless ratios of distances that determines all the other dimensionless ratios, we might consider those to be "directions", relative to that subset. 
If I give you the directions $S$ and the way in which those directions determine the whole set of dimensionless ratios, and nothing extra, you can reconstruct the whole set of dimensionless ratios $R(S)$, or, more interestingly, if I give you a way in which directions determine the whole set of dimensionless ratios and give you a subset $U\subset S$, that determines $R(U)$, which is, so to speak, where I want you to go. [There are many other mathematical constructions one could contemplate, but space here and my time and everyone else's are all finite.] This depends on us being able to identify $n$ "places" and on us assigning as many as $\frac{n(n-1)}{2}$ distances between them. How one does that in elementary cases is learned in geometry classes, at increasingly sophisticated levels from before kindergarten to beyond Euclid. At some point there is a move from a finite number of points/places to a countably infinite number of points/places, and then, if one doesn't mind such things, to an uncountably infinite number of points/places, but those are just ways to fill in the between (I feel happier filling it up, giving a space a continuous topology, but that's a prejudice and it's not clear that it's necessary for Physics). It's not uncommon in Mathematical Physics to assign distances to pairs of much more abstract places. I work with individual functions or with sets of individual functions as a single place. Where differences arise, it is often because the set of places cannot be embedded in a low-dimensional space. In general it will be possible to embed a set of places and the distances between them into a lower-dimensional space than otherwise if one allows the space to be curved. Another significant difference arises if there are negative as well as positive distances, which prevents whatever system one constructs being embedded into a Euclidean space of any dimension, but might allow it to be embedded into a Minkowski or pseudo-Riemannian space. 
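As a minimal numerical sketch of this idea, assuming a Euclidean distance function: the angle at one place between two others can be recovered purely from pairwise distances via the law of cosines, so a "direction" emerges from dimensionless combinations of distances alone (the function names here are illustrative):

```python
import math

def dist(p, q):
    """Euclidean distance between two 'places' given as coordinate tuples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def angle_from_distances(x, y, z):
    """Angle at x between y and z, computed only from pairwise distances.

    The ratio fed to arccos is dimensionless, so the result is unchanged
    if every distance is rescaled by the same factor.
    """
    dxy, dxz, dyz = dist(x, y), dist(x, z), dist(y, z)
    r = (dxy**2 + dxz**2 - dyz**2) / (2 * dxy * dxz)  # law of cosines
    return math.acos(max(-1.0, min(1.0, r)))

# A right angle recovered without ever writing down a vector:
theta = angle_from_distances((0, 0), (1, 0), (0, 1))
```

Rescaling all the places by the same factor leaves the angle unchanged, which is exactly the sense in which the ratios, not the distances themselves, carry the directional information.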
A Physicist's criteria for how useful any given such assignment of places and distances might be will presumably include some notion of reproducibility, which gets into more difficult abstract territory, including the now ever-present relationship between probability and statistics. Another Question, of course, is "What is Distance?" Ultimately I suppose one comes to some irreducible definitions, where one throws up one's hands and asks whether we're just going to talk about stuff or do something interesting with what we all just know, until someone points out persuasively that we don't and that it's useful to just know something different. – Peter Morgan I reckon it describes a non-commutative relationship between two objects (or events) which is completely determined by their position in some (measurable) space, independent of their distance (by some well-defined measure of distance) and some set of operations on both of them (probably just translation really). Can anyone do better? (I expect they can.) As an aside, I guess you could use a method more suited to analytical philosophy. If I wanted to examine what a direction as used by physicists was, I would do something like: 1) Acknowledge that most physicists would agree on what a direction is. 2) Look at how they use it and write about it; I expect I would see that it is a relationship between two things, that it requires some concept of a space, probably some set of coordinates, etc. 3) Try to concisely write down what I learned, probably tearing my hair out at the same time. 4) Go to 2 and check what I wrote. – Lucas $\begingroup$ Thanks for the laugh. The AP approach is what I've been taking, with the concomitant tearing of the hair! By the way, according to physicists, direction, whatever it is, is completely independent of coordinate systems. This is something that makes an attempt at a definition even more elusive.
$\endgroup$ $\begingroup$ I guess the whole point of differential geometry is to not have a specific set of coordinates. But there is still a space to which coordinates can always be applied, and therefore manifold potential positions (some of which are joined to others) which one can conceptualize something being in, and without which direction would be meaningless. $\endgroup$ – Lucas $\begingroup$ @Jack: it is not elusive--- the interval of a "fifth" is perfectly well defined, even though to give it an absolute location you need a lower note to tell you what the upper note is. The notion of direction is the same--- it's three numbers which you can rotate. $\endgroup$ – Ron Maimon $\begingroup$ @Ron: To play devil's advocate here: how does one rotate three numbers, and what makes a set of three numbers rotatable? Is 5 0 2 a direction? Do those numbers mean a direction without invoking anything else? Are the last three digits of my phone number a direction? Personally, I have learned to be wary of saying it is easy to define anything until I've written a pretty concrete definition myself, as my experience of such definitions is that they're infinitely harder than they seem on the surface. $\endgroup$ $\begingroup$ @Lucas: Three numbers are meaningless unless you specify the basis set. $\endgroup$ Take the tangent vector at the start of the shortest path between a pair of distinct points, and mod out by distance. What you have left is "direction". In a Euclidean space with a standard metric, you can do this by just dividing any non-zero vector by its length. The resulting normalized vector is a "direction". For curved spaces, it is a bit more complicated, but still the same basic idea. – Mikola Simply: Points are well defined in a space (e.g. Euclidean space). Now consider two points. One can assign two pieces of information to these two points: 1) distance and 2) direction. I will skip the concept of distance.
The choice (or information) of which point is first and which is second defines the direction. – Matrix $\begingroup$ how can you define direction for two points? $\endgroup$ – Physiks lover Man, some of the answers are way over the heads of morons like me, but anyway... At a basic level, a vector is constructed from the difference between two coordinates a-b. The number that represents the distance between them is the magnitude of displacement vector ab. There is nothing else, certainly no direction intrinsic to it. Introducing a third coordinate means you can define a second displacement vector ac. Let's suppose the magnitude of ab = ac. We can define the angle between ab and ac as the ratio of the arc distance between b and c to the circumference of a circle of radius equal to the magnitude of ab=ac. That's one way of doing it, but essentially you're just assigning a number to a pair of vectors. Direction is measured by angle and therefore is a function that assigns a number to two vectors. It isn't intrinsic to a vector. – Physiks lover The definition of direction is by three numbers--- (a,b,c) which represent the components of a vector along an arbitrary x-axis, y-axis, and z-axis. But if you ignore the motivation, a vector is a triplet of numbers. This reduces the description of direction to the description of real numbers, and has been the standard definition since the time of Descartes. The reason this definition took so long to formulate is because there is an arbitrariness in the choice of coordinates. Given a collection of directions (triplets of numbers) which describe a physical situation, one can always transform all the directions by a rotation matrix (which is defined noncircularly as a matrix all of whose columns are perpendicular to each other and have unit length), and the situation is not altered.
Because of the rotation business, mathematicians are often uncomfortable with the definition of direction by triplets of numbers, but this is really the best way, since all other methods are more complicated and needlessly so. The issue of an arbitrary coordinate frame to describe invariant objects is important for other reasons, and is central to gauge theories. LATER EDIT: The proper definition of a pure direction is the ratio of a nonzero vector to its length, which is algebraically simple, because the length is the square root of the sum of the squares of the components. This defines a pure direction. But a vector with direction and magnitude is simply the three numbers together. This obvious clarification is because of a downvote and comment below. – Ron Maimon $\begingroup$ The whole vector isn't the direction; only the vector mod its length - e.g. the vector normalized to 1 whenever possible - is the "directional" part of the information. $\endgroup$ – Luboš Motl $\begingroup$ @Lubos: everybody knows that, including the OP. The problem he had was reducing direction to more primitive concepts, like real numbers. The issue with the other answers here is that they do not complete the reduction to more primitive concepts. $\endgroup$ $\begingroup$ I doubt everyone knows that, including the OP. $\endgroup$ There is no reference to "direction" of a vector in the definition of a vector or vector space. However, we may have various examples of vector spaces. For example, the real numbers form a vector space (without any direction). As another example, geometric arrows can form a vector space. In the latter example, arrows are sketched with a "direction" with respect to a reference (implicitly or explicitly), for example with respect to the edge of the paper, an axis, or an oriented line connecting two points.
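A minimal sketch of the prescription above — a pure direction as a nonzero triplet divided by its length, transformed by a rotation matrix whose columns are orthonormal (the helper names are illustrative):

```python
import math

def length(v):
    """Length: square root of the sum of the squares of the components."""
    return math.sqrt(sum(c * c for c in v))

def direction(v):
    """Pure direction: a nonzero vector divided by its length."""
    L = length(v)
    return tuple(c / L for c in v)

def rotate(R, v):
    """Apply a 3x3 matrix (given as rows) to a 3-component vector."""
    return tuple(sum(R[i][j] * v[j] for j in range(3)) for i in range(3))

# Rotation by 90 degrees about the z-axis: columns are orthonormal.
R = [[0, -1, 0],
     [1, 0, 0],
     [0, 0, 1]]

d = direction((3, 4, 0))   # the pure direction of the triplet (3, 4, 0)
rd = rotate(R, d)          # a rotation sends one pure direction to another
```

Rotating a pure direction yields another unit-length triplet, which is the sense in which "the situation is not altered" by a change of coordinate frame.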
Journal of Nanobiotechnology Preparation of PLA/chitosan nanoscaffolds containing cod liver oil and experimental diabetic wound healing in male rats study Payam Khazaeli1,2, Maryam Alaei3, Mohammad Khaksarihadad4 & Mehdi Ranjbar ORCID: orcid.org/0000-0002-5844-32991 Journal of Nanobiotechnology volume 18, Article number: 176 (2020) Diabetes mellitus is one of the most common metabolic disorders. One of the important metabolic complications in diabetes is diabetic foot ulcer syndrome, which causes delayed and abnormal healing of the wound. The formulation of nanoscaffolds containing cod liver oil, by shifting the hemodynamic balance toward the vasodilator state, increasing wound blood supply, and altering plasma membrane properties, namely the membrane phospholipid composition, can be effective in wound healing. In this study, the electrospinning method was used to produce poly lactic acid/chitosan nanoscaffolds as a suitable bio-substitute. After preparing the nanoscaffolds, the products were characterized with dynamic light scattering (DLS), transmission electron microscopy (TEM) and scanning electron microscopy (SEM). The optical properties of the polymer, and a comparison of the absorption of the single polymer and the polymer–drug system, were evaluated with UV–Vis spectra. The structure and functional groups of the final products were characterized by Fourier-transform infrared spectroscopy (FT-IR) and energy dispersive spectroscopy (EDAX) as elemental analysis. The results showed that the optimum formulation of cod liver oil was 30%, which formed a very thin fiber that was rapidly absorbed into the wound and produced significant healing effects. According to the results, poly lactic acid/chitosan nanoscaffolds containing cod liver oil can be a suitable bio-product for treating diabetic foot ulcer syndrome. Poly lactic acid/chitosan nanoscaffolds were synthesized using a microwave-assisted electrospinning process.
Nanoscaffolds showed high potential in wound healing recovery after 14 days. PLA/chitosan nanoscaffolds containing 30% cod liver oil were synthesized with a size of about 50–150 nm. The wound area indicated that there was significant improvement in the wound surface on the 14th day. The global prevalence of diabetes has increased dramatically over the past 2 decades [1]. Diabetes mellitus is the most common heterogeneous metabolic disorder [2], which is associated with a disorder in the metabolism of sugars, lipids, and proteins and is characterized by elevated blood glucose or an impaired insulin response of tissues [3, 4]. Patients suffering from diabetes mellitus have a limited ability to mount an immune response and are very susceptible to infection and at risk of terminal limb amputation and recurrence of the wound [5]. Fatty acids have physiological and pathological roles in diseases such as atherosclerosis [6, 7] and inflammation [8], as well as in normal wound healing [9, 10]. The effect of fatty acids on wound healing is exerted through alterations in plasma membrane properties [11, 12], such as changes in membrane phospholipid composition [13,14,15], increased growth factor activity [16, 17], cell differentiation [18, 19], decreased production of eicosanoids [20] and lipid mediators of inflammation [21], followed by reduced inflammation and production of interleukin-1 and collagen [22]. Cod liver oil, as a rich source of omega-3 fatty acids, has many potential effects on modulating various diseases, especially diabetes mellitus [23, 24], and on improving vasodilation [25,26,27]. In many studies, the immune and allergic responses of rats have been investigated during wound healing [28,29,30]. Many scientific works have shown that cod liver oil accelerates many of the potential mechanisms involved in wound healing [31,32,33].
In recent years, new drug delivery systems such as nanofibers [34], nanoparticles [35], cell therapy, and stem cells [36], used as alternatives to common pharmaceutical methods, have received great attention because they can reduce the need for continuous follow-up of the disease and increase the quality of treatment [37]. Nanotechnology has solved many concerns in the field of medicine because it deals with materials that have unique surface properties [38,39,40,41,42,43]. Chitosan has a good crosslinked structure for encapsulating drugs [44], and poly lactic acid possesses properties such as the ability to form hydrogels under physiological conditions [45], mild gel degradation that allows a wound to heal successfully, and support for the growth and movement of nutrients [46]. In recent years, the science of nanotechnology has attracted particular attention from researchers in various fields of medicine and pharmaceuticals [47]. Nanofibers and nanoparticles can release a drug in a controlled manner over a long time [48]. These structures can act as an appropriate topical drug delivery system that provides the appropriate drug concentration; other advantages of this system include the ability to transport hydrophilic and lipophilic drugs simultaneously, depending on their structure [49]. Examples of natural polymers used in the fabrication of nanofibers with the electrospinning method [50,51,52,53] include keratin [54], gelatin [55], cellulose [56], and polysaccharides such as chitosan and alginate. The synthesis of PLA/chitosan nanofibers has been reviewed in recent studies [57,58,59]. Microwave irradiation, as a cost-effective, eco-friendly, and high-efficiency method, is used for preparing nanoparticles for various applications [60], and the electrospinning process, with its high-voltage power, generates polymer fibers of nanometer dimensions which show unique physical and chemical properties [61].
In this study, new developments in the fabrication of nanoscaffold materials, such as the microwave-assisted electrospinning process, were applied to prepare and formulate poly lactic acid/chitosan containing cod liver oil as a suitable cost-effective method. A summary of the research on nanoscaffold applications in wound healing recovery is displayed in Table 1. The results indicate that the synergistic effect of the poly lactic acid/chitosan-containing hydrogel is the key factor in obtaining a suitable biological wound dressing. Table 1 Summary of research on nanoscaffold applications in wound healing recovery All materials and precursors used in this research work were pure, without any impurities, and were purchased directly from reputable commercial suppliers. Chitosan (CAS: 9012-76-4, Mol wt: 50,000 daltons based on viscosity, 99.90%), polylactic acid (C(CH3)HC(=O)O–) and dimethylformamide (DMF, MW: 73.095 g mol−1) were purchased from Sigma Aldrich agents in Iran. Polysorbate 80 (Tween 80, C64H124O26, MW: 1310 g/mol) was purchased from the Fluka company in Switzerland. NaOH (d: 2.13 g/ml, MW: 39.9971 g/mol, 99.99%) was purchased from the Dr. Abidi company in Iran. Cod liver oil was purchased from the Razavi institute of pharmaceutical services. Xylazine and ketamine, for anesthesia and intraperitoneal tolerance in rats, were purchased from the Alfasan group of companies in the Netherlands. Male rats were obtained from the animal farm of Kerman University of Medical Sciences. This study received ethical approval (number 1124) from the local ethics committee of Kerman University of Medical Sciences as thesis research at the Faculty of Pharmacy. Male rats weighing 150–200 g were fed a standard diet and kept under 12:12 h light/dark cycles at 20 °C and a relative humidity of 25–30%.
XRD patterns for crystalline phase detection were recorded by a Rigaku D-max C III X-ray diffractometer using Ni-filtered Cu Kα radiation. The microscopic morphology and surface properties of the products were characterized by SEM (LEO 1455VP). Energy dispersive spectrometry (EDS), as a supplementary analysis to determine the elements in the samples, was carried out on an XL30. Transmission electron microscopy (TEM) images were obtained with a Philips EM208 transmission electron microscope with an accelerating voltage of 200 kV. Fourier transform infrared (FT-IR) spectra were recorded on a Shimadzu Varian 4300 spectrophotometer in KBr pellets. To evaluate the absorption of the samples, ultraviolet–visible spectroscopy was carried out using a Shimadzu UV-2600 UV–Vis spectrophotometer. Preparing PLA/chitosan nanofibers To prepare the polymer phase, at first, 0.2 g of PLA was dissolved in an 18:3 ml ratio of deionized water and ethanol by heating and stirring at 50 °C and 400 rpm for 45 min. Then, 5 ml of NaOH (2 mol/l) was added to the above solution, and this solution was heated at 60 °C and stirred for 30 min. In the next step, 0.05 g of chitosan was dissolved in a 2:1 ml ratio of deionized water and dimethylformamide after 110 min. Subsequently, the solutions were transferred to a beaker and exposed to microwave irradiation in an oven at a power of 450 W for 5 min. Regular cycles of the microwave irradiation were set to 30 s off and 60 s on. Finally, the solutions were placed in an environment free of contamination for 24 h to complete the crystallization process. Cod liver oil loading First, 2 ml of polymeric solution was added to 100 µl of the drug at concentrations of 15% and 30% by weight in the presence of 350 µl of Tween as the surfactant, refluxed for 30 min at 50 °C, and shaken at 500 rpm for 15 min. PLA/chitosan nanoscaffolds containing cod liver oil were formed by an electrospinning device.
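The stated masses and volumes imply simple weight-per-volume concentrations for the two polymer solutions. A minimal sketch of that arithmetic (the helper name and the derived percentages are illustrative, not values reported in the study):

```python
def percent_w_v(mass_g, volume_ml):
    """Weight/volume concentration in percent (g per 100 ml)."""
    return mass_g / volume_ml * 100

# PLA: 0.2 g in an 18:3 ml water/ethanol mixture (21 ml total)
pla_conc = percent_w_v(0.2, 18 + 3)       # roughly 0.95% w/v
# Chitosan: 0.05 g in a 2:1 ml water/DMF mixture (3 ml total)
chitosan_conc = percent_w_v(0.05, 2 + 1)  # roughly 1.67% w/v
```

Both values refer to the solvent volumes before the NaOH addition; adding the 5 ml of NaOH solution dilutes the PLA phase further.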
Nanoscaffolds containing 30% w/w and 15% w/w cod liver oil were prepared at a flow rate of 2 ml/h; a voltage of 12.1 V and a jet rotation speed of 100 rpm were used to form the nanofibers. In vivo study For the in vivo study, the male rats were divided into four groups (each group containing 6 rats weighing approximately 200 g). Animals were made diabetic by the intraperitoneal injection of 60 mg/kg streptozotocin, and their diabetes was confirmed after 3 days by measuring glucose using a glucometer. Then, after anesthetizing the rats with ketamine/xylazine, an ulcer about 1.5 cm across was created in the area between the two scapulae by punch biopsy and, then, the treatment was applied topically in each group. For this study, the rats were divided into the following groups: rats in Group 1 were treated with nanofiber alone; rats in Group 2 were treated with cod liver oil only; rats in Group 3 were treated with the nanofiber delivery system containing cod liver oil for wound healing; and rats in Group 4 received no treatment (negative control). To induce diabetes, we injected streptozotocin 60 mg/kg intraperitoneally in the rats. All the experimental procedures were carried out according to the protocols set for working with animals at Kerman University of Medical Sciences (Kerman, Iran). Result and discussion Physicochemical characteristics The morphological properties and surface features of the nanofibers were observed through scanning electron microscopy (SEM) images. SEM images with the approximate scaffold size of poly lactic acid/chitosan nanofibers containing 30% and 15% cod liver oil are displayed in Fig. 1a and b, respectively. As can be seen from the SEM images, the nanoscaffold structures were uniform and showed no cracking along their length.
The nanoscaffold structures encapsulated different concentrations of cod liver oil without any structural defects, and the nanofibers containing 30% cod liver oil showed a smaller diameter than those containing 15% cod liver oil; this may be due to the higher solubility of the polymer phase at higher concentrations of the oil phase. According to the SEM images, the average scaffold tube diameter was estimated to be between about 50 and 150 nm. To investigate three-dimensional (3D) images of the fibers, we performed transmission electron microscopy (TEM). According to the TEM images, the cod liver oil was trapped uniformly in the spaces between the PLA/chitosan nanoscaffolds. The oil-phase regions in the polymer phase were well-defined. The TEM image of 30% w/w cod liver oil distributed in poly lactic acid/chitosan nanoscaffolds is shown in Fig. 1c. SEM images of the poly lactic acid/chitosan nanofibers containing 30% w/w cod liver oil (a), 15% w/w cod liver oil (b) and TEM image of the poly lactic acid/chitosan nanofibers containing 30% w/w cod liver oil Energy dispersive spectroscopy (EDAX) is a suitable supplementary analysis used for the semi-quantitative analysis of elements. This method is mainly used to obtain the point chemical composition and to quantitatively investigate the poly lactic acid/chitosan nanoscaffolds containing cod liver oil. About 37.17 percent of the total atomic weight of the final products was related to C atoms; this could be attributed to carbon atoms in the poly lactic acid, chitosan, omega-3 in cod liver oil, and Tween structures. The presence of O, Na, and N atoms, at about 31.63, 5.31, and 15.36 percent, could be related to the existence of these atoms in the poly lactic acid, chitosan, cod liver oil, Tween, NaOH, and dimethylformamide structures. The small amounts of Ti and Ca atoms could be related to unpredictable impurities in the final products.
The EDAX supplementary analysis of the poly lactic acid/chitosan nanoscaffolds containing 30% cod liver oil is shown in Fig. 2a. The size distribution obtained from the nanoscaffolds was a plot of the relative intensity of light scattered by the nanoscaffolds in various size classes, introduced as the intensity size distribution. The size distribution of the nanoscaffolds obtained from dynamic light scattering analysis showed a good match with the SEM images and estimated the size of the poly lactic acid/chitosan nanoscaffold fibers containing 30% cod liver oil, after 15 min of ultrasonic irradiation at 60 W, at about 50–150 nm (Fig. 2b). The size distribution of PLA/chitosan nanofibers without cod liver oil was calculated to be about 120 nm. EDAX supplementary analysis (a) and DLS data diagram after 15 min ultrasonic irradiation at 60 W (b) of the poly lactic acid/chitosan nanofibers containing 30% w/w cod liver oil UV–Vis, as a general qualitative technique, can be used to identify and confirm functional groups in a compound by matching the absorbance spectrum. Absorption in UV–Vis spectroscopy follows Beer's law: $$A = \varepsilon \times b \times C.$$ where ε is the molar attenuation coefficient, b is the path length, and C is the concentration. The UV–Vis absorption spectra of the poly lactic acid/chitosan nanoscaffolds containing cod liver oil showed that, with increasing concentration from 15 to 30% w/w, the absorption peaks became noticeably more intense. The cod liver oil showed no absorption alone, so it can be concluded that greater loading of cod liver oil occurred in the poly lactic acid/chitosan nanoscaffolds. Figure 3a demonstrates the UV–Vis absorption spectra of poly lactic acid/chitosan nanofibers containing cod liver oil at 30% w/w and 15% w/w compared with the cod liver oil [62]. Fourier transform infrared spectroscopy (FT-IR) is an analytical technique used to identify functional groups in materials.
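Beer's law above relates absorbance directly to concentration, so either quantity can be recovered from the other. A minimal sketch with illustrative values (the attenuation coefficient, path length, and concentration are assumptions, not values measured in the study):

```python
def absorbance(epsilon, b, c):
    """Beer's law: A = epsilon * b * C."""
    return epsilon * b * c

def concentration(A, epsilon, b):
    """Invert Beer's law to recover concentration from a measured absorbance."""
    return A / (epsilon * b)

# Illustrative values: attenuation 1500 L/(mol*cm), 1 cm path, 2e-4 mol/L
A = absorbance(1500.0, 1.0, 2e-4)
c = concentration(A, 1500.0, 1.0)  # recovers the input concentration
```

Because A scales linearly with C at fixed ε and b, a doubling of loading doubles the peak intensity, which is the reasoning behind comparing the 15% and 30% w/w spectra.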
Figure 3b, c shows the FT-IR spectra of the prepared poly lactic acid/chitosan nanoscaffolds containing 30% w/w and 15% w/w cod liver oil in the region 400–4000 cm−1, respectively. The absorption peaks in the 3454 cm−1 and 1630 cm−1 regions could be attributed to the stretching and bending vibrations of O–H groups from chitosan and the omega-3 structures in the cod liver oil. The peaks at 2884 cm−1, 1650 cm−1, and 1600 cm−1 indicated stretching modes of C–H and CH2 groups and of C=O, C=C, and C–N regions in chitosan, omega-3 in cod liver oil, and poly lactic acid structures. The reflectance at 3093 cm−1 showed the N–H band in chitosan. In general, the cod liver oil is incorporated into the bio-polymeric nanoscaffold structures through the formation of chemical bonds. UV–Vis absorption spectra of poly lactic acid/chitosan nanofibers containing cod liver oil 30% w/w, 15% w/w and cod liver oil [62] (a) and FT-IR spectrum of the poly lactic acid/chitosan nanofibers containing 30% w/w (b) and 15% w/w (c) cod liver oil Diabetic rats were evaluated in two ways: (a) blood glucose measurement by glucometer, in which rats with blood glucose above 200 mg/dl were considered diabetic and selected for further study; (b) the selected rats were examined for the appearance of diabetic symptoms, including polyuria, overeating, and thirst, and their diabetic status was confirmed. As shown in the results and tables, the wound healing process (assessed by measuring the wound area) in the group treated with nanofibers mixed with cod liver oil showed significantly better results than in the groups that used nanofiber alone or cod liver oil alone. It also appears that 30% cod liver oil combined with poly lactic acid/chitosan nanoscaffolds resulted in 94.5% wound healing on day 14, whereas 15% cod liver oil healed wounds by about 86% on day 14, as presented in Table 2.
Macroscopic changes of the wound were evaluated for treatment progress on days 0, 3, 7, and 14 after treatment and recorded by photograph. The percentage of wound healing was calculated using the following formula: $$Percentage\;of\;recovery = \frac{Surface\;wound\;on\;the\;first\;day - Surface\;wound\;on\;day\;X}{Surface\;wound\;on\;the\;first\;day} \times 100.$$ Table 2 Evaluation of wound healing rate in the study groups on days 7 and 14 Figure 4 demonstrates a 14-day course examination of the wound surface and a photograph of the wound healing process. On days 0, 3, 7, and 14, the rate of wound healing in the rats treated with the nanoscaffold containing 30% cod liver oil was significantly different from that of the untreated control group and also from the other groups. The poly lactic acid/chitosan nanoscaffolds, as bio-compatible and bio-degradable polymers, could interact with skin cells and accelerate the healing process. Due to the high porosity of the nanoscaffold coating and the hydrogel-like properties of the polymers used, the coating swelled after the absorption of moisture and created a very small gap between the coating and the wound surface. On day 14, these characteristics were well observed, and the porosity of the nanoscaffold permitted oxygen to pass through to the wound while keeping its surface moist. Wound area in five groups of animals studied on days 0, 3, 7 and 14 (mean ± sd; N = 3) Wound area in five groups of animals studied on days 0, 3, 7, and 14 (mean ± sd; N = 3) is shown in Fig. 5. The area of the wound in the group with 30% cod liver oil was significantly smaller than in the other groups, indicating greater improvement in this group.
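The recovery formula above can be expressed directly in code. A minimal sketch; the wound areas used are hypothetical, chosen only so that the result matches the 94.5% healing reported for the 30% cod-liver-oil group on day 14:

```python
def recovery_percent(area_day0, area_dayx):
    """Percentage of wound healing relative to the initial wound surface."""
    return (area_day0 - area_dayx) / area_day0 * 100

# Hypothetical areas (cm^2) consistent with 94.5% healing on day 14:
p = recovery_percent(1.5, 0.0825)
```

The formula is dimensionless, so any consistent unit of area (cm², mm², pixels in a calibrated photograph) gives the same percentage.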
The presence of the bio-compatible nanofibers not only did not induce immune and allergic responses, but also allowed the scaffold to resemble the original tissue located in the wound. As a result, the bio-chemical signals needed to accelerate recovery are provided and, eventually, wound healing occurs faster. The wound healing process observed in the study groups In this research, the nanofiber scaffolding produced with the electrospinning method was used to repair wounds on the skin of the rats. Macroscopic and microscopic studies were performed on the wounds to determine the efficacy of the produced nanoscaffolds after the desired time. The poly lactic acid/chitosan scaffolds, as bio-compatible and bio-degradable polymers, were designed for wound healing and able to control drug (cod liver oil) release over a long period of time. The poly lactic acid/chitosan nanoscaffolds containing 30% cod liver oil showed more healing and a smaller wound area on day 14; this can be due to permeability and sufficient oxygen for tissue repair. Moisture retention of the wound medium to accelerate healing, and color and pH changes to keep the wound site safe from bacteria and contamination, were among the characteristics of the produced nanoscaffold. Wong SL, Demers M, Martinod K, Gallant M, Wang Y, Goldfine AB, et al. Diabetes primes neutrophils to undergo NETosis, which impairs wound healing. Nat Med. 2015;21(7):815. Majd SA, Khorasgani MR, Moshtaghian SJ, Talebi A, Khezri M. Application of chitosan/PVA nano fiber as a potential wound dressing for streptozotocin-induced diabetic rats. Int J Biol Macromol. 2016;92:1162–8. Anisha B, Biswas R, Chennazhi K, Jayakumar RJ. Chitosan–hyaluronic acid/nano silver composite sponges for drug resistant bacteria infected diabetic wounds. Int J Biol Macromol. 2013;62:310–20. Zhao L, Niu L, Liang H, Tan H, Liu C, Zhu FJ, et al.
pH and glucose dual-responsive injectable hydrogels with insulin and fibroblasts as bioactive dressings for diabetic wound healing. ACS Appl Mater Interfaces. 2017;9(43):37563–74. Xiao J, Zhu Y, Huddleston S, Li P, Xiao B, Farha OK, et al. Copper metal–organic framework nanoparticles stabilized with folic acid improve wound healing in diabetes. ACS Nano. 2018;12(2):1023–32. Spickett CM. Chlorinated lipids and fatty acids: an emerging role in pathology. Pharmacol Ther. 2007;115(3):400–9. Calder PC. The role of marine omega-3 (n-3) fatty acids in inflammatory processes, atherosclerosis and plaque stability. Mol Nutr Food Res. 2012;56(7):1073–80. Calder PC. Polyunsaturated fatty acids, inflammation, and immunity. Lipids. 2001;36(9):1007–24. Norling LV, Spite M, Yang R, Flower RJ, Perretti M, Serhan CN. Cutting edge: humanized nano-proresolving medicines mimic inflammation-resolution and enhance wound healing. J Immunol. 2011;186(10):5543–7. Laroui H, Ingersoll SA, Liu HC, Baker MT, Ayyadurai S, Charania MA, et al. Dextran sodium sulfate (DSS) induces colitis in mice by forming nano-lipocomplexes with medium-chain-length fatty acids in the colon. PLoS ONE. 2012;7(3):e32084. Baghaie S, Khorasani MT, Zarrabi A, Moshtaghian J. Wound healing properties of PVA/starch/chitosan hydrogel membranes with nano zinc oxide as antibacterial wound dressing material. J Biomater Sci Polym Ed. 2017;28(18):2220–41. Siscovick DS, Raghunathan T, King I, Weinmann S, Wicklund KG, Albright J, et al. Dietary intake and cell membrane levels of long-chain n-3 polyunsaturated fatty acids and the risk of primary cardiac arrest. JAMA. 1995;274(17):1363–7. Saville JT, Zhao Z, Willcox MD, Ariyavidana MA, Blanksby SJ, Mitchell TW. Identification of phospholipids in human meibum by nano-electrospray ionisation tandem mass spectrometry. Exp Eye Res. 2011;92(3):238–40. Di Pasqua R, Hoskins N, Betts G, Mauriello G. 
Changes in membrane fatty acids composition of microbial cells induced by addiction of thymol, carvacrol, limonene, cinnamaldehyde, and eugenol in the growing media. J Agric Food Chem. 2006;54(7):2745–9. PubMed Article CAS PubMed Central Google Scholar Falcone DL, Ogas JP, Somerville CR. Regulation of membrane fatty acid composition by temperature in mutants of Arabidopsis with alterations in membrane lipid composition. BMC Plant Biol. 2004;4(1):17. Kurosu H, Choi M, Ogawa Y, Dickson AS, Goetz R, Eliseenkova AV, et al. Tissue-specific expression of βKlotho and fibroblast growth factor (FGF) receptor isoforms determines metabolic activity of FGF19 and FGF21. J Biol Chem. 2007;282(37):26687–95. Chui PC, Antonellis PJ, Bina HA, Kharitonenkov A, Flier JS, Maratos-Flier E. Obesity is a fibroblast growth factor 21 (FGF21)-resistant state. Diabetes. 2010;59(11):2781–9. Lee J-H, Tachibana H, Morinaga Y, Fujimura Y, Yamada K. Modulation of proliferation and differentiation of C2C12 skeletal muscle cells by fatty acids. Life Sci. 2009;84(13–14):415–20. Yanes O, Clark J, Wong DM, Patti GJ, Sanchez-Ruiz A, Benton HP, et al. Metabolic oxidation regulates embryonic stem cell differentiation. Nat Chem Biol. 2010;6(6):411–7. Weaver J, Maddox J, Cao Y, Mullarky I, Sordillo L. Increased 15-HPETE production decreases prostacyclin synthase activity during oxidant stress in aortic endothelial cells. Free Radic Biol Med. 2001;30(3):299–308. Bennett M, Gilroy DW. Lipid mediators in inflammation. In: Myeloid cells in health and disease: a synthesis. Washington: ASM Press; 2017. p. 343–66. Franz S, Allenstein F, Kajahn J, Forstreuter I, Hintze V, Möller S, et al. Artificial extracellular matrices composed of collagen I and high-sulfated hyaluronan promote phenotypic and functional modulation of human pro-inflammatory M1 macrophages. Acta Biomater. 2013;9(3):5621–9. Ward OP, Singh AJ. Omega-3/6 fatty acids: alternative sources of production. Process Biochem. 2005;40(12):3627–52. 
Turk HF, Monk JM, Fan Y-Y, Callaway ES, Weeks B, Chapkin RS. Inhibitory effects of omega-3 fatty acids on injury-induced epidermal growth factor receptor transactivation contribute to delayed wound healing. Am J Physiol Cell Physiol. 2013;304(9):C905–17. Scherhag R, Kramer HJ, Düsing R. Dietary administration of eicosapentaenoic and linolenic acid increases arterial blood pressure and suppresses vascular prostacyclin synthesis in the rat. Prostaglandins. 1982;23(3):369–82. McVeigh G, Brennan G, Johnston G, McDermott B, McGrath L, Henry W, et al. Dietary fish oil augments nitric oxide production or release in patients with type 2 (non-insulin-dependent) diabetes mellitus. Diabetologia. 1993;36(1):33–8. Codde JP, Beilin LJ. Dietary fish oil prevents dexamethasone induced hypertension in the rat. Clin Sci. 1985;69(6):691–9. Falcone FH, Zillikens D, Gibbs BF. The 21st century renaissance of the basophil? Current insights into its role in allergic responses and innate immunity. Exp Dermatol. 2006;15(11):855–64. Wilgus TA. Immune cells in the healing skin wound: influential players at each stage of repair. Pharmacol Res. 2008;58(2):112–6. Godbout JP, Glaser R. Stress-induced immune dysregulation: implications for wound healing, infectious disease and cancer. J Neuroimmune Pharmacol. 2006;1(4):421–7. Khanna PK, Nair CK. Synthesis of silver nanoparticles using cod liver oil (fish oil): green approach to nanotechnology. Int J Green Nanotechnol Phys Chem. 2009;1(1):P3–9. Terkelsen LH, Eskild-Jensen A, Kjeldsen H, Barker JH, Vibeke Elisabeth Hjortdal A. Topical application of cod liver oil ointment accelerates wound healing: an experimental study in wounds in the ears of hairless mice. Scand J Plast Surg Hand Surg. 2000;34(1):15–20. Mahmoud Ali M, Radad KJ. Cod liver oil/honey mixture: an effective treatment of equine complicated lower leg wounds. Vet World. 2011;4(7):304–10. Rho KS, Jeong L, Lee G, Seo B-M, Park YJ, Hong S-D, et al. 
Electrospinning of collagen nanofibers: effects on the behavior of normal human keratinocytes and early-stage wound healing. Biomaterials. 2006;27(8):1452–61. Tian J, Wong KK, Ho CM, Lok CN, Yu WY, Che CM, et al. Topical delivery of silver nanoparticles promotes wound healing. 2007;2(1):129–36. Gauglitz GG, Jeschke MG. Combined gene and stem cell therapy for cutaneous wound healing. Mol Pharm. 2011;8(5):1471–9. Öien RF, Forssell HW. Ulcer healing time and antibiotic treatment before and after the introduction of the Registry of Ulcer Treatment: an improvement project in a national quality registry in Sweden. BMJ Open. 2013;3(8):e003091. Ahlawat J, Guillama Barroso G, Masoudi Asil S, Alvarado M, Armendariz I, Bernal J, et al. Nanocarriers as potential drug delivery candidates for overcoming the blood–brain barrier: challenges and possibilities. ACS Omega. 2020;5:12583–95. Khatoon A, Khan F, Ahmad N, Shaikh S, Rizvi SMD, Shakil S, et al. Silver nanoparticles from leaf extract of Mentha piperita: eco-friendly synthesis and effect on acetylcholinesterase activity. Life Sci. 2018;209:430–4. Sahai N, Ahmad N, Gogoi M. Nanoparticles based drug delivery for tissue regeneration using biodegradable scaffolds: a review. Curr Pathobiol Rep. 2018;6(4):219–24. Ahmad N, Bhatnagar S, Ali SS, Dutta R. Phytofabrication of bioinduced silver nanoparticles for biomedical applications. Int J Nanomed. 2015;10:7019. Ahmad N, Bhatnagar S, Saxena R, Iqbal D, Ghosh A, Dutta R. Biosynthesis and characterization of gold nanoparticles: kinetics, in vitro and in vivo study. Mater Sci Eng C. 2017;78:553–64. Ahmad N, Bhatnagar S, Dubey SD, Saxena R, Sharma S, Dutta R. Nanopackaging in food and electronics. In: Nanoscience in food and agriculture. Cham: Springer; 2017. p. 45–97. Bernkop-Schnürch A, Dünnhaupt S. Chitosan-based drug delivery systems. Eur J Pharm. 2012;81(3):463–9. Alsaheb RA, Aladdin A, Othman NZ, Malek RA, Leng OM, Aziz R, et al. 
Recent applications of polylactic acid in pharmaceutical medical industries. J Chem Pharm Res. 2015;7(12):51–63. Narayanan G, Vernekar VN, Kuyinu EL, Laurencin CT. Poly (lactic acid)-based biomaterials for orthopaedic regenerative engineering. Adv Drug Deliv Rev. 2016;107:247–76. Mei L, Zhang Z, Zhao L, Huang L, Yang X-L, Tang J, et al. Pharmaceutical nanotechnology for oral delivery of anticancer drugs. Adv Drug Deliv Rev. 2013;65(6):880–90. Liu M, Duan X-P, Li Y-M, Yang D-P, Long Y-Z. Electrospun nanofibers for wound healing. Mater Sci Eng. 2017;76:1413–23. Fahr A, van Hoogevest P, May S, Bergstrand N, Leigh ML. Transfer of lipophilic drugs between liposomal membranes and biological interfaces: consequences for drug delivery. Eur J Pharm Sci. 2005;26(3–4):251–65. Liu Y, Ren Z-F, He J-H. Bubble electrospinning method for preparation of aligned nanofibre mat. Mater Sci Technol. 2010;26(11):1309–12. Bai J, Li Y, Yang S, Du J, Wang S, Zheng J, et al. A simple and effective route for the preparation of poly (vinylalcohol)(PVA) nanofibers containing gold nanoparticles by electrospinning method. Solid State Commun. 2007;141(5):292–5. Dharmaraj N, Kim C, Kim K, Kim H, Suh EK. Spectral studies of SnO2 nanofibres prepared by electrospinning method. Spectrochim Acta A Mol Biomol Spectrosc. 2006;64(1):136–40. Khorami HA, Keyanpour-Rad M, Vaezi MR. Synthesis of SnO2/ZnO composite nanofibers by electrospinning method and study of its ethanol sensing properties. Appl Surf Sci. 2011;257(18):7988–92. Caspe S. The role of creatine in cell growth in vitro and its use in wound healing. J lab Clin Med. 1944;29:483–5. Kanokpanont S, Damrongsakkul S, Ratanavaraporn J, Aramwit P. An innovative bi-layered wound dressing made of silk and gelatin for accelerated wound healing. Int J Pharm. 2012;436(1–2):141–53. Qiu Y, Qiu L, Cui J, Wei Q. Bacterial cellulose and bacterial cellulose-vaccarin membranes for wound healing. Mater Sci Eng. 2016;59:303–9. 
Suryani AH, Wirjosentono B, Rihayat T, Salisah Z. Synthesis and characterization of poly (lactid acid)/chitosan nanocomposites based on renewable resources as biobased-material. J Phys Conf. 2018;953:012015. Raza ZA, Anwar F. Fabrication of poly (lactic acid) incorporated chitosan nanocomposites for enhanced functional polyester fabric. Polímeros. 2018;28(2):120–4. Jeevitha D, Amarnath K. Chitosan/PLA nanoparticles as a novel carrier for the delivery of anthraquinone: synthesis, characterization and in vitro cytotoxicity evaluation. Colloids Surf B. 2013;101:126–34. Bhuvaneswari T, Thiyagarajan M, Geetha N, Venkatachalam P. Bioactive compound loaded stable silver nanoparticle synthesis from microwave irradiated aqueous extracellular leaf extracts of Naringi crenulata and its wound healing activity in experimental rat model. Acta Trop. 2014;135:55–61. Jiang S, Chen Y, Duan G, Mei C, Greiner A, Agarwal S. Electrospun nanofiber reinforced composites: a review. Polym Chem. 2018;9(20):2685–720. Anil Kumar K, Viswanathan K. Study of UV transmission through a few edible oils and chicken oil. J Spectrosc. 2012. https://doi.org/10.1155/2013/540417. Kokabi M, Sirousazar M, Hassan ZM. PVA–clay nanocomposite hydrogels for wound dressing. Eur Polym J. 2007;43(3):773–81. Archana D, Dutta J, Dutta PK. Evaluation of chitosan nano dressing for wound healing: characterization, in vitro and in vivo studies. Int J Biol Macromol. 2013;57:193–203. Naseri-Nosar M, Farzamfar S, Sahrapeyma H, Ghorbani S, Bastami F, Vaez A, et al. Cerium oxide nanoparticle-containing poly (ε-caprolactone)/gelatin electrospun film as a potential wound dressing material: in vitro and in vivo evaluation. Mater Sci Eng. 2017;81:366–72. Sandri G, Aguzzi C, Rossi S, Bonferoni MC, Bruni G, Boselli C, et al. Halloysite and chitosan oligosaccharide nanocomposite for wound healing. Acta Biomater. 2017;57:216–24. Aguzzi C, Sandri G, Bonferoni C, Cerezo P, Rossi S, Ferrari F, et al. 
Solid state characterisation of silver sulfadiazine loaded on montmorillonite/chitosan nanocomposite for wound healing. Colloids Surf B Biointerfaces. 2014;113:152–7. Authors are grateful to council of Pharmaceutics Research Center, Institute of Neuropharmacology, Kerman University of Medical Sciences, Kerman, Iran. Pharmaceutics Research Center, Institute of Neuropharmacology, Kerman University of Medical Sciences, P.O. Box: 76175-493, Kerman, 76169-11319, Iran Payam Khazaeli & Mehdi Ranjbar Faculty of Pharmacy, Kerman University of Medical Sciences, Kerman, Iran Payam Khazaeli Student Research Committee, Kerman University of Medical Sciences, Kerman, Iran Maryam Alaei Neuroscience Research, and Physiology Research Centers, Kerman University of Medical Sciences, Kerman, Iran Mohammad Khaksarihadad Mehdi Ranjbar PK: wrote the manuscript, supervised the research. MA: Designed and performed experiments. MK: Conceived and planned the experiments. MR: Developed the theoretical formalism, performed the analytic calculations. All authors read and approved the final manuscript. Correspondence to Mehdi Ranjbar. Khazaeli, P., Alaei, M., Khaksarihadad, M. et al. Preparation of PLA/chitosan nanoscaffolds containing cod liver oil and experimental diabetic wound healing in male rats study. J Nanobiotechnol 18, 176 (2020). https://doi.org/10.1186/s12951-020-00737-9 Poly lactic acid/chitosan nanoscaffolds Electrospinning
CommonCrawl
Higher Gauge Theory in Edinburgh – Part I Posted by Jeffrey Morton under 2-groups, categorification, conferences, double categories, geometry, groupoids, higher dimensional algebra, moduli spaces, string theory, tqft The main thing happening in my end of the world is that it's relocated from Europe back to North America. I'm taking up a teaching postdoc position in the Mathematics and Computer Science department at Mount Allison University starting this month. However, amidst all the preparations and moving, I was also recently in Edinburgh, Scotland for a workshop on Higher Gauge Theory and Higher Quantization, where I gave a talk called 2-Group Symmetries on Moduli Spaces in Higher Gauge Theory. That's what I'd like to write about this time. Edinburgh is a beautiful city, though since the workshop was held at Heriot-Watt University, whose campus is outside the city itself, I only got to see it on the Saturday after the workshop ended. However, John Huerta and I spent a while walking around, and as it turned out, climbing a lot: first the Scott Monument, from which I took this photo down Princes Street: And then up a rather large hill called Arthur's Seat, in Holyrood Park next to the Scottish Parliament. The workshop itself had an interesting mix of participants. Urs Schreiber gave the most mathematically sophisticated talk, and mine was also quite category-theory-minded. But there were also some fairly physics-minded talks that are interesting to me as well because they show the source of these ideas. In this first post, I'll begin with my own, and continue with David Roberts' talk on constructing an explicit string bundle. … 2-Group Symmetries of Moduli Spaces My own talk, based on work with Roger Picken, boils down to a couple of observations about the notion of symmetry, and applies them to a discrete model in higher gauge theory.
It's the kind of model you might use if you wanted to do lattice gauge theory for a BF theory, or some other higher gauge theory. But the discretization is just a convenience to avoid having to deal with infinite dimensional spaces and other issues that don't really bear on the central point. Part of that point was described in a previous post: it has to do with finding a higher analog for the relationship between two views of symmetry: one is "global" (I found the physics-inclined part of the audience preferred "rigid"), to do with a group action on the entire space; the other is "local", having to do with treating the points of the space as objects of a groupoid whose morphisms show how points are related to each other. (Think of trying to describe the orbit structure of just the part of a group action that relates points in a little neighborhood on a manifold, say.) In particular, we're interested in the symmetries of the moduli space of connections (or, depending on the context, flat connections) on a space, so the symmetries are gauge transformations. Now, here already some of the physically-inclined audience objected that these symmetries should just be eliminated by taking the quotient space of the group action. This is based on the slogan that "only gauge-invariant quantities matter". But this slogan has some caveats: it only applies to closed manifolds, for one. When there are boundaries, it isn't true, and to describe the boundary we need something which acts as a representation of the symmetries. Urs Schreiber pointed out a well-known example: the Chern-Simons action, a functional on a certain space of connections, is not gauge-invariant. Indeed, the boundary terms that show up due to this non-invariance explain why there is a Wess-Zumino-Witten theory associated with the boundaries when the bulk is described by Chern-Simons.
Now, I've described a lot of the idea of this talk in the previous post linked above, but what's new has to do with how this applies to moduli spaces that appear in higher gauge theory based on a 2-group. The points in this space are connections on a manifold. In particular, since a 2-group is a group object in categories, the transformation groupoid (which captures global symmetries of the moduli space) will be a double category. It turns out there is another way of seeing this double category by local descriptions of the gauge transformations. In particular, general gauge transformations in HGT are combinations of two special types, described geometrically by G-valued functions, or H-valued 1-forms, where G is the group of objects of the 2-group, and H is the group of morphisms based at the identity. If we think of connections as functors from the fundamental 2-groupoid into the 2-group, these correspond to pseudonatural transformations between these functors. The main point is that there are also two special types of these, called "strict", and "costrict". The strict ones are just natural transformations, where the naturality square commutes strictly. The costrict ones are also called ICONs (for "identity-component oplax natural transformations" – see the paper by Steve Lack linked from the nLab page above for an explanation of "costrictness"): they assign the identity morphism to each object, but the naturality square commutes only up to a specified 2-cell. Any pseudonatural transformation factors into a strict and costrict part. The point is that taking these two types of transformation to be the horizontal and vertical morphisms of a double category, we get something that very naturally arises by the action of a big 2-group of symmetries on a category. We also find something which doesn't happen in ordinary gauge theory: that only the strict gauge transformations arise from this global symmetry. The costrict ones must already be the morphisms in the category being acted on.
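To fix conventions (the component notation here is my own shorthand, and the direction of the 2-cell depends on an orientation convention not fixed in the post), the data of a pseudonatural transformation $\theta : F \Rightarrow G$ between 2-functors assigns to each object $x$ a 1-cell $\theta_x$, and to each 1-cell $f : x \to y$ a 2-cell filling the naturality square:

```latex
% Pseudonaturality 2-cell at a 1-cell f : x -> y:
\theta_f \;:\; G(f) \circ \theta_x \;\Longrightarrow\; \theta_y \circ F(f)

% Strict case:    every \theta_f is an identity
%                 (an honest natural transformation);
% Costrict/ICON:  every component \theta_x is an identity,
%                 and only the 2-cells \theta_f remain.
```

Factoring a general $\theta$ then amounts to splitting it into a strict part carrying the components and a costrict part carrying the 2-cells.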
This category plays the role of the moduli space in the normal 1-group situation. So moving to 2-groups reveals that in general we should distinguish between global/rigid symmetries of the moduli space, which are strict gauge transformations, and costrict ones, which do not arise from the global 2-group action and should be thought of as intrinsic to the moduli space. String Bundles David Roberts gave a rather interesting talk called "Constructing Explicit String Bundles". There are some notes for this talk here. The point is simply to give an explicit construction of a particular 2-group bundle. There is a lot of general abstract theory about 2-bundles around, and a fair amount of work that manipulates physically-motivated descriptions of things that can presumably be modelled with 2-bundles. There has been less work on giving a mathematically rigorous description of specific, concrete 2-bundles. This one is of interest because it's based on the String 2-group. Details are behind that link, but roughly the classifying space of String(G) (a homotopy 2-type) is fibred over the classifying space for G (a 1-type). The exact map is determined by taking a pullback along a certain characteristic class (which is a map out of BG). Saying "the" string 2-group is a bit of a misnomer, by the way, since such a 2-group exists for every simply connected compact Lie group G. The group that's involved here is a String(n), the string 2-group associated to Spin(n), the universal cover of the rotation group SO(n). This is the one that determines whether a given manifold can support a "string structure". A string structure on a manifold M, therefore, is a lift of a spin structure, which determines whether one can have a spin bundle over M, hence consistently talk about a spin connection which gives parallel transport for spinor fields on M. The string structure determines if one can consistently talk about a string-bundle over M, and hence a 2-group connection giving parallel transport for strings.
In this particular example, the idea was to find, explicitly, a string bundle over Minkowski space – or its conformal compactification. In point of fact, this particular one is for String(5), and is over 6-dimensional Minkowski space, whose compactification is . This particular case is convenient because it's possible to show abstractly that it has exactly one nontrivial class of string bundles, so exhibiting one gives a complete classification. The details of the construction are in the notes linked above. The technical details rely on the fact that we can coordinatize nicely using the projective quaternionic plane, but conceptually it relies on the fact that , and because of how the lifting works, this is also . This quotient means there's a string bundle whose fibre is . While this is only one string bundle, and not a particularly general situation, it's nice to see that there's an elegant presentation which gives such a bundle explicitly (by constructing cocycles valued in the crossed module associated to the string 2-group, which give its transition functions). (Here endeth Part I of this discussion of the workshop in Edinburgh. Part II will describe the physics-oriented talks, and Part III will cover Urs Schreiber's series on higher geometric quantization.) Categorifying Global vs. Local Symmetry Posted by Jeffrey Morton under 2-groups, categorification, category theory, double categories, groupoids, moduli spaces So it's been a while since I last posted – the end of 2013 ended up being busy with a couple of visits to Jamie Vicary in Oxford, and Roger Picken in Lisbon. In the aftermath of the two trips, I did manage to get a major revision of this paper submitted to a journal, and put this one out in public. A couple of others will be coming down the pipeline this year as well.
I'm hoping to get back to a post about motives which I planned earlier, but for the moment, I'd like to write a little about the second paper, with Roger Picken. Global and Local Symmetry The upshot is that it's about categorifying the concept of symmetry. More specifically, it's about finding the analog in the world of categories for the interplay between global and local symmetry which occurs in the world of set-based structures (sets, topological spaces, vector spaces, etc.) This distinction is discussed in a nice way by Alan Weinstein in this article from the Notices of the AMS. The global symmetry of an object X in some category $\mathbf{C}$ can be described in terms of its group of automorphisms: all the ways the object can be transformed which leave it "the same". This fits our understanding of "symmetry" when the morphisms can really be interpreted as transformations of some sort. So let's suppose the object is a set with some structure, and the morphisms are set-maps that preserve the structure: for example, the objects could be sets of vertices and edges of a graph, so that morphisms are maps of the underlying data that preserve incidence relations. So a symmetry of an object is a way of transforming it into itself – and an invertible one at that – and these automorphisms naturally form a group Aut(X). More generally, we can talk about an action of a group G on an object X, which is a homomorphism $\phi : G \to Aut(X)$. "Local symmetry" is different, and it makes most sense in a context where the object X is a set – or at least, where it makes sense to talk about elements of X, so that X has an underlying set of some sort. Actually, X being a set-with-structure, in a lingo I associate with Jim Dolan, means that the forgetful functor $U : \mathbf{C} \to \mathbf{Set}$ is faithful: you can tell morphisms in $\mathbf{C}$ (in particular, automorphisms of X) apart by looking at what they do to the underlying set.
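As a concrete instance of "global symmetry as an automorphism group", here is a brute-force computation for a toy graph. The graph and all the names are my own example, not from the post; the point is just that an automorphism is a bijection of the underlying vertex set preserving the edge structure:

```python
from itertools import permutations

# A toy set-with-structure: the path graph 0 - 1 - 2, given as vertices
# plus a set of undirected edges (each edge a frozenset of endpoints).
vertices = [0, 1, 2]
edges = {frozenset({0, 1}), frozenset({1, 2})}

def is_automorphism(sigma):
    """Check that the vertex bijection sigma preserves the edge relation."""
    mapped = {frozenset({sigma[u], sigma[v]})
              for (u, v) in (tuple(e) for e in edges)}
    return mapped == edges

# Global symmetry: the automorphism group Aut(X), found by brute force
# over all bijections of the underlying vertex set.
aut = [dict(zip(vertices, p)) for p in permutations(vertices)
       if is_automorphism(dict(zip(vertices, p)))]

print(len(aut))  # → 2: the identity and the end-to-end flip 0 <-> 2
```

Faithfulness of the forgetful functor is exactly what lets this search run over set-bijections: distinct automorphisms are told apart by what they do to the vertices.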
The intuition is that the morphisms of $\mathbf{C}$ are exactly set maps which preserve the structure which U forgets about – or, conversely, that the structure on objects of $\mathbf{C}$ is exactly that which is forgotten by U. Certainly, knowing only this information determines $\mathbf{C}$ up to equivalence. In any case, suppose we have an object like this: then knowing about the symmetries of X amounts to knowing about a certain group action, namely the action of Aut(X), on the underlying set U(X). From this point of view, symmetry is about group actions on sets. The way we represent local symmetry (following Weinstein's discussion, above) is to encode it as a groupoid – a category whose morphisms are all invertible. There is a level-slip happening here, since X is now no longer seen as an object inside a category: it is the collection of all the objects of a groupoid. What makes this a representation of "local" symmetry is that each morphism now represents, not just a transformation of the whole object X, but a relationship under some specific symmetry between one element of X and another. If there is an isomorphism between x and y, then x and y are "symmetric" points under some transformation. As Weinstein's article illustrates nicely, though, there is no assumption that the given transformation actually extends to the entire object X: it may be that only part of X has, for example, a reflection symmetry, but the symmetry doesn't extend globally. Transformation Groupoid The "interplay" I alluded to above, between the global and local pictures of symmetry, is to build a "transformation groupoid" (or "action groupoid") associated to a group G acting on a set X. The result is called X//G for short. Its morphisms consist of pairs (g, x), where such a pair is a morphism taking x to its image under the action of g. The "local" symmetry view of X//G treats each of these symmetry relations between points as a distinct bit of data, but coming from a global symmetry – that is, a group action – means that the set of morphisms comes from the product $G \times X$.
Indeed, the "target" map in X//G from morphisms to objects is exactly a map $G \times X \to X$. It is not hard to show that this map is an action in another standard sense. Namely, if we have a real action $\phi : G \to Aut(X)$, then this map is just the uncurried map $\hat{\phi} : G \times X \to X$, which moves one of the arguments to the left side. If $\phi$ was a genuine action, then $\hat{\phi}$ satisfies the "action" condition, namely that the following square commutes: $\hat{\phi} \circ (m \times \mathrm{id}_X) = \hat{\phi} \circ (\mathrm{id}_G \times \hat{\phi})$. (Here, m is the multiplication in G, and this is the familiar associativity-type axiom for a group action: acting by a product of two elements in G is the same as acting by each one successively.) So the starting point for the paper with Roger Picken was to categorify this. It's useful, before doing that, to stop and think for a moment about what makes this possible. First, as stated, this assumed that either X is a set, or has an underlying set by way of some faithful forgetful functor: that is, every morphism in $\mathbf{C}$ corresponds to a unique set map from the elements of X to itself. We needed this to describe the groupoid X//G, whose objects are exactly the elements of X. The diagram above suggests a different way to think about this. The action diagram lives in the category $\mathbf{Set}$: we are thinking of G as a set together with some structure maps, and the morphism $\hat{\phi}$ must be in the same category, $\mathbf{Set}$, for this characterization to make sense. So in fact, what matters is that the category X lived in was closed: that is, it is enriched in itself, so that for any objects X, Y, there is an object Hom(X,Y), the internal hom. In this case, it's Hom(X,X) which appears in the diagram. Such an internal hom is supposed to be a dual to the monoidal product (which in $\mathbf{Set}$ happens to be the Cartesian product $\times$): this is exactly what lets us pass between $\phi$ and $\hat{\phi}$. So really, this construction of a transformation groupoid will work for any closed monoidal category, producing a groupoid internal to it. It may be easier to understand in cases like $\mathbf{Top}$, the category of topological spaces, where there is indeed a faithful underlying set functor.
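The construction can be sketched concretely for a finite example. This is a minimal sketch with hypothetical names, taking the cyclic group Z/3 acting on a 3-element set by rotation:

```python
from itertools import product

# The transformation groupoid X//G for G = Z/3 acting on X = {0, 1, 2}.
G = [0, 1, 2]          # group elements; multiplication is addition mod 3
X = [0, 1, 2]

def m(g, h):
    """Group multiplication in Z/3."""
    return (g + h) % 3

def phi_hat(g, x):
    """The uncurried action map  G x X -> X  (rotation)."""
    return (g + x) % 3

# Morphisms of X//G are pairs (g, x) : x --> phi_hat(g, x),
# so the set of morphisms is literally the product G x X.
morphisms = list(product(G, X))

def source(gx):
    return gx[1]

def target(gx):
    return phi_hat(*gx)

def compose(hy, gx):
    """(h, y) after (g, x), defined when y = phi_hat(g, x); result (m(h,g), x)."""
    h, y = hy
    g, x = gx
    assert y == target(gx), "morphisms must be composable"
    return (m(h, g), x)

# The action square commutes: acting by a product of two group elements
# is the same as acting by each one successively.
assert all(phi_hat(m(g, h), x) == phi_hat(g, phi_hat(h, x))
           for g in G for h in G for x in X)
```

Composition of morphisms in X//G is exactly group multiplication in the first slot, which is the sense in which the "local" data all comes from one global action.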
But although talking explicitly about elements of X was useful for intuitively seeing how X//G relates global and local symmetries, it played no particular role in the construction. Categorify Everything In the circles I run in, a popular hobby is to "categorify everything": there are different versions, but what we mean here is to turn ideas expressed in the world of sets into ideas in the world of categories. (Technical aside: all the categories here are assumed to be small). In principle, this is harder than just reproducing all of the above in any old closed monoidal category: the "world" of categories is $\mathbf{Cat}$, which is a closed monoidal 2-category, which is a more complicated notion. This means that doing all the above "strictly" is a special case: all the equalities (like the commutativity of the action square) might in principle be replaced by (natural) isomorphisms, and a good categorification involves picking these to have good properties. (In our paper, we left this to an appendix, because the strict special case is already interesting, and in any case there are "strictification" results, such as the fact that weak 2-groups are all equivalent to strict 2-groups, which mean that the weak case isn't as much more general as it looks. For higher n-categories, this will fail – which is why we include the appendix to suggest how the pattern might continue). Why is this interesting to us? Bumping up the "categorical level" appeals for different reasons, but the ones that matter most to me have to do with taking low-dimensional (or low-codimensional) structures, and finding analogous ones at higher (co)dimension. In our case, the starting point had to do with looking at the symmetries of "higher gauge theories" – which can be used to describe the transport of higher-dimensional surfaces in a background geometry, the way gauge theories can describe the transport of point particles.
But I won't ask you to understand that example right now, as long as you can accept that "what are the global/local symmetries of a category like?" is a possibly interesting question. So let's categorify the discussion about symmetry above… To begin with, we can just take our (closed monoidal) category to be $\mathbf{Cat}$, and follow the same construction above. So our first ingredient is a 2-group. As with groups, we can think of a 2-group either as a 2-category with just one object, or as a 1-category with some structure – a group object in $\mathbf{Cat}$, which we'll call $\mathbf{G}$ if it comes from a given 2-group. (In our paper, we keep these distinct by using the term "categorical group" for the second. The group axioms amount to saying that we have a monoidal category. Its objects are the morphisms of the 2-group, and the composition becomes the monoidal product $\otimes$.) (In fact, we often use a third equivalent definition, that of crossed modules of groups, but to avoid getting into that machinery here, I'll be changing our notation a little.) 2-Group Actions So, again, there are two ways to talk about an action of a 2-group on some category $\mathbf{C}$. One is to define an action as a 2-functor $\Phi$ into $\mathbf{Cat}$. The object being acted on, $\mathbf{C}$, is the image of the unique object – so that the 2-functor amounts to a monoidal functor from the categorical group into $Aut(\mathbf{C})$. Notice that here we're taking advantage of the fact that $\mathbf{Cat}$ is closed, so that the hom-"sets" are actually categories, and the automorphisms of $\mathbf{C}$ – invertible functors from $\mathbf{C}$ to itself – form the objects of a monoidal category, and in fact a categorical group. What's new, though, is that there are also 2-morphisms – natural transformations between these functors. To begin with, then, we show that there is a map $\hat{\Phi} : \mathbf{G} \times \mathbf{C} \to \mathbf{C}$, which corresponds to the 2-functor $\Phi$, and satisfies an action axiom like the square above, with $\otimes$ playing the role of group multiplication.
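Written as an equation rather than a commuting square, and using $\otimes$ for the categorical group's product (notation assumed here, not fixed by the post), the strict action axiom reads:

```latex
% Strict action axiom for a categorical group acting on C:
% acting by a product is acting successively.
\hat{\Phi} \circ (\otimes \times \mathrm{id}_{\mathbf{C}})
  \;=\; \hat{\Phi} \circ (\mathrm{id}_{\mathbf{G}} \times \hat{\Phi})
  \;:\; \mathbf{G} \times \mathbf{G} \times \mathbf{C}
  \longrightarrow \mathbf{C}
```

This is term-by-term the same shape as the set-level axiom for $\hat{\phi}$, with the group multiplication $m$ replaced by the monoidal product $\otimes$.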
(Again, remember that we're only talking about the version where this square commutes strictly here – in an appendix of the paper, we talk about the weak version of all this.) This is an intuitive generalization of the situation for groups, but it is slightly more complicated. The action directly gives three maps. First, functors $\Phi(g)$ for each 2-group morphism $g$ – each of which consists of a function between objects of $\mathbf{C}$, together with a function between morphisms of $\mathbf{C}$. Second, natural transformations $\Phi(\chi)$ for 2-morphisms $\chi$ in the 2-group – each of which consists of a function from objects to morphisms of $\mathbf{C}$. On the other hand, $\hat{\Phi}$ is just a functor: it gives two maps, one taking pairs of objects to objects, the other doing the same for morphisms. Clearly, the map on objects is just given by $\hat{\Phi}(g, x) = \Phi(g)(x)$. The map taking pairs of morphisms to morphisms of $\mathbf{C}$ is less intuitively obvious. Since I already claimed $\Phi$ and $\hat{\Phi}$ are equivalent, it should be no surprise that we ought to be able to reconstruct the other two parts of $\Phi$ from it as special cases. These are morphism-maps for the functors (which give $\Phi(g)(f)$ or $\Phi(g')(f)$), and the natural transformation maps (which give the components $\Phi(\chi)_x$ or $\Phi(\chi)_y$). In fact, there are only two sensible ways to combine these four bits of information – the two composites around the naturality square – and the fact that $\Phi(\chi)$ is natural means precisely that they're the same, so for $\chi : g \to g'$ and $f : x \to y$: $\hat{\Phi}(\chi, f) = \Phi(\chi)_y \circ \Phi(g)(f) = \Phi(g')(f) \circ \Phi(\chi)_x$. Given the above, though, it's not so hard to see that a 2-group action really involves two group actions: of the objects of $\mathbf{G}$ on the objects of $\mathbf{C}$, and of the morphisms of $\mathbf{G}$ on the morphisms of $\mathbf{C}$. They fit together nicely because objects can be identified with their identity morphisms: furthermore, $\hat{\Phi}$ being a functor gives an action of $\mathbf{G}$-objects on $\mathbf{C}$-morphisms which fits in between them nicely. But what of the transformation groupoid? What is the analog of the transformation groupoid, if we repeat its construction in $\mathbf{Cat}$? The Transformation Double Category of a 2-Group Action The answer is that a category (such as a groupoid) internal to $\mathbf{Cat}$ is a double category.
The compact way to describe it is as a "category in ", with a category of objects and a category of morphisms, each of which of course has objects and morphisms of its own. For the transformation double category, following the same construction as for sets, the object-category is just , and the morphism-category is , and the target functor is just the action map . (The other structure maps that make this into a category in can similarly be worked out by following your nose). This is fine, but the internal description tends to obscure an underlying symmetry in the idea of double categories, in which morphisms in the object-category and objects in the morphism-category can switch roles, and get a different description of "the same" double category, denoted the "transpose". A different approach considers these as two different types of morphism, "horizontal" and "vertical": they are the morphisms of horizontal and vertical categories, built on the same set of objects (the objects of the object-category). The morphisms of the morphism-category are then called "squares". This makes a convenient way to draw diagrams in the double category. Here's a version of a diagram from our paper with the notation I've used here, showing what a square corresponding to a morphism looks like: The square (with the boxed label) has the dashed arrows at the top and bottom for its source and target horizontal morphisms (its images under the source and target functors: the argument above about naturality means they're well-defined). The vertical arrows connecting them are the source and target vertical morphisms (its images under the source and target maps in the morphism-category). Horizontal and Vertical Slices of So by construction, the horizontal category of these squares is just the object-category . For the same reason, the squares and vertical morphisms, make up the category . 
On the other hand, the vertical category has the same objects as , but different morphisms: it's not hard to see that the vertical category is just the transformation groupoid for the action of the group of -objects on the set of -objects, . Meanwhile, the horizontal morphisms and squares make up the transformation groupoid . These are the object-category and morphism-category of the transpose of the double-category we started with. We can take this further: if squares aren't hip enough for you – or if you're someone who's happy with 2-categories but finds double categories unfamiliar – the horizontal and vertical categories can be extended to make horizontal and vertical bicategories. They have the same objects and morphisms, but we add new 2-cells which correspond to squares where the boundaries have identity morphisms in the direction we're not interested in. These two turn out to feel quite different in style. First, the horizontal bicategory extends by adding 2-morphisms to it, corresponding to morphisms of : roughly, it makes the morphisms of into the objects of a new transformation groupoid, based on the action of the group of automorphisms of the identity in (which ensures the square has identity edges on the sides.) This last point is the only constraint, and it's not a very strong one since and essentially determine the entire 2-group: the constraint only relates to the structure of . The constraint for the vertical bicategory is different in flavour because it depends more on the action . Here we are extending a transformation groupoid, . But, for some actions, many morphisms in might just not show up at all. For 1-morphisms , the only 2-morphisms which can appear are those taking to some which has the same effect on as . So, for example, this will look very different if is free (so only automorphisms show up), or a trivial action (so that all morphisms appear). 
In the paper, we look at these in the special case of an adjoint action of a 2-group, so you can look there if you'd like a more concrete example of this difference. Speculative Remarks The starting point for this was a project (which I talked about a year ago) to do with higher gauge theory – see the last part of the linked post for more detail. The point is that, in gauge theory, one deals with connections on bundles, and morphisms between them called gauge transformations. If one builds a groupoid out of these in a natural way, it turns out to result from the action of a big symmetry group of all gauge transformations on the moduli space of connections. In higher gauge theory, one deals with connections on gerbes (or higher gerbes – a bundle is essentially a "0-gerbe"). There are now also (2-)morphisms between gauge transformations (and, in higher cases, this continues further), which Roger Picken and I have been calling "gauge modifications". If we try to repeat the situation for gauge theory, we can construct a 2-groupoid out of these, which expresses this local symmetry. The thing which is different for gerbes (and will continue to get even more different if we move to -gerbes and the corresponding -groupoids) is that this is not the same type of object as a transformation double category. Now, in our next paper (which this one was written to make possible) we show that the 2-groupoid is actually very intimately related to the transformation double category: that is, the local picture of symmetry for a higher gauge theory is, just as in the lower-dimensional situation, intimately related to a global symmetry of an entire moduli 2-space, i.e. a category. The reason this wasn't obvious at first is that the moduli space which includes only connections is just the space of objects of this category: the point is that there are really two special kinds of gauge transformations. 
One should be thought of as the morphisms in the moduli 2-space, and the other as part of the symmetries of that 2-space. The intuition that comes from ordinary gauge theory overlooks this, because the phenomenon doesn't occur there. Physically-motivated theories are starting to use these higher-categorical concepts more and more, and symmetry is a crucial idea in physics. What I've sketched here is presumably only the start of a pattern in which "symmetry" extends to higher-categorical entities. When we get to 3-groups, our simplifying assumptions that use "strictification" results won't even be available any more, so we would expect still further new phenomena to show up – but it seems plausible that the tight relation between global and local symmetry will still exist, in a way that is more subtle, and refines the standard understanding we have of symmetry today. Moving to Hamburg; Talk in Brno: 2-Symmetry of Moduli Spaces Posted by Jeffrey Morton under 2-groups, algebra, category theory, double categories, gauge theory, higher gauge theory, moduli spaces, talks Since I moved to Hamburg, Alessandro Valentino and I have been organizing a series of seminar talks whose goal is to bring people (mostly graduate students, and some postdocs and others) up to speed on the tools used in Jacob Lurie's big paper on the classification of TQFT and proof of the Cobordism Hypothesis. This is part of the Forschungsseminar ("research seminar") for the working groups of Christoph Schweigert, Ingo Runkel, and Christoph Wockel. First, I gave a talk introducing myself and what I've done on Extended TQFT. In the main series, we've had four talks so far – two in which Alessandro outlined a sketch of what Lurie's result is, and another two by Sebastian Novak and Marc Palm that started catching our audience up on the simplicial methods used in the theory of -categories which it uses.
Coming up in the New Year, Nathan Bowler and I will be talking about first -categories, and then -categories. I'll do a few posts summarizing the talks around then. Some people in the group have done some work on quantum field theories with defects, in relation to which, there's this workshop coming up here in February! The idea here is that one could have two regions of space where different field theories apply, which are connected along a boundary. We might imagine these are theories which are different approximations to what's going on physically, with a different approximation useful in each region. Whatever the intuition, the regions will be labelled by some category, and boundaries between regions are labelled by functors between categories. Where different boundary walls meet, one can have natural transformations. There's a whole theory of how a 3D TQFT can be associated to modular tensor categories, in sort of the same sense that a 2D TQFT is associated to a Frobenius algebra. This whole program is intimately connected with the idea of "extending" a given TQFT, in the sense that it deals with theories that have inputs which are spaces (or, in the case of defects, sub-spaces of given ones) of many different dimensions. Lurie's paper describing the n-dimensional cobordism category, is very much related to the input to a theory like this. Brno Visit This time, I'd like to mention something which I began working on with Roger Picken in Lisbon, and talked about for the first time in Brno, Czech Republic, where I was invited to visit at Masaryk University. I was in Brno for a week or so, and on Thursday, December 13, I gave this talk, called "Higher Gauge Theory and 2-Group Actions". But first, some pictures! This fellow was near the hotel I stayed in: Since this sculpture is both faceless and hard at work on nonspecific manual labour, I assume he's a Communist-era artwork, but I don't really know for sure. 
The Christmas market was on in Náměstí Svobody (Freedom Square) in the centre of town. This four-headed dragon caught my eye: On the way back from Brno to Hamburg, I met up with my wife to spend a couple of days in Prague. Here's the Christmas market in the Old Town Square of Prague: Anyway, it was a good visit to the Czech Republic. Now, about the talk! Moduli Spaces in Higher Gauge Theory The motivation which I tried to emphasize is to define a specific, concrete situation in which to explore the concept of "2-Symmetry". The situation is supposed to be, if not a realistic physical theory, then at least one which has enough physics-like features to give a good proof of concept argument that such higher symmetries should be meaningful in nature. The idea is that Higher Gauge theory is a field theory which can be understood as one in which the possible (classical) fields on a space/spacetime manifold consist of maps from that space into some target space . For the topological theory, they are actually just homotopy classes of maps. This is somewhat related to Sigma models used in theoretical physics, and mathematically to Homotopy Quantum Field Theory, which considers these maps as geometric structure on a manifold. An HQFT is a functor taking such structured manifolds and cobordisms into Hilbert spaces and linear maps. In the paper Roger and I are working on, we don't talk about this stage of the process: we're just considering how higher-symmetry appears in the moduli spaces for fields of this kind, which we think of in terms of Higher Gauge Theory. Ordinary topological gauge theory – the study of flat connections on -bundles for some Lie group , can be looked at this way. The target space is the "classifying space" of the Lie group – homotopy classes of maps in are the same as groupoid homomorphisms in . Specifically, the pair of functors and relating groupoids and topological spaces are adjoints. 
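As a toy illustration of the correspondence between homotopy classes of maps into a classifying space and homomorphisms out of the fundamental group: for the torus, whose fundamental group is the free abelian group on two generators, such homomorphisms are exactly commuting pairs in the gauge group. A minimal sketch, counting them for the illustrative finite choice of S3 as gauge group (a stand-in for a Lie group, chosen by me so everything is computable):

```python
from itertools import permutations

# Flat G-connections on the torus correspond to homomorphisms
# pi_1(T^2) = Z x Z -> G, i.e. to commuting pairs (a, b) in G x G.
# (Gauge equivalence would further quotient by simultaneous conjugation,
# which we skip here.)  We count them for the finite group G = S3.

def mul(a, b):
    """Compose permutations written as tuples: (a*b)(i) = a(b(i))."""
    return tuple(a[b[i]] for i in range(len(a)))

G = list(permutations(range(3)))
commuting = [(a, b) for a in G for b in G if mul(a, b) == mul(b, a)]
print(len(commuting))  # 18 = |G| times the number of conjugacy classes of S3
```

The count 18 = 6 × 3 is an instance of the general fact that the number of commuting pairs in a finite group is the order times the number of conjugacy classes.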
Now, this deals with the situation where is a homotopy 1-type, which is to say that it has a fundamental groupoid , and no other interesting homotopy groups. To deal with more general target spaces , one should really deal with infinity-groupoids, which can capture the whole homotopy type of – in particular, all its higher homotopy groups at once (and various relations between them). What we're talking about in this paper is exactly one step in that direction: we deal with 2-groupoids. We can think of this in terms of maps into a target space which is a 2-type, with nontrivial fundamental groupoid , but also interesting second homotopy group (and nothing higher). These fit together to make a 2-groupoid , which is a 2-group if is connected. The idea is that is the classifying space of some 2-group , which plays the role of the Lie group in gauge theory. It is the "gauge 2-group". Homotopy classes of maps into correspond to flat connections in this 2-group. For practical purposes, we use the fact that there are several equivalent ways of describing 2-groups. Two very directly equivalent ways to define them are as group objects internal to , or as categories internal to – which have a group of objects and a group of morphisms, and group homomorphisms that define source, target, composition, and so on. This second way is fairly close to the equivalent formulation as crossed modules . The definition is in the slides, but essentially the point is that is the group of objects, and with the action , one gets the semidirect product which is the group of morphisms. The map makes it possible to speak of and acting on each other, and that these actions "look like conjugation" (the precise meaning of which is in the defining properties of the crossed module). The reason for looking at the crossed-module formulation is that it then becomes fairly easy to understand the geometric nature of the fields we're talking about. 
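The crossed-module axioms just described can be checked mechanically in small examples. The sketch below is my own (the function names and the example are not from the paper): it verifies the two defining identities for the "adjoint" crossed module of S3, where the boundary map is the identity and the action is conjugation.

```python
from itertools import permutations

# A 2-group can be presented as a crossed module (G, H, d, act): groups G
# and H, a homomorphism d: H -> G, and an action of G on H by
# automorphisms, satisfying two axioms:
#   (1) equivariance:  d(act(g, h)) == g * d(h) * g^-1
#   (2) Peiffer:       act(d(h), h2) == h * h2 * h^-1
# Minimal sketch with G = H = S3, d = identity, act = conjugation.

def mul(a, b):
    """Compose permutations written as tuples: (a*b)(i) = a(b(i))."""
    return tuple(a[b[i]] for i in range(len(a)))

def inv(a):
    out = [0] * len(a)
    for i, ai in enumerate(a):
        out[ai] = i
    return tuple(out)

S3 = list(permutations(range(3)))

d = lambda h: h                             # boundary map: identity on S3
act = lambda g, h: mul(mul(g, h), inv(g))   # conjugation action

def is_crossed_module(G, H, d, act):
    for g in G:
        for h in H:
            # axiom (1): d is equivariant for the G-action
            if d(act(g, h)) != mul(mul(g, d(h)), inv(g)):
                return False
    for h in H:
        for h2 in H:
            # axiom (2): the Peiffer identity
            if act(d(h), h2) != mul(mul(h, h2), inv(h)):
                return False
    return True

print(is_crossed_module(S3, S3, d, act))  # True
```

Swapping in the trivial action (act(g, h) = h) makes axiom (1) fail for non-central elements, which is a quick sanity check that the test is not vacuous.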
In ordinary gauge theory, a connection can be described locally as a 1-form with values in , the Lie algebra of . Integrating such forms along curves gives another way to describe the connection, in terms of a rule assigning to every curve a holonomy valued in which describes how to transport something (generally, a fibre of a bundle) along the curve. It's somewhat nontrivial to say how this relates to the classic definition of a connection on a bundle, which can be described locally on "patches" of the manifold via 1-forms together with gluing functions where patches overlap. The resulting categories are equivalent, though. In higher gauge theory, we take a similar view. There is a local view of "connections on gerbes", described by forms and gluing functions (the main difference in higher gauge theory is that the gluing functions are related to higher cohomology). But we will take the equivalent point of view where the connection is described by -valued holonomies along paths, and -valued holonomies over surfaces, for a crossed module , which satisfy some flatness conditions. These amount to 2-functors of 2-categories . The moduli space of all such 2-connections is only part of the story. 2-functors are related by natural transformations, which are in turn related by "modifications". In gauge theory, the natural transformations are called "gauge transformations", and though the term doesn't seem to be in common use, the obvious term for the next layer would be "gauge modifications". It is possible to assemble a 2-groupoid , whose space of objects is exactly the moduli space of 2-connections, and whose 1- and 2-morphisms are exactly these gauge transformations and modifications. So the question is, what is the meaning of the extra information contained in the 2-groupoid which doesn't appear in the moduli space itself? Our claim is that this information expresses how the moduli space carries "higher symmetry".
2-Group Actions and the Transformation Double Category What would it mean to say that something exhibits "higher" symmetry? A rudimentary way to formalize the intuition of "symmetry" is to say that there is a group (of "symmetries") which acts on some object. One could get more subtle, but this should be enough to begin with. We already noted that "higher" gauge theory uses 2-groups (and beyond into -groups) in the place of ordinary groups. So in this context, the natural way to interpret it is by saying that there is an action of a 2-group on something. Just as there are several equivalent ways to define a 2-group, there are different ways to say what it means for it to have an action on something. One definition of a 2-group is to say that it's a 2-category with one object and all morphisms and 2-morphisms invertible. This definition makes it clear that a 2-group has to act on an object of some 2-category . For our purposes, just as we normally think of group actions on sets, we will focus on 2-group actions on categories, so that is the 2-category of interest. Then an action is just a map: The unique object of – let's call it , gets taken to some object . This object is the thing being "acted on" by . The existence of the action implies that there are automorphisms for every morphism in (which correspond to the elements of the group of the crossed module). This would be enough to describe ordinary symmetry, but the higher symmetry is also expressed in the images of 2-morphisms , which we might call 2-symmetries relating 1-symmetries. What we want to do in our paper, which the talk summarizes, is to show how this sort of 2-group action gives rise to a 2-groupoid (actually, just a 2-category when the being acted on is a general category). Then we claim that the 2-groupoid of connections can be seen as one that shows up in exactly this way. (In the following, I have to give some credit to Dany Majard for talking this out and helping to find a better formalism.) 
To make sense of this, we use the fact that there is a diagrammatic way to describe the transformation groupoid associated to the action of a group on a set . The set of morphisms is built as a pullback of the action map, . This means that morphisms are pairs , thought of as going from to . The rule for composing these is another pullback. The diagram which shows how it's done appears in the slides. The whole construction ends up giving a cubical diagram in , whose top and bottom faces are mere commuting diagrams, and whose four other faces are all pullback squares. To construct a 2-category from a 2-group action is similar. For now we assume that the 2-group action is strict (rather than being given by a weak 2-functor). In this case, it's enough to think of our 2-group not as a 2-category, but as a group-object in – the same way that a 1-group, as well as being a category, can be seen as a group object in . The set of objects of this category is the group of morphisms of the 2-category, and the morphisms make up the group of 2-morphisms. Being a group object is the same as having all the extra structure making up a 2-group. To describe a strict action of such a on , we just reproduce in the diagram that defines an action in : The fact that is an action just means this commutes. In principle, we could define a weak action, which would mean that this commutes up to isomorphism, but we won't be looking at that here. Constructing the same diagram which describes the structure of a transformation groupoid (p29 in the slides for the talk), we get a structure with a "category of objects" and a "category of morphisms". The construction in gives us directly a set of morphisms, while itself is the set of objects. Similarly, in , the category of objects is just , while the construction gives a category of morphisms. The two together make a category internal to , which is to say a double category. By analogy with , we call this double category . 
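The pullback description of the transformation groupoid can be made concrete in a finite example. Below is a minimal sketch of mine (the group Z/4 and the 4-element set are arbitrary illustrative choices): morphisms are pairs (g, x) going from x to g·x, and composition multiplies the group elements.

```python
# The transformation groupoid of a G-action on a set X: objects are
# elements of X, a morphism (g, x) goes from x to g.x, and composition is
#   (g2, g1.x) o (g1, x) = (g2*g1, x).
# Sketch with the cyclic group Z/4 rotating a 4-element set.

N = 4
X = list(range(N))
G = list(range(N))          # Z/4, written additively

def act(g, x):
    return (x + g) % N

def source(m):
    g, x = m
    return x

def target(m):
    g, x = m
    return act(g, x)

def compose(m2, m1):
    """(g2, y) o (g1, x), defined only when y == g1.x (the pullback)."""
    g2, y = m2
    g1, x = m1
    assert y == act(g1, x), "morphisms not composable"
    return ((g2 + g1) % N, x)

m1 = (1, 0)             # a morphism 0 -> 1
m2 = (2, 1)             # a morphism 1 -> 3
m = compose(m2, m1)     # their composite 0 -> 3
print(m, source(m), target(m))  # (3, 0) 0 3
```

The composability assertion is exactly the pullback condition: pairs of morphisms compose only when the target of one is the source of the other.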
We take as the category of objects, as the "horizontal category", whose morphisms are the horizontal arrows of the double category. The category of morphisms of shows up by letting its objects be the vertical arrows of the double category, and its morphisms be the squares. These look like this: The vertical arrows are given by pairs of objects , and just like the transformation 1-groupoid, each corresponds to the fact that the action of takes to . Each square (morphism in the category of morphisms) is given by a pair of morphisms, one from (given by an element in ), and one from . The horizontal arrow on the bottom of this square is: The fact that these are equal is exactly the fact that is a natural transformation. The double category turns out to have a very natural example which occurs in higher gauge theory. Higher Symmetry of the Moduli Space The point of the talk is to show how the 2-groupoid of connections, previously described as , can be seen as coming from a 2-group action on a category – the objects of this category being exactly the connections. In the slides above, for various reasons, we did this in a discretized setting – a manifold with a decomposition into cells. This is useful for writing things down explicitly, but not essential to the idea behind the 2-symmetry of the moduli space. The point is that there is a category we call , whose objects are the connections: these assign -holonomies to edges of our discretization (in general, to paths), and -holonomies to 2D faces. (Without discretization, one would describe these in terms of -valued 1-forms and -valued 2-forms.) The morphisms of are one type of "gauge transformation": namely, those which assign -holonomies to edges. (Or: -valued 1-forms). They affect the edge holonomies of a connection just like a 2-morphism in . Face holonomies are affected by the -value that comes from the boundary of the face. 
What's physically significant here is that both objects and morphisms of describe nonlocal geometric information. They describe holonomies over edges and surfaces: not what happens at a point. The "2-group of gauge transformations", which we call , on the other hand, is purely about local transformations. If is the vertex set of the discretized manifold, then : one copy of the gauge 2-group at each vertex. (Keeping this finite dimensional and avoiding technical details was one main reason we chose to use a discretization. In principle, one could also talk about the 2-group of -valued functions, whose objects and morphisms, thinking of it as a group object in , are functions valued in morphisms of .) Now, the way acts on is essentially by conjugation: edge holonomies are affected by pre- and post-multiplication by the values at the two vertices on the edge – whether objects or morphisms of . (Face holonomies are unaffected). There are details about this in the slides, but the important thing is that this is a 2-group of purely local changes. The objects of are gauge transformations of this other type. In a continuous setting, they would be described by -valued functions. The morphisms are gauge modifications, and could be described by -valued functions. The main conceptual point here is that we have really distinguished between two kinds of gauge transformation, which are the horizontal and vertical arrows of the double category . This expresses the 2-symmetry by moving some gauge transformations into the category of connections, and others into the 2-group which acts on it. But physically, we would like to say that both are "gauge transformations". So one way to do this is to "collapse" the double category to a bicategory: just formally allow horizontal and vertical arrows to compose, so that there is only one kind of arrow. Squares become 2-cells. 
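The conjugation action of the local gauge 2-group on edge holonomies, restricted to objects (vertex-wise group elements), can be sketched on a toy graph. Everything below is an illustrative assumption of mine rather than the paper's setup: the graph, the choice of S3 as (1-)gauge group, and all names.

```python
from itertools import permutations

# Objects of the local gauge 2-group assign a group element t(v) to each
# vertex; they act on the holonomy hol(e) of an edge e: v0 -> v1 by
#     hol(e)  |->  t(v1) * hol(e) * t(v0)^{-1}.
# Toy sketch with gauge group S3 on a two-edge graph.

def mul(a, b):
    return tuple(a[b[i]] for i in range(len(a)))

def inv(a):
    out = [0] * len(a)
    for i, ai in enumerate(a):
        out[ai] = i
    return tuple(out)

e = (0, 1, 2)                        # identity of S3
edges = {("v0", "v1"): (1, 0, 2),    # an arbitrary holonomy assignment
         ("v1", "v2"): (1, 2, 0)}

def gauge(t, hol):
    """Apply a vertex-wise gauge transformation t to edge holonomies hol."""
    return {(v0, v1): mul(mul(t[v1], g), inv(t[v0]))
            for (v0, v1), g in hol.items()}

t = {"v0": (2, 1, 0), "v1": (0, 2, 1), "v2": e}
s = {"v0": (1, 0, 2), "v1": e, "v2": (2, 0, 1)}
st = {v: mul(s[v], t[v]) for v in t}

# Gauge transformations compose vertex-wise: applying t then s is the
# same as applying the pointwise product s*t.
print(gauge(s, gauge(t, edges)) == gauge(st, edges))  # True
```

The printed equality is the action axiom for this toy model: the purely local (vertex-wise) group acts on the nonlocal data (edge holonomies) by pre- and post-multiplication.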
So then if we collapse the double category expressing our 2-symmetry relation this way, the result is exactly equivalent to the functor category way of describing connections. (The morphisms will all be invertible because is a groupoid and is a 2-group). I'm interested in this kind of geometrical example partly because it gives a good way to visualize something new happening here. There appears to be some natural 2-symmetry on this space of fields, which is fairly easy to see geometrically, and distinguishes in a fundamental way between two types of gauge transformation. This sort of phenomenon doesn't occur in the world of – a set has no morphisms, after all, so the transformation groupoid for a group action on it is much simpler. In broad terms, this means that 2-symmetry has qualitatively new features that familiar old 1-symmetry doesn't have. Higher categorical versions – -groups acting on -groupoids, as might show up in more complicated HQFT – will certainly be even more complicated. The 2-categorical version is just the first non-trivial situation where this happens, so it gives a nice starting point to understand what's new in higher symmetry that we didn't already know. 2-Erlangen Program; Manifold Calculus talk (Pedro Brito) Posted by Jeffrey Morton under 2-groups, geometry, moduli spaces, sheaves (Note: WordPress seems to be having some intermittent technical problem parsing my math markup in this post, so please bear with me until it, hopefully, goes away…) As August is the month in which Portugal goes on vacation, and we had several family visitors toward the end of the summer, I haven't posted in a while, but the term has now started up at IST, and seminars are underway, so there should be some interesting stuff coming up to talk about. First, I'll point out that that Derek Wise has started a new blog, called simply "Simplicity", which is (I imagine) what it aims to contain: things which seem complex explained so as to reveal their simplicity. 
Unless I'm reading too much into the title. As of this writing, he's posted only one entry, but a lengthy one that gives a nice explanation of a program for categorified Klein geometries which he's been thinking a bunch about. Klein's program for describing the geometry of homogeneous spaces (such as spherical, Euclidean, and hyperbolic spaces with constant curvature, for example) was developed at Erlangen, and goes by the name "The Erlangen Program". Since Derek is now doing a postdoc at Erlangen, and this is supposed to be a categorification of Klein's approach, he's referred to it as the "2-Erlangen Program". There's more discussion about it in a (somewhat) recent post by John Baez at the n-Category Cafe. Both of them note the recent draft paper they did relating a higher gauge theory based on the Poincare 2-group to a theory known as teleparallel gravity. I don't know this theory so well, except that it's some almost-equivalent way of formulating General Relativity. I'll refer you to Derek's own post for full details of what's going on in this approach, but the basic motivation isn't too hard to set out. The Erlangen program takes the view that a homogeneous space is a space (let's say we mean by this a topological space) which "looks the same everywhere". More precisely, there's a group action by some , which we understand to be "symmetries" of the space, which is transitive. Since every point is taken to every other point by some symmetry, the space is "homogeneous". Some symmetries leave certain points where they are – they form the stabilizer subgroup . When the space is homogeneous, it is isomorphic to the coset space, . So Klein's idea is to say that any time you have a Lie group and a closed subgroup , this quotient will be called a "homogeneous space". A familiar example would be Euclidean space, , where is the Euclidean group and is the orthogonal group, but there are plenty of others.
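The orbit-stabilizer picture behind Klein's idea can be seen in a finite toy model, with S3 standing in for a Lie group (the choices here are all mine): a transitive action identifies the space with the coset space of the stabilizer, so the sizes have to match.

```python
from itertools import permutations

# Klein's picture in miniature: for a transitive action of a finite group
# G on a set X, choosing a basepoint x identifies X with the coset space
# G / Stab(x), so |X| = |G| / |Stab(x)|.  Sketch with G = S3 acting on
# {0, 1, 2} in the obvious way.

G = list(permutations(range(3)))   # a permutation g acts by i -> g[i]

x0 = 0
orbit = {g[x0] for g in G}                 # transitive: the whole set
stab = [g for g in G if g[x0] == x0]       # the stabilizer subgroup of x0

print(sorted(orbit), len(G) // len(stab))  # [0, 1, 2] 3
```

Here the stabilizer of the basepoint has order 2 (the identity and the transposition fixing 0), and 6 / 2 = 3 recovers the size of the homogeneous "space".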
This example indicates what Cartan geometry is all about, though – this is the next natural step after Klein geometry (Edit: Derek's blog now has a visual explanation of Cartan geometry, a.k.a. "generalized hamsterology", new since I originally posted this). We can say that Cartan is to Klein as Riemann is to Euclid. (Or that Cartan is to Riemann as Klein is to Euclid – or if you want to get maybe too-precisely metaphorical, Cartan is the pushout of Klein and Riemann over Euclid). The point is that Riemannian geometry studies manifolds – spaces which are not homogeneous, but look like Euclidean space locally. Cartan geometry studies spaces which aren't homogeneous, but can be locally modelled by Klein geometries. Now, a Riemannian geometry is essentially a manifold with a metric, describing how it locally looks like Euclidean space. An equivalent way to talk about it is a manifold with a bundle of Euclidean spaces (the tangent spaces) with a connection (the Levi-Civita connection associated to the metric). A Cartan geometry can likewise be described as a -bundle with fibre with a connection. Then the point of the "2-Erlangen program" is to develop similar geometric machinery for 2-groups (a.k.a. categorical groups). This is, as usual, a bit more complicated since actions of 2-groups are trickier than group actions. In their paper, though, the point is to look at spaces which are locally modelled by some sort of 2-Klein geometry which derives from the Poincare 2-group. By analogy with Cartan geometry, one can talk about such Poincare 2-group connections on a space – that is, some kind of "higher gauge theory". This is the sort of framework where John and Derek's draft paper formulates teleparallel gravity. It turns out that the 2-group connection ends up looking like a regular connection with torsion, and this plays a role in that theory. Their draft will give you a lot more detail.
Talk on Manifold Calculus On a different note, one of the first talks I went to so far this semester was one by Pedro Brito about "Manifold Calculus and Operads" (though he ran out of time in the seminar before getting to talk about the connection to operads). This was about motivating and introducing the Goodwillie Calculus for functors between categories of spaces. (There are various references on this, but see for instance these notes by Hal Sadofsky). In some sense this is a generalization of calculus from functions to functors, and one of the main results Goodwillie introduced with this subject is a functorial analog of Taylor's theorem. I'd seen some of this before, but this talk was a nice and accessible intro to the topic. So the starting point for this "Manifold Calculus" is that we'd like to study functors from spaces to spaces (in fact this all applies to spectra, which are more general, but Pedro Brito's talk was focused on spaces). The sort of thing we're talking about is a functor which, given a space , gives a moduli space of some sort of geometric structures we can put on , or of mappings from . The main motivating example he gave was the functor for some fixed manifold . Given a manifold , this gives the mapping space of all immersions of into . (Recalling some terminology: immersions are maps of manifolds where the differential is nondegenerate – the induced map of tangent spaces is everywhere injective, meaning essentially that there are no points, cusps, or kinks in the image, but there might be self-intersections. Embeddings are, in addition, homeomorphisms onto their images.) Studying this functor means, among other things, looking at the various spaces of immersions of each into . We might first ask: can be immersed in at all – in other words, is nonempty? So, for example, the Whitney Embedding Theorem says that if is at least , then there is an embedding of into (which is therefore also an immersion).
In more detail, we might want to know what is, which tells how many connected components of immersions there are: in other words, distinct classes of immersions which can't be deformed into one another by a family of immersions. Or, indeed, we might ask about all the homotopy groups of , not just the zeroth: what's the homotopy type of ? (Once we have a handle on this, we would then want to vary ). It turns out this question is manageable, partly due to a theorem of Smale and Hirsch, which is an instance of Gromov's h-principle – the principle applies to solutions of certain kinds of PDE's, saying that any formal solution can be deformed to a genuine (holonomic) one, so if you want to study the space of solutions up to homotopy, you may as well just study the formal solutions. The Smale-Hirsch theorem likewise gives a homotopy equivalence of two spaces, one of which is . The other is the space of "formal immersions", called . It consists of all , where is smooth, and is a map of tangent spaces which restricts to , and is injective. These are "formally" like immersions, and indeed has an inclusion into , which happens to be a homotopy equivalence: it induces isomorphisms of all the homotopy groups. These come from homotopies taking each "formal immersion" to some actual immersion. So we've approximated , up to homotopy, by . (This "homotopy" of functors makes sense because we're talking about an enriched functor – the source and target categories are enriched in spaces, where the concepts of homotopy theory are all available). We still haven't got to manifold calculus, but it will be all about approximating one functor by another – or rather, by a chain of functors which are supposed to be like the Taylor series for a function. The way to get this series has to do with sheafification, so first it's handy to re-describe what the Smale-Hirsch theorem says in terms of sheaves. This means we want to talk about some category of spaces with a Grothendieck topology.
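For reference, in one standard formulation (notation mine, since the inline symbols above are not fixed), the space of formal immersions and the Smale-Hirsch comparison look like this:

```latex
% Formal immersions (illustrative notation):
\mathrm{Imm}^{f}(M, N) \;=\;
  \bigl\{\, (f, F) \;\bigm|\; f : M \to N \ \text{smooth},\;
     F : TM \to TN \ \text{a bundle map covering } f,\;
     F \ \text{fibrewise injective} \,\bigr\}.

% Smale--Hirsch: under the usual hypotheses (e.g. \dim M < \dim N),
% the map sending an immersion to its differential,
\mathrm{Imm}(M, N) \hookrightarrow \mathrm{Imm}^{f}(M, N), \qquad
  f \longmapsto (f, \mathrm{d}f),
% is a weak homotopy equivalence.
```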
So let's let be the category whose objects are -dimensional manifolds and whose morphisms are embeddings (which, of course, are necessarily codimension 0). Now, the point here is that if is an embedding in , and has an immersion into , this induces an immersion of into . This amounts to saying is a contravariant functor: That makes a presheaf. What the Smale-Hirsch theorem tells us is that this presheaf is a homotopy sheaf – but to understand that, we need a few things first. First, what's a homotopy sheaf? Well, the condition for a sheaf says that if we have an open cover of , then the value on the whole space is determined by gluing the values over the cover. So to say how is a homotopy sheaf, we have to give a topology, which means defining a "cover", which we do in the obvious way – a cover is a collection of morphisms such that the union of all the images is just . The topology where this is the definition of a cover can be called , because it has the property that given any open cover and choice of 1 point in , that point will be in some open set of the cover. This is part of a family of topologies, where only allows those covers with the property that given any choice of points in , some open set of the cover contains them all. These conditions, clearly, get increasingly restrictive, so we have a sequence of inclusions (a "filtration"): Now, with respect to any given one of these topologies , we have the usual situation relating sheaves and presheaves. Sheaves are defined relative to a given topology (i.e. a notion of cover). A presheaf on is just a contravariant functor from (in this case valued in spaces); a sheaf is one which satisfies a descent condition (I've discussed this before, for instance here, when I was running the Stacks Seminar at UWO). The point of a descent condition, for a given topology, is that we can take the values of a functor "locally" – on the various objects of a cover for – and "glue" them to find the value for itself.
In particular, given a cover, there's a diagram consisting of the inclusions of all the double-overlaps of sets in the cover into the original sets. Then the descent condition for sheaves of spaces says that the value on the whole object is recovered from the image of this diagram. The general fact is that there's a reflective inclusion of sheaves into presheaves (see some discussion about reflective inclusions, also in an earlier post). Any sheaf is in particular a contravariant functor – this is the inclusion of sheaves into presheaves. This inclusion has a left adjoint, sheafification, which takes any presheaf to a sheaf which is the "best approximation" to it. It's the fact that this is an adjoint which makes the inclusion "reflective", and provides the sense in which the sheafification is an approximation to the original functor. The way sheafification works can be worked out from the fact that it's an adjoint to the inclusion, but it also has a fairly concrete description. Given any one of the topologies , we have a whole collection of special diagrams, such as: (using the usual notation where is the intersection of two sets in a cover, and the maps here are the inclusions of that intersection). This and the various other diagrams involving these inclusions are special, given the topology . The descent condition for a sheaf says that if we take the image of this diagram: then we can "glue together" the objects and on the overlap to get one on the union. That is, is a sheaf if the value on the union is a limit of the diagram above (intuitively, determined by "gluing on the overlap"). In a presheaf, it would come equipped with some maps into the and : in a sheaf, this object and the maps satisfy some universal property. Sheafification takes a presheaf to a sheaf which does this, essentially by enforcing all these universal properties.
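Written out for a two-element cover, the descent condition just discussed takes the following standard form (in the homotopy-sheaf setting, which is the relevant one here):

```latex
% Descent for a (homotopy) sheaf F of spaces, on a cover {U, V} of
% the union U \cup V: the value on the union is the homotopy pullback
% of the restrictions over the overlap -- "values over U and V that
% agree (up to homotopy) on U \cap V glue to a value on U \cup V":
F(U \cup V) \;\simeq\;
  \operatorname{holim}\Bigl(
     F(U) \longrightarrow F(U \cap V) \longleftarrow F(V)
  \Bigr)
```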
More accurately, since these sheaves are valued in spaces, what we really want are homotopy sheaves, where we can replace "colimit" with "homotopy colimit" in the above – which satisfies a universal property only up to homotopy, and which has a slightly weaker notion of "gluing". This (homotopy) sheaf is called because it depends on the topology which we were using to get the class of special diagrams. One way to think about is that we take the restriction to manifolds which are made by pasting together at most open balls. Then, knowing only this part of the functor , we extend it back to all manifolds by a Kan extension (this is the technical sense in which it's a "best approximation"). Now the point of all this is that we're building a tower of functors that are "approximately" like , agreeing on ever-more-complicated manifolds, which in our motivating example is . Whichever functor we use, we get a tower of functors connected by natural transformations: This happens because we had that chain of inclusions of the topologies . Now the idea is that if we start with a reasonably nice functor (like for example), then is just the limit of this diagram. That is, it's the universal thing which has a map into each commuting with all these connecting maps in the tower. The tower of approximations – along with its limit (as a diagram in the category of functors) – is what Goodwillie called the "Taylor tower" for . Then we say the functor is analytic if it's just (up to homotopy!) the limit of this tower. By analogy, think of an inclusion of a vector space with inner product into another such space which has higher dimension. Then there's an orthogonal projection onto the smaller space, which is an adjoint (as a map of inner product spaces) to the inclusion – so these are like our reflective inclusions. So the smaller space can "reflect" the bigger one, while not being able to capture anything in the orthogonal complement. 
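Schematically, the tower being described looks as follows; writing $T_k F$ for the homotopy sheafification in the $k$-th topology is a common convention, which I use here as a placeholder:

```latex
% The Taylor tower of a functor F: T_k F is the homotopy
% sheafification of F for the topology J_k (whose covers catch any
% k points at once); the inclusions J_1 \subset J_2 \subset \cdots
% induce the connecting maps of the tower:
F \longrightarrow \cdots \longrightarrow T_k F \longrightarrow
  T_{k-1} F \longrightarrow \cdots \longrightarrow T_1 F

% F is called analytic when the canonical map to the homotopy limit
% of the tower is an equivalence:
F \;\xrightarrow{\ \simeq\ }\; \operatorname{holim}_{k}\, T_k F
```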
Now suppose we have a tower of inclusions , where each space is of higher dimension, such that each of the is included into in a way that agrees with their maps to each other. Then given a vector , we can take a sequence of approximations in the spaces. If was "nice" to begin with, this series of approximations will eventually at least converge to it – but it may be that our tower of spaces doesn't let us approximate every in this way. That's precisely what one does in calculus with Taylor series: we have a big vector space of smooth functions, and a tower of spaces we use to approximate. These are polynomial functions of different degrees: first linear, then quadratic, and so forth. The approximations to a function are orthogonal projections onto these smaller spaces. The sequence of approximations, or rather its limit (as a sequence in the inner product space ), is just what we mean by a "Taylor series for ". If is analytic in the first place, then this sequence will converge to it. The same sort of phenomenon is happening with the Goodwillie calculus for functors: our tower of sheafifications of some functor are just "projections" onto smaller categories (of sheaves) inside the category of all contravariant functors. (Actually, "reflections", via the reflective inclusions of the sheaf categories for each of the topologies ). The Taylor Tower for this functor is just like the Taylor series approximating a function. Indeed, this analogy is fairly close, since the topologies will give approximations of which are in some sense based on points (so-called -excisive functors, which in our terminology here are sheaves in these topologies). Likewise, a degree- polynomial approximation approximates a smooth function, in general in a way that can be made to agree at points. Finally, I'll point out that I mentioned that the Goodwillie calculus is actually more general than this, and applies not only to spaces but to spectra. 
The point is that the functor defines a kind of generalized cohomology theory – the cohomology groups for are the . So the point is, functors satisfying the axioms of a generalized cohomology theory are represented by spectra, whereas here is a special case that happens to be a space. Lots of geometric problems can be thought of as classified by this sort of functor – if , the classifying space of a group, and we drop the requirement that the map be an immersion, then we're looking at the functor that gives the moduli space of -connections on each . The point is that the Goodwillie calculus gives a sense in which we can understand such functors by simpler approximations to them. Talk on Tricategories and Trifunctors for Higher Gauge Theory Posted by Jeffrey Morton under 2-groups, categorification, gauge theory, groupoids, higher gauge theory, moduli spaces In the most recent TQFT Club seminar, we had a couple of talks – one was the second in a series of three by Marco Mackaay, which as promised previously I'll write up together after the third one. The other was by Björn Gohla, a student of João Faria Martins, giving an overview on the subject of "Tricategories and Trifunctors", a mostly expository talk explaining some definitions. Actually, this was a bit more specific than a general introduction – the point of it was to describe a certain kind of mapping space. I've talked here before about representing the "configuration space" of a gauge theory as a groupoid: the objects are (optionally, flat) connections on a manifold , and the morphisms are gauge transformations taking one connection to another. The reason for the things Björn was talking about is analogous, except that in this case, the goal is to describe the configuration space of a higher gauge theory. There are at least two ways I know of to talk about higher gauge theory. 
One is in terms of categorical (or n-categorical) groups – which makes it a "categorification" of gauge theory in the sense of reproducing in (or ) an analog of a structure, gauge theory, originally formulated in . Among other outlines, you might look at this one by John Baez and John Huerta for an introduction. Another uses the lingo of crossed modules or crossed complexes. In either case, the essential point is the same: there is some collection of groups (or groupoids, but let's say groups to keep everything clear) which play the role of the single gauge group in ordinary gauge theory. In the first language, we can speak of a "2-group", or "categorical group" – a group internal to , or what is equivalent, a category internal to , which would have a group of objects and a group of morphisms (and, in higher settings still, groups of 2-morphisms, 3-morphisms, and so on). The structure maps of the category (source, target, composition, etc.) have to live in the category of groups. A crossed complex of groups (again, we could generalize to groupoids, but I won't) is a nonabelian variation on a chain complex: a sequence of groups with maps from one to the next. There are also a bunch more structures, which ultimately serve to reproduce all the kinds of composition, source, and target maps in the -categorical groups: some groups act on others, there are "bracket" operations on one group valued in another, and so forth. This paper by Brown and Higgins explains how the two concepts are related when most of the groups are abelian, and there's a lot more about crossed complexes and related stuff in Tim Porter's "Crossed Menagerie". The point of all this right now is that these things play the role of the gauge group in higher gauge theory. The idea is that in gauge theory, you have a connection. Typically this is described in terms of a form valued in the Lie algebra of the gauge group.
Then a (thin) homotopy class of curves gets a holonomy valued in the group by integrating that form. Alternatively, you can just think of the path groupoid of a manifold , where those classes of curves form the morphisms between the objects, which are just points of . Then a connection defines a functor , where is the gauge group thought of as a category (groupoid in fact) with one object. Or, you can just define a connection that way in the first place. In higher gauge theory, a similar principle exists: begin with the -path groupoid where the morphisms are (thin homotopy classes of) paths, the 2-morphisms are surfaces (really homotopy classes of homotopies of paths), and so on, so the -morphisms are -dimensional bits of . Then you could define an -connection as a -functor into an -group as defined above. Or, you could define it in terms of a tower of differential -forms valued in the crossed complex of Lie algebras associated to the crossed complex of Lie groups that replaces the gauge group. You can then use an integral to get an element of the group at level of the complex for any given -morphism in , which (via the equivalence I mentioned) amounts to the same thing as the other definition of connection. João Martins has done some work on this sort of thing when is dimension 2 (with Tim Porter) and 3 (with Roger Picken), which I guess is how Björn came to work on this question. The question is, roughly, how to describe the moduli space of these connections. The gist of the answer is that it's a functor -category , where is the -group. A little more generally, the question is how to describe mapping spaces for higher categories. In particular, he was talking about the case , which is where certain tricky issues start to show up.
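For a reader who wants the formula behind "integrating that form": in the ordinary (1-group) case, with $A$ the connection 1-form valued in the Lie algebra of $G$, the holonomy functor on the path groupoid is the standard path-ordered exponential (this is the textbook statement, not anything specific to the talk):

```latex
% A connection as a functor from the path groupoid of M to the gauge
% group G viewed as a one-object groupoid: a (thin homotopy class of
% a) path \gamma is sent to the path-ordered exponential of A:
\mathrm{hol} \colon \mathcal{P}_1(M) \longrightarrow G,
\qquad
\mathrm{hol}(\gamma) \;=\; \mathcal{P}\exp\!\int_{\gamma} A
```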
In particular every bicategory (the weakest form of 2-category) is (bi)equivalent to a strict 2-category, so there's no real need to worry about weakening things like associativity so that they only work up to isomorphism – these are all equalities. With 3-categories, this fails: the weakest kind of 3-category is a tricategory (introduced by Gordon, Power and Street, though also see the references beyond that link). These are always tri-equivalent to something stricter than general, but not completely strict: Gray-categories. The only equation from 2-categories which has to be weakened to an isomorphism here is the interchange law: given a square of four morphisms, we can either compose vertically first, and then horizontally, or vice versa. In a Gray-category, there's an "interchanger" isomorphism where is vertical composition of 2-cells, and is horizontal (i.e. the same direction as 1-cells). This is supposed to satisfy a compatibility condition. It's essentially the only one you can come up with starting with (and composing it in different orders by throwing in identities in various places). There's another way to look at things, as Björn explained, in terms of enriched category theory. If you have a monoidal category , then a -enriched category is one in which, for any two objects , there is an object of morphisms, and composition gives morphisms . A strict 3-category is enriched in , with its usual tensor product, dual to its internal hom (which gives the mapping 2-category of functors, natural transformations, and modifications, between any two 2-categories). A Gray category is similar, except that it is enriched in , a version of with a different tensor product, dual to the hom functor which gives the mapping 2-category with pseudonatural transformations (the weak version of the concept, where the naturality square only has to commute up to a specified 2-cell) as morphisms. 
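In symbols, the interchanger can be sketched as follows; this is one standard convention for Gray-categories, supplied by me since the talk's own notation is not reproduced here:

```latex
% The interchanger in a Gray-category: for 2-cells a : f => f' and
% b : g => g', with g, g' : x -> y and f, f' : y -> z, the two orders
% of composing (whiskering a first, or b first) agree only up to a
% specified invertible 3-cell:
\Sigma_{a,b} \colon
  (a \ast 1_{g'}) \cdot (1_{f} \ast b)
  \;\Rrightarrow\;
  (1_{f'} \ast b) \cdot (a \ast 1_{g})
% Here \cdot is vertical composition of 2-cells and \ast is
% horizontal composition (the same direction as 1-cells).
```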
These are not the same, which is where the unavoidability of weakening 3-categories "really" comes from. The upshot of this is as above: it matters which order we compose things in. Having defined Gray-categories, let's say and (which, in the applications I mentioned above, tend to actually be Gray-groupoids, though this doesn't change the theory substantially), the point is to talk about "mapping spaces" – that is, Gray-categories of Gray-functors (etc.) from to . Since they've been defined in terms of enriched category theory, one wants to use the general theory of enriched functors, transformations, and so forth – which is a lot easier than trying to work out the correct definitions from scratch using a low-level description. So then a Gray-functor has an object map , mapping objects of to objects of , and then for each , a morphism in (which is our ), namely . There are a bunch of compatibility conditions, which can be expressed for any monoidal category (since they involve diagrams with the map for any triple, and the like). Similar comments apply to defining -natural transformations. There is a slight problem here, which is that in this case, is a 2-category, so we really need to use a form of weakly enriched categories… All the compatibility diagrams should have 2-cells in them, and so forth. This, too, gets complicated. So what Björn explained is a shortcut that avoids drawing -dimensional diagrams for these mapping -categories, in terms of the arrow category . This is the category whose objects are the morphisms of , and whose morphisms are commuting squares, or when is a 2-category, squares with a 2-cell, so a morphism in from to is a triple like so: Morphism in arrow category The 2-morphisms in are commuting "pillows", where the front and back faces are morphisms like the above. So is , where is a 2-cell, and the whole "pillow" commutes.
When is a tricategory, then we need to go further – these 2-morphisms should be triples including a 3-cell filling the "pillow", and then 3-morphisms are commuting structures between these. These diagrams get hard to draw pretty quickly. This is the point of having an ordinary 2D diagram with at most 1-dimensional cells: pushing all the nasty diagrams into these arrow categories, we can replace a 2-cell representing a natural transformation with a diagram involving the arrow category. This uses that there are source and target maps (which are Gray-functors, of course) which we'll call . So then here (in one diagram) we have two ways of depicting a natural transformation between functors : One is the 2-cell, and the other is the functor into , such that and . To depict a modification between natural transformations (a 3-cell between 2-cells) just involves building the arrow category of , say , and drawing an arrow from into it. And so on: in principle, there is a tower above built by iterating the arrow category construction, and all the different levels of "functor", "natural transformation", "modification", and all the higher equivalents are just functors into different levels of this tower. (The generic term for the level of maps-between-maps-etc between -categories is " -transfor", a handy term coined here.) The advantage here is that at least the general idea can be extended pretty readily to higher values of than 3. Naturally, no matter which way one decides to do it, things will get complicated – either there's a combinatorial explosion of things to consider, or one has to draw higher-dimensional diagrams, or whatever. This exploding complexity of -categories (in this case, globular ones) is one of the reasons why simplicial approaches – quasicategories or -categories – are good. They allow you to avoid talking about those problems, or at least fold them into fairly well-understood aspects of simplicial sets.
A lot of things – limits, colimits, mapping spaces, etc. – are pretty well understood in that case (see, for instance, the first chapter of Joshua Nichols-Barrer's thesis for the basics, or Jacob Lurie's humongous book for something more comprehensive). But sometimes, as in this case, they just don't happen to be the things you want for your application. So here we have some tools for talking about mapping spaces in the world of globular -categories – and as the work by Martins/Porter/Picken shows, it's motivated by some fairly specific work about invariants of manifolds, differential geometry, and so on. Talks – Ivan Smith on Floer Homology; Black Holes and Fuzzballs Posted by Jeffrey Morton under algebra, category theory, cohomology, conformal field theory, moduli spaces, physics So this is a couple of weeks backdated. I've had a pretty serious cold for a while – either it was bad in its own right, or this was just a case of the difference in native viruses between two different continents that my immune system wasn't prepared for. Then, too, last week was Republic Day – the 100th anniversary of the middle of three revolutions (the Liberal, the Republican, and the Carnation revolution that ousted the dictatorship regime in 1974 – and let me say that it's refreshing for a North American to be reminded that Republicanism is a refinement of Liberalism, though how the flowers fit into it is less straightforward). So my family and I went to attend some of the celebrations downtown, which were impressive. Anyway, with the TQFT club seminars starting up very shortly, I wanted to finish this post on the first talks I got to see here at IST, which were on pretty widely different topics. The first was by Ivan Smith, entitled "Quadrics, 3-Manifolds and Floer Cohomology". The second was a recorded video talk arranged by the string theory group. This was a recording of a talk given by Kostas Skenderis a couple of years ago, entitled "The Fuzzball Proposal for Black Holes".
Ivan Smith – Quadrics, 3-Manifolds and Floer Cohomology Ivan Smith's talk began with some motivating questions from topology, symplectic geometry, and from the study of moduli spaces. The topological question concerns a 3-manifold and the space of representations of its fundamental group into a compact Lie group , which was generally or . Specifically, the question is how this space is affected by operations on such as surgery, taking covering spaces, etc. The symplectic geometry question asks, for a symplectic manifold , what the "mapping class group" of symplectic transformations is – that is, the group of connected components of symplectomorphisms from to itself. In a sense, this is asking how much of the geometry is seen by the symplectic situation. The question about moduli spaces asks to characterize the action of the (again, mapping class group of) diffeomorphisms of a Riemann surface on the moduli space of bundles on it. (This space, for a surface of genus , looks like modulo conjugation. It is the complex-manifold version of the space of flat connections which I've been quite interested in for purposes of TQFT, though this is a coarse quotient, not a stack-like quotient. Lots of people are interested in this space in its various hats.) The point of the talk was to elucidate how these all fit together. The first part of the title, "Quadrics", referred to the fact that, when has genus 2, the moduli space we'll be looking at can be described as an intersection of some varieties (defined by quadric equations) in the projective space . Knowing this, one can describe some of its properties just by looking at intersections of curves. In general we're talking about complex manifolds, here. To start with, for Riemann surfaces (one-dimensional complex manifolds), he pointed out that there is an isomorphism between the mapping class groups of symplectomorphisms and diffeomorphisms: .
But in general, for example, for 3-dimensional manifolds, there is structure in the symplectic maps which is forgotten by the smooth ones – there's still a map , but it has a kernel – there are distinct symplectic maps that all look like the identity up to smooth deformation. Now, our original question was about the action of the diffeomorphisms of on the moduli space of bundles over . An element of acts (by symplectic map) on it. The discrepancy we mentioned is that the map corresponding to will always have fixed points, but be smoothly equivalent to one that doesn't. So the smooth mapping class group can't detect the property of having fixed points. What it CAN detect, however, is information about intersections. In particular, as mentioned above, the moduli space of bundles over a genus 2 surface is an intersection; in this situation, there is an injective map back from the smooth mapping class group into the group of classes of symplectic maps. So looking symplectically loses nothing from the smooth case. Now, these symplectic maps tie into the third part of the title, "Floer Homology", as follows. Given a symplectic map , one can define a Floer cohomology: the cohomology of a chain complex generated by the fixed points of the map , with a differential which is defined by counting certain curves. The way this is set up, if is the identity so that all points are fixed points, one gets the usual cohomology of the space – except that it's defined so as to be the quantum cohomology of (for more, check out this tutorial by Givental). This has the same complex as the usual cohomology, but with the cup product replaced by a deformed product. It's an older theorem (due to Donaldson) that, at least for genus 2, the quantum cohomology of the moduli space of bundles over splits into a direct sum of rings: So one of the key facts is that this works also with Floer homology for other maps than the identity (so this becomes a special case).
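In schematic terms, the construction just described looks like this (placeholder notation, standard in the Floer-theory literature):

```latex
% Floer cohomology of a symplectomorphism \phi of (M, \omega): the
% cochain complex is generated by the fixed points of \phi, with a
% differential counting certain pseudo-holomorphic curves. For the
% identity map every point is fixed, and one recovers the quantum
% cohomology of M:
CF^{*}(\phi)\ \text{generated by}\ \mathrm{Fix}(\phi),
\qquad
HF^{*}(\mathrm{id}_{M}) \;\cong\; QH^{*}(M)
```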
So replacing in the above with for any (acting either on the surface , or the induced action on the moduli space) still gives a true statement. Note that this actually implies the theorem that there are fixed points in the space of bundles, since the right hand side is always nontrivial. So at this point we have some idea of how Floer cohomology is part of what ties the original three questions together. To take a further look at these we can start to build a category combining much of the same information. This is the (derived) Fukaya category. The objects are Lagrangian submanifolds of a symplectic manifold – half-dimensional submanifolds on which the symplectic form restricts to zero. To start building the category, consider what we can build from pairs of such objects . This is rather like the above – we define a complex of vector spaces, which is the cohomology of another complex. Instead of being the complex freely generated by fixed points, though, it's generated by intersection points of and . This automatically becomes a module over , so the category we're building is enriched over these. Defining the structure of this category is apparently a little bit complicated – in particular, there is a composition product in the form of a cohomology operation. Furthermore – something Ivan Smith didn't have time to describe in detail – there are other "higher" products. These are Massey type products, which is to say higher-order cohomology operations, which involve more than two inputs. These give the whole structure (where one takes the direct sum of all those hom-modules to get one big module) the structure of an –algebra (so the Fukaya category is an -category, I suppose). This is one way of talking about weak higher categories (the higher products give the associator for composition, and its higher analogs), so in fact this is a pretty complex structure, which the talk didn't dwell on in detail. But in any case, the point is that the operations in the category correspond to cohomology operations.
Then one deals with the "derived" Fukaya category . I understand derived categories to be (at least among other examples) a way of taking categories of complexes "up to homotopy", perhaps as a way of getting rid of some of this complication. Again, the talk didn't elaborate too much on this. However, the fundamental theorem about this category is a generalization of the theorem above above quantum cohomology: That is, the derived Fukaya category for the moduli space of bundles over is the category for the Riemann surface itself, summed with two copies of the category for a single point (which is replacing the two copies of ). This reduces to the previous theorem when we're looking at the map , just as before. So the last question Ivan Smith addressed about this is the fact that these sorts of categories are often hard to calculate explicitly, but they can be described in terms of some easily-described data. He gave the analogy of periodic functions – which may be quite complicated, but by means of Fourier decompositions, can be easily described in terms of sines and cosines, which are easy to analyze. In the same way, although the Fukaya categories for particular spaces might be complicated, they can be described in terms of the (derived) category of modules over the -algebras. In particular, every category embeds in a generic example . So by understanding categories like this, one can understand a lot about the categories that come from spaces, which generalize quantum cohomology as described above. I like this punchline of the analogy with Fourier analysis, as imprecise as it might be, because it suggests a nice way to approach complex entities by finding out the parts that can generate them, or simple but large things you might discover them inside. 
The Skenderis talk about black holes was interesting, in that it was a recorded version of a talk given somewhere else – I haven't seen this done before, but apparently the String Theory group does it pretty regularly. This has some obvious advantages – they can get a wider range of talks by many different speakers. There was some technical problem – I suppose due to the way the video was encoded – that meant the slides were sometimes unreadably blurry, but that's still better than not getting the speaker at all. I don't have the background in string theory to be able to really get at the meat of the talk, though it did involve the AdS/CFT correspondence. However, I can at least say a few concrete things about the motivation. First, the "fuzzball" proposal is a more-or-less specific proposal to deal with the problem of black hole entropy. The problem, basically, is that it's known that the thermodynamic entropy associated to a black hole – which can be computed in completely macroscopic terms – is proportional to the area of its horizon. On the other hand, in essentially every other setting, entropy has an interpretation in terms of counting microstates, so that the entropy of a "macrostate" is proportional to the logarithm of the number of microstates. (Or, in a thermal state, which is a statistical distribution, this is weighted by the probability of the microstate). So, for example, with a gas in a box, there are many microstates that correspond to a relatively even distribution of position and momentum among the molecules, and relatively few in which the molecules are all in one small corner of the box. The reason this is a problem is that, classically, the state of a black hole is characterized by very few numbers: the mass, angular momentum, and electric charge. There doesn't seem to be room for "microstates" in a classical black hole. So the overall point of the proposal is to describe what microstates would be.
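The two notions of entropy being compared here can be written out; both formulas are standard:

```latex
% Bekenstein-Hawking entropy of a black hole with horizon area A,
% versus Boltzmann's statistical entropy counting W microstates:
S_{\mathrm{BH}} \;=\; \frac{k_{B}\, c^{3} A}{4\, G \hbar},
\qquad\qquad
S \;=\; k_{B} \ln W
```

The puzzle is precisely that the left-hand formula is purely macroscopic, while everywhere else entropy arises from the right-hand kind of counting.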
The specific way this is done with "fuzzballs" is somewhat mysterious to me, but the overall idea makes sense. One interesting consequence of this approach is that event horizons would be strictly a property of thermal states, in whatever underlying theory one takes to be the quantum theory behind classical gravity (here assumed to be some specific form of string theory – the example he was using is the D1-D5 black hole, which I know nothing about). That's because a pure state would have a single microstate, hence have zero entropy, hence no horizon. Now, what little I do understand about the particular model relies on the fact that near a (classical) event horizon, the background metric has a component that looks like anti-deSitter space – a vacuum solution to the Einstein equations with a negative cosmological constant. (This part isn't so hard to see – AdS space has that "saddle-shaped" appearance of a hyperbolic surface, and so does the area around a horizon, even when you draw it like this.) But then, there is the AdS/CFT correspondence that says states for a gravitational field in (asymptotically) anti-deSitter space correspond to states for a conformal field theory (CFT) at the boundary. So the way to get microstates, in the "fuzzball" proposal, is to look at this CFT, and find geometries that correspond to them. Some would be well-approximated by the classical, horizon-ridden geometry, but others would be different. The fact that this CFT is defined at the boundary explains why entropy would be proportional to area, not volume, of the black hole – this being a manifestation of the so-called "holographic principle". The "fuzziness" that one throws away by reducing a thermal state that combines these many geometries to the classical "no-hair" black hole determined by just three numbers is exactly the information described by the entropy.
I couldn't follow some parts of it, not having much string-theory background – I don't feel qualified to judge whether string theory makes sense as physics, but it isn't an approach I've studied much. Still, this talk did reinforce my feeling that the AdS/CFT correspondence, at the very least, is something well worth learning about and important in its own right. Coming soon: descriptions of the TQFT club seminars which are starting up at IST. Starting out at IST; Quantales Posted by Jeffrey Morton under c*-algebras, category theory, moduli spaces, noncommutative geometry, stacks, update As I mentioned in my previous post, I've recently started a new postdoc at IST – the Instituto Superior Tecnico in Lisbon, Portugal. Making the move from North America to Europe with my family was a lot of work – both before and after the move – involving lots of paperwork and shifting of heavy objects. But Lisbon is a good city, with lots of interesting things to do, and the maths department at IST is very large, with about a hundred faculty. Among those are quite a few people doing things that interest me. The group that I am actually part of is coordinated by Roger Picken, and has a focus on things related to Topological Quantum Field Theory. There are a couple of postdocs and some graduate students here associated in some degree with the group, as well as, elsewhere than IST, Aleksandar Mikovic and Joao Faria Martins. In the coming months there should be some activity going on in this group which I will get to talk about here, including a workshop which is still in development, so I'll hold off on that until there's an official announcement. Quantales I've also had a chance to talk a bit with Pedro Resende, mostly on the subject of quantales. This is something that I got interested in while at UWO, where there is a large contingent of people interested in category theory (mostly from the point of view of homotopy theory) as well as a good group in noncommutative geometry. 
Quantales were originally introduced by Chris Mulvey – I've been looking recently at a few papers in which he gives a nice account of the subject – here, here, and here. The idea emerged, in part, as a way of combining two different approaches to generalising the idea of a space. One is the approach from topos theory, and more specifically, the generalisation of topological spaces to locales. This direction also has connections to logic – a topos is a good setting for intuitionistic, but nevertheless classical, logic, whereas quantales give an approach to quantum logics in a similar spirit. The other direction in which they generalize space is the $C^*$-algebra approach used in noncommutative geometry. One motivation of quantales is to say that they simultaneously incorporate the generalizations made in both of these directions – so that both locales and $C^*$-algebras will give examples. In particular, a quantale is a kind of lattice, intended to have the same sort of relation to a noncommutative space as a locale has to an ordinary topological space. So to begin, I'll look at locales. A locale is a lattice which formally resembles the lattice of open sets for such a space. A lattice is a partial order with operations $\wedge$ ("meet") and $\vee$ ("join"). These operations take the role of the intersection and union of open sets. So to say it formally resembles a lattice of open sets means that the lattice is closed under arbitrary joins and finite meets, and satisfies the infinite distributive law: $a \wedge \bigvee_i b_i = \bigvee_i (a \wedge b_i)$. Lattices like this can be called either "frames" or "locales" – the only difference between these two categories is the direction of the arrows. A map of lattices is a function that preserves all the structure – order, meet, and join. This is a frame morphism, but it's also a morphism of locales in the opposite direction. That is, $\mathbf{Loc} = \mathbf{Frm}^{op}$. Another name for this sort of object is a "Heyting algebra". 
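Since a frame is just a lattice satisfying these axioms, they can be checked mechanically on a small example. Here's a quick sketch – my own illustration, using a made-up three-point space – verifying the frame laws for an open-set lattice:

```python
from itertools import chain, combinations

# Open sets of a small topological space on points {0, 1, 2}, with opens
# generated by {0} and {0, 1} -- a chain, so meets and joins are easy.
opens = [frozenset(), frozenset({0}), frozenset({0, 1}), frozenset({0, 1, 2})]

def meet(a, b):
    return a & b                                   # finite meet = intersection

def join(family):
    return frozenset(chain.from_iterable(family))  # arbitrary join = union

# The lattice is closed under finite meets:
assert all(meet(a, b) in opens for a in opens for b in opens)

# The frame distributive law:  a ∧ (∨_i b_i)  =  ∨_i (a ∧ b_i),
# checked over every subfamily of opens (including the empty one):
for a in opens:
    for r in range(len(opens) + 1):
        for bs in combinations(opens, r):
            assert meet(a, join(bs)) == join(meet(a, b) for b in bs)
print("frame laws hold")
```

The same brute-force check fails for the lattice of all subspaces of a vector space, which is part of the story below.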
One of the great things about topos theory (of which this is a tiny starting point) is that it unifies topology and logic. So, the "internal logic" of a topos has a Heyting algebra (i.e. a locale) of truth values, where the meet and join take the place of logical operators "and" and "or". The usual two-valued logic is the initial object in the category of frames, so while it is special, it isn't unique. One vital fact here is that any topological space (via the lattice of open sets) produces a locale, and the locale is enough to identify the space – so the functor $\mathcal{O} : \mathbf{Top} \to \mathbf{Loc}$ is an embedding. (For convenience, I'm glossing over the fact that the spaces have to be "sober" – for example, Hausdorff.) In terms of logic, we could imagine that the space is a "state space", and the truth values in the logic identify for which states a given proposition is true. There's nothing particularly exotic about this: "it is raining" is a statement whose truth is local, in that it depends on where and when you happen to look. To see locales as a generalisation of spaces, it helps to note that the embedding above is full – if $X$ and $Y$ are locales that come from topological spaces, there are no extra morphisms in $\mathbf{Loc}$ that don't come from continuous maps in $\mathbf{Top}$. So the category of locales makes the category of topological spaces bigger only by adding more objects – not inventing new morphisms. The analogous noncommutative statement turns out not to be true for quantales, which is a little red-flag warning which Pedro Resende pointed out to me. What would this statement be? Well, the noncommutative analogue of the idea of a topological space comes from another embedding of categories. To start with, there is an equivalence: the category of locally compact, Hausdorff topological spaces is (up to equivalence) the opposite of the category of commutative $C^*$-algebras. 
So one simply takes the larger category of all $C^*$-algebras (or rather, its opposite) as the category of "noncommutative spaces", which includes the commutative ones – the original locally compact Hausdorff spaces. The correspondence between an algebra and a space is given by taking the algebra of functions on the space. So what is a quantale? It's a lattice which is formally similar to the lattice of closed subspaces in some $C^*$-algebra. Special elements – "right", "left," or "two-sided" elements – then resemble those subspaces that happen to be ideals. Some intuition comes from thinking about where the two generalizations coincide – a (locally compact) topological space. There is a lattice of open sets, of course. In the algebra of continuous functions, each open set $U$ determines an ideal – namely, the subspace of functions which vanish outside $U$. When such an ideal is norm-closed, it will correspond to an open set (it's easy to see that continuous functions which can be approximated by those vanishing outside an open set will also do so – if the set is not open, this isn't the case). So the definition of a quantale looks much like that for a locale, except that the meet operation is replaced by an associative product, usually written $\&$. Note that unlike the meet, this isn't assumed to be commutative – this is the point where the generalization happens. So in particular, any locale gives a quantale with $a \& b = a \wedge b$. So does any $C^*$-algebra, in the form of its lattice of ideals. But there are others which don't show up in either of these two ways, so one might hope to say this is a nice all-encompassing generalisation of the idea of space. Now, as I said, there was a bit of a warning that comes attached to this hope. This is that, although there is an embedding of the category of $C^*$-algebras into the category of quantales, it isn't full. That is, not only does one get new objects, one gets new morphisms between old objects. 
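As a sanity check on these axioms, here's a small computational sketch – my own example, not taken from Mulvey's papers – of a genuinely noncommutative quantale: binary relations on a two-element set, with relational composition as the product and union as the join.

```python
from itertools import product, combinations

pairs = list(product(range(2), repeat=2))
# All 16 binary relations on the set {0, 1}:
relations = [frozenset(c) for r in range(5) for c in combinations(pairs, r)]

def compose(r, s):
    """The quantale product r & s: relational composition."""
    return frozenset((a, c) for (a, b) in r for (b2, c) in s if b == b2)

# The product is associative and distributes over joins (= unions)...
for r in relations:
    for s in relations:
        for t in relations:
            assert compose(compose(r, s), t) == compose(r, compose(s, t))
            assert compose(r, s | t) == compose(r, s) | compose(r, t)

# ...but, unlike the meet of a locale, it is NOT commutative:
r, s = frozenset({(0, 1)}), frozenset({(1, 0)})
assert compose(r, s) != compose(s, r)
print("associative, join-preserving, noncommutative")
```

A locale sits inside this picture as the special case where the product happens to equal the meet.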
So, given algebras $A$ and $B$, which we think of as noncommutative spaces, and a map of algebras between them, we get a morphism between the associated quantales – lattice maps that preserve the operations. However, unlike what happened with locales, there are quantale morphisms that don't correspond to algebra maps. Even worse, this is still true even in the case where the algebras are commutative, and just come from locally compact Hausdorff spaces: the associated quantales still may have extra morphisms that don't come from continuous functions. There seem to be three possible attitudes to this situation. First, maybe this is just the wrong approach to generalising spaces altogether, and the hints in its favour are simply misleading. Second, maybe quantales are absolutely the right generalisation of space, and these new morphisms are telling us something profound and interesting. The third attitude, which Pedro mentioned when pointing out this problem to me, seems most likely, and goes as follows. There is something special that happens with $C^*$-algebras, where the analytic structure of the norm makes the algebras more rigid than one might expect. In algebraic geometry, one can take a space (algebraic variety or scheme) and consider its algebra of global functions. To make sure that an algebra map corresponds to a map of schemes, though, one really needs to make sure that it actually respects the whole structure sheaf for the space – which describes local functions. When passing from a topological space to a $C^*$-algebra, there is a norm structure that comes into play, which is rigid enough that all algebra morphisms will automatically do this – as I said above, the structure of ideals of the algebra tells you all about the open sets. So the third option is to say that a quantale in itself doesn't quite have enough information, and one needs some extra data, something like the structure sheaf for a scheme. 
This would then pick out which are the "good" morphisms between two quantales – namely, the ones that preserve this extra data. What, precisely, this data ought to be isn't so clear, though, at least to me. So there are some complications to treating a quantale as a space. One further point, which may or may not go anywhere, is that this type of lattice doesn't get along with quantum logic in quite the same way that locales get along with (intuitionistic) classical logic (though it does have connections to linear logic). In particular, a quantale is a distributive lattice (though taking the product $\&$, rather than $\wedge$, as the thing which distributes over $\vee$), whereas the "propositional lattice" in quantum logic need not be distributive. One can understand the failure of distributivity in terms of the uncertainty principle. Take a statement such as "the particle has momentum $p$ and is either on the left or right of this barrier". Since position and momentum are conjugate variables, and the momentum has been determined completely, the position is completely uncertain, so we can't truthfully say either "the particle has momentum $p$ and is on the left" or "the particle has momentum $p$ and is on the right". Thus, the combined statement that either one or the other holds isn't true, even though that's exactly what the distributive law says: "P and (Q or S) = (P and Q) or (P and S)". The lack of distributivity shows up in a standard example of a quantum logic. This is one where the (truth values of) propositions denote subspaces of a vector space $V$. "And" (the meet operation $\wedge$) denotes the intersection of subspaces, while "or" (the join operation $\vee$) is the direct sum. Consider two distinct lines through the origin of $\mathbb{R}^2$ – any other line in the plane they span has trivial intersection with either one, but lies entirely in the direct sum. So the lattice of subspaces is non-distributive. 
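The two-lines example is easy to check numerically. Here's a sketch of my own, computing subspace dimensions via matrix rank, that shows the distributive law failing for lines in the plane:

```python
import numpy as np

def dim(vectors):
    """Dimension of the span of a list of vectors."""
    return int(np.linalg.matrix_rank(np.array(vectors)))

# Three distinct lines through the origin of R^2:
P, Q, S = [(1.0, 0.0)], [(0.0, 1.0)], [(1.0, 1.0)]

def join(a, b):              # "or": the span of both subspaces together
    return a + b

def meet_dim(a, b):          # "and": dim(A ∩ B) = dim A + dim B - dim(A + B)
    return dim(a) + dim(b) - dim(join(a, b))

# P ∧ (Q ∨ S): Q ∨ S is the whole plane, so this meet is all of P.
assert meet_dim(P, join(Q, S)) == 1
# (P ∧ Q) ∨ (P ∧ S): both intersections are the zero subspace.
assert meet_dim(P, Q) == 0 and meet_dim(P, S) == 0
print("distributivity fails: dimension 1 on the left, 0 on the right")
```

So the left side of the distributive law is a whole line, while the right side is just the origin.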
What the lattice for a quantum logic should be is orthocomplemented, which happens when $V$ has an inner product – so for any subspace $W$, there is an orthogonal complement $W^\perp$. Quantum logics are not very good from a logician's point of view, though – lacking distributivity, they also lack a sensible notion of implication, and hence there's no good idea of a proof system. Non-distributive lattices are fine (I just gave an example), and very much in keeping with the quantum-theoretic strategy of replacing configuration spaces with Hilbert spaces, and subsets with subspaces… but viewing them as logics is troublesome, so maybe that's the source of the problem. Now, in a quantale, there may be a "meet" operation, separate from the product, which is non-distributive, but if the product is taken to be the analog of "and", then the corresponding logic is something different. In fact, the natural form of logic related to quantales is linear logic. This is also considered relevant to quantum mechanics and quantum computation, and as a logic is much more tractable. The internal semantics of certain monoidal categories – namely, star-autonomous ones (which have a nice notion of dual) – can be described in terms of linear logic (a fairly extensive explanation is found in this paper by Paul-André Melliès). Part of the point in the connection seems to be resource-limitedness: in linear logic, one can only use a "resource" (which, in standard logic, might be a truth value, but in computation could be the state of some memory register) a limited number of times – often just once. This seems to be related to the noncommutativity of the product $\&$ in a quantale. The way Pedro Resende described this to me is in terms of observations of a system. In the ordinary (commutative) logic of a locale, you can form statements such as "A is true, AND B is true, AND C is true" – whose truth value is locally defined. 
In a quantale, the product operation allows you to say something like "I observed A, AND THEN observed B, AND THEN observed C". Even leaving aside quantum physics, it's not hard to imagine that in a system which you observe by interacting with it, statements like this will be order-dependent. I still don't quite see exactly how these two frameworks are related, though. On the other hand, the kind of orthocomplemented lattice that is formed by the subspaces of a Hilbert space CAN be recovered in (at least some) quantale settings. Pedro gave me a nice example: take a Hilbert space $H$, and the collection of all projection operators on it, $P(H)$. This is one of those orthocomplemented lattices again, since projections and subspaces are closely related. There's a quantale that can be formed out of its endomorphisms, $\mathrm{End}(P(H))$, where the product is composition. In any quantale, one can talk about the "right" elements (and the "left" elements, and "two-sided" elements), by analogy with right/left/two-sided ideals – these are elements which, if you take the product with the maximal element $\top$, give a result less than or equal to what you started with: $a \& \top \leq a$ means $a$ is a right element. The right elements of the quantale I just mentioned happen to form a lattice which is just isomorphic to $P(H)$. So in this case, the quantale, with its connections to linear logic, also has a sublattice which can be described in terms of quantum logic. This is a more complicated situation than the relation between locales and intuitionistic logic, but maybe this is the best sort of connection one can expect here. In short, both in terms of logic and spaces, hoping quantales will be "just" a noncommutative variation on locales seems to set one up to be disappointed as things turn out to be more complex. On the other hand, this complexity may be revealing something interesting. Coming soon: summaries of some talks I've attended here recently, including Ivan Smith on 3-manifolds, symplectic geometry, and Floer cohomology. 
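The "right element" condition is easy to play with in a small quantale. Here's a toy sketch of my own – using the quantale of binary relations rather than $\mathrm{End}(P(H))$ – computing the elements with $a \& \top \leq a$:

```python
from itertools import product, combinations

X = range(2)
pairs = list(product(X, X))
# All 16 binary relations on {0, 1}; the order ≤ is inclusion.
relations = [frozenset(c) for r in range(5) for c in combinations(pairs, r)]
top = frozenset(pairs)   # the maximal element ⊤ of the quantale

def compose(r, s):
    """The quantale product: relational composition."""
    return frozenset((a, c) for (a, b) in r for (b2, c) in s if b == b2)

# Right elements: those r with r ∘ ⊤ ≤ r.
right = [r for r in relations if compose(r, top) <= r]

# They turn out to be exactly the relations of the form A × X for a
# subset A ⊆ X -- the analogue of right ideals:
assert all(r == frozenset(product({a for (a, _) in r}, X)) for r in right)
print(len(right))  # 4: one for each subset A of X
```

So even in this tiny example, the right elements form a much smaller lattice (here, the powerset of $X$) sitting inside the quantale.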
Recent Talk: Emre Coskun on Gerbes and Stacks Posted by Jeffrey Morton under category theory, geometry, gerbes, moduli spaces, stacks, talks I recently went to California to visit Derek Wise at UC Davis – we were talking about expanding the talk he gave at Perimeter Institute into a more developed paper about ETQFT from Lie groups. Between that, the end of the Winter semester, and the beginning of the "Summer" session (in which I'm teaching linear algebra), it's taken me a while to write up Emre Coskun's two-part talk in our Stacks And Groupoids seminar. Emre was explaining the theory of gerbes in terms of stacks. One way that I have often heard gerbes explained is in terms of a categorification of vector bundles – thus, the theory of "bundle gerbes", as described by Murray in this paper here. The essential point of that point of view is that bundles can be put together by taking trivial bundles on little neighbourhoods of a space, and "gluing" them together on two-fold overlaps of those neighbourhoods – the gluing functions then have to satisfy a cocycle condition so that they agree on triple overlaps. A gerbe, on the other hand, defines line bundles (not functions) on double overlaps, and the gluing functions now live on triple overlaps. The idea is that this begins a hierarchy of concepts, each of which categorifies the previous (after "gerbe", one just starts using terms like "2-gerbe", "3-gerbe", and so on). The levels of this hierarchy are supposed to be related to the various (nonabelian) cohomology groups of a space $X$. I've mostly seen this point of view related to work by Jean-Luc Brylinski. It is a very differential-geometric sort of construction. Emre, on the other hand, was describing another side to the theory of gerbes, which comes out of algebraic geometry, and is closely related to stacks. 
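The gluing-and-cocycle pattern for ordinary bundles can be sketched in a few lines – my own toy illustration, with made-up constant $U(1)$-valued transition functions on a three-set cover. A gerbe moves the same pattern up one level, with the coherence data living on triple overlaps instead:

```python
import cmath

def u1(theta):
    """A point on the unit circle U(1)."""
    return cmath.exp(1j * theta)

# Transition data for a line bundle on a cover {U1, U2, U3}: constant
# gluing functions g[i, j] on double overlaps (made-up values).
g = {(1, 2): u1(0.3), (2, 3): u1(1.1)}
g[(1, 3)] = g[(1, 2)] * g[(2, 3)]        # forced by the cocycle condition
for (i, j), val in list(g.items()):      # and g_ji = g_ij^(-1)
    g[(j, i)] = 1 / val

# The cocycle condition g_12 g_23 = g_13 on the triple overlap:
assert abs(g[(1, 2)] * g[(2, 3)] - g[(1, 3)]) < 1e-12

# Re-trivializing by f_i in U(1) replaces g_ij by f_i g_ij / f_j, which
# still satisfies the condition -- the bundle's class is unchanged:
f = {1: u1(0.7), 2: u1(-0.2), 3: u1(2.0)}
h = {(i, j): f[i] * val / f[j] for (i, j), val in g.items()}
assert abs(h[(1, 2)] * h[(2, 3)] - h[(1, 3)]) < 1e-12
print("cocycle condition holds")
```

For a gerbe, the $g_{ij}$ become line bundles on the double overlaps, and a numerical cocycle like this one appears only one level up, on triple overlaps.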
There's a nice survey by Moerdijk which gives an account of gerbes from a similar point of view, though for later material, Emre said he drew on this book by Laumon and Moret-Bailly (which I can only find in the original French). As one might expect, a stack-theoretic view of gerbes thinks of them as generalizations of sheaves, rather than bundles. (The fact that there is a sheaf of sections of a bundle also generalizes to gerbes, so bundle-gerbes are a special case of this point of view.) So the setup is that we have some space $X$ – Emre was talking about the context of algebraic geometry, so the relevant idea of space here is scheme (which, if you're interested, is assumed to have the etale topology – i.e. the one where covers use etale maps, the analog of local isomorphisms). In the second talk, he generalized this to $S$-spaces, for some chosen scheme $S$. That is, the category of "spaces" is based on the slice category of schemes equipped with maps into $S$, with the obvious morphisms. This is a site, since there's a notion of a cover and so forth; an $S$-space is a sheaf (of sets) on this site. So in particular, a scheme $X$ over $S$ determines an $S$-space, by $Y \mapsto \mathrm{Hom}_S(Y, X)$. (That is, the usual way a space determines a representable sheaf.) There are also differential-geometric versions of gerbes. So, whatever the right notion of space, a stack $\mathcal{G}$ over a space $X$ (in the sense of a sheaf of groupoids over $X$, which we're assuming has the etale topology) is a gerbe if a couple of nice conditions apply: (1) there's a cover $\{U_i\}$ of $X$, such that none of the groupoids $\mathcal{G}(U_i)$ is empty; (2) over any open $U$, all the objects of $\mathcal{G}(U)$ are isomorphic (i.e. $\mathcal{G}(U)$ is connected as a category). Notice that there doesn't have to be a global object – that is, $\mathcal{G}(X)$ may be empty – only some cover such that local objects exist – but where they do, they're all "the same". These conditions can also be summarized in terms of the fibred category $\mathcal{G} \to X$. There are two maps from $\mathcal{G}$ – the projection to $X$ and the diagonal into $\mathcal{G} \times_X \mathcal{G}$. 
The conditions respectively say these two maps are, locally, epi (i.e. surjective). Emre's first talk began by giving some examples of gerbes to motivate the rest. The first one is the "gerbe of splittings" of an Azumaya algebra. "An" Azumaya algebra is actually a sheaf of algebras $\mathcal{A}$ over some scheme $X$. The defining property is that locally it looks like the algebra of endomorphisms of a vector bundle. That is, on any neighborhood $U$, we have $\mathcal{A}|_U \cong \mathrm{End}(E)$ for some (algebraic) vector bundle $E$ over $U$. A special case is when $X$ is just a point, in which case an Azumaya algebra is the same thing as a matrix algebra $M_n(k)$. So Azumaya algebras are not too complicated to describe. The gerbe of splittings, for an Azumaya algebra $\mathcal{A}$, is also not too complicated: a splitting is a way to represent an algebra as endomorphisms of a vector bundle – which in this case may only be possible locally. Over a given $U$, its objects are pairs $(E, \phi)$, where $E$ is a vector bundle over $U$, and $\phi : \mathrm{End}(E) \to \mathcal{A}|_U$ is an isomorphism. The morphisms are bundle isomorphisms that commute with the $\phi$. So, roughly: if $\mathcal{A}$ is locally isomorphic to endomorphisms of vector bundles, the gerbe of splittings is the stack of all the vector bundles and isomorphisms which make this work. It's easy to see this is a gerbe, since by definition, such bundles must exist locally, and necessarily they're all isomorphic. (This example – a gerbe consisting, locally, of a category of all vector bundles of a certain form – starts to suggest why one might want to think of gerbes as categorifying bundles.) Another easily constructed gerbe in a similar spirit is found from a complex line bundle $L$ over $X$ (and a fixed integer $n$). Then the stack of $n$-th roots of $L$ is a gerbe over $X$, where the groupoid over a neighborhood $U$ has, as objects, pairs $(M, \mu)$ where $\mu : M^{\otimes n} \to L|_U$ is an isomorphism of line bundles. That is, the objects locally look like $n$-th roots of $L$. The gerbe is trivial (has a global object) if $L$ has a global $n$-th root. 
Cohomology One says that a gerbe $\mathcal{G}$ is banded by a sheaf of groups $G$ on $X$ (or $G$ is the band of the gerbe, or $\mathcal{G}$ is a $G$-gerbe), if there are isomorphisms between the group $G(U)$ and the automorphism group $\mathrm{Aut}(x)$ for each object $x$ over $U$ (the property of a gerbe means these are all isomorphic). (These isomorphisms should also commute with the group homomorphisms induced by maps of open sets.) So the band is, so to speak, the "local symmetry group over $U$" of the gerbe in a natural way. In the case of the gerbe of splittings of $\mathcal{A}$ above, the band is $\mathbb{G}_m$, where over any given neighborhood, $\mathbb{G}_m(U)$ is the group of units over the base field: that is, the group consists of all the invertible sections in the structure sheaf of $X$. These get turned into bundle-automorphisms by taking a function $f$ to the automorphism that acts through multiplication by $f$. The gerbe associated to a line bundle is banded by the group $\mu_n$ of $n$-th roots of unity in sections of the structure sheaf. From here, we can see how gerbes relate to cohomology. In particular, to a $G$-gerbe $\mathcal{G}$, we can associate a cohomology class in $H^2(X, G)$. This class can be thought of as "the obstruction to the existence of a global object". So, in the case of an Azumaya algebra, it's the obstruction to being split (i.e. globally). The way this works is, given a covering $\{U_i\}$ with an object $x_i$ in each $\mathcal{G}(U_i)$, we pull back these objects along the morphisms corresponding to inclusions of sub-neighbourhoods, down to a triple overlap $U_{ijk}$. Then there are isomorphisms comparing the different pullbacks, $\phi_{ij} : x_j|_{U_{ijk}} \to x_i|_{U_{ijk}}$, and so on. (The lowered indices denote which of the $U_i$ we're pulling back from.) Then we get a 2-cocycle in $G$ (an isomorphism corresponding to what, for sheaves of sets, would be an identity). The existence of this cocycle means that we're getting an element in $H^2(X, G)$, which we denote $[\mathcal{G}]$. If a global object exists, then all our local objects are restrictions of a global one, the cocycle will always turn out to be the identity, so this class is trivial. 
A non-trivial class implies an obstruction to gluing the local objects into global ones. In the second talk, Emre gave some more examples of gerbes which it makes sense to think of as moduli spaces, including one which any gerbe resembles locally. The first is the moduli space of all vector bundles over some (smooth, projective) curve $C$. (Actually, one looks at those of some particular degree $d$ and rank $r$, and requires a condition called stability.) Actually, as discussed earlier in the seminar back in Aji's talk, the right way to see this is that there is a "fine" moduli space – really a stack and not necessarily a space (in whichever context) – called $\mathcal{M}_{r,d}$, and also a "coarse" moduli space called $M_{r,d}$. Roughly, the actual space has points which are the isomorphism classes of vector bundles, while the stack remembers the whole groupoid of bundles and bundle-isomorphisms. So there's a map $\mathcal{M}_{r,d} \to M_{r,d}$, which squashes a bundle to its isomorphism class, making the fine moduli space into a category fibred in groupoids – more than that, it's a stack – and more than that, it's a gerbe. That is, there's always a cover of $M_{r,d}$ such that there are some bundles locally, and (stable) bundles of a given rank and degree are always isomorphic. In fact, this is a $\mathbb{G}_m$-gerbe, as above. The next example is the gerbe of $G$-torsors, for a group $G$ (that is, $G$-sets which are isomorphic as $G$-sets to $G$ itself – the intuition is that they're just like the group, but without a specified identity). The category $BG$ consists of $G$-torsors and their isomorphisms. This is a gerbe over the point. More interesting, when we're in the context of $S$-spaces (and $S$ has a trivial action of $G$ on it), it becomes a $G$-gerbe over $S$. Part of the point here is that any trivial gerbe (i.e. one with a global section) is just such a classifying space for some group – in particular, for the group of isomorphisms from a particular object to itself – crossed with the base. 
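The torsor idea can be made concrete in a few lines – my own illustration, for $G = \mathbb{Z}/3$ acting on a three-element set. The action is free and transitive, so choosing any basepoint identifies the torsor with the group, but no choice is preferred:

```python
# A torsor for G = Z/3: a set T with a free, transitive G-action but
# no distinguished identity element.
G = range(3)
T = ['a', 'b', 'c']

def act(g, t):
    """The (made-up) action of g in Z/3 on the torsor element t."""
    return T[(T.index(t) + g) % 3]

# Free and transitive: for each pair (t, u) there's exactly one g with g·t = u.
for t in T:
    for u in T:
        assert sum(1 for g in G if act(g, t) == u) == 1

# Choosing ANY basepoint e gives a G-set isomorphism G -> T, g |-> g·e:
for e in T:
    iso = {g: act(g, e) for g in G}
    assert all(iso[(g + h) % 3] == act(g, iso[h]) for g in G for h in G)
print("every basepoint identifies T with G")
```

This is exactly the "just like the group, but without a specified identity" intuition: all three identifications with $G$ are equally good.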
Since any gerbe has sections locally (that is, objects in $\mathcal{G}(U)$ for the sets $U$ of some cover), every gerbe locally looks like one of these classifying-space gerbes. This is the analog of the fact that any bundle locally looks like a product. Conference: Connections in Geometry and Physics Posted by Jeffrey Morton under conferences, gauge theory, geometry, moduli spaces, physics, talks As promised in the previous post, here is a little writeup of the second conference I was at recently… Connections in Geometry and Physics The conference at PI was an interestingly varied cross-section of talks, with a good many of them about geometry which, to be honest, is a little over my head. Ostensibly about "connections", the talks actually ranged quite widely, which was interesting, and reminded me I have a lot of geometry to catch up on. A lot of talks had to do with structures at various places along the hierarchy: (1) symplectic manifolds, (2) Kähler manifolds, and (3) Calabi-Yau manifolds. These last are interesting to string theorists and others, in part because they satisfy a form of Einstein's equations, while also carrying a bunch of extra structure. Now, at least I know what all the above things are: Symplectic manifolds have the "symplectic form" $\omega$, a non-degenerate closed 2-form (a canonical example being $\omega = \sum_i dq_i \wedge dp_i$ on the cotangent bundle $T^*M$, which happens to be the phase space for a particle moving in $M$ – symplectic forms often show up on phase spaces). A Kähler manifold is symplectic, but also has a complex structure (i.e. a way to multiply tangent vectors by $i$), which preserves the symplectic form, and a metric, which gets along with both of the above. If the metric moreover satisfies the vacuum Einstein equations – that is, it's Ricci-flat – then $M$ is a Calabi-Yau manifold (this really amounts to the connection to "connections", since the condition can be phrased in terms of the Levi-Civita connection). 
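These definitions are concrete enough to check in coordinates. Here's a small numerical sketch of my own, for the standard symplectic form on $\mathbb{R}^4 = T^*\mathbb{R}^2$:

```python
import numpy as np

# The standard symplectic form on R^4 = T*R^2, coordinates (q1, q2, p1, p2):
# omega(u, v) = u^T W v  with  W = [[0, I], [-I, 0]].
I2 = np.eye(2)
Z2 = np.zeros((2, 2))
W = np.block([[Z2, I2], [-I2, Z2]])

assert np.allclose(W.T, -W)            # a 2-form: antisymmetric
assert abs(np.linalg.det(W)) > 0.5     # non-degenerate (det = 1 here)

# A compatible complex structure ("multiplication by i") squares to -1;
# in these coordinates the matrix W itself does the job:
assert np.allclose(W @ W, -np.eye(4))

# The q-plane is Lagrangian: omega vanishes on it, at half the dimension.
L = np.array([[1.0, 0, 0, 0], [0, 1.0, 0, 0]]).T   # basis of the q-plane
assert np.allclose(L.T @ W @ L, 0)
print("Lagrangian subspace of dimension", L.shape[1], "in dimension 4")
```

The last check is the linear-algebra shadow of the Lagrangian-submanifold condition discussed below: $\omega$ restricts to zero, and the dimension is $n$ out of $2n$.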
Anyway, this sets up the kind of geometry a lot of people were talking about, and while I didn't exactly have the background to follow everything, I got a sense of what kinds of questions people are interested in, which was good. A lot of questions have to do with Lagrangian submanifolds of any of the above (from symplectic through Calabi-Yau). These are submanifolds where the symplectic form gives zero when applied to any pair of tangent vectors, and which have the highest possible dimension consistent with this property (namely $n$, if the original thing is $2n$-dimensional). Another theme which came up several times – for example, in the talk by Denis Auroux – has to do with "mirror symmetry" for Kähler manifolds (and Calabi-Yaus), which has to do with finding a "mirror" for the manifold $M$, where the complex geometry on the mirror corresponds to the symplectic geometry on $M$, and vice versa. There were some talks in the direction of physics. One of the most obviously physical was Niky Kamran's, talking about a project he's worked on with F. Finster, J. Smoller, and S-T. Yau, about long-time dynamics of particles satisfying the Dirac equation, living on a background geometry described by the Kerr metric – which describes a rotating black hole. Since I worked with Niky on a related project for my M.Sc. (my thesis was basically a summary putting together a bunch of results by these same four people), I followed this talk better than many of the others. Working on this project, I got a strong sense of how important symmetry is in studying a lot of real-world problems. One of the essential facts about the Kerr metric is that it's very symmetric: it's stable in time, and rotationally symmetric. Actually, all the black-hole solutions to Einstein's equations are quite symmetric – there is only a small family of solutions, parametrized by mass and angular momentum (and electric charge). 
The symmetry makes differential equations written in terms of this metric much nicer – you can split things into the radial and angular parts, for example – and in particular, the wave equations Niky was talking about are integrable just because of this symmetry, so it's possible to get exact analytic results. (Other approaches to this kind of problem get results only numerically and approximately, but can deal with much more general backgrounds.) The starting point (which basically is what my thesis summarizes) is to show that there are no "bound states" for the Dirac equation. Fermions (which is what it describes) are most familiar to us in bound states: in shells orbiting the nucleus of an atom. But if the attractive force pulling on them is gravity, rather than electric charge, this situation isn't stable. The work Niky was talking about deals with what happens instead: what are the long-term dynamics of a fermion near a rotating black hole? They use spectral methods – basically, Fourier analysis – to find out. The Dirac equation is a wave equation (for a spinor field), and you can look at the different frequencies, and get an estimate of how fast they decay. (Since there aren't stable orbits, the strength of the spinor field has to decay over time.) In fact, they get a sharp estimate of the rate of decay (namely, of order $t^{-5/6}$). Basically, one should imagine that the wave is a superposition of "ripples" – some radiating outward from the event horizon, and some converging toward it. Put in terms of a particle – an electron, say, or a neutrino – this says it will either fall into the black hole, or (if it has enough energy) escape off to infinity. There were some other physics-ish talks, such as that by James Sparks, on the geometry of the "AdS/CFT" correspondence. This correspondence has to do with two kinds of quantum field theory. 
The "AdS" stands for "Anti de Sitter", which is a sort of geometric structure for a manifold which resembles a hyperboloid – actually, all the unit vectors in $\mathbb{R}^6$ where the metric has signature (4,2): that is, the metric is something like $dx_1^2 + dx_2^2 + dx_3^2 + dx_4^2 - dx_5^2 - dx_6^2$. This hyperboloid is 5-dimensional, and has a metric with one timelike dimension. Plain old "de Sitter" space is a similar thing, but using a metric with signature (5,1). It's possible to define some field theory on AdS space, called supersymmetric supergravity. This theory turns out to have exactly the same algebra of observables as a different theory, "CFT" or conformal field theory, on the (conformal) boundary of Anti de Sitter space. Sparks told us about a geometric interpretation of this. Then there was Sergei Gukov, with a talk called "Brane Quantization", based on this work with Ed Witten. He was a little reticent to actually describe how this "brane quantization" actually works, preferring to refer us to that paper, but gave us a very nice, and relatively comprehensible, overview of different approaches to quantizing a symplectic manifold $(M, \omega)$. (As I said, they tend to show up as phase spaces in classical physics. A basic problem of quantization is how to turn the algebra of functions on a symplectic manifold into an algebra of operators on a Hilbert space $H$.) In particular, he contrasted their method with geometric quantization (which needs to make some arbitrary choices, then takes $H$ to be a space of sections of some line bundle on $M$ with a connection whose curvature is $\omega$), and with deformation quantization (which needs no special choices, but only constructs an algebra of operators by algebraic deformation, and not actually $H$ itself – which some people, but not Sergei Gukov, find satisfactory). The basic idea of brane quantization seems to be that $M$ gets complexified (somehow – it might be either impossible, or non-unique), and then studying something called an A-model of the result. 
This is apparently related to, for example, Gromov-Witten theory, which I've written about here recently. Finally, I'll mention a few other talks which stood out as rather different from the rest. Veronique Godin talked about "Relative String Topology" – string topology being a way of studying space by looking at embeddings of the circle (or of paths) into it – that is, its loop space (or path space). Usually, invariants that come from path spaces only detect the homotopy type of the original spaces – in particular, they're not helpful as knot invariants. Godin talked about a clever way to detect more structure by means of a coalgebra structure on the cohomology groups of the path space. The "relative" part means one's looking at a manifold with an embedded submanifold (for example, when the submanifold is a knot), and considering only paths starting and ending on the submanifold. (This is how one can get a coalgebra structure – turning one path into two paths if it crosses through the submanifold again is a comultiplication – this extends to chains in the cohomology). Chris Brav gave a talk about how braid groups act on derived categories, which I didn't entirely follow, but subsequently he did explain to me in a pretty comprehensible way what people are trying to accomplish when they look at derived categories. At some point I'll have to think about this more carefully and maybe post about it. But roughly, it's the same sort of "nice categorical properties" I mentioned in the previous post, about smooth spaces. Looking at derived categories of sheaves on a space makes the objects seem more complicated, but it also makes them behave better with respect to taking things like limits and colimits. Benjamin Young prefaced his talk, "Combinatorics Inspired by Donaldson-Thomas Theory" by pointing out that he's a combinatorialist, not a geometer.
But Donaldson-Thomas invariants are apparently a kind of "signed count" of some geometric structures (as are a lot of invariants – the same kind of "weighted count" invariants appear in Gromov-Witten and Dijkgraaf-Witten theory, just for instance). So he described some geometry relating to "brane tilings" – basically, embedding certain kinds of graphs in a torus – and how they give rise to structures that correspond to certain kinds of Young diagrams ("not the same Young", he added, perhaps unnecessarily, but it got a chuckle anyway). So the counts can be turned into a combinatorial problem of counting those Young diagrams with the appropriate sign, which can be done using a generating function. So in any case, this conference had a whole range of talks, from several different fields. While I found myself lost in a number of talks, I was also quite fascinated by how wide a range of topics were embraced under its umbrella – "connections" indeed! So in the end this was one of those conferences which opened my eyes to a wider view of the field, which was certainly a good reason to go!
Effect of laser pulse shaping on photoacoustic dosimetry in retinal models
Robert B. Brown,1,5 Suzie Dufour,1,5 Pascal Deladurantaye,1 Nolwenn Le Bouch,1 Pascal Gallant,1 Sébastien Méthot,2,3 Patrick J. Rochette,2,3 and Ozzy Mermut1,4,*
1National Optics Institute (INO), 2740 Einstein St., Quebec City, G1P 4S4, Canada
2Laval University, Department of Ophthalmology and ORL, Quebec City, G1V 0A6, Canada
3Regenerative Medicine Research Center, CHU Quebec, Saint-Sacrement Hospital, Quebec City, G1S 4L8, Canada
4York University, Department of Physics and Astronomy, Toronto, Canada, M3J 1P3
5These authors contributed equally to this work
*Corresponding author: [email protected]
https://doi.org/10.1364/BOE.403703
Robert B. Brown, Suzie Dufour, Pascal Deladurantaye, Nolwenn Le Bouch, Pascal Gallant, Sébastien Méthot, Patrick J. Rochette, and Ozzy Mermut, "Effect of laser pulse shaping on photoacoustic dosimetry in retinal models," Biomed. Opt. Express 11, 6590-6604 (2020)
Original Manuscript: July 29, 2020
Photoacoustic sensing can be a powerful technique to obtain real-time feedback of the laser energy dose in treatments of biological tissue.
However, the microsecond-duration pulses used in such laser therapies are not optimal for photoacoustic pressure wave generation. This study examines a programmable fiber laser technique using pulse modulation in order to optimize the photoacoustic feedback signal to noise ratio (SNR) in a context where longer laser pulses are employed, such as in selective retinal therapy. We have demonstrated with a homogeneous tissue phantom that this method can yield a greater than seven-fold improvement in SNR over non-modulated square pulses of the same duration and pulse energy. This technique was further investigated for assessment of treatment outcomes in leporine retinal explants by photoacoustic mapping around the cavitation-induced frequency band. Laser therapy is used in numerous medical and surgical applications through precise ablation, disruption or thermal treatment of exposed tissues. Most treatments require consistent and controlled dosimetry at precise locations. Therapeutic techniques can benefit from the use of relatively long duration laser pulses, whose duration is limited by the thermal relaxation time of the treated tissue, in order to provide localized treatment while avoiding the damage caused by extremely high peak energies. Typically, there are high levels of inhomogeneity in the optical absorption and scattering properties both within and across tissues in the same individual, as well as between different individuals. Thus, for effective locally confined treatments, a sensitive and accurate means to assess the fluence required to deliver a specified dose to the tissue is needed. In ocular procedures for glaucoma, such as selective laser trabeculoplasty, optoacoustic approaches are promising avenues for detecting the onset of thermo-mechanically induced gas microbubbles through the pressure waves produced by the thermoelastic expansion of the absorbing medium [1].
In retinal diseases, selective retinal therapy (SRT) is an attractive approach because employment of microsecond pulses allows destruction of the retinal pigment epithelium (RPE) with minimal damage to the photoreceptors and choroid [2]. In all cases, a dosimetric feedback mechanism is highly desirable during treatments of SRT and subthreshold micropulse diode laser therapy, where there is no visual response indicator for the ophthalmologist. Adjustment of laser power and temporal duration has shown potential in the development of titration methods as a dosimetry measure for subvisible tissue effects, but such dosimetry can be challenging to perform reliably [3,4]. Although studies have demonstrated better outcomes from these new, effective treatments compared to the more destructive photocoagulation treatments routinely used [5], this barrier of obtaining accurate and personalized dosimetry feedback has precluded their implementation in common surgical practice. 1.1. Photoacoustic sensing for laser treatment dosimetry Photoacoustic (PA) sensing has been applied in an imaging context in many tissues [6]. As a response feedback mechanism, PA has the advantage that it is directly dependent on the absorbed energy dose for short laser pulses. Given an incident optical fluence, F0, the fluence F(z) at a depth z within an absorbing, low scattering medium is determined by the Beer-Lambert relation, $F(z)/F_0 = e^{-\mu_a z}$, and depends on µa, the absorption coefficient [7–10]. In the photo-thermo-mechanical treatment regime, thermal confinement occurs when the laser energy is absorbed at a rate that exceeds the heat diffusion rate in tissue. In the irradiated volume, heat accumulates because it cannot escape through heat conduction. Thermal confinement happens when the laser pulse duration (pulse width) is much shorter than the time of dissipation of the absorbed energy by thermal conduction (tissue thermal relaxation time) [11].
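As a numerical illustration of the Beer-Lambert relation above, the remaining fluence fraction at depth can be computed directly. This is a minimal sketch; the absorption coefficient is an assumed illustrative value, not a property measured in this work.

```python
import math

def fluence_fraction(mu_a, z):
    """Beer-Lambert: F(z)/F0 = exp(-mu_a * z) in a low-scattering medium."""
    return math.exp(-mu_a * z)

# Assumed illustrative absorption coefficient (0.1 per micrometre), of the
# order of a strongly pigmented absorber; z is in the same length unit:
mu_a = 0.1
frac_surface = fluence_fraction(mu_a, 0.0)    # no attenuation at the surface
frac_one_od = fluence_fraction(mu_a, 10.0)    # one optical depth: 1/e remains
```

At z = 1/µa the fluence has dropped to 1/e ≈ 37% of its incident value, the usual notion of penetration depth in this low-scattering regime.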
Under thermal confinement conditions, the temperature rise, ΔT, is dictated by the absorbed optical energy divided by the volumetric heat capacity, Cv, and the density, ρ, of the material, (1)$$\Delta T = \mu_a F_0 e^{-\mu_a z}/\rho C_v.$$ Heat accumulation also produces thermoelastic stresses due to the thermal expansion of tissue, which can be induced by application of shorter laser pulses [12]. This induces transient vapor bubble formation causing mechanical damage to the cells and tissue disruption [13]. Thermoelastic stress occurs when heat accumulates faster than the acoustic relaxation rate of the material. Under such stress confinement conditions, the rapid temperature increase in a confined volume results in the buildup of pressure (thermal pressure) as the material attempts to expand, defined by the Grüneisen parameter, $\Gamma = \beta v_s^2/C_v$ [14]. Here, β and vs are the thermal coefficient of volumetric expansion and the velocity of sound, respectively. The laser-induced pressure, P(t), follows the relation [7,15,16], (2)$$P(t) = \Gamma \mu_a F_0 e^{\mu_a c t}.$$ where c is the speed of sound, and t is the time of acoustic arrival at the ultrasonic transducer. Microsecond pulses used in SRT as well as other applications [4,17–19] produce pressure waves in the ultrasonic MHz frequency range that can be measured with an acoustic transducer. Laser-induced cavitation formation and collapse produces the desired localized photomechanical damage in SRT and generates measurable pressure waves [20]. This makes photoacoustic monitoring attractive for dosimetry. Consequently, significant research investigations have explored PA measurements for treatment dosimetry [5,20–22] and for detection of cavitation events [8,21,22]. However, the microsecond pulse durations greatly exceed the stress confinement time constant, and thus are not optimal for photoacoustic signal generation.
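The confinement-regime quantities above can be evaluated numerically. The following is a minimal sketch of Eq. (1) and the Grüneisen parameter; all material values below are assumed, order-of-magnitude illustrations (roughly water-like soft tissue), not the phantom or RPE properties measured in this work.

```python
import math

def temperature_rise(mu_a, F0, z, rho, Cv):
    """Eq. (1): Delta T = mu_a * F0 * exp(-mu_a * z) / (rho * Cv), valid
    under thermal confinement (pulse much shorter than the tissue thermal
    relaxation time)."""
    return mu_a * F0 * math.exp(-mu_a * z) / (rho * Cv)

def grueneisen(beta, v_s, Cv):
    """Grueneisen parameter Gamma = beta * v_s**2 / Cv (dimensionless)."""
    return beta * v_s ** 2 / Cv

# Assumed, water-like illustrative values:
mu_a = 1.0e4    # absorption coefficient, 1/m
F0 = 100.0      # incident fluence, J/m^2
rho = 1.0e3     # density, kg/m^3
Cv = 4.0e3      # heat capacity per unit mass, J/(kg K)
dT0 = temperature_rise(mu_a, F0, 0.0, rho, Cv)       # surface rise, K
gamma = grueneisen(beta=4.0e-4, v_s=1500.0, Cv=Cv)   # expansion coeff. 4e-4 1/K
```

With these assumed inputs the surface temperature rise is a fraction of a kelvin and Γ is of order 0.2, consistent with the weak-heating thermoelastic regime discussed above.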
Therefore, sensitivity remains an important problem to address for effective dosimetric applications. Several factors affect the PA response. For example, for a constant pulse energy, the pressure generation efficiency across the acoustic spectrum is higher for shorter pulses (1 ns) than for longer pulses (100 ns) [7]. However, the gain for these shorter pulses is diminished in thicker specimens (i.e. depth >3 cm) since they generate higher frequencies, which are more strongly attenuated in aqueous media and tissue. The photoacoustic response also varies with the properties and geometry of the absorbing region, including the absorber cross section, the concentration/aggregation of absorbers, the size of the laser spot, the absorption and physical properties of the material, and the speed of sound within them [23]. The detector and electronic amplification circuitry are also factors in the captured frequency response. As a result, the final measured PA signal is a convolution of laser parameters, sample properties and geometry, surrounding tissue acoustic transmission properties, and instrument detector and amplification responses, as portrayed in Fig. 1. The PA response also includes acoustic and electronic noise which mixes with the photoacoustic signal, ultimately limiting the sensitivity of the system. Fig. 1. Generation of photoacoustic frequency response. Optimization of photoacoustic generation by modifying temporal laser pulse shapes has not been extensively investigated. Sheinfeld et al. [24] reported on a pulse formatting method to optimize photoacoustic measurements and enhance the signal from select components in a mixture of PA generating sources. It was demonstrated that by modifying the laser pulse temporal profile to match the time-reversed impulse acoustic response of the PA system, the laser-induced photoacoustic SNR can be improved by approximately 1.5-fold versus square or Gaussian pulses.
Furthermore, they demonstrated that if the impulse response is tailored to one PA-generating component of the sample, this signal can be amplified preferentially compared to other emitters present. Gao et al. [25] examined the effects of modulated laser pulses on the PA signal and found optimal modulation frequencies for biological tissues, providing ∼2-fold improvement in response over the range of modulation frequencies tested. Because the optimal frequency was dependent on the type of tissue, they suggested that photoacoustic resonance spectroscopy could be used for identification of tissues, or tissue properties. The objective of the present work is to investigate the application of amplitude modulation of microsecond pulse profiles on retinal explants to demonstrate a proof of concept of an approach to optimize the PA feedback SNR in a context where longer laser pulses are employed. This presents an important first step relevant to future high SNR SRT dosimetry in vivo, and other similar applications. 2.1. Phantom preparation and characterization Recent studies report the suitable acoustic and optical properties of polyvinyl chloride plastisol (PVC-P) for the fabrication of PA phantoms [26–30]. Its properties (e.g. the speed of sound, heat capacity, and coefficient of thermal expansion) mimic those of biological tissues [27,30]. Phantoms were therefore prepared from a PVC-P formulation (701-89 Vynaflex plastisol compound, Gripworks). The black pigmented PVC-P mixture was chosen for its elevated coefficient of absorption, µa, mimicking the highly absorbing pigmented retinal pigment epithelium (RPE) layer. It also efficiently generates a strong, repeatable acoustic response. The plastisol solution was degassed under vacuum for 3 h, while a metallic mold was pre-heated to 155°C. The degassed solution was poured to form a 6 mm thick layer in the heated mold and cured at 155°C for 5 minutes, and then allowed to cool before demolding.
Phantoms were cut into 2 cm x 2 cm sections. The speed of sound in the PVC-P phantom was determined by subjecting the bottom surface of the phantom to 1 ns laser pulses, measuring the time required for the midpoint between compression and rarefaction acoustic waves to reach the hydrophone, and dividing the thickness of the phantom by this time. The heat capacity of the phantom was determined in a differential scanning calorimeter (DSC6200, Perkin Elmer), using a 10 mg sapphire disk as a heat capacity reference and based on the three-run method previously described [30]. Phantom spatial uniformity was evaluated by measuring the laser-induced PA response in constant conditions (laser spot size and fluence) across the phantom surface. 2.2. Retinal explants preparation Rabbit heads from young pigmented brown rabbits were obtained from a local slaughterhouse. The rabbits were not genetically selected. Retinal explants were extracted from fresh rabbit eyes, in accordance with the Association for Research in Vision and Ophthalmology (ARVO) Animal Statement and our local institution's guidelines. Following enucleation, eyes were cut around the iris region. The anterior portion of the eye, the vitreous humor and the retinal layer were removed, and the remaining posterior globe was sectioned to facilitate planar horizontal mounting. The collected explants were kept for a maximum of 2 h at room temperature in phosphate buffered saline for experimentation. 2.3. Experimental setup and data analysis The experimental setup configuration is shown in Fig. 2(a). A pulse programmable fiber laser (MOPAW, 532 nm) [31] was employed using custom-programmable temporal pulse shapes from 3 ns to 2 µs in duration. Amplitude modulated pulses were created as a single laser pulse by programming the temporal amplitude profile of our fiber laser.
Our approach is to apply high frequency amplitude modulation to these microsecond pulses, creating various modulated pulses that in some cases resemble pulse trains, as shown in the Fig. 2(a) inset. Thus, high frequency content was introduced into microsecond pulses of the type employed in SRT [5]. The laser beam was focused onto the sample (PVC-P phantom or retinal explants, shown in Fig. 2(b) and 2(c), respectively) by a long working distance objective lens. Beam displacement was achieved using a pair of motorized galvanometer mounted mirrors (Thorlabs, GVSM002). An acousto-optic modulator (AOM) (Intra action, AFM-1102A1) was used to control the laser treatment pulse energy. Pulse energy was determined with a joulemeter tap (Coherent, J-10Si-Le) and the pulse shape was monitored using a fast photodetector (Electro-Optics Technology Inc., GaAs PIN detector ET-4000). Laser spot size was measured to be 28 µm as imaged by a CCD camera, using a 1/e2 intensity threshold for a non-saturated image of the laser beam transverse profile. Similar to previous studies, acoustic waves were captured with a hydrophone (model HNC-1000, Onda) [32] coupled to an AH1100 20 dB amplifier (Onda) with a 20 MHz bandwidth. In some experiments, a HeNe laser at 633 nm (Melles Griot, 5 mW) was used as a reflection probe to confirm the presence of cavitation events [32] (data not shown). The signal was detected by an avalanche photodetector (ThorLabs, APD 120A2). A LabVIEW program was designed to scan samples with the treating beams, control the treating laser pulse energy and acquire the PA signals for each treatment zone. Two electronic cards (NI PCIe-6323 and GaGe CSE1222-4G) were used to control the galvanometric mirrors and the acousto-optic modulator, and to acquire data from the hydrophone, the power meter, and the avalanche photodiode. Fig. 2. (a) Schematic representation of the experimental setup.
(b-c) Photographs of the hydrophone positioned under the microscope objective in the proximity of (b) a phantom or (c) a retinal explant. Photodetector and amplifier data were stored on disk. Photoacoustic signal amplitude, PA power spectral density, and acoustic wave spectrogram calculation as well as image reconstruction were obtained with custom MATLAB scripts. 3.1 Phantom properties The fabricated phantom, shown in Fig. 3(a), was used as a simple, homogeneous, stable substrate to test different laser pulse shapes. Its properties were measured to validate its similarity to reported biological tissue and designed to match tissue acoustic properties. The speed of sound in the phantom was found to be approximately 1520 m/s. The measured specific heat capacity was 1.1 J/gK at 35°C, which is similar to the previously reported value of 0.85 for PVC without plasticizer [33]. Compared to tissues, this is significantly lower than most soft tissues with high water content (3-4 J/gK), though similar to cortical bone [34], and sufficient for validation purposes. The PA response uniformity over the phantom surface was measured to be 10% relative standard deviation (Fig. 3(b)). Fig. 3. (a) Photoacoustic phantom and (b) averaged PA response for a fixed energy of 0.8 mJ. The measured standard deviation shows photoacoustic signal uniformity (n = 25 and 24 PA signals, respectively). 3.2. Modulated microsecond pulses and photoacoustic SNR improvement in phantom The photoacoustic SNR was then evaluated for 0.6 µs pulses with various superimposed amplitude modulations applied to the pulse. Square wave amplitude modulation with 50% duty cycle was applied to maintain both the peak power (2X the unmodulated pulse) and the overall energy equal for each of the tested modulated pulse shapes. Laser pulse amplitude modulation frequencies were chosen at relatively regular intervals to test different regions of the hydrophone response, ranging from 2.5 − 28 MHz.
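The equal-energy constraint between modulated and unmodulated pulses can be illustrated in a short sketch. This is not the laser-control code used in the study; the sample interval and pulse parameters below are assumed for illustration.

```python
import numpy as np

def square_modulated_pulse(duration, f_mod, peak, dt=1e-9):
    """Flat pulse envelope with 50% duty-cycle square-wave amplitude
    modulation. Returns the time axis and instantaneous power profile."""
    n = int(round(duration / dt))
    period = int(round(1.0 / (f_mod * dt)))   # samples per modulation period
    idx = np.arange(n)
    gate = (idx % period) < (period // 2)     # "on" for the first half-period
    return idx * dt, peak * gate.astype(float)

# A 0.6 us envelope modulated at 5 MHz, with 2x the peak power of the
# unmodulated square pulse so that the total pulse energy is unchanged:
dt = 1e-9
t, p_mod = square_modulated_pulse(0.6e-6, 5e6, peak=2.0, dt=dt)
p_square = np.ones_like(t)                    # unmodulated, unit peak power
energy_mod = p_mod.sum() * dt
energy_square = p_square.sum() * dt           # equal to energy_mod
```

Doubling the peak while gating the pulse off half the time keeps the delivered energy identical, which is the comparison condition used in the experiment above.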
In addition to modulated 0.6 µs pulses, non-modulated 3 ns pulses were also assessed in the same experiment to provide a source of high frequency content that could provide insight into the frequency response of the system. Photoacoustic measurements were also taken with the laser blocked, to allow estimation of the frequency content of instrumental noise sources independent from the generated PA signal. The average frequency responses of the 3 ns pulses and the noise data collected are plotted in Fig. 4(a). It is observed that higher SNR is present in the 3 − 7 MHz range as well as the 27 − 29 MHz range, and low SNR is seen below 2 MHz, between 10 − 27 MHz, and above 30 MHz. Fig. 4. (a) Averaged frequency response of 3 ns duration pulses (black) and the noise data (grey). (b) Signal to noise ratio obtained with different modulated pulse shapes. Representative pulse shapes are illustrated below the x-axis, where all overall pulse envelopes are 600 ns in length, with the exception of the 3 ns pulse. Pulse energies and modulation frequency are listed in the x-axis labels. Bandpass frequency filtered data are defined in the figure legend. Each data point is the average of 8 PA measurements. The non-filtered PA amplitude signal collected from the oscilloscope demonstrated very little improvement in the SNR for any of the modulated pulses, the maximal SNR improvement over a non-modulated 600 ns square pulse being 1.7-fold for the 2.5 MHz pulse (Fig. 4(b)). However, by applying bandpass filters to the data centered on the frequency of the pulse modulation, much larger gains could be observed in the regions expected to have higher SNR based on the 3 ns pulse results. Maximal SNR was obtained for the 4.2 MHz modulated pulses, when the 3.2 − 5.2 MHz bandpass filter was applied, resulting in a 7-fold increase in SNR as compared to the non-modulated 0.6 µs square pulse of the same energy with the same filter (or a 14-fold increase over the SNR of the raw signal).
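The effect of bandpass filtering around the modulation frequency can be sketched with synthetic data. This is not the study's MATLAB analysis; the sampling rate, burst parameters, and noise level below are assumed purely for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(sig, f_lo, f_hi, fs, order=4):
    """Zero-phase Butterworth bandpass filter."""
    b, a = butter(order, [f_lo, f_hi], btype="band", fs=fs)
    return filtfilt(b, a, sig)

fs = 100e6                                    # assumed digitizer rate, 100 MS/s
t = np.arange(0, 20e-6, 1 / fs)
rng = np.random.default_rng(0)
burst = np.sin(2 * np.pi * 4.2e6 * t) * (t < 2e-6)   # 4.2 MHz "PA" burst
raw = burst + rng.normal(0.0, 1.0, t.size)           # broadband noise added
filt = bandpass(raw, 3.2e6, 5.2e6, fs)               # 3.2-5.2 MHz window

def snr(sig, in_burst):
    """Crude SNR: RMS inside the burst window over RMS outside it."""
    return sig[in_burst].std() / sig[~in_burst].std()

mask = t < 2e-6
snr_raw, snr_filt = snr(raw, mask), snr(filt, mask)
```

Because the filter passes the burst's spectral content but rejects most of the broadband noise power, `snr_filt` exceeds `snr_raw`, mirroring the gain mechanism described for the 3.2 − 5.2 MHz filter above.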
While the modulated pulses are expected to have a 2-fold increase in peak excitation power over the non-modulated square pulse due to the duty cycle (50% vs. 100%), this is not sufficient to explain the increase in SNR. In fact, a 3 ns pulse was found to have only a 2.2-fold increase in SNR compared to the 4.2 MHz modulated 600 ns pulse (both signals have optimal SNR using the 4.2 MHz bandpass filter) despite a difference of approximately 100-fold in peak power. For modulated pulses at frequencies that were expected to have low SNR (14 MHz), the effect of the pulse modulation was minimal, with the PA response for both filtered and non-filtered results being comparable to the 0.6 µs square pulse. 3.3 Modulated microsecond pulses and photoacoustic SNR improvement in phantom: 2D mapping Knowing that modulation frequencies ranging from 3 − 7 MHz gave the best SNR results in our system, we imaged the same section of a PA phantom with non-modulated and 5 MHz-modulated microsecond pulses of the same energy, 0.4 µJ/pulse (see Fig. 5). Fig. 5. PVC-P phantom image obtained with non-modulated (a) and modulated (b) laser pulses, with a 0.1 − 100 MHz (left) and a 4.5 − 5.5 MHz (right) filter window. The filtered form (dark blue) as well as the envelope (black) of the acoustic wave (light blue) for a given image coordinate are represented for each image. For each image pixel, the acoustic wave was digitized. Then a 3 × 3 pixel moving average was applied. Acoustic signals were digitally filtered and the signal envelope was evaluated as the absolute value of its Hilbert transform. Images were constructed using the peak-to-peak amplitude of the Hilbert transform. Examples of the filtered form as well as the envelope of the acoustic wave for a given coordinate are represented for each image in Fig. 5 to show the signal processing leading to the image construction. Phantom images obtained with the non-modulated microsecond pulse (Fig.
5(a)) suffer from poor SNR (for both the 0.1 − 100 MHz and 4.5 − 5.5 MHz filter windows). However, phantom images obtained with 5 MHz-modulated pulses (Fig. 5(b), left) show a much better SNR. This is mainly due to the 50% duty cycle affecting the peak power as discussed above. When applying a filter window centered on the modulation frequency (filtering all the noise outside of the laser modulation), the SNR can be further increased. In fact, the SNR obtained with a modulated-µs pulse is similar to that obtained with a single 10 ns pulse, a pulse duration commonly used for PA imaging (see Table 1). Table 1. SNR for different pulse formats. 3.4. Photoacoustic frequency-domain cavitation detection in retinal tissue In addition to SNR improvement, we examined pulse modulation for detection of cavitation or other non-linear events. When using a pulse modulation with a narrow spectral band (i.e. sinusoidal modulation), an acoustic wave is observed in a similar spectral band as long as the pulse energy remains in the linear thermoelastic regime. When a non-linear event occurs, such as cavitation, ultrasound frequencies outside of the modulation band are generated. Thus, using spectral analysis, one can detect cavitation occurrence. Figure 6 shows the spectrograms of the PA signal obtained from a retinal explant with different laser pulse energies. Fig. 6. (a) Spectrograms of the PA signals obtained from a leporine retinal explant with different laser pulse energies (0.09 − 0.59 µJ). As laser pulse energy increases and the cavitation threshold is reached (around 0.33 µJ), frequencies outside of the modulation bands appear. The normalized laser pulse format is represented in blue (lower section). Spectral densities are shown integrated over 4.5-5.5 MHz (b) and 0.1-3.5 MHz (c) spectral windows for all the laser pulse energies used in (a).
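The out-of-band detection idea can be sketched with synthetic signals: a pure 5 MHz thermoelastic response stays in the modulation band, while a cavitation-like broadband transient leaks energy into the 0.1 − 3.5 MHz window. All signal parameters here are assumed for illustration and are not the measured explant data.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 100e6                                   # assumed sampling rate, 100 MS/s
t = np.arange(0, 10e-6, 1 / fs)
rng = np.random.default_rng(1)

def band_energy(sig, f_lo, f_hi):
    """Total spectrogram energy inside a frequency band."""
    f, _, sxx = spectrogram(sig, fs=fs, nperseg=100, window="hann")
    return sxx[(f >= f_lo) & (f <= f_hi)].sum()

# Linear (thermoelastic) regime: response stays in the 5 MHz modulation band.
linear = np.sin(2 * np.pi * 5e6 * t)
# Cavitation-like event: a broadband transient appears partway through.
cavitation = linear + 2.0 * rng.normal(0.0, 1.0, t.size) * (t > 5e-6)

def out_to_in(sig):
    """Energy in the 0.1-3.5 MHz window relative to the 4.5-5.5 MHz band."""
    return band_energy(sig, 0.1e6, 3.5e6) / band_energy(sig, 4.5e6, 5.5e6)

ratio_linear, ratio_cavitation = out_to_in(linear), out_to_in(cavitation)
```

The out-of-band to in-band energy ratio jumps by more than an order of magnitude for the cavitation-like signal, which is the spectral signature exploited in Fig. 6.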
When this energy is sufficient to generate a measurable acoustic wave, the acoustic signal shows a clear band at the modulation frequency (5 MHz) and very little signal outside of this band (see spectrograms for 0.18 µJ/pulse and 0.25 µJ/pulse in Fig. 6(a)). For energies beyond the cavitation threshold (confirmed via cavitation detection using a He-Ne probe, data not shown), energy is transferred by the cavitation event to other frequencies, reducing the spectral content at the laser modulation frequency. Note that cavitation microbubbles also shield the laser energy absorption in the focal area, as expected and previously observed [32], and the thermoelastic signal at 5 MHz is also reduced (see spectrograms for 0.33 µJ, 0.42 µJ and 0.59 µJ in Fig. 6(a)). Figures 6(b) and 6(c) show the spectral density integrated over different spectral windows as a function of time. The spectral density within the 0.1 − 3.5 MHz band clearly shows the distinction between cases under and above the cavitation threshold (Fig. 6(c)). 3.5. Mapping treatment outcome in retinal tissue We then applied this improved SNR and non-linear event detection method, enabled by pulse modulation, to probe laser treatment in a rabbit retinal explant model. We treated the retinal explant with 2 µs laser pulses modulated at 5 MHz (the laser pulse temporal profile was the same as in Fig. 6(a)). The treatment beam was scanned on a retinal sample containing two regions with distinct absorption characteristics due to the presence or absence of the RPE cellular layer (Fig. 7(a)), and the generated PA signal was monitored. Note that for a given scan, a constant treating spot size was used and scanned over the area (sample from 7a) to build, point by point, a PA response map. Fig. 7. Photoacoustic laser treatment mapping in retinal explants. (a) Image of the scanned retinal explant. (b) Map of the photoacoustic signals recorded as the treating laser pulse was scanned over the area shown in (a).
Three scans were performed in the same area with increasing treating pulse energy (right to left). Spatial scale bar in (a-b) is 50 µm. (c) Spectrograms of the PA response induced by a laser pulse (0.67 µJ, 5 MHz, 2 µs pulse duration) before, during and after treatment pulses for 3 sample coordinates represented in (b). Temporal scale bar in (c) is 1 µs. The obtained PA feedback maps are shown in Fig. 7(b). Although the energy was kept constant for a scan, the PA response for each area/pixel in the map varied due to varying absorption properties. Figure 7(c) represents the spectral signature of the acoustic wave for 3 different coordinates (pulse energy = 0.67 µJ, pulse energy density = 109 mJ/cm2) from which we can extract the treatment outcome. Such spectral analysis clearly shows the generation of ultrasound outside of the modulation band (characteristic of cavitation occurrence), visible in coordinates where absorption was high. In general, PA feedback images provide a clear distinction of the two regions above the cavitation threshold. However, below the cavitation threshold, the PA signal is hard to distinguish from noise (see Fig. 7(b)). Subthreshold (thermoelastic) feedback can be improved using spectral analysis. Filtering the signal around 5 MHz allows extracting the thermoelastic portion of the signal and enables recovery of anatomical information (or the level of absorbed laser energy, as seen in Fig. 8(a)). Additionally, mapping the signal around the cavitation-induced frequency band (Fig. 8(b)) is interpreted as the cavitation-related non-linear PA response and allows mapping of the desired photomechanical treatment effect. Both thermoelastic and photomechanical damage maps could be used as sensitive dosimetry feedback in laser treatment. Fig. 8. Photoacoustic mapping with spectral analysis. (a-b) Photoacoustic images for different treatment laser pulse energies with spectral filtering windows of 4.5-5.5 MHz (a) and 2-3.5 MHz (b).
(c) Binary map showing the presence (white pixels) or absence (black pixels) of signal at the cavitation-induced frequency. We used an SNR of 3 (or 4.77 dB) as a threshold for cavitation signal detection. Scale bar is 50 µm in (a − c). For visualization purposes, we also mapped cavitation occurrence in a binary map where white pixels represent cavitation locations (thus photomechanical damage), and black pixels represent locations where no cavitation occurred (Fig. 8(c)). We assumed that cavitation was present when the SNR of the signal in the cavitation-induced ultrasonic frequency band reached a value of 3 (or 4.77 dB). For each pixel, the SNR was defined using the averaged energies ES and EN (energy of signal and energy of noise, respectively) of the monitored signal, s(t), and noise, n(t) (ultrasound recorded before the laser pulse occurrence), as per the following relations [35], (3)$$SNR_{dB} = 10\log_{10}\left(\frac{E_S}{E_N} - 1\right),$$ (4, 5)$$E_S = \frac{1}{T}\sum_{t = 1}^{T} {[s(t) + n(t)]^2} \quad \textrm{and} \quad E_N = \frac{1}{T}\sum_{t = 1}^{T} {[n(t)]^2}.$$ In Eqs. (4) and (5), T represents the analysis window duration, which is 1.5 times the laser pulse duration in the present case. This threshold was observed to balance the number of false negative and false positive results when compared to visual cavitation assessment. 4.1 SNR improved photoacoustic feedback The present work demonstrates that, in a method similar to frequency-domain PA spectroscopy, pulse modulation can be used to optimize the laser-induced acoustic feedback SNR. We have demonstrated with a homogeneous tissue PA phantom that our approach can yield a greater than 7-fold improvement in SNR over non-modulated square pulses of the same duration and pulse energy.
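Equations (3)–(5) can be implemented directly. This is a minimal sketch with synthetic windows chosen so that E_S/E_N − 1 = 3, which reproduces the 4.77 dB detection threshold quoted above; the arrays are illustrative, not recorded data.

```python
import numpy as np

def snr_db(s_plus_n, noise):
    """Eqs. (3)-(5): energy-based SNR in dB. `s_plus_n` is the monitored
    window s(t)+n(t); `noise` is a noise-only window recorded before the
    laser pulse. np.mean supplies the 1/T normalization, which cancels in
    the ratio when both windows share a length."""
    e_s = np.mean(np.asarray(s_plus_n, float) ** 2)
    e_n = np.mean(np.asarray(noise, float) ** 2)
    return 10.0 * np.log10(e_s / e_n - 1.0)

# Windows with E_S = 4 * E_N, i.e. E_S/E_N - 1 = 3 -> 10*log10(3) dB,
# the cavitation detection threshold used for the binary map:
noise = np.array([1.0, -1.0, 1.0, -1.0])
monitored = 2.0 * noise
threshold_db = snr_db(monitored, noise)
```

Subtracting 1 inside the logarithm removes the noise energy that is unavoidably present in the monitored window, so the quantity is an estimate of signal-only energy relative to noise.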
Also, while microsecond square laser pulses of 0.4 µJ yielded poor phantom images, modulated pulses with the same duration and energy provided clear phantom images (Fig. 5). The spectral response of the hydrophone appears to be the key determinant for the SNR of the PA spectrum. However, this is not the only factor. Tissue attenuation rates are often more complex than those of distilled water. Multiple absorbing geometries may have more complicated effects on the emitted PA spectrum, and the use of flatter response detectors and amplifiers can lead to cases where the sample dictates the optimum frequency for detection. In fact, it has been demonstrated that certain geometries, sizes and concentrations of absorbing elements within samples can produce different frequency responses compared to others [23,25], allowing some level of selectivity when targeted with these frequencies. In these cases, it is useful to provide optimal pulse shapes to produce the highest possible SNR with the lowest laser energy. This can be achieved by properly taking advantage of the differential impulse response of the targets of interest. Moreover, in future implementations, the use of modulated pulses could be paired with highly sensitive resonant acoustic transducers to help photoacoustic mapping of selective retinal therapy outcomes in real time [36]. 4.2 Cavitation detection at optimal damage confinement in retinal explants In laser-based medical procedures, it is desirable to monitor the cavitation threshold as an indicator of cellular damage [37]. As such, one can optimize damage confinement in treated tissue and improve therapy outcomes. When modulating the treatment laser pulse, the generated acoustic pressure has a spectral signature around the modulation frequency.
Given that the fine spectral response induced by a modulated treatment laser pulse differs from the cavitation-induced frequency response, spectral analysis can recover information and map the nature of the light–tissue interaction at the treatment site (as seen in the cavitation map of Fig. 8(c)). The energy density at which significant cavitation was determined by spectral analysis (0.4 µJ; 65 mJ/cm²) is similar to the cavitation thresholds identified in previous studies [38]. Using this strategy, one can extract quantitative data from the PA feedback signal.

4.3 Implications for photoacoustic feedback dosimetry in ophthalmic applications

In applications such as SRT, one can envision that improving the SNR for sub-cavitation threshold PA transients, with modulated laser pulses, provides a flexible method for higher-sensitivity laser dosimetry. The photoacoustic signal can potentially provide both pre-treatment optical absorption mapping and treatment/therapy feedback, such as in SRT. Moreover, this can be achieved while sharing the same wavelength distribution (with respect to absorption and scattering properties) and the same instrument (optical path/alignment). Increased SNR of PA feedback could be particularly relevant in treatments which require sub-cavitation threshold/thermoelastic regime laser fluences, as these regimes necessitate a high-sensitivity indicator of damage. Since these treatments typically use microsecond pulses, there is a clear prospect of improving the PA feedback using the modulation strategies described here [39]. In a broader application context, this approach of high-SNR photoacoustic absorption mapping achieved through laser pulse formatting may hold interesting opportunities in dosimetry or treatment feedback evaluations, such as PA examination of fluid movement inside the eye in the treatment of glaucoma, as recently described [40].
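The band-selective spectral analysis described above can be sketched as follows — an illustrative implementation, not the authors' processing pipeline; the band edges and sampling rate are free parameters, and SciPy's Butterworth filter is assumed as the band-pass:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def band_energy(x, fs, f_lo, f_hi, order=4):
    # Zero-phase Butterworth band-pass, then in-band energy. Comparing
    # the energy in the modulation band with the energy in a separate
    # cavitation-associated band distinguishes thermoelastic PA
    # feedback from cavitation signatures.
    nyq = fs / 2.0
    b, a = butter(order, [f_lo / nyq, f_hi / nyq], btype="band")
    y = filtfilt(b, a, np.asarray(x, dtype=float))
    return float(np.sum(y ** 2))
```

For example, with a treatment pulse modulated at 5 MHz, `band_energy(trace, fs, 4.5e6, 5.5e6)` isolates the modulation-band response, while the same call over an off-modulation band tracks cavitation-associated content.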
In this study using retinal explants, we have demonstrated nearly an order of magnitude improvement in PA SNR with our pulse modulation approach. Nevertheless, its application to more complex clinical models, in which both light and ultrasound waves must travel through the whole eye, remains to be investigated and validated with suitable in vivo instrumentation. We previously demonstrated in melanosome models that, for longer laser pulses (> 100 ns), the cavitation threshold radiant exposure and the dynamics above threshold strongly depend on the pulse format [32], whereas we observed that continuous pulses had similar effects on the cellular damage threshold of retinal explants [41]. This is because pulse modulation on the picosecond time scale is faster than the mechanical response of the system. These results suggest that sub-microsecond laser pulse shaping could be exploited to optimize precision in numerous applications of laser-directed microcavitation and to control cellular damage thresholds with longer pulses. In concert with the modulated PA dosimetry scheme presented here, multimodal treatment with highly sensitive PA feedback may be feasible. However, more studies must be done with retinal explants and clinically relevant models to demonstrate that modulated laser pulses (with higher peak powers) have similar outcomes in a clinical context.

Photoacoustic feedback detection is a promising method for SRT dosimetry [21,22]. However, microsecond laser pulses are not optimal for high-amplitude PA wave generation since their durations are above the stress confinement time constant. Here, we demonstrated that a pulse-programmable fiber laser, generating pulse shapes with modulation frequencies that match the maximal response of the acoustic system, significantly improves the PA signal-to-noise ratio.
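To make the pulse-formatting idea concrete, the following hypothetical sketch builds a sinusoidally modulated square-pulse intensity envelope at a chosen modulation frequency while conserving total pulse energy; all parameter values are assumptions for illustration, not the laser's actual drive waveform:

```python
import numpy as np

def modulated_envelope(duration_s, f_mod_hz, fs_hz, depth=1.0):
    # Square pulse of the given duration whose intensity is sinusoidally
    # modulated at f_mod_hz, then rescaled so its total energy matches
    # the unmodulated unit-amplitude square pulse.
    t = np.arange(0.0, duration_s, 1.0 / fs_hz)
    env = 1.0 + depth * np.sin(2.0 * np.pi * f_mod_hz * t)
    env *= env.size / env.sum()  # equal-energy normalization
    return t, env
```

A 1 µs pulse modulated at 5 MHz (`modulated_envelope(1e-6, 5e6, 100e6)`) concentrates acoustic emission near 5 MHz, where the detection system of this study responds most strongly.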
This technique allows a greater ability to extract dosimetric signals out of a noisy background, as the acoustic response can be subjected to a tight frequency bandpass filter in post-processing, eliminating the vast majority of system noise. The present work establishes that modulated pulses can lead to higher subthreshold PA feedback and more effective cavitation detection, thereby potentially providing a more advanced approach for mapping laser treatment outcomes.

Funding
Natural Sciences and Engineering Research Council of Canada (DGECR-2020-00227); Institut National d'Optique.

Acknowledgments
The authors would like to thank Yves Taillon from INO for providing a programmable MOPAW laser system that was used for photoacoustic experiments, Louis Desbiens for expert assistance with pulse formatting, and Marc Girard for electrical noise management. We also thank Frédéric Émond for assembling the photoacoustic setup.

Disclosures
Some authors have filed an application for a patent (US20180323571A1) and have a granted patent (US10390883B2). No authors have any financial benefit resulting from these patents. The authors declare that there are no conflicts of interest related to this article.

References
1. K. Bliedtner, E. Seifert, and R. Brinkmann, "Towards automatically controlled dosing for selective laser trabeculoplasty," Trans. Vis. Sci. Tech. 8(6), 24 (2019). [CrossRef] 2. C. Framme, G. Schuele, J. Roider, R. Birngruber, and R. Brinkmann, "Influence of pulse duration and pulse number in selective RPE laser treatment," Lasers Surg. Med. 34(3), 206–215 (2004). [CrossRef] 3. D. Palanker, D. Lavinsky, R. Dalal, and P. Huie, "Non-damaging Laser Therapy of the Macula: Titration Algorithm and Tissue Response," Proc. SPIE 8930, 893016 (2014). [CrossRef] 4. D. Lavinsky, J. Wang, P. Huie, R. Dalal, S. Jun Lee, D. Yeong Lee, and D. Palanker, "Nondamaging retinal laser therapy: rationale and applications to the macula," Invest. Ophthalmol. Visual Sci. 57(6), 2488–2500 (2016). [CrossRef] 5. R. Brinkmann, J. Roider, and R.
Birngruber, "Selective Retina Therapy (SRT): A review on methods, techniques, preclinical and first clinical results," Bull. Soc. Belge. Ophtalmol. 302, 51–69 (2006). 6. M. Xu and L. V. Wang, "Photoacoustic imaging in biomedicine," Rev. Sci. Instrum. 77(4), 041101 (2006). [CrossRef] 7. T. J. Allen, B. Cox, and P. C. Beard, "Generating photoacoustic signals using high-peak power pulsed laser diodes," Proc. SPIE 5697, 233–242 (2005). [CrossRef] 8. V. Serebryakov, É. Boĭko, and A. Yan, "Real-time optoacoustic monitoring of the temperature of the retina during laser therapy," J. Opt. Technol. 81(6), 312–321 (2014). [CrossRef] 9. I. V. Larina, K. V. Larin, and R. O. Esenaliev, "Real-time optoacoustic monitoring of temperature in tissues," J. Phys. D: Appl. Phys. 38(15), 2633–2639 (2005). [CrossRef] 10. S. L. Jacques, "Role of tissue optics and pulse duration on tissue effects during high-power laser irradiation," Appl. Opt. 32(13), 2447–2454 (1993). [CrossRef] 11. L. V. Zhigilei and B. J. Garrison, "Microscopic Mechanisms of Laser Ablation of Organic Solids in the Thermal and Stress Confinement Irradiation Regimes," J. Appl. Phys. 88(3), 1281–1298 (2000). [CrossRef] 12. A. Vogel and V. Venugopalan, "Pulsed laser ablation of soft biological tissues," in Optical-Thermal Response of Laser-Irradiated Tissue, 2nd ed. (Springer, 2011). 13. B. A. Rockwell, R. J. Thomas, and A. Vogel, "Ultrashort laser pulse retinal damage mechanisms and their impact on thresholds," Med. Laser Appl. 25(2), 84–92 (2010). [CrossRef] 14. E. Petrova, S. Ermilov, R. Su, V. Nadvoretskiy, A. Conjusteau, and A. Oraevsky, "Using optoacoustic imaging for measuring the temperature dependence of Grüneisen parameter in optically absorbing solutions," Opt. Express 21(21), 25077–25090 (2013). [CrossRef] 15. D.-K. Yao, C. Zhang, K. I. Maslov, and L. V. Wang, "Photoacoustic measurement of the Grüneisen parameter of tissue," J. Biomed. Opt. 19(1), 017007 (2014). [CrossRef] 16. T. J. Allen and P. C. 
Beard, "Development of a pulsed NIR multiwavelength laser diode excitation system for biomedical photoacoustic applications," Proc. SPIE 6086, 60861P (2006). [CrossRef] 17. L. Beltran, H. Abbasi, G. Rauter, N. Friederich, P. Cattin, and A. Zam, "Effect of laser pulse duration on ablation efficiency of hard bone in microseconds regime," in Third International Conference on Applications of Optics and Photonics, 104531S (2017). 18. Q. Fan, W. Hu, and A. T. Ohta, "Efficient single-cell poration by microsecond laser pulses," Lab. Chip 15(2), 581–588 (2015). [CrossRef] 19. I. Rohde, J.-M. Masch, D. Theisen-Kunde, M. Marczynski-Bühlow, R. Bombien Quaden, G. Lutter, and R. Brinkmann, "Resection of Calcified Aortic Heart Leaflets In Vitro by Q-Switched 2 µm Microsecond Laser Radiation," J. Card. Surg. 30(2), 157–162 (2015). [CrossRef] 20. J. Roider, S. H. M. Liew, C. Klatt, H. Elsner, K. Poerksen, J. Hillenkamp, R. Brinkmann, and R. Birngruber, "Selective retina therapy (SRT) for clinically significant diabetic macular edema," Graefes Arch. Clin. Exp. Ophthalmol. 248(9), 1263–1272 (2010). [CrossRef] 21. G. Schuele, H. Elsner, C. Framme, J. Roider, R. Birngruber, and R. Brinkmann, "Optoacoustic real-time dosimetry for selective retina treatment," J. Biomed. Opt. 10(6), 064022 (2005). [CrossRef] 22. G. Schuele, E. Joachimmeyer, C. Framme, J. Roider, R. Birngruber, and R. Brikmann, "Optoacoustic control system for selective treatment of the retinal pigment epithelium," Proc. SPIE 4256, 71 (2001). [CrossRef] 23. E. Hysi, D. Dopsa, and M. C. Kolios, "Photoacoustic radio-frequency spectroscopy (PA-RFS): A technique for monitoring absorber size and concentration," Proc. SPIE 8581, 85813W (2013). [CrossRef] 24. A. Sheinfeld, E. Bergman, S. Gilead, and A. Eyal, "The use of pulse synthesis for optimization of photoacoustic measurements," Opt. Express 17(9), 7328–7338 (2009). [CrossRef] 25. F. Gao, X. Feng, Y. Zheng, and C.-D. 
Ohl, "Photoacoustic resonance spectroscopy for biological tissue characterization," J. Biomed. Opt. 19(6), 067006 (2014). [CrossRef] 26. S. E. Bohndiek, S. Bodapati, D. Van De Sompel, S.-R. Kothapalli, and S. S. Gambhir, "Development and application of stable phantoms for the evaluation of photoacoustic imaging instruments," PLoS One 8(9), e75533 (2013). [CrossRef] 27. M. Fonseca, B. Zeqiri, P. Beard, and B. Cox, "Characterization of a PVCP-based tissue-mimicking phantom for quantitative photoacoustic imaging," Proc. SPIE 9539, 953911 (2015). [CrossRef] 28. L. Maggi, M. A. von Krüger, W. C. A. Pereira, and E. E. C. Monteiro, "Development of silicon-based materials for ultrasound biological phantoms," IEEE International Ultrasonics Symposium, 1962–1965 (2009). 29. W. C. Vogt, C. Jia, K. A. Wear, B. S. Garra, and T. J. Pfefer, "Biologically relevant photoacoustic imaging phantoms with tunable optical and acoustic properties," J. Biomed. Opt. 21(10), 101405 (2016). [CrossRef] 30. L. Maggi, G. Cortela, M. A. von Krüger, C. Negreira, and W. C. A. Pereira, "Ultrasonic Attenuation and Speed in phantoms made of PVCP and Evaluation of acoustic and thermal properties of ultrasonic phantoms made of polyvinyl chloride-plastisol (PVCP)," IWBBIO Proc., 233–241 (2013). 31. P. Deladurantaye, V. Roy, L. Desbiens, M. Drolet, Y. Taillon, and P. Galarneau, "Ultra Stable, Industrial Green Tailored Pulse Fiber Laser with Diffraction-limited Beam Quality for Advanced Micromachining," J. Phys.: Conf. Ser. 276(1), 012017 (2011). [CrossRef] 32. P. Deladurantaye, S. Méthot, O. Mermut, P. Galarneau, and P. J. Rochette, "Potential of sub-microsecond laser pulse shaping for controlling microcavitation in selective retinal therapies," Biomed. Opt. Express 11(1), 109–132 (2020). [CrossRef] 33. M. Kok, K. Demirelli, and Y. Aydogdu, "Thermophysical properties of blend of poly (vinyl chloride) with poly (isobornyl acrylate)," Int. J. Sci. Technol. 3(1), 37–42 (2008). 34. 
" http://www.itis.ethz.ch/who-we-are/," (retrieved 2016-09-02, 2016). 35. M. Köse, S. Tagcioğlu, and Z. Telatar, "Signal-to-noise ratio estimation of noisy transient signals," Communications Faculty of Sciences University of Ankara Series A2-A3 57(1), 11–19 (2015). [CrossRef] 36. R. J. Zemp, L. Song, R. Bitton, K. Kirk Shung, and L. V. Wang, "Realtime photoacoustic microscopy in vivo with a 30-MHz ultrasound array transducer," Opt. Express 16(11), 7915–7928 (2008). [CrossRef] 37. G. Schuele, M. Rumohr, G. Huettmann, and R. Brinkmann, "RPE damage thresholds and mechanisms for laser exposure in the microsecond-to-millisecond time regimen," Invest. Ophthalmol. Vis. Sci. 46(2), 714–719 (2005). [CrossRef] 38. H. Lee, C. Alt, C. M. Pitsillides, and C. P. Lin, "Optical detection of intracellular cavitation during selective laser targeting of the retinal pigment epithelium: dependence of cell death mechanism on pulse duration," J. Biomed. Opt. 12(6), 064034 (2007). [CrossRef] 39. S. Dufour, R. B. Brown, P. Gallant, and O. Mermut, "Improved photoacoustic Dosimetry for retinal laser surgery," Proc. SPIE 1047, 104742I (2018). [CrossRef] 40. Y. H. Yücel, K. Cardinell, S. Khattak, X. Zhou, M. Lapinski, F. Cheng, and N. Gupta, "Active lymphatic drainage from the eye measured by noninvasive photoacoustic imaging of near-infrared nanoparticles," Invest. Ophthalmol. Visual Sci. 59(7), 2699–2707 (2018). [CrossRef] 41. S. Dufour, R. Brown, P. Deladurantaye, S. Méthot, P. Gallant, P. J. Rochette, S. Boyd, and O. Mermut, "Comparing effects of two microsecond laser pulse regimes on cavitation dynamics in RPE cells," Photonics North (2016), pp. 1-1. K. Bliedtner, E. Seifert, and R. Brinkmann, "Towards automatically controlled dosing for selective laser trabeculoplasty," Trans. Vis. Sci. Tech. 8(6), 24 (2019). C. Framme, G. Schuele, J. Roider, R. Birngruber, and R. Brinkmann, "Influence of pulse duration and pulse number in selective RPE laser treatment," Lasers Surg. Med. 
34(3), 206–215 (2004). D. Palanker, D. Lavinsky, R. Dalal, and P. Huie, "Non -damaging Laser Therapy of the Macula: Titration Algorithm and Tissue Response," Proc. SPIE 8930, 893016 (2014). D. Lavinsky, J. Wang, P. Huie, R. Dalal, S. Jun Lee, D. Yeong Lee, and D. Palanker, "Nondamaging retinal laser therapy: rationale and applications to the macula," Invest. Ophthalmol. Visual Sci. 57(6), 2488–2500 (2016). R. Brinkmann, J. Roider, and R. Birngruber, "Selective Retina Therapy (SRT): A review on methods, techniques, preclinical and first clinical results," Bull. Soc. Belge. Ophtalmol. 302, 51–69 (2006). M. Xu and L. V. Wang, "Photoacoustic imaging in biomedicine," Rev. Sci. Instrum. 77(4), 041101 (2006). T. J. Allen, B. Cox, and P. C. Beard, "Generating photoacoustic signals using high-peak power pulsed laser diodes," Proc. SPIE 5697, 233–242 (2005). V. Serebryakov, É. Boĭko, and A. Yan, "Real-time optoacoustic monitoring of the temperature of the retina during laser therapy," J. Opt. Technol. 81(6), 312–321 (2014). I. V. Larina, K. V. Larin, and R. O. Esenaliev, "Real-time optoacoustic monitoring of temperature in tissues," J. Phys. D: Appl. Phys. 38(15), 2633–2639 (2005). S. L. Jacques, "Role of tissue optics and pulse duration on tissue effects during high-power laser irradiation," Appl. Opt. 32(13), 2447–2454 (1993). L. V. Zhigilei and B. J. Garrison, "Microscopic Mechanisms of Laser Ablation of Organic Solids in the Thermal and Stress Confinement Irradiation Regimes," J. Appl. Phys. 88(3), 1281–1298 (2000). A. Vogel and V. Venugopalan, "Pulsed laser ablation of soft biological tissues," in Optical-Thermal Response of Laser-Irradiated Tissue, 2nd ed. (Springer, 2011). B. A. Rockwell, R. J. Thomas, and A. Vogel, "Ultrashort laser pulse retinal damage mechanisms and their impact on thresholds," Med. Laser Appl. 25(2), 84–92 (2010). E. Petrova, S. Ermilov, R. Su, V. Nadvoretskiy, A. Conjusteau, and A. 
Oraevsky, "Using optoacoustic imaging for measuring the temperature dependence of Grüneisen parameter in optically absorbing solutions," Opt. Express 21(21), 25077–25090 (2013). D.-K. Yao, C. Zhang, K. I. Maslov, and L. V. Wang, "Photoacoustic measurement of the Grüneisen parameter of tissue," J. Biomed. Opt. 19(1), 017007 (2014). T. J. Allen and P. C. Beard, "Development of a pulsed NIR multiwavelength laser diode excitation system for biomedical photoacoustic applications," Proc. SPIE 6086, 60861P (2006). L. Beltran, H. Abbasi, G. Rauter, N. Friederich, P. Cattin, and A. Zam, "Effect of laser pulse duration on ablation efficiency of hard bone in microseconds regime," in Third International Conference on Applications of Optics and Photonics, 104531S (2017). Q. Fan, W. Hu, and A. T. Ohta, "Efficient single-cell poration by microsecond laser pulses," Lab. Chip 15(2), 581–588 (2015). I. Rohde, J.-M. Masch, D. Theisen-Kunde, M. Marczynski-Bühlow, R. Bombien Quaden, G. Lutter, and R. Brinkmann, "Resection of Calcified Aortic Heart Leaflets In Vitro by Q-Switched 2 µm Microsecond Laser Radiation," J. Card. Surg. 30(2), 157–162 (2015). J. Roider, S. H. M. Liew, C. Klatt, H. Elsner, K. Poerksen, J. Hillenkamp, R. Brinkmann, and R. Birngruber, "Selective retina therapy (SRT) for clinically significant diabetic macular edema," Graefes Arch. Clin. Exp. Ophthalmol. 248(9), 1263–1272 (2010). G. Schuele, H. Elsner, C. Framme, J. Roider, R. Birngruber, and R. Brinkmann, "Optoacoustic real-time dosimetry for selective retina treatment," J. Biomed. Opt. 10(6), 064022 (2005). G. Schuele, E. Joachimmeyer, C. Framme, J. Roider, R. Birngruber, and R. Brikmann, "Optoacoustic control system for selective treatment of the retinal pigment epithelium," Proc. SPIE 4256, 71 (2001). E. Hysi, D. Dopsa, and M. C. Kolios, "Photoacoustic radio-frequency spectroscopy (PA-RFS): A technique for monitoring absorber size and concentration," Proc. SPIE 8581, 85813W (2013). A. Sheinfeld, E. Bergman, S. 
Gilead, and A. Eyal, "The use of pulse synthesis for optimization of photoacoustic measurements," Opt. Express 17(9), 7328–7338 (2009). F. Gao, X. Feng, Y. Zheng, and C.-D. Ohl, "Photoacoustic resonance spectroscopy for biological tissue characterization," J. Biomed. Opt. 19(6), 067006 (2014). S. E. Bohndiek, S. Bodapati, D. Van De Sompel, S.-R. Kothapalli, and S. S. Gambhir, "Development and application of stable phantoms for the evaluation of photoacoustic imaging instruments," PLoS One 8(9), e75533 (2013). M. Fonseca, B. Zeqiri, P. Beard, and B. Cox, "Characterization of a PVCP-based tissue-mimicking phantom for quantitative photoacoustic imaging," Proc. SPIE 9539, 953911 (2015). L. Maggi, M. A. von Krüger, W. C. A. Pereira, and E. E. C. Monteiro, "Development of silicon-based materials for ultrasound biological phantoms," IEEE International Ultrasonics Symposium, 1962–1965 (2009). W. C. Vogt, C. Jia, K. A. Wear, B. S. Garra, and T. J. Pfefer, "Biologically relevant photoacoustic imaging phantoms with tunable optical and acoustic properties," J. Biomed. Opt. 21(10), 101405 (2016). L. Maggi, G. Cortela, M. A. von Krüger, C. Negreira, and W. C. A. Pereira, "Ultrasonic Attenuation and Speed in phantoms made of PVCP and Evaluation of acoustic and thermal properties of ultrasonic phantoms made of polyvinyl chloride-plastisol (PVCP)," IWBBIO Proc., 233–241 (2013). P. Deladurantaye, V. Roy, L. Desbiens, M. Drolet, Y. Taillon, and P. Galarneau, "Ultra Stable, Industrial Green Tailored Pulse Fiber Laser with Diffraction-limited Beam Quality for Advanced Micromachining," J. Phys.: Conf. Ser. 276(1), 012017 (2011). P. Deladurantaye, S. Méthot, O. Mermut, P. Galarneau, and P. J. Rochette, "Potential of sub-microsecond laser pulse shaping for controlling microcavitation in selective retinal therapies," Biomed. Opt. Express 11(1), 109–132 (2020). M. Kok, K. Demirelli, and Y. 
Aydogdu, "Thermophysical properties of blend of poly (vinyl chloride) with poly (isobornyl acrylate)," Int. J. Sci. Technol. 3(1), 37–42 (2008). " http://www.itis.ethz.ch/who-we-are/ ," (retrieved 2016-09-02, 2016). M. Köse, S. Tagcioğlu, and Z. Telatar, "Signal-to-noise ratio estimation of noisy transient signals," Communications Faculty of Sciences University of Ankara Series A2-A3 57(1), 11–19 (2015). R. J. Zemp, L. Song, R. Bitton, K. Kirk Shung, and L. V. Wang, "Realtime photoacoustic microscopy in vivo with a 30-MHz ultrasound array transducer," Opt. Express 16(11), 7915–7928 (2008). G. Schuele, M. Rumohr, G. Huettmann, and R. Brinkmann, "RPE damage thresholds and mechanisms for laser exposure in the microsecond-to-millisecond time regimen," Invest. Ophthalmol. Vis. Sci. 46(2), 714–719 (2005). H. Lee, C. Alt, C. M. Pitsillides, and C. P. Lin, "Optical detection of intracellular cavitation during selective laser targeting of the retinal pigment epithelium: dependence of cell death mechanism on pulse duration," J. Biomed. Opt. 12(6), 064034 (2007). S. Dufour, R. B. Brown, P. Gallant, and O. Mermut, "Improved photoacoustic Dosimetry for retinal laser surgery," Proc. SPIE 1047, 104742I (2018). Y. H. Yücel, K. Cardinell, S. Khattak, X. Zhou, M. Lapinski, F. Cheng, and N. Gupta, "Active lymphatic drainage from the eye measured by noninvasive photoacoustic imaging of near-infrared nanoparticles," Invest. Ophthalmol. Visual Sci. 59(7), 2699–2707 (2018). S. Dufour, R. Brown, P. Deladurantaye, S. Méthot, P. Gallant, P. J. Rochette, S. Boyd, and O. Mermut, "Comparing effects of two microsecond laser pulse regimes on cavitation dynamics in RPE cells," Photonics North (2016), pp. 1-1. Abbasi, H. Allen, T. J. Alt, C. Aydogdu, Y. Beard, P. Beard, P. C. Beltran, L. Bergman, E. Birngruber, R. Bitton, R. Bliedtner, K. Bodapati, S. Bohndiek, S. E. Boiko, É. Bombien Quaden, R. Boyd, S. Brikmann, R. Brinkmann, R. Brown, R. Brown, R. B. Cardinell, K. Cattin, P. Cheng, F. 
Conjusteau, A. Cortela, G. Dalal, R. Deladurantaye, P. Demirelli, K. Desbiens, L. Dopsa, D. Drolet, M. Dufour, S. Elsner, H. Ermilov, S. Esenaliev, R. O. Eyal, A. Fan, Q. Feng, X. Fonseca, M. Framme, C. Friederich, N. Galarneau, P. Gallant, P. Gambhir, S. S. Gao, F. Garra, B. S. Garrison, B. J. Gilead, S. Gupta, N. Hillenkamp, J. Hu, W. Huettmann, G. Huie, P. Hysi, E. Jacques, S. L. Jia, C. Joachimmeyer, E. Jun Lee, S. Khattak, S. Kirk Shung, K. Klatt, C. Kok, M. Kolios, M. C. Köse, M. Kothapalli, S.-R. Lapinski, M. Larin, K. V. Larina, I. V. Lavinsky, D. Liew, S. H. M. Lin, C. P. Lutter, G. Maggi, L. Marczynski-Bühlow, M. Masch, J.-M. Maslov, K. I. Mermut, O. Méthot, S. Monteiro, E. E. C. Nadvoretskiy, V. Negreira, C. Ohl, C.-D. Ohta, A. T. Oraevsky, A. Palanker, D. Pereira, W. C. A. Petrova, E. Pfefer, T. J. Pitsillides, C. M. Poerksen, K. Rauter, G. Rochette, P. J. Rockwell, B. A. Rohde, I. Roider, J. Roy, V. Rumohr, M. Schuele, G. Seifert, E. Serebryakov, V. Sheinfeld, A. Song, L. Su, R. Tagcioglu, S. Taillon, Y. Telatar, Z. Theisen-Kunde, D. Thomas, R. J. Van De Sompel, D. Venugopalan, V. Vogel, A. Vogt, W. C. von Krüger, M. A. Wang, L. V. Wear, K. A. Xu, M. Yan, A. Yao, D.-K. Yeong Lee, D. Yücel, Y. H. Zam, A. Zemp, R. J. Zeqiri, B. Zheng, Y. Zhigilei, L. V. Zhou, X. Biomed. Opt. Express (1) Bull. Soc. Belge. Ophtalmol. (1) Communications Faculty of Sciences University of Ankara Series A2-A3 (1) Graefes Arch. Clin. Exp. Ophthalmol. (1) Int. J. Sci. Technol. (1) Invest. Ophthalmol. Vis. Sci. (1) Invest. Ophthalmol. Visual Sci. (2) J. Appl. Phys. (1) J. Biomed. Opt. (5) J. Card. Surg. (1) J. Opt. Technol. (1) J. Phys. D: Appl. Phys. (1) J. Phys.: Conf. Ser. (1) Lab. Chip (1) Lasers Surg. Med. (1) Med. Laser Appl. (1) Rev. Sci. Instrum. (1) Trans. Vis. Sci. Tech. (1) (1) Δ T = μ a F 0 e − μ a z / ρ C v . (2) P ( t ) = Γ μ a F 0 e ( μ a c t ) . 
SNR for different pulse formats:

Pulse duration (ns)   Modulation (MHz)   SNRfil.(a)   SNRno fil.
1000                  5                  35           5
1000                  -                  6.4          1.45

(a) Bandpass filter: 4.5 − 5 MHz.
Discrete & Continuous Dynamical Systems - A, April 2016, 36(4): 1759-1788. doi: 10.3934/dcds.2016.36.1759

Sharp estimates for fully bubbling solutions of $B_2$ Toda system

Weiwei Ao, Department of Mathematics, University of British Columbia, Vancouver, B.C., V6T 1Z2, Canada

Received January 2015; Revised May 2015; Published September 2015

In this paper, we obtain sharp estimates of fully bubbling solutions of the $B_2$ Toda system on a compact Riemann surface. Our main goals in this paper are (i) to obtain a sharp convergence rate, (ii) to completely determine the location of the bubbles, and (iii) to derive the $\partial_z^2$ condition.

Keywords: convergence rate, Toda system, sharp estimates, bubbling solutions, classification of solutions.

Mathematics Subject Classification: Primary: 35J47; Secondary: 35J6.

Citation: Weiwei Ao. Sharp estimates for fully bubbling solutions of $B_2$ Toda system. Discrete & Continuous Dynamical Systems - A, 2016, 36(4): 1759-1788. doi: 10.3934/dcds.2016.36.1759
CommonCrawl
Effects of combined application of nitrogen fertilizer and biochar on the nitrification and ammonia oxidizers in an intensive vegetable soil Qing-Fang Bi1,3, Qiu-Hui Chen4, Xiao-Ru Yang3, Hu Li3, Bang-Xiao Zheng3, Wei-Wei Zhou1, Xiao-Xia Liu5, Pei-Bin Dai6, Ke-Jie Li2 & Xian-Yong Lin1,2 AMB Express volume 7, Article number: 198 (2017) Soil amended with biochar or nitrogen (N) fertilizer alone has frequently been reported to alter the soil nitrification process through its impact on soil properties. However, little is known about the dynamic response of nitrification and ammonia oxidizers to the combined application of biochar and N fertilizer in intensive vegetable soil. In this study, an incubation experiment was designed to evaluate the effects of biochar and N fertilizer application on soil nitrification and on the abundance and community composition of ammonia-oxidizing bacteria (AOB) and ammonia-oxidizing archaea (AOA) in Hangzhou greenhouse vegetable soil. Results showed that biochar applied alone had no significant effect on soil net nitrification rates or ammonia oxidizers. Conversely, the application of N fertilizer alone and of N fertilizer + biochar significantly increased the net nitrification rate and the abundance of AOB rather than AOA, and only AOB abundance was significantly correlated with the soil net nitrification rate. Moreover, the combined application of N fertilizer and biochar had a greater effect on AOB communities than N fertilizer alone: the relative abundance of the 156 bp T-RF (Nitrosospira cluster 3c) decreased, while the 60 bp T-RF (Nitrosospira cluster 3a and cluster 0) increased to become the single predominant group. Phylogenetic analysis indicated that all the AOB sequences grouped into Nitrosospira clusters, and most AOA sequences clustered within group 1.1b.
We concluded that soil nitrification was stimulated by the combined application of N fertilizer and biochar via enhancing the abundance and shifting the community composition of AOB rather than AOA in intensive vegetable soil. Biochar, a carbon-rich product, is derived from the pyrolysis of organic matter under anoxic or hypoxic conditions at relatively low temperatures (≤ 700 °C) (Lehmann and Joseph 2015). Biochar, with its potential agronomic benefits, has been widely shown to improve soil quality (Lehmann 2007; Lehmann et al. 2006, 2011). Specifically, studies have indicated that adding biochar to soil can enhance nutrient availability and sequester carbon, increase soil pH and cation-exchange capacity as a soil conditioner, and alter soil microbial populations, thereby affecting nutrient cycling (Lehmann et al. 2011). For these reasons, biochar has been increasingly evaluated as a soil amendment to improve soil fertility and productivity. Moreover, biochar plays an important role in the soil nitrogen (N) cycle by reducing inorganic-N leaching and N2O emission (Pan et al. 2017; Singh et al. 2010; Spokas and Reicosky 2009; Xu et al. 2014), increasing biological N fixation (Rondon et al. 2007) and enhancing N availability for crops (Zheng et al. 2013). Therefore, biochar may affect the process of soil nitrification. Nitrification is a central process in the nitrogen cycle, by which microorganisms oxidize ammonium (NH4 +) to nitrate (NO3 −), making soil nitrogen available for crop growth (Kowalchuk and Stephen 2001). The rate-limiting step of nitrification is the oxidation of NH4 +, which is driven by ammonia-oxidizing bacteria (AOB) and ammonia-oxidizing archaea (AOA). Although both AOB and AOA have been demonstrated to be key drivers of ammonia oxidation in agricultural soil (Jin et al.
2010; Li and Gu 2013), their functional importance varies with environmental conditions (He et al. 2007; Leininger et al. 2006). Moreover, many studies have shown that biochar addition significantly accelerated soil nitrification and increased the amount of soil ammonia-oxidizing microorganisms (Nelissen et al. 2012; Song et al. 2013). In forest soil, the abundance of AOB and the nitrification rate have been found to increase with charcoal addition (DeLuca et al. 2006; Ball et al. 2010), which is explained by biochar adsorbing nitrification-inhibiting compounds such as terpenes and phenols (Ball et al. 2010). In contrast, some studies have shown that biochar addition significantly inhibited nitrification, which was attributed to agricultural systems already having high nitrification rates (DeLuca et al. 2006) or to the presence of a nitrification-inhibiting compound (α-pinene) in the biochar (Clough et al. 2010). Given the critical role of AOA and AOB in the soil nitrification process, the effect of biochar in agricultural settings might act indirectly or partially through its impact on the ammonia-oxidizing community itself. Compared with cereal production, greenhouse vegetable systems in China involve more intensive cropping rotations, more frequent irrigation, and much larger nutrient inputs (Shen et al. 2010). This can lead to a series of problems such as soil acidification, salinization, hardening, and nutrient imbalance, causing soil degradation and yield reduction. Moreover, annual N fertilizer inputs are 3–4 times greater in the greenhouse vegetable system than in non-vegetable systems (Ju et al. 2006), whereas nitrogen use efficiency is very low in intensive vegetable soil (He et al. 2006). These problems pose a serious challenge to establishing sustainable intensive vegetable agriculture.
As mentioned above, biochar, with its special physical and chemical properties, can be used as a soil amendment and has significant effects on alleviating soil acidification, improving soil structure, and increasing soil available nutrients and vegetable yield (Chan et al. 2007). However, to our knowledge, little is known about the interaction of biochar and N fertilizer on nitrification and the ammonia-oxidizing microbial community in intensive vegetable soil. Therefore, we performed an incubation experiment to unravel the dynamic response of nitrification and ammonia oxidizers to the single application of N fertilizer (urea or (NH4)2SO4) or biochar and to the combined application of N fertilizer and biochar in a greenhouse vegetable soil. To differentiate the roles of AOA and AOB, quantitative real-time PCR (qPCR) and terminal restriction fragment length polymorphism (T-RFLP) combined with clone libraries were used to determine the abundance and structure of ammonia-oxidizing microbial communities. Soil description and soil sampling Soil was collected from a vegetable greenhouse in the urban–rural transitional area (30°17′ N, 120°13′ E) of Hangzhou City, China. Vegetables have been cultivated intensively for 30–40 years at this site (Chen et al. 2008). The soil was sandy loam (clay 6.2%, silt 33.7%, sand 60.1%) (Chen et al. 2015) with 3.1 g kg−1 total N (TN), 27.6 g kg−1 organic matter (OM) and a pH of 7.0. Soil samples collected from the top layer (0–15 cm) were cleared of debris, air-dried, and ground to pass through a 2-mm sieve. Part of the soil was used to measure chemical properties, and the remainder was preserved for the laboratory incubation experiment. Characterization of biochar The biochar used in this experiment was produced from rice straw carbonized under hypoxic conditions at 600 °C.
The biochar had a pH of 10.2, a total carbon (C) content of 53.7%, a TN content of 1.2%, a total hydrogen (H) content of 1.2%, an H:C ratio of 0.3, a C:N ratio of 53.5, and an ash content of 35.1%. To revive soil microbial activity, the air-dried soil was pre-incubated at 25 °C and 60% of water-holding capacity (WHC) for 2 weeks. The experiment was conducted in 100 mL plastic jars containing 50 g soil with six treatments: control, urea, (NH4)2SO4, 2% biochar, 2% biochar + urea, and 2% biochar + (NH4)2SO4; each treatment was replicated three times. Biochar was added at 2% (w/w, i.e. 20 g kg−1 soil), and N fertilizer was applied at a rate corresponding to 200 mg N kg−1 soil. All jars were then incubated at 25 °C for 48 days. During the incubation period, soil moisture was kept constant at 65% WHC by adding deionized water based on the weighing method. Destructive sampling was performed after 0, 1, 3, 7, 14, 21, 28, 35, 42 and 48 days of incubation. Soil property analysis Soil pH was measured at a soil-to-solution ratio of 1:5 (w/v) with a pH meter. Soil organic matter was determined by the external-heat potassium dichromate oxidation-colorimetric method (Nelson and Sommers 1982). TN contents were measured by the Kjeldahl method. Grain size distribution of the soil samples was measured with a Mastersizer 2000 laser grain-size analyzer (Malvern Instruments, Worcestershire, UK). Soil ammonium N (NH4 +–N) and nitrate N (NO3 −–N) were extracted with 2 mol L−1 KCl solution (soil/KCl, 1:5) and measured with a flow injection analyzer (FIAstar 5000 Analyzer, Foss, Denmark). Soil NH4 +–N and NO3 −–N contents at day 0 were determined using the pre-incubation soil.
The net nitrification rate (n) was calculated following the equation of Persson and Wirén (1995): $$ n\ (\text{mg N kg}^{-1}\ \text{soil day}^{-1}) = \frac{\left(\text{NO}_3^{-}\text{–N}\right)_{t_2} - \left(\text{NO}_3^{-}\text{–N}\right)_{t_1}}{t} $$ where t is the number of days between the two sampling times t1 and t2, and (NO3 −–N)t1 and (NO3 −–N)t2 are the nitrate concentrations at those times. Soil DNA extraction and quantitative real-time PCR of amoA genes Soil DNA was extracted from ~ 0.5 g frozen samples using a FastDNA SPIN Kit for soil (Bio101, Vista, CA) according to the manufacturer's protocol. The DNA was stored at − 20 °C for the molecular analyses described below. The abundances of AOA and AOB were determined by qPCR on a Bio-Rad CFX 1000 real-time PCR machine. Two primer pairs were used for detecting AOA and AOB (Additional file 1: Table S1). Each PCR reaction was performed in a 20-μL mixture containing 1 μL of tenfold-diluted DNA, 0.5 μM of each primer and 10 μL of SYBR Premix EX Taq™, with the following protocol: initial denaturation at 95 °C for 3 min, then 40 cycles of 95 °C for 10 s, 55 °C for 30 s and 72 °C for 40 s. Serial dilutions of linearized plasmids containing cloned amoA genes were used to generate calibration curves. Melting-curve analysis detected only a single peak at the melting temperature (Tm). Only standard curves with PCR efficiencies of 90–110% and correlation coefficients > 0.99 were employed in this study. T-RFLP of amoA genes for ammonia oxidizers For analysis of the ammonia-oxidizer community, T-RFLP analysis was performed on DNA extracted from soil samples of all treatments at day 3. The same primers used in the qPCR, with the forward primer labeled with 6-FAM (6-carboxyfluorescein), were used in the T-RFLP analyses (Additional file 1: Table S1). The AOA and AOB amplicons were digested with the restriction enzymes HpyCH4V (NEB) and MspI (NEB), respectively.
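The net nitrification rate formula and the qPCR standard-curve acceptance criteria above are simple enough to sketch in code. The Python helpers below are our own illustration (function names are not from the study): one applies the Persson and Wirén (1995) rate equation, and the others derive amplification efficiency from a standard-curve slope via E = 10^(−1/slope) − 1 and check it against the 90–110% efficiency and R² > 0.99 window.

```python
def net_nitrification_rate(no3_t1, no3_t2, days):
    """Net nitrification rate (mg N kg^-1 soil day^-1) after Persson & Wiren (1995):
    change in NO3(-)-N between two sampling times divided by the elapsed days."""
    if days <= 0:
        raise ValueError("elapsed time must be positive")
    return (no3_t2 - no3_t1) / days

def qpcr_efficiency(slope):
    """Amplification efficiency from the slope of a standard curve (Cq vs log10 copies).
    Perfect doubling each cycle gives slope ~ -3.32, i.e. efficiency ~ 1.0 (100%)."""
    return 10 ** (-1.0 / slope) - 1.0

def standard_curve_ok(slope, r_squared):
    """Acceptance rule described in the Methods: 90-110% efficiency and R^2 > 0.99."""
    eff = qpcr_efficiency(slope)
    return 0.90 <= eff <= 1.10 and r_squared > 0.99

# Example: NO3(-)-N rose from 50 to 90 mg N kg^-1 over 2 days -> 20 mg N kg^-1 day^-1
print(net_nitrification_rate(50.0, 90.0, 2))   # 20.0
print(standard_curve_ok(-3.32, 0.998))         # True (efficiency ~ 100%)
```

The hypothetical numbers in the example are illustrative only; they are not measurements from the study.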
Fragment size analysis was carried out on an ABI PRISM 3730xl genetic analyzer (Applied Biosystems, Warrington, UK). T-RFs (terminal restriction fragments) longer than 50 bp with relative abundances higher than 1% were kept for cluster analysis, and the remaining fragments were discarded. Cloning and sequencing To identify the main T-RFs, AOB and AOA clone libraries from the control soil were constructed with the same primers used in the qPCR analysis. Clones were generated with a TOPO® TA Cloning kit (Invitrogen, Carlsbad, CA) following the manufacturer's instructions, and sequenced on an ABI 3730xl DNA analyzer (Applied Biosystems). Phylogenetic analyses were conducted with MEGA software (Tamura et al. 2013). The sequences were compared against the GenBank database using the BLAST program. Nucleotide sequences of amoA genes for the clone libraries in this study have been deposited in GenBank under accession numbers MF616026–MF616122. Correlation analysis, analysis of variance (ANOVA) and multiple stepwise linear regression were performed with IBM SPSS Statistics version 21.0. Principal component analysis (PCA) was performed in RStudio with the vegan package. All figures were generated using OriginPro 8.5. Dynamics of NH4 +–N and NO3 −–N concentrations No significant differences in NH4 +–N or NO3 −–N concentrations were found between the biochar-only and control treatments (Fig. 1). The urea and (NH4)2SO4 treatments showed an immediate NH4 + release, with the highest NH4 +–N concentrations (192.9 and 177.7 mg kg−1, respectively) at day 1, followed by a dramatic decline, with no significant difference from the control after 14 days of incubation (Fig. 1a). In contrast, the NO3 −–N concentrations increased rapidly and then remained stable (Fig. 1b).
When biochar and nitrogen fertilizer were applied together, NH4 +–N concentrations decreased faster than with the single application of N fertilizer before day 7, and NO3 −–N concentrations showed a correspondingly greater increase. Dynamics of NH4 +–N (a) and NO3 −–N (b) contents after the application of biochar and nitrogen fertilizer in the greenhouse vegetable soil For the net nitrification rate, there was no significant difference between the biochar-only treatment and the control, whereas the net nitrification rate increased significantly in the nitrogen fertilizer treatments (Fig. 2) (p < 0.05), and was greater in the (NH4)2SO4 treatments (24.8 ± 3.8 and 32.8 ± 1.6 mg kg−1 day−1 at day 3 and day 7, respectively) than in the urea treatments (14.1 ± 2.9 and 23.8 ± 2.2 mg kg−1 day−1 at day 3 and day 7, respectively). Additionally, in the biochar + N fertilizer treatments, the net nitrification rates were significantly higher than with the single application of nitrogen fertilizer at day 3 (p < 0.05). Moreover, the net nitrification rate in the biochar + (NH4)2SO4 treatment (41.9 ± 2.4 mg kg−1 day−1) was significantly higher than in the biochar + urea treatment (31.5 ± 3.0 mg kg−1 day−1). These results indicated that the combined application of nitrogen fertilizer and biochar could enhance nitrification in vegetable soil. Dynamics of net nitrification rate after the application of biochar and nitrogen fertilizer in the greenhouse vegetable soil Abundance of ammonia oxidizers During the incubation, amoA abundance across all treatments ranged from 2.3 × 108 to 5.1 × 108 copies g−1 dry soil for AOA and from 1.0 × 108 to 3.0 × 108 copies g−1 dry soil for AOB (Fig. 3). In this vegetable soil, the abundance of AOA was higher than that of AOB, and the AOA:AOB ratio ranged from 1.0 to 3.8.
During the incubation, there was no significant difference in AOB abundance between the biochar-only treatment and the control. More interestingly, the AOB amoA gene abundances in all the N fertilizer-amended treatments increased significantly (p < 0.05), reaching 1.6–2 times the control at day 3. Furthermore, the AOB amoA gene abundances in the biochar + N fertilizer treatments were higher than those in the N fertilizer-only treatments. Results also showed that the AOB amoA gene abundance was consistently higher in the (NH4)2SO4 treatments than in the urea treatments (Fig. 3a). However, the application of biochar and nitrogen fertilizer had no effect on AOA amoA gene abundance (Fig. 3b). Dynamics of AOA (a) and AOB (b) amoA gene copies after the application of biochar and nitrogen fertilizer in the greenhouse vegetable soil Community compositions of AOA and AOB As shown in Fig. 4, the application of different fertilizers had a great impact on AOB community composition, but only slight variation was observed in AOA. The predominant T-RFs in the AOB community were 60 and 156 bp, which together accounted for over 80% of the total community (Fig. 4a). Except for the 264 bp T-RF, the biochar-only treatment showed no difference from the control in the AOB T-RFLP profiles. With the addition of N fertilizer, the relative abundance of the 60 bp T-RF increased significantly, while that of the 156 bp T-RF decreased (p < 0.05). Moreover, the relative abundances of the 60 bp T-RF were higher in the biochar + N fertilizer treatments than in the N fertilizer-only treatments. Additionally, the 235 bp T-RF was not detected in the biochar + N fertilizer treatments. PCA of the AOB T-RFLP profiles further revealed that the first two axes explained 84.7% of the variation, and that the AOB community composition was significantly influenced by biochar and N fertilizer (Additional file 1: Figure S1).
Results showed that both the biochar and N fertilizer treatments were clearly separated from the control, while the biochar-only, N fertilizer-only and biochar + N fertilizer treatments clustered together. The Shannon and Simpson indices are commonly used to characterize species diversity in a community. For the AOB community, both indices decreased significantly in the (NH4)2SO4-only and biochar + N fertilizer treatments compared with the control (p < 0.05) (Table 1). Overall, our results demonstrated that AOB community diversity shifted significantly in the treatments with combined application of biochar and N fertilizer. Relative abundance (a) and principal component analysis (b) of the ammonia-oxidizing bacterial T-RFs Table 1 The Shannon index (H) and Simpson index (D) of AOB and AOA community structure diversity Phylogeny of AOA and AOB The AOB phylogenetic tree showed that the AOB sequences fell into five distinct clusters, Nitrosospira clusters 3a, 3b, 3c, cluster 0 and Nitrosospira sp. Nsp65, all affiliated with the genus Nitrosospira (Fig. 5). The 156 bp T-RF mainly belonged to Nitrosospira cluster 3c, with the remaining 21% belonging to Nitrosospira sp. Nsp65 and cluster 0. The 60 bp T-RF was distributed across various Nitrosospira clusters. The T-RFs of 235 and 256 bp were affiliated with Nitrosospira sp. Nsp65 and cluster 3a, respectively. Neighbor-joining phylogenetic tree of a bacterial amoA sequences and b archaeal amoA sequences retrieved from the vegetable soil. Sequences from this study are shown in bold and described as clone name (accession number) T-RF size. Reference sequences are described as clone name (environment, accession number). Bootstrap values (> 50%) are indicated at branch points. The scale bar represents 5% estimated sequence divergence.
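The Shannon and Simpson indices reported in Table 1 can be computed from a T-RF profile in a few lines. The sketch below is our own illustration with hypothetical peak areas: it applies the filtering rule from the Methods (T-RFs longer than 50 bp with relative abundance above 1%), renormalizes, and computes H = −Σ p·ln p and, under one common convention, Simpson's D = 1 − Σ p².

```python
import math

def filter_trfs(profile, min_size=50, min_frac=0.01):
    """Keep T-RFs longer than min_size bp whose relative abundance exceeds
    min_frac, then renormalize the retained abundances so they sum to 1."""
    total = sum(profile.values())
    kept = {size: a for size, a in profile.items()
            if size > min_size and a / total > min_frac}
    s = sum(kept.values())
    return {size: a / s for size, a in kept.items()}

def shannon(p):
    """Shannon index H = -sum(p * ln p) over relative abundances."""
    return -sum(x * math.log(x) for x in p.values() if x > 0)

def simpson(p):
    """Simpson index, here taken as D = 1 - sum(p^2) (one common convention)."""
    return 1.0 - sum(x * x for x in p.values())

# Hypothetical AOB profile: T-RF size (bp) -> peak area (not data from the study)
profile = {60: 500, 156: 300, 235: 120, 264: 75, 48: 40, 310: 5}
p = filter_trfs(profile)   # drops the 48 bp (too short) and 310 bp (<1%) T-RFs
print(sorted(p))           # [60, 156, 235, 264]
print(shannon(p), simpson(p))
```

A more even distribution of T-RFs raises both indices, so the drop reported in Table 1 corresponds to one T-RF (60 bp) coming to dominate the profile.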
The accession numbers in GenBank are MF616026–MF616122. The AOA phylogenetic analysis indicated that the AOA sequences of the vegetable soil (98%) were highly homologous to Nitrososphaera gargensis, which belongs to group 1.1b, with only one sequence belonging to group 1.1a. The T-RFs of 197, 299 and 63 bp were closely aligned with group 1.1b, and the 283 bp T-RF was affiliated with both group 1.1a and group 1.1b. In this study, no significant differences in NH4 +–N or NO3 −–N concentrations or net nitrification rates were observed between the biochar-only and control treatments (Fig. 1), indicating that biochar addition had little effect on soil nitrification in the absence of N fertilizer. This may be attributed to the limited NH4 + in the collected soil sample, which is the substrate for ammonia-oxidizing microorganisms, or to the carbon-rich but nutrient-poor nature of the biochar (Alburquerque et al. 2013). As expected, soil NH4 +–N concentrations increased significantly after the addition of N fertilizer, then decreased sharply to an equilibrium at day 7 or day 14 (Fig. 1a). In contrast, the NO3 −–N concentrations increased rapidly and correspondingly reached an equilibrium (Fig. 1b). These results suggested that, when N fertilizer was added, the conversion of ammonium to nitrate for crop growth via nitrification was immediate and rapid. Additionally, the net nitrification rate reached up to 42 ± 3 mg kg−1 day−1 in this study (Fig. 2), indicating that N fertilizer markedly promoted nitrification in the vegetable soil via ammonia-oxidizing microorganisms.
Moreover, the NO3 −–N concentrations and net nitrification rates in the biochar + N fertilizer treatments were higher than in the N fertilizer-only treatments, reaching significantly higher levels at day 3 and day 7, respectively (p < 0.05), demonstrating that the combined application of N fertilizer and biochar had a synergistic effect on soil nitrification. Our findings are consistent with previous studies indicating that crop growth was stimulated by the combined application of biochar and mineral fertilizer (Asai et al. 2009; Schulz and Glaser 2012; Van Zwieten et al. 2010). The ability of biochar to promote nitrification could be attributed to its adsorption of nitrification-inhibiting substances such as phenols and terpenes (Ball et al. 2010; Berglund et al. 2004; DeLuca et al. 2006). Furthermore, biochar can significantly increase soil organic carbon, resulting in a high carbon-to-nitrogen ratio, which can enhance soil nitrification and improve the bioavailability of nitrogen (Clough et al. 2013). In addition, Zhao et al. (2013) found that soil pH increased significantly after the combined application of biochar and N fertilizer in an acid agricultural soil, an effect that grew with the amount of biochar applied. The abundance of AOB increased significantly in the N fertilizer treatments (p < 0.05), in line with the higher net nitrification rates in these soils (Figs. 2 and 3a). Additionally, a significantly positive correlation was observed between net nitrification rate and AOB abundance, but not AOA abundance (r = 0.829**, p < 0.01) (Additional file 1: Table S2); permutational multivariate analyses also showed that AOB abundance was positively related to treatment and incubation time (Additional file 1: Table S3).
This suggested that the increased abundance of AOB played a more direct positive role than AOA in soil nitrification under N fertilization, consistent with previous reports in natural and alkaline soils (Jia and Conrad 2009; Shen et al. 2008). Moreover, higher AOB abundance was observed in the N fertilizer + biochar treatments than in the N fertilizer-only treatments (Fig. 3a), indicating that the combined application of N fertilizer and biochar could enhance nitrification by increasing the abundance of AOB in this vegetable soil. However, the effects of biochar on ammonia oxidizers differ among soils. For example, with biochar addition in a coastal saline soil, higher AOA abundance increased the soil ammonia oxidation rate (Song et al. 2013), whereas Prommer et al. (2014) found that biochar boosted both AOA and AOB abundances in agricultural soil, leading to enhanced soil potential nitrification rates. Overall, the increase in ammonia-oxidizing microorganisms associated with biochar addition may be due to the following four reasons. Firstly, the large surface area and highly porous structure of biochar, with its water-holding capacity and nutrient retention, could provide resources for the specific metabolic needs of microorganisms (Steinbeiss et al. 2009). Secondly, biochar could improve living conditions for biota by increasing pH in acid soils (Ball et al. 2010). Thirdly, the carbon and nitrogen supplied by biochar improve soil fertility. Finally, biochar might adsorb substances that inhibit nitrification, such as polyphenols or tannins (Ball et al. 2010; DeLuca et al. 2006). Nevertheless, some studies have reported that biochar addition had no effect, or even a negative effect, on soil nitrification (Clough et al. 2010; Spokas and Reicosky 2009). This may be due to the release of nitrification inhibitors such as ethylene and α-pinene from biochar, reducing the activity of soil ammonia-oxidizing microorganisms (Berglund et al.
2004; DeLuca et al. 2006), which depends on the parent materials and the conditions under which the biochar was produced. The T-RFLP analysis showed that the AOB community structures varied among treatments (Fig. 5). The dominance of Nitrosospira cluster 3 (contributing 62% of the sequences) in AOB indicated that Nitrosospira cluster 3 played an important role in nitrification (Shen et al. 2011), whereas Nitrosomonas was not detected in the greenhouse vegetable soil (Fig. 5b). This is consistent with a previous study in which Nitrosospira cluster 3 also dominated a long-term-fertilized sandy loam soil (Chu et al. 2007). Our results also revealed that the community structure and diversity of AOB in vegetable soil were significantly altered by the combined application of biochar and N fertilizer, but not by the single application of biochar (Fig. 4). Compared with the control, the relative abundance of the 60 bp T-RF, belonging to Nitrosospira cluster 3a and cluster 0, increased from 36.5 to 64.6% in the biochar + (NH4)2SO4 treatment, suggesting that the combined application of biochar and N fertilizer stimulated the growth of the related clusters. However, the 156 bp T-RF, belonging to Nitrosospira cluster 3c, showed a significant decrease. Dempster et al. (2011) also found that AOB community shifts occurred in biochar + N fertilizer treatments but not in the biochar-only treatment. Moreover, the significant decrease in the Shannon and Simpson indices also reflected the reduction in AOB community diversity with the combined addition of biochar and N fertilizer in this vegetable soil (Table 1). The dominant AOA were affiliated with group 1.1b, which contained 98% of the sequences. Although many studies have indicated that AOA are the primary drivers of nitrification (Chen et al. 2008; He et al. 2007; Leininger et al.
2006), in our study there were no discernible changes in the AOA community after the combined application of N fertilizer and biochar, indicating that the AOB community was more sensitive to fertilization practice in vegetable soil. In agreement with our findings, extensive research has observed a pronounced effect of fertilizer on the AOB community rather than on AOA (Ai et al. 2013; Fan et al. 2011; Xia et al. 2011). In general, the difference in the ecological niches of AOA and AOB is caused by their dissimilar sensitivities to soil properties (Shen et al. 2008): AOB are considered to dominate in neutral, "nutrient-rich" environments, whereas AOA are better adapted to low-pH, "nutrient-poor" environments (Schauss et al. 2009). Furthermore, the significant shift in the AOB community observed in this study may result from the increase in soil pH and nutrient contents after the addition of biochar (pH 10.2) and N fertilizer. In conclusion, our results revealed that N fertilizer combined with biochar significantly stimulated soil nitrification and shifted the AOB abundance and community. T-RFLP of AOB indicated that the combined application of N fertilizer and biochar significantly increased the 60 bp T-RF (Nitrosospira cluster 3a and cluster 0) but decreased the 156 bp T-RF (Nitrosospira cluster 3c). In contrast, there were no visible changes in the AOA community. Moreover, the positive correlation between net nitrification rate and AOB abundance indicated that AOB rather than AOA were the dominant ammonia oxidizers driving soil nitrification in intensive vegetable soil. This has important implications: the combined use of N fertilizer and biochar can help promote nitrogen use efficiency.
AOA: ammonia-oxidizing archaea; AOB: ammonia-oxidizing bacteria; qPCR: quantitative real-time PCR; T-RFLP: terminal restriction fragment length polymorphism; NH4 +: ammonium; NO3 −: nitrate; WHC: water-holding capacity; T-RFs: terminal restriction fragments; ANOVA: analysis of variance; PCA: principal component analysis. Ai C, Liang G, Sun J, Wang X, He P, Zhou W (2013) Different roles of rhizosphere effect and long-term fertilization in the activity and community structure of ammonia oxidizers in a calcareous fluvo-aquic soil. Soil Biol Biochem 57:30–42. https://doi.org/10.1016/j.soilbio.2012.08.003 Alburquerque JA, Salazar P, Barrón V, Torrent J, del Campillo MdC, Gallardo A, Villar R (2013) Enhanced wheat yield by biochar addition under different mineral fertilization levels. Agron Sustain Dev 33(3):475–484. https://doi.org/10.1007/s13593-012-0128-3 Asai H, Samson BK, Stephan HM, Songyikhangsuthor K, Homma K, Kiyono Y, Inoue Y, Shiraiwa T, Horie T (2009) Biochar amendment techniques for upland rice production in Northern Laos. Field Crops Res 111(1–2):81–84. https://doi.org/10.1016/j.fcr.2008.10.008 Ball PN, MacKenzie MD, DeLuca TH, Montana WEH (2010) Wildfire and charcoal enhance nitrification and ammonium-oxidizing bacterial abundance in dry montane forest soils. J Environ Qual 39(4):1243. https://doi.org/10.2134/jeq2009.0082 Berglund LM, DeLuca TH, Zackrisson O (2004) Activated carbon amendments to soil alters nitrification rates in Scots pine forests. Soil Biol Biochem 36(12):2067–2073. https://doi.org/10.1016/j.soilbio.2004.06.005 Chan KY, Van Zwieten L, Meszaros I, Downie A, Joseph S (2007) Agronomic values of greenwaste biochar as a soil amendment. Aust J Soil Res 45(8):629. https://doi.org/10.1071/sr07109 Chen T, Liu X, Zhu M, Zhao K, Wu J, Xu J, Huang P (2008) Identification of trace element sources and associated risk assessment in vegetable soils of the urban-rural transitional area of Hangzhou, China. Environ Pollut 151(1):67–78.
https://doi.org/10.1016/j.envpol.2007.03.004 Chen Q, Qi L, Bi Q, Dai P, Sun D, Sun C, Liu W, Lu L, Ni W, Lin X (2015) Comparative effects of 3,4-dimethylpyrazole phosphate (DMPP) and dicyandiamide (DCD) on ammonia-oxidizing bacteria and archaea in a vegetable soil. Appl Microbiol Biotechnol 99(1):477–487. https://doi.org/10.1007/s00253-014-6026-7 Chu H, Fujii T, Morimoto S, Lin X, Yagi K, Hu J, Zhang J (2007) Community structure of ammonia-oxidizing bacteria under long-term application of mineral fertilizer and organic manure in a sandy loam soil. Appl Environ Microbiol 73(2):485–491. https://doi.org/10.1128/AEM.01536-06 Clough TJ, Bertram JE, Ray JL, Condron LM, O'Callaghan M, Sherlock RR, Wells NS (2010) Unweathered wood biochar impact on nitrous oxide emissions from a bovine-urine-amended pasture soil. Soil Sci Soc Am J 74(3):852. https://doi.org/10.2136/sssaj2009.0185 Clough T, Condron L, Kammann C, Müller C (2013) A review of biochar and soil nitrogen dynamics. Agronomy 3(2):275–293. https://doi.org/10.3390/agronomy3020275 DeLuca TH, MacKenzie MD, Gundale MJ, Holben WE (2006) Wildfire-produced charcoal directly influences nitrogen cycling in ponderosa pine forests. Soil Sci Soc Am J 70(2):448. https://doi.org/10.2136/sssaj2005.0096 Dempster DN, Gleeson DB, Solaiman ZM, Jones DL, Murphy DV (2011) Decreased soil microbial biomass and nitrogen mineralisation with Eucalyptus biochar addition to a coarse textured soil. Plant Soil 354(1–2):311–324. https://doi.org/10.1007/s11104-011-1067-5 Fan F, Yang Q, Li Z, Wei D, Cui X, Liang Y (2011) Impacts of organic and inorganic fertilizers on nitrification in a cold climate soil are linked to the bacterial ammonia oxidizer community. Microb Ecol 62(4):982–990. https://doi.org/10.1007/s00248-011-9897-5 He F, Chen Q, Jiang R, Chen X, Zhang F (2006) Yield and nitrogen balance of greenhouse tomato (Lycopersicum esculentum Mill.) with conventional and site-specific nitrogen management in Northern China. 
Nutr Cycl Agroecosyst 77(1):1–14. https://doi.org/10.1007/s10705-006-6275-7 He JZ, Shen JP, Zhang LM, Zhu YG, Zheng Ym YM, Di Xu MG (2007) Quantitative analyses of the abundance and composition of ammonia-oxidizing bacteria and ammonia-oxidizing archaea of a Chinese upland red soil under long-term fertilization practices. Environ Microbiol 9(12):3152. https://doi.org/10.1111/j.1462-2920.2007.01481.x Jia Z, Conrad R (2009) Bacteria rather than Archaea dominate microbial ammonia oxidation in an agricultural soil. Environ Microbiol 11(7):1658–1671. https://doi.org/10.1111/j.1462-2920.2009.01891.x Jin T, Zhang T, Yan Q (2010) Characterization and quantification of ammonia-oxidizing archaea (AOA) and bacteria (AOB) in a nitrogen-removing reactor using T-RFLP and qPCR. Appl Microbiol Biotechnol 87(3):1167–1176. https://doi.org/10.1007/s00253-010-2595-2 Ju XT, Kou CL, Zhang FS, Christie P (2006) Nitrogen balance and groundwater nitrate contamination: comparison among three intensive cropping systems on the North China Plain. Environ Pollut 143(1):117–125. https://doi.org/10.1016/j.envpol.2005.11.005 Kowalchuk GA, Stephen JR (2001) Ammonia-oxidizing bacteria: a model for molecular microbial ecology. Annu Rev Microbiol 55(1):485–529 Lehmann J (2007) Bio-energy in the black. Front Ecol Environ 5(7):381–387 Lehmann J, Joseph S (2015) Biochar for environmental management: science, technology and implementation. Routledge, Abingdon Lehmann J, Gaunt J, Rondon M (2006) Bio-char sequestration in terrestrial ecosystems—a review. Mitig Adapt Strat Glob Change 11(2):395–419. https://doi.org/10.1007/s11027-005-9006-5 Lehmann J, Rillig MC, Thies J, Masiello CA, Hockaday WC, Crowley D (2011) Biochar effects on soil biota—a review. Soil Biol Biochem 43(9):1812–1836. https://doi.org/10.1016/j.soilbio.2011.04.022 Leininger S, Urich T, Schloter M, Schwark L, Qi J, Nicol GW, Prosser JI, Schuster SC, Schleper C (2006) Archaea predominate among ammonia-oxidizing prokaryotes in soils. 
Nature 442(7104):806–809. https://doi.org/10.1038/nature04983 Li M, Gu JD (2013) Community structure and transcript responses of anammox bacteria, AOA, and AOB in mangrove sediment microcosms amended with ammonium and nitrite. Appl Microbiol Biotechnol 97(22):9859–9874. https://doi.org/10.1007/s00253-012-4683-y Nelissen V, Rütting T, Huygens D, Staelens J, Ruysschaert G, Boeckx P (2012) Maize biochars accelerate short-term soil nitrogen dynamics in a loamy sand soil. Soil Biol Biochem 55:20–27. https://doi.org/10.1016/j.soilbio.2012.05.019 Nelson D, Sommers L (1982) Total carbon, organic carbon, and organic matter, methods of soil analysis, Part 2. Chemical and Microbiological Properties. p 539–580 Pan F, Chapman SJ, Li Y, Yao H (2017) Straw amendment to paddy soil stimulates denitrification but biochar amendment promotes anaerobic ammonia oxidation. J Soils Sediments. https://doi.org/10.1007/s11368-017-1694-4 Persson T, Wirén A (1995) Nitrogen mineralization and potential nitrification at different depths in acid forest soils. Plant Soil 168(1):55–65 Prommer J, Wanek W, Hofhansl F, Trojan D, Offre P, Urich T, Schleper C, Sassmann S, Kitzler B, Soja G, Hood-Nowotny RC (2014) Biochar decelerates soil organic nitrogen cycling but stimulates soil nitrification in a temperate arable field trial. PLoS ONE 9(1):e86388. https://doi.org/10.1371/journal.pone.0086388.g001 Rondon MA, Lehmann J, Ramírez J, Hurtado M (2007) Biological nitrogen fixation by common beans (Phaseolus vulgaris L.) increases with bio-char additions. Biol Fertil Soils 43(6):699–708 Schauss K, Focks A, Leininger S, Kotzerke A, Heuer H, Thiele-Bruhn S, Sharma S, Wilke BM, Matthies M, Smalla K, Munch JC, Amelung W, Kaupenjohann M, Schloter M, Schleper C (2009) Dynamics and functional relevance of ammonia-oxidizing archaea in two agricultural soils. Environ Microbiol 11(2):446–456. 
https://doi.org/10.1111/j.1462-2920.2008.01783.x Schulz H, Glaser B (2012) Effects of biochar compared to organic and inorganic fertilizers on soil quality and plant growth in a greenhouse experiment. J Plant Nutr Soil Sci 175(3):410–422. https://doi.org/10.1002/jpln.201100143 Shen JP, Zhang LM, Zhu YG, Zhang JB, He JZ (2008) Abundance and composition of ammonia-oxidizing bacteria and ammonia-oxidizing archaea communities of an alkaline sandy loam. Environ Microbiol 10(6):1601–1611. https://doi.org/10.1111/j.1462-2920.2008.01578.x Shen W, Lin X, Shi W, Min J, Gao N, Zhang H, Yin R, He X (2010) Higher rates of nitrogen fertilization decrease soil enzyme activities, microbial functional diversity and nitrification capacity in a Chinese polytunnel greenhouse vegetable land. Plant Soil 337(1–2):137–150. https://doi.org/10.1007/s11104-010-0511-2 Shen W, Lin X, Gao N, Shi W, Min J, He X (2011) Nitrogen fertilization changes abundance and community composition of ammonia-oxidizing bacteria. Soil Sci Soc Am J 75(6):2198–2205 Singh BP, Hatton BJ, Singh B, Cowie AL, Kathuria A (2010) Influence of biochars on nitrous oxide emission and nitrogen leaching from two contrasting soils. J Environ Qual 39(4):1224–1235 Song Y, Zhang X, Ma B, Chang SX, Gong J (2013) Biochar addition affected the dynamics of ammonia oxidizers and nitrification in microcosms of a coastal alkaline soil. Biol Fertil Soils 50(2):321–332. https://doi.org/10.1007/s00374-013-0857-8 Spokas KA, Reicosky DC (2009) Impacts of sixteen different biochars on soil greenhouse gas production. Ann Environ Sci 3:179–193 Steinbeiss S, Gleixner G, Antonietti M (2009) Effect of biochar amendment on soil carbon balance and soil microbial activity. Soil Biol Biochem 41(6):1301–1310. https://doi.org/10.1016/j.soilbio.2009.03.016 Tamura K, Stecher G, Peterson D, Filipski A, Kumar S (2013) MEGA6: molecular evolutionary genetics analysis version 6.0. Mol Biol Evol 30(12):2725–2729. 
https://doi.org/10.1093/molbev/mst197 Van Zwieten L, Kimber S, Downie A, Morris S, Petty S, Rust J, Chan KY (2010) A glasshouse study on the interaction of low mineral ash biochar with nitrogen in a sandy soil. Soil Res 48(7):569–576 Xia W, Zhang C, Zeng X, Feng Y, Weng J, Lin X, Zhu J, Xiong Z, Xu J, Cai Z, Jia Z (2011) Autotrophic growth of nitrifying community in an agricultural soil. ISME J 5(7):1226–1236. https://doi.org/10.1038/ismej.2011.5 Xu HJ, Wang XH, Li H, Yao HY, Su JQ, Zhu YG (2014) Biochar impacts soil microbial community composition and nitrogen cycling in an acidic soil planted with rape. Environ Sci Technol 48(16):9391–9399. https://doi.org/10.1021/es5021058 Zhao X, Wang S, Xing G (2013) Nitrification, acidification, and nitrogen leaching from subtropical cropland soils as affected by rice straw-based biochar: laboratory incubation and column leaching studies. J Soils Sediments 14(3):471–482. https://doi.org/10.1007/s11368-013-0803-2 Zheng H, Wang Z, Deng X, Herbert S, Xing B (2013) Impacts of adding biochar on nitrogen retention and bioavailability in agricultural soil. Geoderma 206:32–39. https://doi.org/10.1016/j.geoderma.2013.04.018 XYL and QHC designed the experiments. QFB, QHC, PBD, KJL and WWZ performed the experiments. QFB and QHC analyzed the data. QFB wrote the manuscript. XRY, HL, BXZ and XYL revised the paper. All authors read and approved the final manuscript. The authors declare that they have no competing interests. The datasets supporting the conclusions of this article are included within the article and its Additional file 1. This article does not contain any individual person's data. Ethical approval and consent to participate This article does not contain any studies with human participants or animals performed by any of the authors. 
This research was supported by the National Natural Science Foundation of China (41571130061), Strategic Priority Research Program of the Chinese Academy of Sciences (B) (XDB15020402), and the Program for S&T Cooperation Project of Zhejiang Province (CTZB-F150922AWZ-SNY1(2)). Qing-Fang Bi and Qiu-Hui Chen contributed equally to this work Key Laboratory of Subtropical Soil Science and Plant Nutrition of Zhejiang Province, College of Environmental & Resource Sciences, Zhejiang University, Hangzhou, 310058, China Qing-Fang Bi , Wei-Wei Zhou & Xian-Yong Lin MOE Key Laboratory of Environment Remediation and Ecological Health, College of Environmental & Resource Sciences, Zhejiang University, Hangzhou, 310058, China Ke-Jie Li Key Lab of Urban Environment and Health, Institute of Urban Environment, Chinese Academy of Sciences, Xiamen, 361021, China , Xiao-Ru Yang , Hu Li & Bang-Xiao Zheng Nanjing Institute of Environmental Sciences, Ministry of Environmental Protection, Nanjing, 210042, China Qiu-Hui Chen Zhejiang Agricultural Technology Extension Center, Hangzhou, 310020, China Xiao-Xia Liu Department of Applied Engineering, Zhejiang Economic and Trade Polytechnic, Hangzhou, 310018, China Pei-Bin Dai Search for Qing-Fang Bi in: Search for Qiu-Hui Chen in: Search for Xiao-Ru Yang in: Search for Hu Li in: Search for Bang-Xiao Zheng in: Search for Wei-Wei Zhou in: Search for Xiao-Xia Liu in: Search for Pei-Bin Dai in: Search for Ke-Jie Li in: Search for Xian-Yong Lin in: Correspondence to Xian-Yong Lin. 13568_2017_498_MOESM1_ESM.docx Additional file 1: Table S1. Primers of AOA and AOB used for molecular analyses. Table S2. Pearson correlation between the abundance of AOA/AOB and net nitrification rate. Table S3. Permutational multivariate analyses for the effects of different treatment (Treat) and incubation time (Time) on the abundance of AOA and AOB. Figure S1. 
The principal coordinates analysis (PCA) of AOB T-RFs in vegetable soils treated with urea, (NH4)2SO4, biochar, biochar + urea, biochar + (NH4)2SO4 based on Bray-Curtis distance. Bi, Q., Chen, Q., Yang, X. et al. Effects of combined application of nitrogen fertilizer and biochar on the nitrification and ammonia oxidizers in an intensive vegetable soil. AMB Expr 7, 198 (2017) doi:10.1186/s13568-017-0498-7 Nitrification Ammonia-oxidizing community Vegetable soil
KEN MONKS ∎ Professor of Mathematics ∎ [email protected] Over the years I have mentored several undergraduate and high school student research projects in mathematics. Below are the papers that were written by my students, many of which have been published. Kraft, Benjamin & Monks, Keenan (my son), On Conjugacies of the 3x+1 Map Induced by Continuous Endomorphisms of the Shift Dynamical System, Discrete Mathematics, Vol. 310 (2010), 1875-1883. Monks, Maria (my daughter), Endomorphisms of the shift dynamical system, discrete derivatives, and applications, Discrete Mathematics, Vol. 309, Issue 16 (2009), 5196-5205. Monks, Ken M. (my son), The Sufficiency of Arithmetic Progressions for the 3x+1 Conjecture, Proc. Amer. Math. Soc., 134 (10), October (2006), 2861-2872 Fusaro, Marc, A Visual Representation of Sequence Space, Pi Mu Epsilon Journal, 10 (6), Spring 1997, 466-481 Marc's paper is the winner of the 1997 MAA EPADEL section Student Paper Competition. Marc's paper was also awarded a 1997 Richard V. Andree award after the officers and councilors of Pi Mu Epsilon judged it to be one of the best three papers to have appeared in the Pi Mu Epsilon Journal during 1997. Joseph, John, A Chaotic Extension of the 3x+1 Function to $\mathbb{Z}_{(2)}$, Fibonacci Quarterly, 36.4 (Aug 1998), 309-316 John's paper was the winner of the 1996 MAA EPADEL section Student Paper Competition. Farruggia, C., Lawrence, M., Waterhouse, B., The Elimination of a Family of Periodic Parity Vectors in the 3x+1 Problem, Pi Mu Epsilon Journal, 10 (4), Spring (1996), 275-280 Their paper was awarded a 1996 Richard V. Andree award after the officers and councilors of Pi Mu Epsilon judged it to be one of the best four papers to have appeared in the Pi Mu Epsilon Journal during 1996. 
Schneider, Eric, On the Workday Number for Finite Multigraphs in a Variation of Cops and Robbers, high school research project, Summer 2013 Yazinski, Jonathan, Pseudoperiodicity and the 3x+1 Conjugacy Function, honors thesis, Spring 2003 Riggi, Carla, Hutchinson Operators in $\mathbb{R}^3$, honors thesis, Spring 2001 Carla also wrote some 3D fractal software in Maple and the Maple help file for her software. Kucinski, Gina, Cycles for the 3x+1 Map on the Gaussian Integers, FSRP project, Spring 2000 Fraboni, Mike, Conjugacy and the 3x+1 Conjecture, FSRP project. Mike's paper was the winner of the 1998 MAA EPADEL section Student Paper Competition. If you are a student at the University of Scranton and might be interested in doing such a project, please contact Dr. Monks for further information. 
Journal of Modern Dynamics January 2007, Volume 1, Issue 1 Open problems in dynamics and related fields Alexander Gorodnik 2007, 1(1): 1-35 doi: 10.3934/jmd.2007.1.1 The paper discusses a number of open questions, which were collected during the AIM workshop "Emerging applications of measure rigidity". The main emphasis is made on the rigidity problems in the theory of dynamical systems and their connections with Diophantine approximation, arithmetic geometry, and quantum chaos. Alexander Gorodnik. Open problems in dynamics and related fields. Journal of Modern Dynamics, 2007, 1(1): 1-35. doi: 10.3934/jmd.2007.1.1. On the cohomological equation for nilflows Livio Flaminio and Giovanni Forni 2007, 1(1): 37-60 doi: 10.3934/jmd.2007.1.37 Let $X$ be a vector field on a compact connected manifold $M$. An important question in dynamical systems is to know when a function $g: M\to \mathbb{R}$ is a coboundary for the flow generated by $X$, i.e., when there exists a function $f: M\to \mathbb{R}$ such that $Xf=g$. In this article we investigate this question for nilflows on nilmanifolds. We show that there exist countably many independent Schwartz distributions $D_n$ such that any sufficiently smooth function $g$ is a coboundary iff it belongs to the kernel of all the distributions $D_n$. Livio Flaminio, Giovanni Forni. On the cohomological equation for nilflows. Journal of Modern Dynamics, 2007, 1(1): 37-60. doi: 10.3934/jmd.2007.1.37. The first cohomology of parabolic actions for some higher-rank abelian groups and representation theory David Mieczkowski We make use of representation theory to study the first smooth almost-cohomology of some higher-rank abelian actions by parabolic operators. First, let $N$ be the upper-triangular group of $SL(2,\mathbb{C})$, $\Gamma$ any lattice and $\pi = L^2(SL(2,\mathbb{C})$/$\Gamma)$ the usual left-regular representation. 
We show that the first smooth almost-cohomology group $H_a^1(N, \pi) \simeq H_a^1(SL(2,\mathbb{C}), \pi)$. In addition, we show that the first smooth almost-cohomology of actions of certain higher-rank abelian groups $A$ acting by left translation on $(SL(2,\mathbb{R}) \times G)$/$\Gamma$ trivialize, where $G = SL(2,\mathbb{R})$ or $SL(2,\mathbb{C})$ and $\Gamma$ is any irreducible lattice. The abelian groups $A$ are generated by various mixtures of the diagonal and/or unipotent generators on each factor. As a consequence, for these examples we prove that the only smooth time changes for these actions are the trivial ones (up to an automorphism). David Mieczkowski. The first cohomology of parabolic actions for some higher-rank abelian groups and representation theory. Journal of Modern Dynamics, 2007, 1(1): 61-92. doi: 10.3934/jmd.2007.1.61. Entropy is the only finitely observable invariant Donald Ornstein and Benjamin Weiss 2007, 1(1): 93-105 doi: 10.3934/jmd.2007.1.93 Our main purpose is to present a surprising new characterization of the Shannon entropy of stationary ergodic processes. We will use two basic concepts: isomorphism of stationary processes and a notion of finite observability, and we will see how one is led, inevitably, to Shannon's entropy. A function $J$ with values in some metric space, defined on all finite-valued, stationary, ergodic processes is said to be finitely observable (FO) if there is a sequence of functions $S_{n}(x_{1},x_{2},...,x_{n})$ that for all processes $\mathcal{X}$ converges to $J(\mathcal{X})$ for almost every realization $x_{1}^{\infty}$ of $\mathcal{X}$. It is called an invariant if it returns the same value for isomorphic processes. We show that any finitely observable invariant is necessarily a continuous function of the entropy. Several extensions of this result will also be given. Donald Ornstein, Benjamin Weiss. Entropy is the only finitely observable invariant. 
Journal of Modern Dynamics, 2007, 1(1): 93-105. doi: 10.3934/jmd.2007.1.93. A dichotomy between discrete and continuous spectrum for a class of special flows over rotations Bassam Fayad and A. Windsor 2007, 1(1): 107-122 doi: 10.3934/jmd.2007.1.107 We provide sufficient conditions on a positive function so that the associated special flow over any irrational rotation is either weak mixing or $L^2$-conjugate to a suspension flow. This gives the first such complete classification within the class of Liouville dynamics. This rigidity coexists with a plethora of pathological behaviors. Bassam Fayad, A. Windsor. A dichotomy between discrete and continuous spectrum for a class of special flows over rotations. Journal of Modern Dynamics, 2007, 1(1): 107-122. doi: 10.3934/jmd.2007.1.107. Measure rigidity beyond uniform hyperbolicity: invariant measures for Cartan actions on tori Boris Kalinin and Anatole Katok We prove that every smooth action $\alpha$ of $\mathbb{Z}^k, k\ge 2$, on the $(k+1)$-dimensional torus whose elements are homotopic to corresponding elements of an action $\alpha_0$ by hyperbolic linear maps preserves an absolutely continuous measure. This is the first known result concerning abelian groups of diffeomorphisms where existence of an invariant geometric structure is obtained from homotopy data. We also show that both ergodic and geometric properties of such a measure are very close to the corresponding properties of the Lebesgue measure with respect to the linear action $\alpha_0$. Boris Kalinin, Anatole Katok. Measure rigidity beyond uniform hyperbolicity: invariant measures for Cartan actions on tori. Journal of Modern Dynamics, 2007, 1(1): 123-146. doi: 10.3934/jmd.2007.1.123. The Hopf argument Yves Coudène 2007, 1(1): 147-153 doi: 10.3934/jmd.2007.1.147 Let $T$ be a measure-preserving transformation of a metric space $X$. 
Assume $T$ is conservative and $X$ can be covered by a countable family of open sets, each of finite measure. Then any eigenfunction is invariant with respect to the stable foliation of $T$. Yves Coudène. The Hopf argument. Journal of Modern Dynamics, 2007, 1(1): 147-153. doi: 10.3934/jmd.2007.1.147.
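The Ornstein–Weiss abstract above centers on sequences of functions $S_{n}(x_{1},...,x_{n})$ computed from a single realization of a process. The simplest concrete instance of such a sequence is the plug-in block-entropy estimator, which for an i.i.d. source converges to the Shannon entropy. The sketch below is my own illustration of that idea, not code from the paper:

```python
# Plug-in block-entropy estimator: a concrete example of a "finitely
# observable" sequence S_n(x_1, ..., x_n). For an i.i.d. source it
# converges to the Shannon entropy. Illustration only, not from the paper.
import math
import random

def empirical_entropy(xs, k=1):
    """(1/k) * entropy in bits of the empirical distribution of
    length-k blocks of the sequence xs."""
    counts = {}
    for i in range(len(xs) - k + 1):
        block = tuple(xs[i:i + k])
        counts[block] = counts.get(block, 0) + 1
    n = sum(counts.values())
    return -sum(c / n * math.log2(c / n) for c in counts.values()) / k

random.seed(0)
coin = [random.randint(0, 1) for _ in range(100_000)]  # fair-coin process
print(round(empirical_entropy(coin, k=3), 3))  # close to 1.0 bit
```

The theorem quoted above is much stronger than this example: it says that, up to continuous reparametrization, entropy is the *only* isomorphism invariant that can be estimated this way.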
Hot answers tagged terminology Why do we call Tycho Brahe by his first name? The short answer is that this is how he referred to himself. He was born Tyge Otteson Brahe but at the age of 15 (1561) changed 'Tyge' to Latinized 'Tycho', see Redd's biography of Tycho and Thoren's book Lord of Uraniborg. Johannes Müller, a well-known German astronomer, wrote under 'Regio Monte', which Melanchthon, an educational authority at the Copenhagen ... terminology astronomy brahe astronomers Taking a look at her 1933 paper in Mathematische Annalen one sees: Similarly for a 1923 paper: From a glance at a few other papers, she (or all the journals) used "Noether" for her last name. Further, since the titles clearly indicate that the journals are quite happy with umlauts it is not by accident. Jon Custer Contributions to chemistry from medieval Arabia Jabir ibn Hayyan was the first to describe processes such as liquefaction, crystallisation, distillation, purification, oxidisation, evaporation and filtration. He also did an early classification of chemical elements around their properties which seems pertinent, and noted that "a certain quantity of acid is necessary in order to neutralize a given amount ... terminology chemistry middle-ages islamic-science VicAche Why did angular momentum get the letter L I just want to comment that the agreement on letters, by which we write $\frac d{dt}\mathbf L=\mathbf M$ for the law of angular momentum, must have come very late -- after 1964. As evidence, note that it is still written $\frac d{dt}\mathfrak N=\mathfrak M$ by Sommerfeld in Mechanik (1943, p.63); $\frac d{dt}\mathbf M=\mathbf L$ by Sommerfeld in Mechanics (... notation terminology Lemaître, who proposed the first version of "Big Bang" in 1927, called it the Primeval Atom hypothesis from the late 1930s, notably in the 1950 book of this name. 
However, it was rather different from the subsequently adopted version, and singularity cosmologies did not gain much traction until 1950s to get a common label. By then, when they came to be ... terminology cosmology Why is one meter as long as it is? This number has no significance. Its origin is historical. Originally the meter was defined as 1/40,000,000 part of the Paris meridian. Based on the measurement of this meridian, they made a standard rod in Paris. Since it is inconvenient to base the definition on something which is difficult to measure, the meter was soon redefined simply as the length of ... terminology geometry units measurement Alexandre Eremenko Why is there no named unit for momentum but there is one for energy? There is a historical reason. But it was not a fluke of history, the underlying reason is that energy comes up in non-mechanical (thermal, electric) contexts whereas momentum does not. Derived alternative, newton-meter in SI, did not arise naturally in such contexts, and alternative units, like calories, were used prior to the discovery of the general energy ... physics units terminology When did the names of scientists first become the names of scientific units? History of the metric system - Wikipedia says: In 1861, Charles Bright and Latimer Clark proposed the names of ohm, volt, and farad in honour of Georg Ohm, Alessandro Volta and Michael Faraday respectively for the practical units based on the centimetre-gramme-second absolute system. This was supported by Thomson (Lord Kelvin). These names were later scaled ... terminology discoveries units scientists Manjil P. Saikia Did anybody know Pi well enough in 1592 to celebrate Pi day? There could not have been a $\pi$ day in 1592 regardless of calendar conventions for the simple reason that there was no such thing as $\pi$ back then. The symbol was introduced by William Jones in 1706 and did not come into common usage until after 1737, when Euler popularized it in his texts. 
This was similar to zero, which got a placeholder symbol long ... mathematics terminology computation Why are étale morphisms called "étale"? From Milne's site: There are two different words in French, "étaler", which means spread out or displayed and is used in "éspace étalé", and "étale", which is rare except in poetry. According to Illusie, it is the second that Grothendieck chose for étale morphism. The Petit Larousse defines "mer étale" as "mer ... mathematics terminology algebraic-geometry grothendieck Who invented the integers? Very few (if any) mathematicians before Cantor thought of the SET of integers. Certainly for Euclid it was completely evident that the sequence of integers extends without limit. (He actually has a famous theorem that the sequence of PRIMES extends without limit). Who discovered this we will never know because very few mathematical sources before Euclid ... mathematics terminology There was a related question on Math.SE, which Mauro Allegranza answered with reference to Cajori's classic History of Mathematical Notations (v.II, p.205). It is a great source and is freely available online. Surprisingly, it was not Leibniz, the notational lion of calculus, who introduced it. "A provisional, temporary notation $\Delta$ for differential ... terminology notation What is the etymology behind sine, cosine, tangent, etc.? Victor Katz is not a linguist and a lot of what he says in the quoted extract is wrong: for example that "Arabic is written without vowels" and that the word in question is spelt "jb". In fact it is written jyb جيب (as mobileink has pointed out). But the decisive error from the viewpoint of the history of science is his failure to remark that Sanskrit jyā ... mathematics terminology geometry Are there widely accepted math symbols using non-Latin alphabets or characters other than Greek and Hebrew? 
The letter Ш (sha) of the Cyrillic alphabet is widely accepted in theoretical computer science as the symbol for the shuffle product, which gives the shuffle algebra. The same letter is also used to denote the Tate-Shafarevich group, but I'm not sure if it's really a standard (the letter was introduced by Cassels only in 1962 instead of TS, see below ... What is the status of the three crises in the history of mathematics? No, it is not widely accepted. The language of "crises" is rather obsolete and mostly reflects the attitudes of the early 20th century projected backwards. At that time, the contemporaries did indeed characterize the situation in mathematics (and physics) as a crisis. For example, Weyl's 1920 address was titled "The new foundational crisis in mathematics", ... For sinus, see: Victor Katz, A History of Mathematics (3rd edition, 2008), page 253: The English word "sine" comes from a series of mistranslations of the Sanskrit jya-ardha (chord-half). Aryabhata frequently abbreviated this term to jya or its synonym jiva. When some of the Hindu works were later translated into Arabic, the word was simply transcribed ... Mauro ALLEGRANZA Who started calling the matrix multiplication "multiplication"? The same person who introduced it, Cayley. Sylvester first used the term "matrix" (womb in Latin) for an array of numbers in 1848, but did not do much with it. Cayley started developing matrix algebra in 1855 and summarized his theory in A Memoir on the Theory of Matrices (1858). In the opening paragraphs he writes: "It will be seen that matrices (... mathematics terminology linear-algebra Actually when we say Integer today, we mean the set of all positive whole numbers, negative whole numbers and zero. But this complete set was not discovered/invented in a day. People were working with integers from the very beginning. They might be using different names though (like Whole numbers, Natural numbers, ...). According to Wikipedia Negative ... 
Amit Tyagi

How did we come up with the name "atomic bomb"?

I'm going to try to answer the question you end with, "Why has atomic bomb instead of another better term become the predominant term?", rather than the question in your title, because that's the historical question. (The other might be interesting to debate, but unlikely to produce a satisfying final answer.) Let's start with the Google ngram, showing the ...

Who first used the word "calculus", and what did it describe?

According to Carl B. Boyer, The History of the Calculus and Its Conceptual Development (Dover Publications, 1959), page 98, "The improved notation led also to methods which were so much more facile in application than the cumbrous geometrical procedures of Archimedes, of which they were modifications, that these methods were eventually recognized as ..."

Alan U. Kennington

Who said $\pi$ is a constant since it is not even a real number?

This question is based on a misunderstanding. The statement that $\pi$ is constant has a precise meaning: $\pi$ is the ratio of the length of the circumference to the length of the diameter. The statement that it is constant means that it is the same for all circles. (This statement is independent of the representation of this ratio with digits.) Contrary to what many ...

What is the origin of "an algebra" as in vector space with multiplication?

Actually, it happened in the reverse order: algebras came first, and vector spaces only later. For the vector space story, see "When did people start viewing a matrix as a linear transformation between two vector spaces?" Peano gave the modern axiomatization of them only in 1888, and he called them linear systems. But the use of "an algebra" in essentially ...

Why are canonical coordinates canonical?

Such coordinates were called canonical because they are those in which the equations of motion (or, of the hamiltonian flow of a function $H$) take the "canonical form"
$$ \frac{dq_i}{dt}=\frac{\partial H}{\partial p_i}, \qquad \frac{dp_i}{dt}=-\frac{\partial H}{\partial q_i} $$
first written by Poisson (1809, pp. 272, 313), Lagrange (1810, p. 350), and Hamilton ...

Why do we call a linear mapping "linear mapping"?

The theory of linear algebra, along with the associated concept of a linear mapping, was named "linear" by its creator, Hermann Graßmann, who developed it in his 1844 linear algebra manifesto, Die Lineale Ausdehnungslehre, ein neuer Zweig der Mathematik [The Theory of Linear Extension, a New Branch of Mathematics], and later in Die Ausdehnungslehre: ...

silvascientist

Why do we say "Matrices" and "Vertices", but "Complexes" rather than "Complices"?

Because unlike vertex, matrix, or simplex, which came directly from Latin and have primarily mathematical uses, complex was borrowed through French around the 1650s with the meaning "a whole comprised of parts". By the time it entered mathematics as a noun, it already had a colloquially established plural in English, complexes. It is similar with apexes, annexes, ...

The $r$ is for "radius", and in particular describes the radial vector from the origin to the location described by the vector. This is sensible because some sort of polar or spherical coordinates are the most common for many physical applications, where the forces described have some sort of spherical symmetry and point radially outward.

Jerry Schirmer

When was the term "corollary" first used in proofs?

"Corollary" is similar to the word "bonus": a little extra (i.e., an extra proposition coming from a demonstration). The term Euclid uses is πόρισμα, "porism," which Liddell-Scott-Jones cite as akin to πορίζω in the sense of "to find (money)." For instance, after I.15: Πόρισμα ἐκ δὴ τούτου φανερὸν ὅτι, ἐὰν δύο εὐθεῖαι ...

Michael E2

There are several non-alphabetic symbols; the best known are the integral sign $\int$ and the Weierstrass $P$-function symbol $\wp$. To be sure, their origins are letters of the Latin alphabet, but they are special stylized symbols, and as far as I know there is no computer code for them in the standard sets of computer characters. Strictly speaking they do not belong to ...

It is sometimes asserted that $\varnothing$ for the empty set was introduced by Bourbaki using a Danish and Norwegian letter. EDIT: The source is the Weil autobiography, cited in Jeff Miller's collection of the origins of mathematical expressions. André Weil (1906-1998) says in his autobiography that he was responsible for the symbol: Wisely, we had ...

Gerald Edgar

"Goethe" is not an "anglicization" of "Göthe" or indeed of "Göte"; it is the way the poet spelt his own name. The overriding principle is that everyone is entitled to spell his or her name as he or she likes.
Characteristics of Human Traffickers

The major factors, on both a societal and personal level, that cause or contribute to people being vulnerable to trafficking are discussed below. Human trafficking is often confused with smuggling, the illegal movement of people across international borders. Traffickers might compare themselves to other traffickers in order to dissociate themselves from the most brutal manifestations of human trafficking. When you think of a human trafficking victim, what comes to mind? A young woman who has been kidnapped, drugged, and bound? An overview of human trafficking includes the recognized definition, the prevalence of human trafficking, the characteristics commonly associated with both victims and offenders, and information regarding the modus operandi (M.O.) of traffickers. A forced marriage is one in which one or both people do not (or, in the case of people with disabilities, cannot) consent to the marriage, and pressure or abuse is used. Teens who are being groomed by sex traffickers may also show a change in behavior. Some victims suffer from fear, shame, and distrust of law enforcement.
This hidden population involves the commercial sex industry, agriculture, factories, hotel and restaurant businesses, domestic work, and forced marriage. Traffickers, sometimes also known as pimps, use coercion, manipulation, and threats of violence, and exert financial control over their victims in order to keep them trapped. The impact of human trafficking is enormous. Human trafficking can be either a commercial sex act that is induced by force, fraud, or coercion, or one in which the person induced to perform such an act has not attained 18 years of age. Common characteristics of human trafficking schemes from actual cases include exchanging sex acts for access to heroin and other illicit drugs, threats of violence to prevent escape from involuntary servitude, and manipulating victims with promises of love. The United Nations defines human trafficking as "the recruitment, transportation, transfer, harboring or receipt of persons, by means of the threat or use of force or other forms of coercion, of abduction, of fraud, of deception, of the abuse of power or of a position of vulnerability or of the giving or receiving of payments or benefits to achieve the consent of a person having control over another person, for the purpose of exploitation." The perpetrators of human trafficking are called "traffickers". In the past few decades, human trafficking (HT) has become a global concern [2, 3]. On an ongoing basis, we ask our supporters to help call attention to youth at risk for child sex trafficking. Still, it is very rarely talked about.
Traffickers are using the Internet as a way to target unsuspecting and vulnerable youth for their own personal financial gain; targets are seen as nothing other than a dollar sign. The report Characteristics of Suspected Human Trafficking Incidents, 2008–2010 includes data provided by the Bureau of Justice Assistance's (BJA) law enforcement grantees who serve on the task forces and focuses on open criminal investigations. The State Courts Guide, Chapter 10: Labor Trafficking, by Steve Weller and John A., identifies characteristics of labor trafficking cases in the state courts. To tackle human trafficking effectively, it must be made visible. As Regan Lookadoo writes in Human Trafficking: A Modern Form of Slavery, human trafficking is a criminal activity that is increasing rapidly throughout the world (Polaris Project, 2014a). Human Trafficking in Georgia: A Survey of Law Enforcement (Georgia Bureau of Investigation) introduces human trafficking as a crime and a human rights abuse in which subjects use force, coercion, or fraud. Airports are also hubs for human trafficking, where adults or children are transported into forced labor or commercial sexual exploitation. Furthermore, by using IBM i2 software, the TAHub is trained to recognise and detect specific human trafficking terms and incidents, processing open-source data such as news feeds in order to help analysts identify the characteristics of human trafficking incidents, such as recruitment and transportation methods, and potentially predict them. When we think of human trafficking, we often assume that it is a relatively new phenomenon that started fairly recently in American history. In international law, the definition of the crime is found in Article 3(a) of the Trafficking in Persons Protocol.
Identifying potential cases of human trafficking, therefore, is a natural extension of our role. Reality: human trafficking is not the same as human smuggling. Human trafficking, commonly described as a form of modern-day slavery, has increasingly gained public attention from law enforcement, advocates, and policymakers as they attempt to combat the growth of this type of victimization. Characteristics of a State Court Focused Approach to Addressing Human Trafficking (November 2013). More than 20 million people are affected by human trafficking each year. In the first study of its kind, research from the Institute of Psychiatry, Psychology & Neuroscience at King's College London (IoPPN) finds clinical evidence on the mental health effects of human trafficking. The police were asked about their experiences with and knowledge of the characteristics of victims and traffickers, the methods used by the traffickers, anti-trafficking actions of the police, the reasons for trafficking, background information about the nature of the trafficking problem, and their working relationships with other agencies. In some sex trafficking rings, the pimps set up a hierarchy in which the most seasoned victim, and therefore the most trusted by the pimp, is sent out to recruit other girls and boys. Human trafficking has taken many different forms over the years: corpse stealing, illegal immigration rings, sex slavery, and forced labor. The "Test Your Knowledge About Trafficking" session can be used to introduce a training on trafficking in women. Traffickers look for people who are susceptible to coercion into the human trafficking industry. As a result, the victims of the sex trade are treated as witnesses and denied the right to legal counsel, a support person, and damages. Would you know how to spot a human trafficking victim?
While each victim is different, there are some common characteristics seen by law enforcement when interacting with them. Kentucky has been working to address the issues and needs of human trafficking through training, task forces, and the passage of state legislation criminalizing human trafficking. Other efforts include supporting countries in the provision of physical, psychological, and social assistance to the victims. What is human trafficking? Human trafficking in the US, defined: "Human trafficking is modern-day slavery and involves the use of force, fraud or coercion to obtain some type of labor or commercial sex act; or, commercial sex involving a person under 18 years of age." Human trafficking is a hidden but growing phenomenon both outside and within U.S. borders. Researchers in an NIJ-funded study focused on the challenges faced in identification, investigation, and prosecution of trafficking cases at the state and local levels. South Africa remains a primary source, destination, and transit country for human trafficking. Human trafficking is the act of recruiting, harboring, transporting, providing, or obtaining a person for compelled labor or commercial sex acts through the use of force, fraud, or coercion. We have over 50 years of experience helping and protecting less fortunate children, and want to share our insights. One bank drew on its research and data to develop "typologies," in the parlance of the financial-crimes world, to identify possible human trafficking cases. While federal and state governments have passed legislation to criminalize trafficking offenses, support victims, and develop systems to prioritize the prosecution of human trafficking cases, little is known about the characteristics of this crime or its victims.
This literature review focuses on the form of human trafficking that involves sex trafficking and prostitution. Human trafficking and sex slavery: the commercial sexual exploitation of American children has become "big business" in the USA and one of the worst crimes. Faces of Human Trafficking is a nine-video series created by the Office for Victims of Crime (OVC) that blends the experiences of a diverse group of human trafficking survivors and professionals from across the nation to raise awareness of the seriousness of this crime, the many forms it can take, and the important role everyone can play in addressing it. Human trafficking, at its core, is a business. Its emphasis is on understanding the scope of the problem and the legal framework in place to help address it. Background: human trafficking is a crime that commonly results in acute and chronic physical and psychological harm. Report suspected human trafficking activity by calling 1-888-3737-888 or texting BeFree (233733). Like intimate partner violence, sexual assault, and stalking, human trafficking[i] has significant economic consequences for victims. Most importantly: tell everyone you know. At its core, human trafficking is about abuse and cruelty toward human beings, and is a gross violation of human rights. Human trafficking is a devastating human rights violation that takes place not only internationally, but also here in the United States. If you suspect human trafficking or are a victim, you should contact the hotline for assistance. An Analysis of Human Trafficking Victims in Federal Court Cases, by Brandon Nathaniel Chapman (thesis advisor: Christopher Shields).
It is sometimes called "modern-day slavery" and sometimes "human trafficking." Under the TVPA, human trafficking has occurred if a person was induced to perform labor or a commercial sex act through force, fraud, or coercion. The report Characteristics of Suspected Human Trafficking Incidents describes the characteristics of human trafficking investigations, suspects, and victims in cases opened by federally funded task forces between January 2008 and June 2010. The State Department's annual Trafficking in Persons Report examines human trafficking in 184 countries, including the U.S. The crime of human trafficking includes labor and sex trafficking, and is well hidden. Many have attributed the commodification of human life to the extensive inequality that we witness today. The human trafficking industry has more than doubled since 2009, with almost $80 billion in "sales," and is the second or third most profitable crime in the world. Guiding Principles for Technological Interventions in Human Trafficking: 1) the ultimate beneficiaries of any technological intervention should be the victims and survivors of human trafficking. January is National Slavery and Human Trafficking Prevention Month. In June, The Joint Commission (TJC) released Quick Safety Issue 42 on identifying human trafficking victims. The other 627 (62%) incidents involved allegations of adult sex trafficking, such as forced prostitution or other sex trafficking crimes. What is human trafficking?
Asif Efrat, "Global Efforts against Human Trafficking: The Misguided Conflation of Sex, Labor, and Organ Trafficking," International Studies Perspectives (2015). Although these characteristics don't necessarily mean there's trafficking taking place, the Cuyahoga Regional Human Trafficking Task Force believes any little tip can help rid the area of this crime. Traffickers are a diverse group and include a wide range of criminals working on many different levels, including individuals, small criminal groups, and larger networks. According to Malacañan Palace's Presidential Museum and Library, there were datu (the ruling class), maharlikas (the notable persons and freemen), timawas (the commoners), and alipin (the slaves and dependents). "Human trafficking" refers to both labor trafficking and sex trafficking. The U.S. Department of Justice's Office for Victims of Crime (OVC) and Bureau of Justice Assistance (BJA) publish the "Human Trafficking Task Force e-Guide"; this guide is a resource to support established task forces and provide guidance to agencies that are forming task forces. Texas ranks high in the number of human trafficking victims. Pimps, gangs, family members, labor brokers, employers of domestic servants, small business owners, and large factory owners have all been found guilty of human trafficking.
The Many Faces and Routes of Human Trafficking: traffickers lure people from all backgrounds and walks of life into human trafficking using multiple forms of coercion. Human traffickers know how to search out a needy child or teenager who may be insecure and wanting attention. Research on human trafficking frames in print media revealed that portrayals of human trafficking were for the most part oversimplified and inaccurate, with human trafficking portrayed as innocent white female victims needing to be rescued from nefarious traffickers. Nearly one million people are trafficked across borders each year, according to State Department reports. Human trafficking is a complex issue affecting children, youth, and adults in communities throughout Washington. The Department of Defense (DOD) maintains a Combating Trafficking in Persons (CTIP) Office. It is a common misconception that slavery and forced servitude are a thing of the past. Usually, the exploitation is sexual and the person being exploited is a woman or child. This report provides information about investigations, persons involved in suspected and confirmed incidents of human trafficking, and case outcomes. The Advocates for Human Rights has been working to promote and protect the human rights of sex trafficking victims since 2006.
Human trafficking is, paradoxically, a single thing—the violent exploitation of another human being for profit or personal gain—and many different things. [13] Human trafficking is estimated to surpass the drug trade in less than five years. The modus operandi is connected with the characteristics of the perpetrator of the crime; it is therefore a means which, through its analysis, gives insight into the perpetrator. Human trafficking, or trafficking in persons, is one of the most heinous crimes imaginable, often described as a modern-day form of slavery. Human trafficking is defined in the Protocol to Prevent, Suppress and Punish Trafficking in Persons, Especially Women and Children (2000), supplementing the United Nations Convention against Transnational Organized Crime (the so-called Palermo Protocol). Collecting data from the shrouded industry of trafficking is an impossible task. Human trafficking is a problem that has plagued humanity from its existence. Human trafficking is a form of modern-day slavery that is a crime under international, federal, and Michigan law. There are about 17% more known labor trafficking victims than known sex trafficking victims, and traffickers do not necessarily remain in the U.S. The social justice issue of human sex trafficking is a global form of oppression that places men, women, and children at risk for sexual exploitation. Alternative forms of justice show promise for human trafficking survivors, who often do not find resolution (such as conviction and incarceration for their traffickers) through the traditional criminal justice system.
Although largely framed as a legal issue, a social issue, and sometimes a geo-political issue, human trafficking IS also a public health issue. An Introduction to Human Trafficking: human trafficking destroys a person's dignity and strips away an individual's humanity. Human trafficking is the recruitment and movement of people, most often by force, coercion, or deception, for the purposes of exploitation []. The goal of this course is to educate providers regarding human trafficking. Emergency physicians are all too familiar with the important public service role we play in identifying child abuse, domestic violence, and elder abuse. Trafficking victims are kept in bondage through a combination of fear, intimidation, abuse, and psychological controls. Education on Human Trafficking in Medical Training: A Human Rights Framework argues that human trafficking is a human rights issue [11]. Ending Human Trafficking through Education and Awareness. Language barriers, fear of their traffickers, and/or fear of law enforcement frequently keep victims from seeking help, making human trafficking a hidden crime. Concerns about trafficking began with anti-slavery and anti-prostitution movements in the 1800s, and moved into a "white slavery" panic: the fear for middle-class white women's innocence and place in society. Trafficking in Persons: the country was a source, transit, and destination for trafficking. Signs of Human Trafficking.
Data and information about how people come to commit trafficking crimes remain limited. Boys are the silent victims of sex trafficking. Here are 7 facts about human trafficking you may not know, plus 3 ways you can help. Human trafficking (hereafter HT), either domestic or cross-national, is a gender issue []. For example, the Boston Police Department, as part of a federally funded human trafficking task force, developed protocols that included proactively monitoring police reports to identify potential victims of commercial sexual exploitation and sex trafficking, and maintaining partnerships with social service and other law enforcement agencies to facilitate referrals when appropriate (Farrell et al.). The signs of human trafficking could be all around Defense Department personnel: a subcontractor withholds passports and delays payment to its employees, or a company forces potential workers to pay a large fee to obtain a contract job on a DoD installation. Sponsored by Rep. Adam Kinzinger (R-IL), this legislation would amend the Trafficking Victims Protection Act to allow the Attorney General to make grants to state and local law enforcement that are prioritizing their own efforts to reduce demand for human trafficking through investigation and prosecution.
This data hub, using machine learning and structured data from contributors (who include NGOs, law enforcement, and financial institutions) to identify the characteristics of human trafficking incidents, is designed to more easily facilitate the exchange of information about human trafficking across organizations. The offense must also involve the use of force, threat of force, coercion, abduction, or fraud. Although the share of men among detected trafficking victims has been growing in the last decade, women and girls still make up more than 70% of the detected trafficking victims in the 2016 Global Report on Trafficking in Persons []. To fight human trafficking, entrepreneur Emily Kennedy helped create software that can trawl online ads to help law enforcement efficiently find victims. Often beginning as a voluntary action, human trafficking quickly turns into the recruitment, transport, and control of an individual. Common abbreviations distinguish the forms of exploitation:

HTFL: human trafficking for the purpose of forced labour or services, slavery or practices similar to slavery, and servitude
HTRO: human trafficking for the purpose of the removal of organs
HTSE: human trafficking for the purpose of the exploitation of the prostitution of others or other forms of sexual exploitation

Data and Research on Human Trafficking: Bibliography of Research-Based Literature, by Elżbieta M. Goździak. In a world where morality is often nothing but a figment of our imagination, the 10 black-market organ trade and trafficking facts, statistics, and stories will chill you to the bone.
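The passage above describes systems that scan open-source text (such as news feeds and online ads) for trafficking-related terms so analysts can review flagged items. The following is a minimal, hypothetical sketch of that kind of term-based flagging, not the actual TAHub, IBM i2, or any real pipeline; the indicator terms here are illustrative placeholders, since real systems use curated, validated lexicons plus machine-learned models.

```python
# Hypothetical sketch: flag text snippets that contain indicator terms,
# returning the matched terms per snippet for an analyst to review.
from typing import Dict, List

# Placeholder indicator terms (illustrative only, not a real lexicon).
INDICATOR_TERMS = ["recruitment", "transport", "debt bondage", "withheld passport"]

def flag_snippets(snippets: List[str],
                  terms: List[str] = INDICATOR_TERMS) -> Dict[int, List[str]]:
    """Return {snippet index: matched terms} for snippets containing any term."""
    flags: Dict[int, List[str]] = {}
    for i, text in enumerate(snippets):
        lowered = text.lower()
        matched = [t for t in terms if t in lowered]
        if matched:
            flags[i] = matched
    return flags

if __name__ == "__main__":
    sample = [
        "Company accused of debt bondage; withheld passport complaints filed.",
        "Local bake sale raises funds for the school.",
    ]
    print(flag_snippets(sample))  # only the first snippet is flagged
```

A real system would go well beyond substring matching (tokenization, fuzzy matching, learned classifiers), but the overall shape is the same: ingest open-source text, surface candidate matches, and leave the judgment to human analysts.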
This year's report revealed that the United States is a source, transit, and destination country for men, women, and children subjected to forced labor, debt bondage, document servitude, and sex trafficking. The illegal trade and exploitation of human beings for forced labor, prostitution, and reproductive favors is termed human trafficking. In general, allocation of organs based on social characteristics (such as race, socioeconomic class, or gender) will conflict with the principle of justice, although there may be special cases, such as the matching of skin tone in face and hand transplants, that call for exceptions in the allocation of vascularized composite allografts (VCAs). Human trafficking, particularly that which is referred to as Domestic Minor Sex Trafficking (DMST) or the Commercial Sexual Exploitation of Children (CSEC), presents new and unique challenges to our department as well as to our counterparts across the country. Human trafficking is a modern form of slavery, involving the illegal smuggling and trading of people (including minors) for forced labor or sexual exploitation. While the true prevalence of human trafficking is unknown, the International Labour Organization publishes global estimates of its scale. Corinne Dettmeijer-Vermeulen distinguishes three types of human trafficking structures. The Advocacy Committee of the Zonta Club of Yakima Valley has been working to build awareness of Domestic Minor Sex Trafficking (DMST) and human trafficking issues in the Yakima Valley as an ongoing advocacy project. Under the Victims of Trafficking and Violence Protection Act (VTVPA), a severe form of human trafficking is defined as labor or a commercial sex act induced through force, fraud, or coercion. Characteristics of Human Trafficking Victims and Perpetrators, by Kelsey E.
Characteristics and traits of traffickers include the gender and nationality of traffickers. Trafficking migration patterns tend to flow from East to West, but women may be trafficked from any country to another at any given time, and trafficking victims exist everywhere. The Hotline annually receives multiple reports of human trafficking cases in each of the 50 states and D.C. Twenty federally funded HT task forces opened 2,515 suspected incidents of HT for investigation between January 2008 and June 2010. Victims of human trafficking may look like most other people who come to an organization for assistance every day. Last year saw a 35% increase in reported cases. Rural human sex trafficking crimes are a reality in Tennessee. Pushed primarily into forced labor and sex work, victims are recruited through a combination of coercion, force, and fraud, often working right in front of our eyes in industries such as agriculture, construction, and caretaking. Human trafficking is a term for activities involving someone who "obtains or holds a person in compelled service." For instance, if human trafficking is discovered during a recovery interview for a runaway/missing child who has returned to DFPS care, a human trafficking episode must be entered.
But statistics also inspire our movement, providing us with a number-based sense of the scope and scale of the problem. The new legislation specifically deals with human trafficking for sexual objectives (South Africa; Human Trafficking Legislation to Be Gazetted for Public Comment). To foster more informed health-sector responses to human trafficking, training sessions for health care providers were developed and pilot-tested in the Middle East, Central America, and the Caribbean. Forced labor, exploitation of workers, sex trafficking, modern-day slavery, and all forms of human trafficking are violations of human dignity, and so the Church has initiated strong efforts to oppose and end these practices. Human trafficking is a complex, multi-faceted phenomenon that involves several stakeholders at both the institutional and commercial levels. Human trafficking can be difficult to identify because victims rarely come forward due to language barriers, fear of traffickers, and fear of law enforcement. In the most widely accepted definition of human trafficking, a United Nations protocol defines human trafficking as the use of force or coercion for the purposes of exploitation. It is in every state of this great country, from large cities to small towns to rural areas.
Traffickers disproportionately target at-risk populations including individuals who have experienced or been exposed to other forms of violence (child abuse and maltreatment, interpersonal violence and sexual assault, community and gang violence) and individuals disconnected from stable support. WELCH* The related global problems of migrant smuggling, trafficking in persons and clandestine terrorist travel are increasingly significant both in terms of the human. Data Collection & Analysis Because human trafficking is a dynamic and emerging crime, it is crucial for task forces to present and maintain a clear picture of how human trafficking affects their communities and how traffickers are changing their tactics. 19th Annual Conference Human Trafficking: Interrupting the Pathway to Victimization Friday, May 1, 2015, 8:30 a. Trafficking is a crime that can occur across. Since human trafficking is often a crime that is hidden in plain sight, it is important to be aware of its warning signs. Guide to Ethics and Human Rights in Counter-Trafficking 7 • Implement the Ethics and Human Rights in Counter-Trafficking set of seven guiding principles and practical tools, for mandatory use by all UNIAP-supported researchers and programmers interfacing with traffick persons and those affected by human trafficking. While these signs are not proof your child is being sex trafficked, it could be a sign of other risky behaviors that are leading to being trafficked. Research from Scotland's 2017 Trafficking and Exploitation Strategy revealed that 54% of Scots don't believe human trafficking is an issue in their local area, despite the fact that human trafficking victims have been identified in 27 of Scotland's 32…. Human trafficking is a complicated business whose influence leaves none untouched in South Asia, but luckily, the mending is close in sight. All staff must take the Human Trafficking eLearning course by September 30, 2018. 
of drug traffickers received a non-government sponsored below range sentence, with the remaining 1. The absence of entitlements and rights limits the ability to achieve a meaningful life. CHILDREN AT RISK leads educational presentations on human trafficking in Houston as well as Dallas & Fort Worth. Human trafficking in the United States has a racialized and classed history. Countries are ranked according to their potential to be a destination country based on various characteristics of the trafficking process. Box 1: UN definition of trafficking in persons. It has been accepted for inclusion in. It is estimated that 3 people out of every 1,000 on our planet today are trapped in a job where they were either coerced or deceived into working. This session is intended to be used to identify gaps in participants' general knowledge about trafficking in persons and as the basis for modifying subsequent training sessions. Do you know how many. Third, the words "slave trade and traffic in women" in section 6. Human Trafficking can be either a commercial sex act that is induced by force, fraud, or coercion, or in which the person induced to perform such an act has not attained 18 years of age. HUMAN TRAFFICKING INTO AND WITHIN THE UNITED STATES: A REVIEW OF THE LITERATURE. The Program will continue efforts to expand, gather, and make available information regarding human trafficking. The US rhetoric. In an effort to call even more attention to the issue, we're providing more detailed information about youth at risk for child sex trafficking, and would like to ask you to spread these warning signs for child sex trafficking around your network. Within the last decade it has become a global issue affecting nearly every country in the world.
Science Chemistry Limiting reagent Magnesium burns in excess oxygen to form magnesium oxide. The balanced equation for this reaction... Magnesium burns in excess oxygen to form magnesium oxide. The balanced equation for this reaction is: {eq}2Mg + O_2 \to 2MgO {/eq}. Starting with 1.35 g of magnesium, calculate the maximum mass of magnesium oxide that could be formed in this reaction. (relative atomic masses: {eq}O {/eq} = 16.0 g/mol, {eq}Mg {/eq} = 24.0 g/mol) Maximum Product Obtainable: The maximum product obtainable is the same quantity that we typically calculate in most limiting reagent problems in a chemical reaction. The maximum product obtainable is a theoretical construct that assumes 100 percent yield, which is an impossibility in the real world. First, we will calculate the number of moles of magnesium present: $$\begin{align}\\ \text{moles of Mg}&=\frac{\rm Mass\; of\; Mg}{\rm Atomic\;... Calculating Reaction Yield and Percentage Yield from a Limiting Reactant How to calculate the theoretical yield? Learn the definition and formula of percent yield. Use the theoretical yield equation to calculate theoretical yield. 2Mg + O2 arrow 2MgO If 5 moles of magnesium and 3 moles of oxygen gas are placed in a reaction vessel and allowed to react, the limiting chemical would be: A) Both Mg and O2; they both stop reacting when the reaction is over B) O2 C) Mg D) Neither reactan Consider the reaction: 2Mg + O2 arrow 2MgO. What is the empirical formula for magnesium oxide? How many moles of magnesium oxide are produced by the reaction of 1.82 g of magnesium nitride with of 17.73 g of water? Mg_3N_2 + 3H_2O \to 2NH_3 + 3MgO Phosphorus and oxygen combine to form tetraphosphorus decoxide. a) Write a balanced reaction and determine the limiting reactant and the mass of P4O10 formed if 15.0g of P4 and 30.0g of O2 are used. b For the following reaction, 20.1 grams of iron are allowed to react with 10.6 grams of oxygen gas. 
iron(s) + oxygen(g) iron(II) oxide(s) a) What is the maximum mass of iron(II) oxide that can be formed? b) What is the formula for the limiting reagent? c) Assume 171 grams of diborane, B_2H_6, reacts with excess oxygen O_2. Determine the mass of boron oxide, B_2O_3, formed from this reaction. B_2H_6 (g) + 3O_2(g) \to B_2O_3 (s) + 3 H_2O(l) (Round molar masses to the hundredths place (0.01 g)) What is the sum of all the stoichiometric coefficients in the balanced chemical equation for the reaction of magnesium with gold(III) nitrate? Solid gold and magnesium nitrate are produced. In using the thermite reaction, 144 grams of Al are reacted with 155 grams of iron (lll) oxide. How much of excess reagent is left at the end of the reaction? The molten metal reaction below begins with 27.0 g of manganese (IV) oxide (86.94 g/mol) and 11.6 g of aluminum (26.98 g/mol). Convert these quantities to moles and use the stoichiometric ratios (from the coefficients) to determine which reactant is limit In the chemical reaction: Mg+2HCL= MgCl_2+H_2, how does the mass of magnesium added effect the rate of reaction (is the mass of magnesium a factor and why?), and how is the mass of magnesium related to the "substrate"? Consider the following reaction and answer the following questions. 2Mg + O2 arrow 2MgO a. How many moles of oxygen gas are needed to react with 12.3 moles of magnesium? b. How many moles of magnesium are needed to react with 30.2 grams of O2? According to the reaction 2 H2(g) + O2(g) gives 2 H2O(l) . An unknown amount of H2 reacts with 64 g of O2 to produce about 72 g of H2O. Using addition and subtraction only. Calculate the amount of H2 (in grams) used up in this reaction. Balance the following combustion reaction of dimethyl sulfoxide by entering stoichiometric coefficients at each chemical. C 2 H 6 O S ( l ) + O 2 ( g ) C O 2 ( g ) + H 2 O ( l ) + S O 2 ( g ) Given the equation 2Mg + O2 --> 2MgO, how many moles of MgO are produced for every mole of O2 used up? 
In a lab, 6.59 g of Iron is combined with Oxygen to produce 6.93 grams of Iron Oxide. Using the two balanced equations for Iron Oxide, 2Fe + O_2 = 2FeO and 4Fe +3O_2 = 2Fe_2O_3, find the theoretical and percent yield for each equation. Which element is in excess when 3.00g of Mg reacts with 2.00g of pure oxygen? Chemical equation: 2Mg + O_2 rightarrow 2MgO. 6.32 g of potassium chlorate reacts according to the following incorrectly balanced reaction. (Make sure to properly balance the equation before completing your calculations.) 2 KClO3(s) 2 KCl(s) + O2(g) Determine the quantity of oxygen produced by the re For the reaction 2S (s) + 3O_2 (g) to 2SO_3 (g) If 6.3 g of S is reacted with 10.0 g of O_2, which one will be the limiting reactant? 9.10 grams of nitrogen gas are allowed to react with 5.17 grams of oxygen gas to form nitrogen monoxide. (a) Write a balanced chemical equation for this reaction. (b) What is the maximum mass of nitrogen monoxide that can be formed? (c) What is the for For decomposition chemistry, predict the products for the following decomposition reaction. Write the complete balanced molecular equation for the reaction. Magnesium hydroxide is heated in a vacuum. The equation for the reaction between magnesium and dilute sulfuric acid is shown below. Mg + H2SO4 arrow MgSO4 + H2 Which mass of magnesium sulfate will be formed if 12 grams of magnesium are reacted with sulfuric acid? a. 5 grams b. 10 grams c. 60 grams For the following reaction refer to the activity series. Determine whether the following equation can occur. If the reaction occurs, write the complete, balanced equation. If the reaction does not occur, explain why. Reaction: magnesium plus hydrochlori In the replacement reaction between tin and magnesium nitrate, how many grams of tin nitrate would be produced if 74.1g of magnesium nitrate were used? Consider the reaction 4Al + 3MnO2 rightarrow 3Mn + 2Al2O3. 
If 22 grams of Al react with 13 grams of manganese(IV) oxide, how much excess reactant remains after the reaction has run to completion? Identify the limiting reactant and determine the mass of CO2 that can be produced from the reaction of 25.0g of C3H8 with 75.0g of O2 according to the following equation. For decomposition chemistry, predict the products for the following decomposition reaction. Write the complete balanced molecular equation for the reaction. Tin (IV) oxide is strongly heated to form elements. Using the equation 4fe + 3O2 2Fe2O3, if 6 moles of oxygen and an excess of iron were available, how many moles of iron (III) oxide would be produced? Calculate the mass of Al2O3 formed when 4.7 grams of aluminum completely reacts with excess oxygen. Al + O2 arrow Al2O3 If 80 g of Al and 80 g of Fe2O3 are combined, what is the maximum number of moles of Fe can be produced given the reaction 2Al+Fe2O3 --> Al2O3+2Fe? A thermite reaction occurs between solid aluminum and iron III oxide (rust) to form molten iron metal and aluminum oxide. 2 Al (s) + Fe2O3 (s) → 2Fe (l) + Al2O3 (s) How many grams of iron are produced for every 1.50 moles of iron III oxide tha Write and balance the chemical equation upon the combustion of sucrose. Determine the moles of H_2 O produced from the combustion of 5 moles of sucrose and 39 moles of oxygen? Which chemical is in excess and how many moles remain after the reaction is com 448g of iron(III) oxide, Fe2O3, reacts with an excess of carbon monoxide, CO in the following reaction. Fe2O3(s)+3CO(g) yields 2Fe(s)+3CO2(g) a) Calculate the mass of iron solid, Fe, produced and use the following molar masses in your calculation: Fe2O3= If the molten metal reaction below begins with 25.0 grams of manganese(IV) oxide and 10.0 grams of aluminum, determine which reactant is limiting. Then determine the mass of aluminum used and the mass of manganese created. 
3MnO2 + 4Al arrow 3Mn + 2Al2O3 If 16.3 grams of propane is combusted with 33.4 grams of oxygen, what would be the mass of excess reactant remaining at the end of the reaction? A 900. mg sample of magnesium was burned in an oxygen bomb calorimeter. The total heat capacity of the calorimeter plus water was 5,760 J/K If the enthalpy of combustion of magnesium is -603.2 Kj/mol, calculate the increase of temperature in the calorim During a complete combustion of C_4H_8 How many moles of oxygen are consumed for every one mole of C_4H_8? In one experiment 1 \ g of mg ribbon on burning in oxygen gave 1.66 \ g of magnesium oxide. In another experiment, magnesium oxide obtained on heating magnesium carbonate was found to contain 60% magnesium. Show that this data illustrates the law of const Balance the equation below and answer the question. Fe + O2 arrow Fe2O3 How many grams of Fe2O3 are produced when 2.93 x 1011 molecules of O2 are reacted? Iron(III) oxide reacts with carbon to give iron and carbon monoxide: Fe2O3(s) + 3C(s) -> 2Fe(s) + 3CO(g) How many grams of CO are produced when 34.0 g of C reacts? Bunsen burners use methane to produce a flame, described below in this equation: CH4 (g) + 2 O2 (g) arrow 2 H2O (l) + CO2 (g). a. In an experiment, 5.00 grams of methane were burned in air. Calculate the theoretical yield, in grams, of water. b. If 4.00 g According to the following reaction, how many moles of mercury will be formed upon the complete reaction of 23.2 grams of mercury(II) oxide? 2HgO(s) \to 2Hg(l) + O2(g) What mass of iron(III) oxide would be required to produce 2250g of iron metal? How many moles of carbon monoxide would be used during this reaction? Iron(III) oxide reacts with carbon to give iron and carbon monoxide: Fe2O3(s) + 3C(s) -> 2Fe(s) + 3CO(g) How many grams of C are required to react with 10.4 g of Fe2O3? Consider the balanced chemical reaction below. How many grams of carbon monoxide are required to react with 6.4 g of iron (III) oxide? 
Set up a table below to determine the minimum number of moles of CO required to react with the Fe_2 O_3 and then determ For the unbalanced equation, Al + O_2 \to Al_2O_3, a) If 21.0 moles of oxygen gas react, how many moles of aluminum will react with this oxygen? b) If we use 3.0 grams of aluminum, how many grams of aluminum oxide will be produced when enough oxygen ga Iron(III) oxide reacts with carbon to give iron and carbon monoxide: Fe2O3(s) + 3C(s) -> 2Fe(s) + 3CO(g) How many grams of CO when 34.0 g of C reacts? In the reaction: 2KClO3(s) 2KCl(s) + 3O2(g), what mass of potassium chlorate, KClO3, would be required to produce 957 L of oxygen, O2, measured at STP? What mass of O2 is consumed in the combustion of 454 g of C3H8? The equation for the decomposition of potassium chlorate is written as: 2KClO3(s) to 2KCl(s) + 3O2(g) When 48.2 g of KClO3 is completely decomposed, what is the theoretical yield, in grams, of O2? The thermite reaction generates a large amount of heat. The equation for it is shown below. 2 A l ( s ) + F e 2 ( s ) 2 F e ( s ) + A l 2 O 3 ( s ) + h e a t What is the maximum amount of aluminum oxide that can be produced if 41.6 g of Fe2O3 and 37.6 Consider that calcium metal reacts with oxygen gas in the air to form calcium oxide. Suppose we react 4.72 mol calcium with 4.00 mol oxygen gas. Determine the number of moles of calcium oxide produced after the reaction is complete. 2Ca(s)+O2(g) yields 2C If 7.340 g CO is mixed with 18.81 g O2, calculate the theoretical yield (g) of CO2 produced by the reaction. When reddish-brown mercury(II) oxide (HgO) is heated, it decomposes to its elements, liquid mercury metal and oxygen gas. If 3.23 grams of HgO is decomposed to Hg, calculate the mass of the pure Hg metal produced. 2HgO(s) arrow 2Hg(l) + O2(g) Calculate the mass of Al_2O_3 that can be produced if the reaction of 55.1 g of aluminum and sufficient oxygen has a 80.0% yield. Express your answer with the appropriate units. 
If 12 \ g of water reacted to form hydrogen gas and oxygen gas and 9.23 \ g of O_2 formed, how many grams of H_2 would also be formed? Use the chemical equation listed below. 2H_2O(l) \to 2H_2(g)+O_2(g) a. 2.77 \ g b. 0.277 \ g c. 21.23 \ g For the balanced equation shown below, if 7.15 grams of P4 were reacted with 16.6 grams of O2, how many grams of P4O10 would be produced? P4 + 5O2 P4O10 For the following reaction, 8.60 grams of propane (C_3H_8) are allowed to react with 16.5 grams of oxygen gas. propane (C_3H_8) (g) + oxygen (g) \rightarrow carbon dioxide (g) + water (g) a. What is the maximum amount of carbon dioxide that can be formed? Unbalanced reaction: Fe + O_{2} \rightarrow FE_{2}O_{3}. \Delta H = -1652 kJ a. How much heat is involved when 100. g of iron reacts? b. How much heat is involved when 100. g of Fe_{2}O_{3} is produced? c. How many grams of oxygen did you start with when In our bodies, sugar is broken down with oxygen to produce water and carbon dioxide. For the following reaction, 0.515 moles of glucose is mixed with 0.569 moles of oxygen gas. What is the formula for the limiting reagent? What is the maximum amount of ca For the following reaction, 27.9 grams of diphosphorus pentoxide are allowed to react with 8.89 grams of water. diphosphorus pentoxide(s) + water(l) phosphoric acid(aq) a. What is the maximum mass of phosphoric acid that can be formed? b. What is the for Iron(III) oxide reacts with carbon monoxide to produce iron and carbon dioxide. Fe2O3(s) + 3CO(g) arrow 2Fe(s) + 3CO2(g) What is the percent yield for iron if the reaction of 70.0 grams of iron(III) oxide produces 38.0 grams of iron? Suppose 4,000 grams of heptane is combusted with 7,000 grams of oxygen. a. What is the limiting reactant? b. How much CO2 is produced? c. How much excess reactant is there? 448g of iron(III) oxide, Fe2O3, reacts with an excess of carbon monoxide, CO in the following reaction. Fe2O3(s)+3CO(g) yields 2Fe(s)+3CO2(g). 
Find the percentage yield if only 243.5g of Fe is actually isolated. Use the following molar masses in your cal Use the following equation: 4Fe(g) + 3O_2 (g) \to 2Fe_2O_3(s) If 32 moles of Fe and 27 moles of O_2 are reacted, which is the limiting reagent? Use the following reaction with fictional elements and compounds to answer the questions below: X_2Z_3 + A\to X + AZ_2 a) Balance the equation. What type of reaction is this? b) How many grams of A are needed to react with 7300 mg of X_2Z_3? c) How... If 153 g of butane and 43.0 g of oxygen gas undergo a reaction that has a yield of 80% of carbon dioxide, what mass of carbon dioxide is produced? Explain your answer by determining the limiting reagent first. Calculate the mass of H2O in the following reaction, if 116.3 grams of Fe2O3 and 10.5g of 3H2 are used. Magnesium oxide is melted and electrolysed for 4 minutes 13 seconds using inert electrodes and a current of 5.24 A. What mass of each product is formed? What is the total number of grams of O2(g) needed to react completely with 0.50 moles of C2H2(g) for the balanced equation below? 2C2H2(g) + 5O2(g) arrow 4CO2(g) + 2H2O(g) Balance this equation: Solid iron(III) oxide and carbon monoxide gas yields iron metal and carbon dioxide gas. Potassium chlorate decomposes to form potassium chloride and oxygen. Write the balanced equation for this reaction. What is the theoretical yield when 15.7 g of O2 reacts with excess P4? How many grams of O2 are needed to produce 0.400 mole of Fe2O3? Determine the limiting reactant when 80.0 grams of Fe2O3 are reacted with 36.0 grams of CO in the following reaction. Fe2O3(s) + 3CO(g) arrow 2Fe(s) + 3CO2(g) In the reaction given below, for every two moles of iron(III) oxide consumed, how many moles of iron are produced? 2 Fe2O3 4 Fe + 3 O2 For the following reaction, 27.9 grams of diphosphorus pentoxide is allowed to react with 8.89 grams of water. diphosphorus pentoxide(s) + water(l) arrow phosphoric acid(aq) a. 
What is the maximum mass of phosphoric acid that can be formed? b. What is the Iron (III) oxide reacts with carbon to given iron and carbon monoxide: Fe_2O_3 (s) + 3C(s) to 2Fe(s) + 3CO(g). How many grams of C are required to react with 10.4 g of Fe_2O_3? What is the mass of aluminum that can be produced by the reaction of 60.0 g of aluminum oxide with enough amount of carbon according to the following chemical reaction? Al_2O_3 + 3 C \rightarrow 2 Al What is the theoretical yield in grams of aluminum oxide when 40.0 g of aluminum react with 50.5 g of oxygen gas? 2C6H6 + 15O2 arrow 12CO2 + 6H2O Suppose you have 2.92 g of C6H6 and 9.95 g of O2. What mass of the reactant is present in excess after the reaction is complete? Calculate the number of grams of Cl2 that are formed when 0.250 mole of HCl reacts with excess O2 in the following reaction. 4HCl + O2 2Cl2 + 2H2O If the reaction below causes 0.910 g of silver to be deposited, use atomic masses and the stoichiometric ratio (of coefficients) to determine the moles of silver and iron involved, and the mass of iron involved. (Show all equations with all units and all If the reaction below causes 0.910 g of silver to be deposited, use atomic masses and the stoichiometric ratio (of coefficients) to determine the moles of silver and iron involved, and the mass of iron involved. Show all equations with all units and all u For decomposition chemistry, predict the products for the following decomposition reaction. Write the complete balanced molecular equation for the reaction. Aluminum chlorate is strongly heated with a catalyst. What mass of H2O2 must decompose to produce 0.77 grams of H2O in this reaction? 2H2O2 arrow 2H2O + O2 For the reaction represented by the equation 2H2 + O2 --> 2H2Oo, what is the mole ratio of H2 to H2O? Iron(II) selenide reacts with oxygen gas to produce iron(III) oxide plus selenium dioxide. In the balanced reaction, what is the sum of the coefficients of the reactants? 
Consider the reaction of C4H10 with O2 to form CO2 and H2O. If 4.92 g C4H10 is reacted with excess O2 and 10.8 g of CO2 is ultimately isolated, what is the percent yield for the reaction? For the following reaction, find the following: 1) Write the balanced equation, including the phases (s) and (aq) for the products 2) Identify the excess and the limiting reactant 3) Calculate the maximum amount of precipitate that could be produced based Given 14.0 g of O2, how many grams of the product Cr2O3 could be produced? Balance the chemical equation representing the combustion reaction of butane with air and enter the coefficients (the number of molecules) of air (O2 + 3.76 N2), CO2, H2O, and N2. The complete combustion of 1.5 moles of methane (CH4) would require how much O2? a) Write a balanced equation for the reduction of iron ore (Fe_2O_3) to iron using hydrogen by applying the method of half-reactions. b) What mass of iron would you get from one tonne of ore? c) What is the oxidation state of the iron in Fe_2O_3? For the reaction ?FeCl2 + ?Na3PO4 \rightarrow Fe3(PO4)2 + ?NaCl what is the maximum mass of Fe3(PO4)2 that could be formed from 6.54 g of FeCl2 and 3.92 g of Na3PO4? Answer in units of g. Given the following balanced equation, what volume of hydrogen gas (in liters) would be produced under conditions of STP from the reaction of 0.082 grams of magnesium solid reacting with excess hydrochloric acid solution? Report the answer with the corre Consider the equation: 2H2O(l) 2H2(g) + O2(g) If 9 g of H2O decomposes according to the reaction shown above, how many moles of hydrogen gas are produced? a) 0.5 mol b) 1 mol c) 2 mol d) 4 mol What is the percent yield if 5.5 g Mg are recovered after burning 4.9 g Mg in sufficient O? Iron three oxide reacts with carbon to give iron and carbon monoxide. How many grams of Fe can be produced when 6.50 g of Fe_2O_3 reacts? 
For the following reaction, calculate how many moles of each product are formed when 0.356 moles of PbS completely reacts. Assume there is an excess of oxygen. 2PbS(s) + 3O2(g) arrow 2PbO(s) + 2SO2(g) Balance the following reaction, using the smallest integer coefficients that correctly represent the stoichiometric ratios in the balanced reaction. C4H8 + O2 arrow H2O + CO2
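The limiting-reagent recipe used throughout the problems above (grams → moles → mole ratio from the balanced equation → grams) can be sketched for the opening magnesium example, 2Mg + O2 → 2MgO with 1.35 g of Mg in excess oxygen. The function name is ours; the molar masses are the ones given in the problem (Mg = 24.0 g/mol, O = 16.0 g/mol).

```python
# Maximum (theoretical) yield for 2Mg + O2 -> 2MgO, starting from 1.35 g Mg
# with excess O2. Theoretical yield assumes 100 percent conversion.

M_MG = 24.0          # g/mol, magnesium
M_MGO = 24.0 + 16.0  # g/mol, magnesium oxide

def max_mgo_mass(mass_mg_g: float) -> float:
    """Convert grams of Mg to moles, apply the 2:2 (i.e. 1:1) mole ratio
    from the balanced equation, then convert moles of MgO back to grams."""
    moles_mg = mass_mg_g / M_MG
    moles_mgo = moles_mg          # 2 mol Mg -> 2 mol MgO
    return moles_mgo * M_MGO

print(round(max_mgo_mass(1.35), 2))  # -> 2.25
```

So 1.35 g of Mg (0.05625 mol) can produce at most 2.25 g of MgO; the actual (percent) yield in a real experiment would be some fraction of this.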
EXPRESS PAPER
Igi Ardiyanto & Teguh Bharata Adji
IPSJ Transactions on Computer Vision and Applications volume 9, Article number: 6 (2017)

This paper proposes a deep learning-based, efficient, and compact solution for the road scene segmentation problem, named deep residual coalesced convolutional network (RCC-Net). Initially, the RCC-Net performs dimensionality reduction to compress and extract relevant features, which are subsequently delivered to the encoder. The encoder adopts the residual network style for an efficient model size. In the core of each residual block, three different convolutional layers are simultaneously coalesced for obtaining broader information. The decoder is then altered to upsample the encoder output for a pixel-wise mapping from the input images to the segmented output. Experimental results reveal the efficacy of the proposed network over the state-of-the-art methods and its capability to be deployed on an average system.

Unlike traditional object detection and classification, which work globally on an image or a patch, scene segmentation is a pixel-wise classification that requires more accurate boundary localization of each object and area inside the images. For instance, in the case of road scene segmentation, one needs to precisely separate the sidewalk for pedestrians from the road body. Semantic road scene segmentation, which is part of the general image segmentation problem, has attracted many researchers seeking the best solution. Early works mostly depended on pixel-wise hand-crafted features (e.g., [1]) followed by a conditional random field (e.g., [2, 3]), the usage of dense depth maps [4], or the exploitation of spatio-temporal parsing [5] for achieving the best accuracy. Since the rise of deep learning for object classification [6], several attempts have been made to design a deep network architecture for the image segmentation problem.
Most of them follow the encoder-decoder architecture style (e.g., [7–9]). Another approach takes advantage of image patches and spatial priors [10] for attaining better scene segmentation. Except for [9], which tries to build a network with a small model size, all of the above works suffer from either a very large network size or slow inference time, which makes them inconvenient for practical applications. Here, we aim to establish a compact and effective network for segmenting the road scene. Our approach is inspired by ResNet [11], which utilizes residual blocks, allowing it to be stacked into a very deep architecture without a severe degradation problem. In the heart of our proposed architecture, three different types of convolutional layers are simultaneously coalesced in a residual fashion and stacked into an encoder-type network for altering the receptive field. Hence, a wider family of functions can be learned, enabling richer information to be obtained from the images. Subsequently, a decoder with a smaller architecture, followed by a fully connected convolutional (Full Conv.) layer, is appended to upsample the encoder output and fine-tune the result. Our contributions are twofold. First, we introduce a coalesced style of convolutional layers within a residual-flavored network to build an efficient model for semantic road segmentation. Second, we exhibit an asymmetric encoder-decoder network for reducing the model size even further, unlike the conventional symmetric approach used by previous methods, e.g., SegNet [8]. The rest of this paper is organized as follows. Section 2 explains the overall architecture of the proposed RCC-Net. Evaluations against several state-of-the-art methods are described in Section 3. We then conclude the paper and give some future directions of the research in Section 4.

Proposed network architecture
Our proposed RCC-Net is established in a deep encoder-decoder manner.
The target is to create a pixel-wise classification which maps each pixel of the input images into the corresponding semantic class of the road objects. Figure 1 expresses the full architecture of the RCC-Net.

Architecture of RCC-Net

Initial stage
The idea of constructing small feature maps in the early stage of the network was heavily inspired by [9] and [12]. For this purpose, a max pooling layer, an average pooling layer, and a 3×3 convolution layer with 13 filters are concatenated, creating a total of 19 feature maps. Figure 2 represents the initial stage of the RCC-Net. Using these settings, the first stage of the RCC-Net is expected to reduce the dimensionality of the input images while extracting the relevant features for the next stages.

Initial stage of RCC-Net

Residual coalesced convolutional blocks
As the core of our network, we introduce the residual coalesced convolutional (RCC) block, which is heavily inspired by the Inception [13] and ResNet [11] architectures. The RCC module is composed of projection-receptive-projection sequences with a skip connection. The projection parts are realized by 1×1 convolutions, while the receptive section consists of a coalescence of three different convolutional layers. The 1×1 convolution is meant to aggregate the activations of each feature in the previous layer, and it is also useful for running inference on inputs of different sizes. An ordinary, an asymmetric [12], and a dilated [14] convolution layer are subsequently appended in a parallel fashion. This coalesced style is motivated by the assumption that each type of convolution layer contributes a different receptive field. By coalescing them, a wider function is expected to be learned, thus increasing the amount of feature information. Let \(X\in \mathbb {R}^{n}\) be the n-dimensional input of the coalesced convolution and \(w^{i}_{j}\in \mathbb {R}^{i\times j}\) the i×j convolution kernel.
The corresponding feature output of the coalesced convolution can be denoted by $$ M = \left[\underbrace{w^{i}_{j} \ast X}_{\text{ordinary}} \bigcup \underbrace{w^{1}_{j} \ast X}_{\text{asymmetric}} \bigcup \underbrace{w^{i}_{j} \ast_{d} X}_{\text{dilated}} \right] $$ where the last term is the dilated convolution with dilation factor d. Figure 3 shows the formation of the RCC module.

Residual coalesced convolutional modules

Actually, it is interesting to investigate the proper way to combine the convolutional layers. In the experimental section, we will show how changing this combination, by summing or concatenating the layers, affects the results of the entire network. The entire encoder contains three stages, where each stage is made from five RCC modules. The ordinary convolution uses a 3×3 kernel. The dilation factor of the dilated convolutions ranges from 2 to 32, while the asymmetric kernel sizes are set to 5 and 7. In between the convolutional operations inside the RCC modules, a parametric rectified linear unit (PReLU) activation layer and a batch normalization layer are added. We then place a dropout layer at the end of the RCC modules for regularization. A skip connection imitating ResNet [11] is coupled with each RCC module. A max-pooling layer is subsequently appended between the stages for downsampling the input. The decoder is constructed by stacking the same RCC modules as the encoder, except that the coalesced convolutional part is now replaced by a deconvolutional layer and the number of stages is decreased. This setting is motivated by [9], where the role of pixel recognition should be carried out mostly by the encoder. The task of the decoder is merely to upsample the output of the encoder and adjust the details. A fully connected convolutional (Full Conv.) layer is thus appended behind the decoder for performing the pixel-wise mapping. As a summary of the proposed network, Table 1 exhibits the configuration of the RCC-Net, with 3-channel input images and 11 classes of road scenes.
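The projection-receptive-projection structure and the coalesced receptive branches of Eq. (1) can be sketched in PyTorch. This is a minimal illustration under our own assumptions: the class name, intermediate width `mid`, and exact placement of BatchNorm/PReLU/Dropout are ours, not taken from Table 1 of the paper, and the paper itself was implemented in Torch7 rather than PyTorch.

```python
import torch
import torch.nn as nn

class RCCModule(nn.Module):
    """Sketch of one residual coalesced convolutional (RCC) block:
    1x1 projection -> three parallel receptive branches (ordinary 3x3,
    asymmetric 1xk/kx1 pair, dilated 3x3) -> concatenation -> 1x1
    projection, with a skip connection around the whole block."""

    def __init__(self, channels, mid=16, k=5, dilation=2, p_drop=0.1):
        super().__init__()
        self.reduce = nn.Sequential(          # first 1x1 projection
            nn.Conv2d(channels, mid, 1, bias=False),
            nn.BatchNorm2d(mid), nn.PReLU())
        # coalesced receptive section; paddings keep the spatial size fixed
        self.ordinary = nn.Conv2d(mid, mid, 3, padding=1)
        self.asymmetric = nn.Sequential(
            nn.Conv2d(mid, mid, (1, k), padding=(0, k // 2)),
            nn.Conv2d(mid, mid, (k, 1), padding=(k // 2, 0)))
        self.dilated = nn.Conv2d(mid, mid, 3, padding=dilation, dilation=dilation)
        self.expand = nn.Sequential(          # second 1x1 projection
            nn.BatchNorm2d(3 * mid), nn.PReLU(),
            nn.Conv2d(3 * mid, channels, 1, bias=False),
            nn.Dropout2d(p_drop))             # regularization at block end

    def forward(self, x):
        h = self.reduce(x)
        # concatenated variant of Eq. (1); a summing variant would add the
        # three branch outputs element-wise instead
        m = torch.cat([self.ordinary(h), self.asymmetric(h), self.dilated(h)], dim=1)
        return x + self.expand(m)             # residual skip connection

block = RCCModule(channels=64)
y = block(torch.randn(1, 64, 45, 60))
print(y.shape)  # torch.Size([1, 64, 45, 60])
```

Because every branch preserves the spatial size and the final 1×1 convolution restores the channel count, the skip connection is a plain element-wise addition, and such blocks can be stacked into the three five-module encoder stages described above.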
Table 1 Configuration of RCC-Net

In this section, the efficacy of our proposed architecture is demonstrated against several state-of-the-art methods on road scene segmentation problems. All implementations of the proposed algorithm were done on a Linux PC (Ubuntu 16.04, Core i7, 32 GB RAM), with a GTX 1080 GPU and Torch7. Training was performed using Adam optimization [15] for 200 epochs with a learning rate of 10e-3, momentum 0.9, and batch size 8.

CamVid dataset benchmark
The performance of the proposed RCC-Net architecture is benchmarked on the CamVid road scene dataset [16], which consists of 367 training and 233 testing images with a resolution of 480×360. The CamVid dataset has 11 classes depicting different objects which frequently appear on the street, such as road, cars, pedestrians, and buildings. Table 2 shows the comparison of several state-of-the-art methods on the CamVid road scene dataset.

Table 2 Comparison on the CamVid dataset [16] using 11 road scene categories (in percent)

From Table 2, the proposed RCC-Net (concatenated version) exceeds the existing state-of-the-art methods in four different class categories and in the overall class average accuracy. Three of the four winning categories involve small areas and objects with less training data, which means our proposed method is capable of capturing objects that are difficult to segment. The best class average accuracy and a comparable intersection-over-union (IoU) imply that the RCC-Net is highly consistent in achieving good results in each category. One notable result is that the RCC-Net has the best capability for recognizing the sidewalk. It is very important for an autonomous car to differentiate the road from the sidewalk, so that the safety of pedestrians is guaranteed. Figure 4 depicts some examples of the RCC-Net prediction output on the test set of the CamVid dataset. RCC-Net results on CamVid [16] test set.
(color code: red road, yellow sidewalk, purple car, cyan pedestrian, blue building, green tree, white sky, and black void) As we noted in the previous section, it is intriguing to examine different ways of coalescing the convolutional layers. Both the summing and the concatenating versions of the RCC-Net surpass the other methods. Nevertheless, the concatenated version of the RCC-Net has advantages over the summing one. One interesting result is that the summing version of the RCC-Net achieves the highest accuracy on pedestrian segmentation (70.6%). This fact may lead to a promising application in future research, e.g., determining salient regions for pedestrian detection. Test on wild scene To probe the capabilities of the RCC-Net, we subsequently fed the network some difficult road scenes taken from the Internet, without re-training the network model obtained from the previous CamVid benchmark. From Fig. 5, the RCC-Net produces qualitatively good segmentations, even for scenes which are heavily cluttered. This also means the proposed network is able to transfer the model information to a new environment. RCC-Net results on wild scenes Computation time and model size On a GTX 1080, the RCC-Net took 25.5 ms for the forward inference of a 480×360 image, including fetching and displaying the image. It is also able to run one inference on a car-deployable mini PC Zotac EN-761 in 67.5 ms with a network size of 4.9 MB, drawing a power consumption of around 62.4 W. This means the proposed network is fast and small enough to enable Advanced Driver Assistance Systems (ADAS). We plan to run the network on a GPU-based embedded system, such as the NVIDIA Jetson TK1, for further investigation1. An efficient and compact solution for solving the semantic road segmentation problem has been presented. 
By coalescing different types of convolutional layers and stacking them in a deep residual network style, we achieve high-quality results on semantic road segmentation with a relatively small model size, surpassing the existing state-of-the-art methods. In the future, we would like to examine the performance of our RCC-Net on broader problems, such as medical images and other challenging image segmentation datasets, to understand its capability to solve more general segmentation applications. 1 The progress of RCC-Net performance on the embedded system can be seen at http://te.ugm.ac.id/~igi/?page_id=826 Yang Y, Li Z, Zhang L, Murphy C, Hoeve JV, Jiang H (2012) Local label descriptor for example based semantic image labeling In: Proc. of European Conference on Computer Vision (ECCV), 361–375. Sturgess P, Alahari K, Ladicky L, Torr PHS (2009) Combining appearance and structure from motion features for road scene understanding In: Proc. of British Machine Vision Conference (BMVC). Ladicky L, Sturgess P, Alahari K, Russell C, Torr PHS (2010) What, where and how many? Combining object detectors and CRFs In: Proc. of European Conference on Computer Vision (ECCV), 424–437. Zhang C, Wang L, Yang R (2010) Semantic segmentation of urban scenes using dense depth maps In: Proc. of European Conference on Computer Vision (ECCV), 708–721. Tighe J, Lazebnik S (2013) Superparsing. Int J Comput Vision (IJCV) 101(2): 329–349. Krizhevsky A, Sutskever I, Hinton GE (2012) Imagenet classification with deep convolutional neural networks In: Proc. of NIPS, 1097–1105. Long J, Shelhamer E, Darrell T (2015) Fully convolutional networks for semantic segmentation In: Proc. of IEEE Computer Vision and Pattern Recognition (CVPR), 3431–3440. Badrinarayanan V, Kendall A, Cipolla R (2015) Segnet: a deep convolutional encoder-decoder architecture for image segmentation. arXiv: 1511.00561. 
Paszke A, Chaurasia A, Kim S, Culurciello E (2016) Enet: a deep neural network architecture for real-time semantic segmentation. arXiv: 1606.02147v1. Brust CA, Sickert S, Simon M, Rodner E, Denzler J (2015) Convolutional patch networks with spatial prior for road detection and urban scene understanding In: Proc. of VISAPP. He K, Zhang X, Ren S, Sun J (2015) Deep residual learning for image recognition. arXiv: 1512.03385. Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z (2015) Rethinking the inception architecture for computer vision. arXiv: 1512.00567. Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, Erhan D, Vanhoucke V, Rabinovich A (2015) Going deeper with convolutions In: Proc. of IEEE Computer Vision and Pattern Recognition (CVPR), 1–9. Yu F, Koltun V (2015) Multi-scale context aggregation by dilated convolutions. arXiv: 1511.07122. Kingma D, Ba J (2014) Adam: a method for stochastic optimization. arXiv: 1412.6980. Brostow GJ, Shotton J, Fauqueur J, Cipolla R (2008) Segmentation and recognition using structure from motion point clouds In: Proc. of European Conference on Computer Vision (ECCV), 44–57. This work was supported by AUN-SEED Net Collaborative Research for Alumni (CRA) Project. IA performed the primary development and analysis for this work and the initial drafting of the manuscript. TBA played an essential role in the development of this work and in editing the paper. All authors read and approved the final manuscript. Department of Electrical Engineering and Information Technology, Universitas Gadjah Mada, Jl. Grafika No.2, Yogyakarta, Indonesia Igi Ardiyanto & Teguh Bharata Adji Correspondence to Igi Ardiyanto. Ardiyanto, I., Adji, T.B. Deep residual coalesced convolutional network for efficient semantic road segmentation. IPSJ T Comput Vis Appl 9, 6 (2017) doi:10.1186/s41074-017-0020-9 Residual coalesced network Road segmentation
Exercises - Optimization A cylindrical container is to be constructed to hold $32\pi$ cm3 of flammable material. The cost per cm2 of constructing the top is three times that of the sides and bottom, since the top must contain a safety valve. What dimensions will minimize the cost of construction? The surface area is given by the sum of the areas of the top, bottom, and sides: $$A = \pi r^2 + \pi r^2 + 2\pi rh$$ The cost per unit area of the top is three times that of the sides and bottom, so we have a cost function expressible as $$C = 3\pi r^2 + \pi r^2 + 2\pi rh = 4\pi r^2 + 2\pi rh$$ The given volume tells us $\pi r^2 h = 32\pi$, which we can solve for $h$ to find $h = 32r^{-2}$. This can be substituted into the above equation to get the cost in terms of a single variable $r$: $$C(r) = 4\pi r^2 + 64 \pi r^{-1}$$ Setting $C'(r) = 8\pi r - 64\pi r^{-2}$ equal to zero leads to an absolute minimum when $r=2$ cm and $h = 8$ cm. Find two positive numbers whose product is $1$ such that the sum of the first number plus the square of the second is a minimum. If $x,y$ are the two numbers, they're related by $xy=1$ and we want to minimize $x+y^2=x+1/x^2$ over $(0,\infty)$. The min is at $x=\sqrt[3]{2}$ (the limits at the endpoints both give $+\infty$). Find the point of $y=\sqrt{x}$ closest to $(c,0)$ if $c \ge \frac{1}{2}$ and if $c \lt \frac{1}{2}$. (Hint: minimize the square of the distance.) We're trying to minimize the function $(x-c)^2 + (0-\sqrt{x})^2 = (x-c)^2 + x$ over $[0,\infty)$ (the domain of $\sqrt{x}$). The function has a critical point at $x=c-\frac{1}{2}$, but this point must be in our domain, so if $c \gt \frac{1}{2}$ the minimum is the point $\left(c-\frac{1}{2},\sqrt{c-\frac{1}{2}}\right)$; if $c=\frac{1}{2}$ the critical point coincides with the endpoint $x=0$, giving the origin (with distance squared $1/4$); finally, if $c \lt \frac{1}{2}$ the critical point would be negative, so the closest point on the curve is the origin. Farmer Brown is standing in a field, and wants to get home as soon as possible. 
Along the field runs a straight road that leads directly to his house. Farmer Brown is 1 mi from the road and 5/3 mi from his house, and can walk through the field at 2 mi/h and on the road at 4 mi/h. What's the soonest that Farmer Brown can be home? Let $A$ be the point on the road closest to Farmer Brown, and $C$ be his house. A simple Pythagorean theorem computation shows that the distance from $A$ to $C$ is $4/3$ mi. Suppose that Farmer Brown crosses through the field to a point $B$ on the road that's $x$ miles from $A$, and walks on the road the remaining distance. Since time is distance over speed, this path takes a time of $$\frac{\sqrt{1+x^2}}{2} + \frac{(\frac{4}{3} - x)}{4} \textrm{ hours}$$ Minimizing this over the interval $[0,4/3]$, we get that the min is at $x = 1/\sqrt{3}$ mi, where it takes Farmer Brown $\frac{\sqrt{3}}{4}+\frac{1}{3}$ hours to get home, as opposed to the value $5/6$ at both endpoints. A rectangle is inscribed between the $x$-axis and the parabola $y=a^2-x^2$, one side of the rectangle lying on the $x$-axis. Find the rectangle of this kind with the largest area. If you draw the problem you quickly realize that, by symmetry, if $l$ is half the length of the side on the $x$-axis (so the point $(l, 0)$ is a vertex of the rectangle), the area is given by $2l(a^2-l^2)$, with $l$ in $[0,a]$. You can easily maximize this to find $l = a/\sqrt{3}$, in which case the height is $\frac{2}{3}a^2$ and the area is $\frac{4a^3}{3\sqrt{3}}$. An open rectangular box is formed by taking a square sheet of cardboard and removing identical squares from each of the corners, then folding it up. What's the largest volume the box can reach? If the big square has side $L$ (a constant), and the small square has side $x$, the resulting box has height $x$ and a square base of side $L-2x$, so its volume is $x(L-2x)^2$, which we need to maximize on $[0,L/2]$. 
Taking the derivative and solving the resulting quadratic equation yields the max at $x=L/6$, where the volume is $V = \frac{2}{27}L^3$. A silo is in the shape of a cylinder (with no base) topped by a hemisphere. The cost per square foot of constructing the hemisphere is twice as big as the cost per square foot of constructing the cylinder. If the volume of the silo is fixed, what are the dimensions of the silo whose construction cost is minimal? If $h$ is the height of the cylinder and $r$ is the radius of both the cylinder and the hemisphere (they must match), then the volume is $V = \pi r^2 h + \frac{2}{3} \pi r^3$ and the cost (which is the cost per square foot times the area) is given by $C = 2 \cdot 2\pi r^2 + 1 \cdot 2\pi r h$. Solving the volume equation for $h$ and plugging into $C$, we want to minimize $C(r) = \frac{8}{3} \pi r^2 + 2V/r$ over $\left(0,\left(\frac{3V}{2\pi}\right)^{1/3}\right)$. The min is at $r = \sqrt[3]{3V/(8\pi)}$. Geometrically, this solution gives $h = 2r$, so there's some nice symmetry. A box with no top has base length twice its width. Find the box of maximum volume made from 24 square feet of material. Dimensions: $2 \times 4 \times \frac{4}{3}$ (in feet) Let $x$ and $y$ be legs of a right triangle whose hypotenuse is 2. For what values of $x$ and $y$ is $2x+y^2$ a maximum? $x = 1, y = \sqrt{3}$ A rectangular box is to have a square base and top and is to hold 1000 in3 of material. What are the dimensions of the box if it is to have a minimum surface area? $10$ inches $\times$ $10$ inches $\times$ $10$ inches A window in the shape of a rectangle capped by a semicircle is to be surrounded by $p$ inches of metal border. Find the radius of the semicircular part if the total area of the window is to be a maximum. $\displaystyle{r = \frac{p}{4+\pi}}$ A rectangular garden is to be laid out along a neighbor's property line and is to contain 432 yds2. If the neighbor pays for half of the dividing fence, what are the dimensions so that the cost to the owner is minimized? 
$18$ yds $\times$ $24$ yds A rectangular box with top has base length three times its width. Find the dimensions of such a box of maximum volume that can be constructed from 200 in2 of material. $10$ inches $\times$ $\frac{10}{3}$ inches $\times$ $5$ inches A rectangle has an area of $8$ in2. Find the dimensions so that the distance from one corner to the midpoint of the opposite side is minimized. ($x=2$ in, $y=4$ in) A museum display case in the shape of a rectangular box with square base and a volume of 54 ft3 is to be built. The front, back and sides are to be made of glass, which costs $10$ dollars per square foot; the top and bottom are to be made of fine wood, which costs $20$ dollars per square foot. Find the dimensions of the least expensive case that can be constructed, and find its cost. $3$ ft $\times$ $3$ ft $\times$ $6$ ft, $\$1080$ Find two positive numbers whose product is 100 and whose sum is a minimum. Find the volume of the largest right circular cylinder that can be inscribed in a sphere of radius $r$. Two hallways of widths $a$ feet and $b$ feet intersect at right angles. What is the longest ladder that can be carried horizontally around the corner? A metal trough is formed from a metal sheet by folding the sheet lengthwise down the middle (so that the cross-section of the trough is a triangle) and capping both ends. At what angle should the sheet be folded to form the trough of maximum volume?
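Several of the worked answers above can be double-checked numerically. The sketch below is not part of the exercises; it brute-forces three of them (the cylindrical container, Farmer Brown's walk, and the museum case), and the `argmin_on` helper is our own.

```python
import math

def argmin_on(f, lo, hi, n=100000):
    """Brute-force minimiser on [lo, hi]; a coarse grid search is
    accurate enough here to confirm the closed-form answers."""
    best_x, best_v = lo, f(lo)
    for k in range(1, n + 1):
        x = lo + (hi - lo) * k / n
        v = f(x)
        if v < best_v:
            best_x, best_v = x, v
    return best_x, best_v

# Cylindrical container: C(r) = 4*pi*r^2 + 64*pi/r, minimised at r = 2
r, _ = argmin_on(lambda r: 4 * math.pi * r**2 + 64 * math.pi / r, 0.5, 5)

# Farmer Brown: t(x) = sqrt(1+x^2)/2 + (4/3 - x)/4, minimised at x = 1/sqrt(3)
x, t = argmin_on(lambda x: math.sqrt(1 + x * x) / 2 + (4 / 3 - x) / 4, 0, 4 / 3)

# Museum case: C(s) = 40*(54/s + s^2), minimised at s = 3 with cost $1080
s, c = argmin_on(lambda s: 40 * (54 / s + s * s), 1, 6)
```

Running this recovers $r \approx 2$, $x \approx 1/\sqrt{3} \approx 0.577$, and a minimal museum-case cost of about $1080, matching the answers above.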
Point Coordinates: An Introduction to Polar, Cylindrical and Spherical Coordinates 
See also: Cartesian Coordinates Our page on Cartesian Coordinates introduces the simplest type of coordinate system, where the reference axes are orthogonal (at right angles) to each other. In most everyday applications, such as drawing a graph or reading a map, you would use the principles of Cartesian coordinate systems. In these situations, the exact, unique position of each data point or map reference is defined by a pair of (x,y) coordinates (or (x,y,z) in three dimensions). The coordinates are the point's 'address', its location relative to a known position called the origin, within a two- or three-dimensional grid on a flat surface or rectangular 3D space. However, some applications involve curved lines, surfaces and spaces. Here, Cartesian coordinates are difficult to use and it becomes necessary to use a system derived from circular shapes, such as polar, spherical or cylindrical coordinate systems. Why are Polar, Spherical and Cylindrical Coordinates Important? 
In everyday situations, it is much more likely that you will encounter Cartesian coordinate systems than polar, spherical or cylindrical. However, two-dimensional polar coordinates and their three-dimensional relatives are used in a wide range of applications from engineering and aviation, to computer animation and architecture. You may need to use polar coordinates in any context where there is circular, spherical or cylindrical symmetry in the form of a physical object, or some kind of circular or orbital (oscillatory) motion. Physically curved forms or structures include discs, cylinders, globes or domes. These could be anything from pressure vessels containing liquified gases to the many examples of dome structures in ancient and modern architectural masterpieces. Physicists and engineers use polar coordinates when they are working with a curved trajectory of a moving object (dynamics), and when that movement is repeated back and forth (oscillation) or round and round (rotation). Examples include orbital motion, such as that of the planets and satellites, a swinging pendulum or mechanical vibration. In an electrical context, polar coordinates are used in the design of applications using alternating current; audio technicians use them to describe the 'pick-up area' of microphones; and they are used in the analysis of temperature and magnetic fields. An Emphasis on Exploration The most familiar use in an everyday context is perhaps in navigation. Explorers throughout history have relied on an understanding of polar coordinates. Ships and aircraft navigate using compasses that indicate the direction of travel (known as a heading) relative to a known direction, which is magnetic North. The heading is measured as an angle from due North (0°), clockwise around the compass, so due East is 90°, South 180° and West 270°. 
GPS satellites can pinpoint the position of a vessel with great accuracy in today's world, but even now seafarers and aviators need to understand the principles of classic navigation. How are Polar, Spherical and Cylindrical Coordinates Defined? In these cases and many more, it is more appropriate to use a measurement of distance along a line oriented in a radial direction (with its origin at the centre of the circle, sphere or arc) combined with an angle of rotation, than it is to use an orthogonal (Cartesian) coordinate system. Trigonometry can then be used to convert between the two types of coordinate system. For more about this and the theory behind it, have a look at our pages on curved shapes, three-dimensional shapes and trigonometry. Polar Coordinates In mathematical applications where it is necessary to use polar coordinates, any point on the plane is determined by its radial distance \(r\) from the origin (the centre of curvature, or a known position) and an angle theta \(\theta\) (measured in radians). The angle \(\theta\) is always measured from the \(x\)-axis to the radial line from the origin to the point (see diagram). In the same way that a point in Cartesian coordinates is defined by a pair of coordinates (\(x,y\)), in radial coordinates it is defined by the pair (\(r, \theta\)). Using Pythagoras and trigonometry, we can convert between Cartesian and polar coordinates: $$r^2=x^2+y^2 \quad \text{and} \quad \tan\theta=\frac{y}{x}$$ And back again: $$x=r\cos\theta \quad \text{and} \quad y=r\sin\theta$$ Spherical and Cylindrical Coordinate Systems These systems are the three-dimensional relatives of the two-dimensional polar coordinate system. Cylindrical coordinates are more straightforward to understand than spherical and are similar to the three dimensional Cartesian system (x,y,z). In this case, the orthogonal x-y plane is replaced by the polar plane and the vertical z-axis remains the same (see diagram). 
The conversion between cylindrical and Cartesian systems is the same as for the polar system, with the addition of the z coordinate, which is the same for both: $$r^2=x^2+y^2,\quad \tan\theta=\frac{y}{x}\quad \text{and} \quad z=z$$ $$x=r\cos\theta,\quad y=r\sin\theta\quad \text{and} \quad z=z$$ Surfaces in the Cylindrical System: If you make \(z\) a constant, you have a flat circular plane. If you make \(\theta\) a constant, you have a vertical plane. If you make \(r\) constant, you have a cylindrical surface. The spherical coordinate system is more complex. It is very unlikely that you will encounter it in day-to-day situations. It is primarily used in complex science and engineering applications. For example, electrical and gravitational fields show spherical symmetry. Spherical coordinates define the position of a point by three coordinates rho (\(\rho\)), theta (\(\theta\)) and phi (\(\phi\)). \(\rho\) is the distance from the origin (similar to \(r\) in polar coordinates), \(\theta\) is the same as the angle in polar coordinates and \(\phi\) is the angle between the \(z\)-axis and the line from the origin to the point. In the same way as converting between Cartesian and polar or cylindrical coordinates, it is possible to convert between Cartesian and spherical coordinates: $$x = \rho\sin\phi\cos\theta,\quad y=\rho\sin\phi\sin\theta\quad\text{and}\quad z=\rho\cos\phi$$ $$\rho^2=x^2+y^2+z^2,\quad\tan\theta =\frac{y}{x}\quad\text{and}\quad\tan\phi=\frac{\sqrt{x^2+y^2}}{z}$$ Surfaces in a Spherical System: If you make \(\rho\) a constant, you have a sphere. If you make \(\phi\) a constant, you have a cone (or, when \(\phi\) is 90°, the horizontal plane). Latitude and Longitude, Maps and Navigation The most familiar application of spherical coordinates is the system of latitude and longitude that divides the Earth's surface into a grid for navigational purposes. The distances between lines on the grid are not measured in miles or kilometres, but in degrees and minutes. 
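The conversion formulas above translate directly into a short sketch (Python here; the function names are our own). One practical point the formulas gloss over: computing the angle with `atan2` rather than a bare inverse tangent of y/x avoids quadrant ambiguity.

```python
import math

def polar_to_cartesian(r, theta):
    return r * math.cos(theta), r * math.sin(theta)

def cartesian_to_polar(x, y):
    # atan2 handles all four quadrants, unlike a bare inverse tangent
    return math.hypot(x, y), math.atan2(y, x)

def spherical_to_cartesian(rho, theta, phi):
    """phi is measured down from the z-axis and theta in the x-y plane,
    matching the convention used in the text."""
    return (rho * math.sin(phi) * math.cos(theta),
            rho * math.sin(phi) * math.sin(theta),
            rho * math.cos(phi))

def cartesian_to_spherical(x, y, z):
    rho = math.sqrt(x * x + y * y + z * z)
    return rho, math.atan2(y, x), math.atan2(math.hypot(x, y), z)
```

A quick round trip — polar to Cartesian and back, or spherical to Cartesian and back — recovers the original coordinates, which is a useful sanity check when implementing these conversions.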
Lines of latitude are horizontal slices through the globe. The slice at the Equator is at 0° latitude and the poles are at ±90°. These lines are called parallels. Lines of longitude are like the wedges of an orange, measured radially from a vertical line of symmetry connecting the poles. These lines are called meridians. The reference line of 0° longitude is known as the Greenwich Meridian, which passes through the Royal Observatory in Greenwich, London. To use this 3D system for navigation however, the curved grid needs to be transferred onto flat 'charts' (maps of coastlines and the ocean floor for seafarers) using a projection. In this way, charts can be used like conventional maps with an orthogonal grid system, and the rules of Cartesian coordinates can be applied. First imagine wrapping a piece of paper around a globe, making a cylinder. The image on the chart is projected from the three-dimensional sphere onto the two-dimensional sheet of paper. This is a specific method used by cartographers called the Mercator Projection. The grid lines on a nautical chart are still in degrees and minutes and distances are measured in nautical miles. One nautical mile is the same as one minute of latitude. It is unlikely that you will need to use polar or spherical coordinates unless you work in a role that specifically requires it, but it is helpful to be aware of what they are and how they are used. It is also fascinating to understand how a map of a 3D shape like the globe can be translated into flat charts that have enabled seafarers to travel the globe for hundreds of years. 
Ge/Si multilayer epitaxy and removal of dislocations from Ge-nanosheet-channel MOSFETs Chun-Lin Chu1, Jen-Yi Chang2, Po-Yen Chen4, Po-Yu Wang4, Shu-Han Hsu3 & Dean Chou4,5 Scientific Reports volume 12, Article number: 959 (2022) Horizontally stacked pure-Ge-nanosheet gate-all-around field-effect transistors (GAA FETs) were developed in this study. Large-lattice-mismatch Ge/Si multilayers were intentionally grown as the starting material rather than Ge/GeSi multilayers to acquire the benefits of the considerable difference in the material properties of Ge and Si for realising selective etching. 
Flat Ge/Si multilayers were grown at a low temperature to preclude island growth. The shape of the Ge nanosheets was almost retained after etching owing to the excellent selectivity. Additionally, dislocations were observed in the suspended Ge nanosheets because of the absence of a Ge/Si interface and the disappearance of the dislocation-line tension force owing to the elongation of misfit dislocations at the interface. Forming gas annealing of the suspended Ge nanosheets resulted in a significant increase in the glide force compared to the dislocation-line tension force; the dislocations were easily removed because of this condition and the small size of the nanosheets. Based on this structure, a new mechanism of dislocation removal from suspended Ge nanosheet structures by annealing is described, which results in structures exhibiting excellent gate control and electrical properties. 
Recently, various novel structures have been derived based on GAAFETs, such as GAAFETs with a stack of lateral nanowires and horizontally stacked nanosheet GAAFETs. However, these new structures are challenging to manufacture and have issues regarding the elimination of crystal defects. Therefore, the appropriate selection of materials and removal of crystal defects is vital. The type of material selected is known to affect transistor efficiency due to electron mobility. For decades, silicon materials have been extensively used in epitaxial wafer layers; however, they encounter physical limitations and cause a decrease in the minification efficiency after entering the 10-nm node. Therefore, major semiconductor manufacturers have worked on developing alternative materials with more excellent stability and efficiency. Among them, germanium and III–V compounds are known to effectively improve the electron mobility of transistor channels, increase chip effectiveness, and enhance power-saving benefits. Consequently, they have become the premier options for use in new semiconductor manufacturing methods. Crystal defects, on the other hand, are known to affect the water quality and must therefore be eliminated. These defects not only cause material deterioration of the epitaxial layer but can also readily form electron–hole recombination centres in the active layer, which can seriously affect the operating performance of the components. Consequently, it is essential to employ substrates and epitaxial materials with mismatched lattice constants. The lattice mismatch between the epitaxial materials and substrates assists in the release of cumulative stress through dislocations and surface roughness during epitaxy. Previous studies by Frank et al.2,3,4 and Matthews et al.5,6,7 have described a relationship between the lattice mismatch and the thickness of the epitaxial layer. 
For a given lattice mismatch factor (m) of an epitaxial layer/substrate structure, a specific thickness known as the critical thickness (tc) exists and is defined as follows: $$ t_{c} = \frac{b\left(1 - \nu \cos^{2}\theta\right)}{8\pi\left(1 + \nu\right)m\cos\lambda}\left(\ln\frac{t_{c}}{b} + 1\right) $$ where ν is the Poisson ratio, b is the Burgers vector, θ is the angle between the dislocation line and its Burgers vector, and λ is the angle between the direction of slip and the direction of the film plane that is perpendicular to the line of intersection between the slip plane and the interface. Furthermore, dislocation theory suggests that a dislocation line cannot end within the lattice; it must either form a dislocation loop or extend to the grain boundary. Therefore, the misfit dislocation generated at the heterostructure junction, which extends from the junction to the interface, increases as the thickness of the epitaxial layer increases. This extended dislocation that penetrates the interface is denoted a threading dislocation. These defects cause the degradation of components; therefore, reducing the number of dislocations is necessary for the semiconductor process. Currently, single and stacked GAA nanowire/nanosheet structures are attracting attention for developing transistors at sub-5-nm technology nodes8,9. GAA structures offer excellent electrostatic and short-channel control, and the stacking of nanowires/nanosheets increases the total drive current per footprint. Si, SiGe, GeSn, and InGaAs stacked-channel FETs have been reported in the literature10,11,12,13,14,15,16. Si-on-insulator (SOI) wafers with a 70-nm-thick Si top layer (p-type, 9–18 Ω∙cm) were employed as the starting substrates. 
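The critical-thickness relation above is implicit in t_c, so it is usually solved numerically. The sketch below uses fixed-point iteration with illustrative parameters for a Ge film on Si (60° misfit dislocations, ν ≈ 0.27, b ≈ 0.40 nm, m ≈ 0.042); these parameter values are our assumptions for illustration, not numbers taken from the paper.

```python
import math

def critical_thickness(b, nu, m, cos_theta, cos_lam, t0=1.0, iters=200):
    """Solve the implicit critical-thickness relation
    t_c = K * (ln(t_c / b) + 1) by fixed-point iteration.
    All lengths are in nanometres."""
    K = b * (1 - nu * cos_theta**2) / (8 * math.pi * (1 + nu) * m * cos_lam)
    t = t0
    for _ in range(iters):
        t = K * (math.log(t / b) + 1)
    return t

# Assumed illustrative values for Ge on Si (not from the paper):
# 60-degree dislocations (cos(theta) = cos(lambda) = 1/2), b ~ a/sqrt(2).
tc = critical_thickness(b=0.40, nu=0.27, m=0.042, cos_theta=0.5, cos_lam=0.5)
# tc comes out on the order of ~1 nm, i.e. only a few monolayers, which
# is why thicker Ge-on-Si layers relax via misfit dislocations.
```

The iteration converges because the map t → K(ln(t/b)+1) is a contraction near the fixed point for these parameters; a nanometre-scale t_c is consistent with the ~4.2 % Ge/Si mismatch.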
The wafers were cleaned using the RCA standard cleaning 1 and 2 methods (SC1 and SC2) to remove organic materials, certain metals, and particles from the Si substrates; the wafers were subsequently rinsed in deionised water and dried in N2. To prepare the starting material, three periods of Ge(40 nm)/Si(25 nm) epitaxial layers were grown on SOI(100) with a 40-nm Si layer using a low-pressure chemical vapour deposition (LPCVD) system with GeH4 and SiH4 gases. Ge/Si epitaxial multilayers were used as the starting material instead of conventional Ge/SiGe layers owing to the better etching selectivity between Ge and Si in our etching process; this is discussed in detail later. However, the large lattice mismatch between Ge and Si prevents Ge/Si epitaxial layers from growing in a 2D mode. Therefore, the temperatures during epitaxy of Ge and Si were maintained at 400 °C and under 500 °C, respectively, to avoid island growth. The thickness of the deposited Ge film was determined by transmission electron microscopy (TEM), and no dislocations were detected in the Ge nanosheets during TEM observations conducted in parallel with photoluminescence (PL) analysis. The crystallisation of the Ge film was examined by X-ray diffraction (XRD; Cu Kα, λ = 1.5408 Å). All etchings were performed in Lam Research reactors (TCP 9600), which are transformer-coupled plasma (TCP) reactors that facilitate separate control of the coil (top electrode) power and substrate (lower electrode) bias. Backside He cooling was employed for more effective management of the substrate temperature. Samples were mounted on a 6-in Si carrier wafer coated with vacuum grease before being introduced into the etching chamber.

Stacking Ge/Si epitaxial layers on silicon

Ge(40 nm)/Si(25 nm) layers were epitaxially grown by LPCVD, using GeH4 and SiH4 gases, on a 40-nm-thick Si device layer above a 150-nm-thick buried oxide (BOX).
In general, SiGe is used as the sacrificial layer when building stacked Ge layers, with Ge/SiGe multilayers as the starting material. However, because the lattice mismatch within such multilayers is tiny, it is difficult to etch the SiGe away from the Ge selectively. This study therefore proposes using Si as the sacrificial layer and Ge/Si multilayers as the starting material; with the alternative selective etching process proposed here, the Si can then be readily removed from the Ge. However, the large lattice mismatch between Ge and Si prevents the Ge/Si epitaxial layers from growing in a two-dimensional (2D) mode17,18. To avoid island growth, the growth temperatures of Ge and Si were maintained at 400 °C and under 500 °C, respectively, and the growth process was not interrupted during temperature ramping (Fig. 1). The XRD pattern of the sample is shown in Fig. 2 and indicates that the Ge layers are relaxed and flat in morphology; otherwise, the Ge peak would be considerably dispersed in shape. High-resolution XRD analysis was conducted with (004) Ω–2θ scans and (224) reciprocal space mapping (RSM) to confirm the degree of strain relaxation and the lattice constant of Ge, as shown in Fig. 2. The position of the Ge peak coincides with that of ideal Ge (004) with a perpendicular lattice constant of ~5.66742 Å, implying that the epitaxial Ge film contains a small amount of compressive strain. The (224) RSM reveals that the 40-nm Ge layer grown on the wafer is in a nearly relaxed state. Figure 1. SEM images of Ge/Si multilayers. Low-temperature growth of the Si layer and ensuring that the growth process is not interrupted when switching between Ge and Si transform the layers from (a) island growth to (b) the 2D growth mode, despite the large Ge/Si mismatch.
Figure 2. (a) XRD analysis and reciprocal space mapping (RSM) showing the formation of a relaxed single-crystalline Ge (100) layer on the wafers and (b) RSM pattern of the Ge/Si multilayers grown two-dimensionally on the wafers.

Dislocation removal from suspended Ge nanosheets

After LPCVD-based epitaxy, the active region of the device was defined by electron-beam lithography. The active region was subsequently isolated by etching the Ge/Si stacked layers using Cl2/HBr-based etching processes. The fin structures formed in the central area of the active region are ~90-nm wide (Fig. 3a). A close-up micrograph (Fig. 3b) shows the Ge/SOI heterostructures and the 60° misfit dislocations, resulting from lattice mismatch, along the Ge/Si interface. Every misfit dislocation terminates in two threading dislocations at its ends, which must either thread to the surface or form a loop through end-to-end joining. The crystal is highly strained near the dislocations, and the atomic bonds around the defects do not connect the neighbouring atoms in the normal way. Figure 3. (a) TEM images showing misfit dislocations along the Ge/Si interface and the resulting threading dislocations across the Ge film; (b) enlarged micrographs of misfit dislocations at the Ge/Si interface, showing the formation of a relaxed single-crystalline Ge(100) layer on the wafers. The shape of the Ge nanosheets is almost fully retained after etching owing to the excellent etching selectivity. Additionally, the dislocations in suspended Ge sheets are more easily removed than those in samples with Ge layers still tied to Si layers. Regarding selective etching, this study demonstrates that, at a suitable temperature and with the assistance of megasonic agitation, the Si layers can be readily etched away over the Ge layers with good selectivity using a tetramethylammonium hydroxide (TMAH) water solution, which helps retain the Ge layers. The solution must be at ~60 °C with megasonic agitation to achieve adequate etching selectivity between Ge and Si19.
The remaining Ge sheets appear perfect. The dislocations in the Ge layers, caused by the large lattice mismatch between Ge and Si, are detrimental to device performance; however, the dislocations in the suspended Ge structures can be easily removed by forming gas annealing (FGA; Fig. 4a), possibly because the dislocations can glide more freely without dragging along the Ge/Si interface (Fig. 4b). The reduction of dislocation density in the suspended Ge was also confirmed by PL measurements (Fig. 5). Two band-edge PL peaks corresponding to recombination through the direct band-gap of Ge at 0.75 eV are observed in increasing order of wavelength. The PL peak positions were obtained as a function of excitation laser power for both samples, and the PL peak energy of the Ge nanosheets agrees with the direct band-gap PL peak energy of Ge. This agreement suggests that the observed PL from the Ge nanosheets originates from direct band-gap recombination. The observed PL peak shifts towards longer wavelengths with increasing excitation laser power owing to laser-induced heating and the subsequent trapping of heat within the dense array of Ge nanosheets. Additionally, the PL peak intensity shows a nearly quadratic dependence on excitation laser power and increases with increasing temperature, similar to the direct band-gap PL behaviour of bulk Ge crystals. These observations indicate that efficient direct band-gap recombination is responsible for the observed PL from the Ge nanosheets. Figure 4. Cross-sectional TEM images of stacked Ge nanosheets (a) before and (b) after annealing. The latter shows that the dislocations can be easily removed by forming gas annealing (600 °C/5 min) of the suspended sheets. Figure 5. PL emission from an area with intense multiple lines of stacked Ge nanosheets. The suspended Ge nanosheets exhibit enhanced PL emission after dislocation removal by annealing.
Device fabrication and characterisation

Figure 6 shows three horizontally stacked pure-Ge-nanosheet GAAFETs; the remaining Ge sheets appear perfect. The dislocations in the suspended Ge structures can be easily removed by FGA (600 °C/5 min), possibly because the dislocations can glide more freely without dragging along the Ge/Si interface. Removing dislocations from suspended Ge layers is easier because the line tension force (FH) exerted on dislocation lines at the Ge/Si interface10 is absent; consequently, the net glide force (FD) can drive more efficient gliding of the dislocations. A 3-nm-thick high-κ dielectric Y2O3 layer was conformally deposited around the stacked channels by atomic layer deposition (ALD) as the gate dielectric. Cross-sectional TEM images of the stacked Ge nanosheets (a) before and (b) after annealing show that the dislocations can be easily removed by annealing (FGA, 600 °C/5 min) the suspended sheets. The capacitance–voltage (C–V) curves in Fig. 7 reveal the process control of this gate stack. The C–V characteristics were obtained as a function of frequency for TiN/Y2O3/p-Ge structures with different ALD cycle counts at 300 K. Typical C–V behaviour with three distinct regions of accumulation, depletion, and inversion is observed in all the tested structures. A common feature of all the C–V profiles is the stretch-out that appears in the depletion region, suggesting that interfacial traps are distributed at the interface between the dielectric film and the semiconductor. The capacitance equivalent thickness (CET) is ~1.4 nm. Preliminary results on TiN/Y2O3/p-Ge structures subjected to FGA reveal a decrease in the equivalent oxide thickness (EOT) and improved C–V profiles, indicating that ALD Y2O3 is a promising candidate for metal-oxide/Ge gate stacks. Figure 7. C–V characteristics of capacitors fabricated using the TiN/Y2O3 gate stack process.
Stacked Ge nanosheets with lengths of 1 µm and 180 nm, obtained after etching away the Si inter-layers, are shown in Fig. 8. After gate formation, a three-step B implantation with doses and energies of 2 × 1015/cm2 at 30 keV, 1 × 1015/cm2 at 20 keV, and 1 × 1015/cm2 at 10 keV was performed sequentially for uniform p-FET source/drain (S/D) doping; a similar process was employed for n-FET S/D doping. Both types of doping activation were performed by two-step rapid thermal annealing: 400 °C/5 min followed by 550 °C/30 s. Figure 9 shows the drain current–gate voltage (Id–Vg) characteristics of the p- and n-type Ge-nanosheet FETs; both devices exhibit decent transistor characteristics. Figure 9a shows the Id–Vg curves of a three-stacked Ge-nanosheet p-FET and a single Ge-nanosheet p-FET with a short channel of 180 nm. The three-sheet device exhibits an ION of 1650 μA/μm at a gate voltage of −1 V in Fig. 9a, which is ~2.3 times that of the single-sheet device. Figure 9b shows the Id–Vg curves of three-stacked Ge-nanosheet n-FETs. Figure 10 shows data corresponding to devices with a long channel of Lch = 1.0 µm. Because of the large series resistance, all these devices exhibit smaller ION values than those of devices with the short channel (Lch = 180 nm), which underlines the importance of channel length in controlling the ION of a nanowire or nanosheet device. To further enhance the ION, uniaxial compressive and tensile stresses were applied to the p- and n-FETs, respectively, along the [110] channel direction by bending the sample. The applied strain was adjusted via the wafer curvature. With a strain of 0.06%, the ION of the p-FETs and n-FETs can be boosted by ~12% and ~7%, respectively (Fig. 11). The Id–Vg curves roughly correspond to the linear regime because the S/D contacts in these devices are not optimised, and the resulting series resistance is somewhat large.
Finally, the Ge-nanosheet FETs were modelled by technology computer-aided design (TCAD) simulation, as shown in Fig. 12 for Lch = 1.0 µm and 180 nm. The device operation was simulated in a regime where carrier conduction occurs on the surface of the device; the simulated dopant concentration is shown in Fig. 12a. The simulations in Fig. 12 also indicate that the long-channel device exhibits serious current degradation; the doping concentration in the S/D region (blue in the figures) is 2 × 1019/cm3. The upper part of the figure shows the device structures, and the lower part shows the carrier distribution (the inset represents the bar scale). The currents flowing through the Ge channels in Ge-nanosheet p-FETs with Lch = 1.0 µm and 180 nm are shown in Fig. 12b. The stacked Ge-nanosheet architecture provides optimal electrostatic confinement with the associated short-channel benefits, and the low doping levels can reduce the corner effect at the threshold voltage. Compared with the bottom-up process employed for Ge-nanowire formation, the important advantages of the fabrication described here include excellent 2D Ge/Si multilayer epitaxy, novel selective etching of Si over Ge using TMAH at 60 °C with megasonic agitation, and a new mechanism for dislocation removal from suspended Ge-nanosheet structures on SOI or bulk Si substrates by annealing, meeting the sub-3-nm-node targets of the International Technology Roadmap for Semiconductors. Therefore, an entirely realisable, nearly defect-free, and relatively high-ION Ge nanosheet was developed in this study. Figure 8. SEM tilting-view images of the stacked Ge nanosheets: (a) long structure of 1.0 μm and (b) short structure of 180 nm. Figure 9. (a) Id–Vg curves of a three-stacked Ge-nanosheet p-FET and a single-nanosheet p-FET with Lch = 180 nm; the former has an ON current ~2.3 times that of the latter owing to the stacked channels. (b) Id–Vg curves of a three-stacked Ge-nanosheet n-FET device.
Figure 10. Id–Vg curves of the three-stacked and single Ge-nanosheet p-FETs with Lch = 1.0 µm; the former has an ON current ~2.0 times that of the latter. Figure 11. Enhancement of the Id–Vg profiles of stacked Ge-nanosheet GAAFETs (Lch = 180 nm) by wafer bending. Current enhancements of 12% for the p-FET and 7% for the n-FET are obtained with compressive and tensile strains of 0.06%, respectively. Figure 12. (a) TCAD simulation of the stacked Ge-nanosheet p-FETs with Lg = 80 nm and different Lch of 1.0 μm and 180 nm. (b) Simulated Id–Vg curves (linear scale) of the stacked Ge-nanosheet p-FETs with Lch = 1.0 μm and 180 nm. Misfit dislocations are known to be concentrated mostly near the Ge/Si interfacial region; those generated at the heterostructure junction extend through the Ge layer and form threading dislocations. Moreover, these defects act as carrier recombination centres, but they can be readily eliminated once the surrounding Si is etched away. In this study, large-mismatch Ge/Si multilayers, rather than Ge/SiGe multilayers, were intentionally grown as the starting material to improve the etching selectivity between Ge and Si and realise well-defined stacked Ge nanosheets. The Ge/Si multilayers were grown at a low temperature to avoid island growth. The Si layers were readily etched away over the Ge layers at an appropriate temperature with good selectivity using a TMAH solution. Additionally, the dislocations in suspended Ge sheets were more easily removed than those in samples with Ge layers still tied to Si layers. Finally, fully gated p-type and n-type transistors with three vertically stacked Ge nanosheets as channels were developed.
GAAFETs: Gate-all-around field-effect transistors; SOI: Silicon-on-insulator; SC1 & SC2: Standard cleaning 1 & 2; LPCVD: Low-pressure chemical vapour deposition; TEM: Transmission electron microscopy; PL: Photoluminescence; XRD: X-ray diffraction; TCP: Transformer-coupled plasma; BOX: Buried oxide; RSM: Reciprocal space mapping; TMAH: Tetramethylammonium hydroxide; FGA: Forming gas annealing; ALD: Atomic layer deposition; FD: Net glide force; C–V: Capacitance–voltage; CET: Capacitance equivalent thickness; S/D: Source/drain; Id–Vg: Drain current–gate voltage; TCAD: Technology computer-aided design; EOT: Equivalent oxide thickness.

References

Xuejue, H. et al. Sub 50-nm FinFET: PMOS. In International Electron Devices Meeting 1999. Technical Digest (Cat. No.99CH36318) (1999).
Frank, F. C., van der Merwe, J. H. & Mott, N. F. One-dimensional dislocations. I. Static theory. Proc. R. Soc. Lond. Ser. A Math. Phys. Sci. 198(1053), 205–216 (1949).
Frank, F. C., van der Merwe, J. H. & Mott, N. F. One-dimensional dislocations. III. Influence of the second harmonic term in the potential representation, on the properties of the model. Proc. R. Soc. Lond. Ser. A Math. Phys. Sci. 200(1060), 125–134 (1949).
Frank, F. C., van der Merwe, J. H. & Mott, N. F. One-dimensional dislocations. II. Misfitting monolayers and oriented overgrowth. Proc. R. Soc. Lond. Ser. A Math. Phys. Sci. 198(1053), 216–225 (1949).
Matthews, J. W. & Blakeslee, A. E. Defects in epitaxial multilayers: I. Misfit dislocations. J. Cryst. Growth 27, 118–125 (1974).
Matthews, J. W. & Blakeslee, A. E. Defects in epitaxial multilayers: II. Dislocation pile-ups, threading dislocations, slip lines and cracks. J. Cryst. Growth 29(3), 273–280 (1975).
Matthews, J. W. & Blakeslee, A. E. Defects in epitaxial multilayers: III. Preparation of almost perfect multilayers. J. Cryst. Growth 32(2), 265–273 (1976).
Lei, D. et al. The first GeSn FinFET on a novel GeSnOI substrate achieving lowest S of 79 mV/decade and record high Gm,int of 807 μS/μm for GeSn p-FETs.
In 2017 Symposium on VLSI Technology, T198–T199 (2017).
Witters, L. et al. Strained germanium gate-all-around PMOS device demonstration using selective wire release etch prior to replacement metal gate deposition. In 2017 Symposium on VLSI Technology (2017).
Bera, L. K. et al. Three dimensionally stacked SiGe nanowire array and gate-all-around p-MOSFETs. In 2006 International Electron Devices Meeting (2006).
Dupre, C. et al. 15nm-diameter 3D stacked nanowires with independent gates operation: ΦFET. In 2008 IEEE International Electron Devices Meeting (2008).
Gu, J. J. et al. III-V gate-all-around nanowire MOSFET process technology: From 3D to 4D. In 2012 International Electron Devices Meeting (2012).
Mertens, H. et al. Gate-all-around MOSFETs based on vertically stacked horizontal Si nanowires in a replacement metal gate process on bulk Si substrates. In 2016 IEEE Symposium on VLSI Technology (2016).
Huang, Y. et al. First vertically stacked GeSn nanowire pGAAFETs with Ion = 1850 μA/μm (Vov = Vds = −1 V) on Si by GeSn/Ge CVD epitaxial growth and optimum selective etching. In 2017 IEEE International Electron Devices Meeting (IEDM) (2017).
Loubet, N. et al. Stacked nanosheet gate-all-around transistor to enable scaling beyond FinFET. In 2017 Symposium on VLSI Technology, T230–T231 (2017).
Chu, C. et al. Stacked Ge-nanosheet GAAFETs fabricated by Ge/Si multilayer epitaxy. IEEE Electron Device Lett. 39(8), 1133–1136 (2018).
Luo, Y. et al. Strong electro-absorption in GeSi epitaxy on silicon-on-insulator (SOI). Micromachines 3(2), 345–363 (2012).
Chu, C. L. et al. The first Ge nanosheets GAAFET CMOS inverters fabricated by 2D Ge/Si multilayer epitaxy, Ge/Si selective etching. In 2021 International Symposium on VLSI Technology, Systems and Applications (VLSI-TSA) (2021).
Chu, C.-L., Lu, T.-Y. & Fuh, Y.-K. The suitability of ultrasonic and megasonic cleaning of nanoscale patterns in ammonia hydroxide solutions for particle removal and feature damage. Semicond. Sci.
Technol. 35(4), 045001 (2020).

The authors appreciate the financial support from the Ministry of Science and Technology (MOST 109-2622-E-492-001).

Author information: Chun-Lin Chu (Taiwan Semiconductor Research Institute, NARL, Hsinchu, Taiwan); Jen-Yi Chang (General Education Centre, Tainan University of Technology, Tainan, Taiwan); Shu-Han Hsu (Department of Common and Graduate Studies, Sirindhorn International Institute of Technology, Pathum Thani, Thailand); Po-Yen Chen, Po-Yu Wang & Dean Chou (Department of Biomedical Engineering, National Cheng Kung University, 1 University Road, Tainan, 701, Taiwan); Dean Chou (Medical Device Innovation Center, National Cheng Kung University, 1 University Road, Tainan, 701, Taiwan).

Author contributions: C.L.C. designed the experiments. P.Y.C. and P.Y.W. performed the experiments. D.C. performed the data analysis. J.Y.C. and S.H.H. provided valuable suggestions. C.L.C. and D.C. wrote the manuscript. D.C. designed the project. Correspondence to Dean Chou.

Chu, C.-L., Chang, J.-Y., Chen, P.-Y. et al. Ge/Si multilayer epitaxy and removal of dislocations from Ge-nanosheet-channel MOSFETs. Sci Rep 12, 959 (2022). https://doi.org/10.1038/s41598-021-04514-y Received: 04 August 2021
How Big A Table Can The Carpenter Build? Sep. 23, 2016 , at 8:00 AM Edited by Oliver Roeder Filed under The Riddler Illustration by Guillaume Kurkdjian Welcome to The Riddler. Every week, I offer up a problem related to the things we hold dear around here: math, logic and probability. These problems, puzzles and riddles come from many top-notch puzzle folks around the world — including you! Last week, we started something new: Riddler Express problems. These will be bite-size puzzles that don't take as much fancy math or computational power to solve. For those of you in the slow-puzzle movement, worry not — we still feature our classic, more challenging Riddler. You can mull both over on your commute, dissect them on your lunch break and argue about them with your friends and lovers. When you're ready, submit your answer(s) using the links below. I'll reveal the solutions next week, and a correct submission (chosen at random) will earn a shoutout in this column.1 Before we get to the new puzzles, let's reveal the winners of last week's. Congratulations to 👏 Katie Andrews 👏 of Great Falls, Virginia, and 👏 Ben Sokolowsky 👏 of Stony Brook, New York, our respective Express and Classic winners. You can find solutions to the previous Riddlers at the bottom of this post. First up, Riddler Express, inspired by Eric Beck: Suppose the NCAA College Football Playoff (a single-elimination tournament) expanded to include not just four, but all 128 Division I Football Bowl Subdivision teams. How many individual games would be played in that playoff? (Hint: You probably don't even need a piece of paper for this one.) Submit your answer If you need a hint you can try asking me nicely. Want to submit a new Riddler Express puzzle or problem? Email me. And now, for Riddler Classic, a handy puzzle from Eric Valpey: You're on a DIY kick and want to build a circular dining table which can be split in half so leaves can be added when entertaining guests. 
As luck would have it, on your last trip to the lumber yard, you came across the most pristine piece of exotic wood that would be perfect for the circular table top. Trouble is, the piece is rectangular. You are happy to have the leaves fashioned from one of the slightly-less-than-perfect pieces underneath it, but there's still the issue of the main circle. You devise a plan: cut two congruent semicircles from the perfect 4-by-8-foot piece and reassemble them to form the circular top of your table. What is the radius of the largest possible circular table you can make? Extra credit: What is the largest circular table that can be made from N congruent pieces? If you need a hint you can try asking me nicely. Want to submit a new Riddler? Email me. Here's the solution to last week's Riddler Express, which asked how high Count Von Count can count on Twitter, given the 140-character limit. If he spells out the numbers, without commas or "and"s, plus an exclamation point, the highest he'll get is the number 1,111,373,373,372. It fits nice and snug in a tweet! one trillion one hundred eleven billion three hundred seventy three million three hundred seventy three thousand three hundred seventy two! — Oliver Roeder (@ollie) September 20, 2016 There's no silver bullet for arriving at this answer. Some trial and error is eventually required after you recognize that spelling out "one," "two" and "ten" is relatively short compared to "three," "seven" and "eight," for example. Eventually, you'll hit the character limit. Tim Supinie graphed the length of the count's tweets as the numbers being counted crept up: @hypergeometricx @ollie My puny laptop won't make it to 1 trillion this decade. (Good call on the log scale, tho.) pic.twitter.com/BLimA9ASZo — Tim Supinie (@plustssn) September 16, 2016 But worry not, Count fans. 
At his current rate of about two tweets per day, he won't hit the limit for another 1.5 billion years, at which point the oceans will have evaporated and only single-celled organisms and fictional nobles will have survived. And here's the solution to last week's Riddler Classic, concerning an experimental sports league draft. Suppose a 30-team league switches from a system in which the previous year's worst teams draft first to a system in which teams are randomly assigned to two groups via a series of coin flips, with all of the teams in one group drafting first, but the order within those groups still being sorted out by which teams were worst last season. If your team's draft position was 10th in the old system, its expected draft position is 12.75 under the new system. Why? Half of the time, your team wins its coin flip, putting it in the first group. In this case, you expect to draft after 9/2 other teams. The '9' represents the nine teams with worse records than yours, and the '2' represents the 50-50 chance any of those teams ends up in your group. Therefore, in this case, you expect to draft in (9/2)+1, or 11/2, position. The other half of the time, your team loses its coin flip, putting it in the second group. In this case, you expect to draft before 20/2, or 10, teams. The 20 is the 20 teams with better records than yours, and the 2 represents the chance that their coin flips place them in the first group. Therefore, in this case, you expect to draft in 30-10=20th position. Combining these two cases, your team expects to draft in \(1/2\cdot (11/2)+1/2\cdot (20)=12.75\) position. Turns out that the league's rule change does help in alleviating the incentive for tanking at the end of a season. For extra credit, I asked what a team's expected draft position would be if a league of N teams was divided randomly into T groups. 
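The 12.75 figure derived above is easy to sanity-check with a short Monte Carlo simulation. A sketch in Python (the helper name is mine; teams are indexed by last season's record, with 1 the worst):

```python
import random

def simulate_draft_position(n_teams=30, my_rank=10, trials=100_000, seed=0):
    """Average draft position of the team ranked `my_rank` (1 = worst record)
    when every team is assigned to one of two groups by a fair coin flip;
    group 0 drafts first, and each group drafts in order of worst record."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        groups = [rng.randint(0, 1) for _ in range(n_teams)]
        mine = groups[my_rank - 1]
        pos = 1
        for team in range(n_teams):
            if team == my_rank - 1:
                continue
            # A rival drafts before us if its group comes first, or if it is
            # in our group and had a worse record than ours.
            if groups[team] < mine or (groups[team] == mine and team < my_rank - 1):
                pos += 1
        total += pos
    return total / trials

print(simulate_draft_position())  # close to the 12.75 derived above
```

The simulated average lands within sampling noise of 12.75, matching the case analysis above.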
Per the puzzle's submitter, Stephen Penrice, the expected draft position of the ith worst team is $$\frac{i}{T}+\frac{(T-1)(N+1)}{2T}$$ The math gets a little messy there, but the ideas are just the same as in the 30-team, 10th place case. For more, as ever, Laurent Lessard provides a lucid explanation. And while the average pick becomes 12.75 rather than 10th under the new system, the distribution of that new pick is bimodal, and has a high variance, as this animated simulation from Russell Maier shows: My ans to this wknds @FiveThirtyEight riddler @ollie. Nice bimodal dist. Avg pick in new sys is 13th for 10th worst pic.twitter.com/vhfw4JSA9G — Russell Maier (@MaierRussell) September 19, 2016 Elsewhere in the puzzling world: The world's greatest detective [The New York Times] More mystery puzzles [Expii] A "September" puzzle [NPR] Slate now has a crossword puzzle [Slate] Better know a puzzle master: David Kwong [NBC News] Have a wonderful weekend! May you win all of your coin tosses. Important small print: To be eligible, I need to receive your correct answer before 11:59 p.m. EDT on Sunday. Have a great weekend!
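As a postscript, Penrice's closed-form answer can be verified exactly for small leagues by brute-force enumeration of every possible group assignment. A quick sketch (function names are mine; N teams, T equally likely ordered groups, team i the ith worst):

```python
from itertools import product
from fractions import Fraction

def expected_position(n, T, i):
    """Exact expected draft position of the i-th worst of n teams when each
    team is independently and uniformly assigned to one of T ordered groups."""
    total = Fraction(0)
    for groups in product(range(T), repeat=n):  # all T**n assignments
        mine = groups[i - 1]
        pos = 1 + sum(
            1
            for j in range(n)
            if j != i - 1
            and (groups[j] < mine or (groups[j] == mine and j < i - 1))
        )
        total += pos
    return total / T ** n

def formula(n, T, i):
    return Fraction(i, T) + Fraction((T - 1) * (n + 1), 2 * T)

# Brute force agrees with the closed form on a few small cases:
for n, T, i in [(4, 2, 1), (5, 2, 3), (4, 3, 2)]:
    assert expected_position(n, T, i) == formula(n, T, i)
print(formula(30, 2, 10))  # 51/4, i.e. 12.75
```

Exact rational arithmetic (`Fraction`) keeps the comparison free of floating-point noise.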
Predefined systems exist in the Systems submodule in the form of functions that return a DynamicalSystem. They are accessed like:

using DynamicalSystems # or DynamicalSystemsBase
ds = Systems.lorenz(ρ = 32.0)

So far, the predefined systems that exist in the Systems sub-module are: DynamicalSystemsBase.Systems.antidots — Function antidots([u]; B = 1.0, d0 = 0.3, c = 0.2) An antidot "superlattice" is a Hamiltonian system that corresponds to a smoothened periodic Sinai billiard with disk diameter d0 and smooth factor c [1]. This version is the two-dimensional classical form of the system, with quadratic dynamical rule and a perpendicular magnetic field. Notice that the dynamical rule is with respect to the velocity instead of momentum, i.e.: \[\begin{aligned} \dot{x} &= v_x \\ \dot{y} &= v_y \\ \dot{v_x} &= B v_y - U_x \\ \dot{v_y} &= -B v_x - U_y \\ \end{aligned}\] with $U$ the potential energy: \[U = \left(\tfrac{1}{c^4}\right) \left[\tfrac{d_0}{2} + c - r_a\right]^4\] if $r_a = \sqrt{(x \mod 1)^2 + (y \mod 1)^2} < \frac{d_0}{2} + c$ and 0 otherwise. That is, the potential is periodic with period 1 in both $x, y$ and normalized such that for energy value of 1 it is a circle of diameter $d_0$. The magnetic field is also normalized such that for value B = 1 the cyclotron diameter is 1. [1] : G. Datseris et al, New Journal of Physics 2019 DynamicalSystemsBase.Systems.arnoldcat — Function arnoldcat(u0 = [0.001245, 0.00875]) \[f(x,y) = (2x+y,x+y) \mod 1\] Arnold's cat map. A chaotic map from the torus into itself, used by Vladimir Arnold in the 1960s. [1] [1] : Arnol'd, V. I., & Avez, A. (1968). Ergodic problems of classical mechanics. DynamicalSystemsBase.Systems.betatransformationmap — Function betatransformationmap(u0 = 0.25; β = 2.0) -> ds The beta transformation, also called the generalized Bernoulli map, or the βx map, is described by \[\begin{aligned} x_{n+1} = \beta x_n \mod 1. \end{aligned}\] The parameter β controls the dynamics of the map.
Its Lyapunov exponent can be analytically shown to be λ = ln(β) [Ott2002]. At β=2, it becomes the dyadic transformation, also known as the bit shift map, the 2x mod 1 map, the Bernoulli map or the sawtooth map. The typical trajectory for this case is chaotic, though there are countably infinite periodic orbits [Ott2002]. DynamicalSystemsBase.Systems.chua — Function chua(u0 = [0.7, 0.0, 0.0]; a = 15.6, b = 25.58, m0 = -8/7, m1 = -5/7) \[\begin{aligned} \dot{x} &= a [y - h(x)]\\ \dot{y} &= x - y+z \\ \dot{z} &= b y \end{aligned}\] where $h(x)$ is defined by \[h(x) = m_1 x + \frac 1 2 (m_0 - m_1)(|x + 1| - |x - 1|)\] This is a 3D continuous system that exhibits chaos. Chua designed an electronic circuit with the expressed goal of exhibiting chaotic motion, and this system is obtained by rescaling the circuit units to simplify the form of the equation. [1] The parameters are $a$, $b$, $m_0$, and $m_1$. Setting $a = 15.6$, $m_0 = -8/7$ and $m_1 = -5/7$, and varying the parameter $b$ from $b = 25$ to $b = 51$, one observes a classic period-doubling bifurcation route to chaos. [2] The parameter container has the parameters in the same order as stated in this function's documentation string. [1] : Chua, Leon O. "The genesis of Chua's circuit", 1992. [2] : Leon O. Chua (2007) "Chua circuit", Scholarpedia, 2(10):1488. DynamicalSystemsBase.Systems.coupled_roessler — Function coupled_roessler(u0=[1, -2, 0, 0.11, 0.2, 0.1]; ω1 = 0.18, ω2 = 0.22, a = 0.2, b = 0.2, c = 5.7, k1 = 0.115, k2 = 0.0) Two coupled Rössler oscillators, used frequently in the study of chaotic synchronization. The parameter container has the parameters in the same order as stated in this function's documentation string. 
The equations are: \[\begin{aligned} \dot{x_1} &= -\omega_1 y_1-z_1 \\ \dot{y_1} &= \omega_1 x+ay_1 + k_1(y_2 - y_1) \\ \dot{z_1} &= b + z_1(x_1-c) \\ \dot{x_2} &= -\omega_2 y_2-z_2 \\ \dot{y_2} &= \omega_2 x+ay_2 + k_2(y_1 - y_2) \\ \dot{z_2} &= b + z_2(x_2-c) \\ \end{aligned}\] DynamicalSystemsBase.Systems.coupledstandardmaps — Function coupledstandardmaps(M::Int, u0 = 0.001rand(2M); ks = ones(M), Γ = 1.0) \[\begin{aligned} \theta_{i}' &= \theta_i + p_{i}' \\ p_{i}' &= p_i + k_i\sin(\theta_i) - \Gamma \left[ \sin(\theta_{i+1} - \theta_{i}) + \sin(\theta_{i-1} - \theta_{i}) \right] \end{aligned}\] A discrete system of M nonlinearly coupled standard maps, first introduced in [1] to study diffusion and chaos thresholds. The total dimension of the system is 2M. The maps are coupled through Γ and the i-th map has a nonlinear parameter ks[i]. The first M parameters are the ks, the M+1th parameter is Γ. The first M entries of the state are the angles, the last M are the momenta. [1] : H. Kantz & P. Grassberger, J. Phys. A 21, pp 127–133 (1988) DynamicalSystemsBase.Systems.double_pendulum — Function double_pendulum(u0 = [π/2, 0, 0, 0.5]; G=10.0, L1 = 1.0, L2 = 1.0, M1 = 1.0, M2 = 1.0) Famous chaotic double pendulum system (also used for our logo!). Keywords are gravity (G), lengths of each rod (L1 and L2) and mass of each ball (M1 and M2). Everything is assumed in SI units. The variables order is $[θ₁, ω₁, θ₂, ω₂]$ and they satisfy: \[\begin{aligned} θ̇₁ &= ω₁ \\ ω̇₁ &= [M₂ L₁ ω₁² \sin φ \cos φ + M₂ G \sin θ₂ \cos φ + M₂ L₂ ω₂² \sin φ - (M₁ + M₂) G \sin θ₁] / (L₁ Δ) \\ θ̇₂ &= ω₂ \\ ω̇₂ &= [-M₂ L₂ ω₂² \sin φ \cos φ + (M₁ + M₂) G \sin θ₁ \cos φ - (M₁ + M₂) L₁ ω₁² \sin φ - (M₁ + M₂) G \sin Θ₂] / (L₂ Δ) \end{aligned}\] where $φ = θ₂-θ₁$ and $Δ = (M₁ + M₂) - M₂ \cos² φ$. Jacobian is created automatically (thus methods that use the Jacobian will be slower)! 
(please contribute the Jacobian in LaTeX :smile:) DynamicalSystemsBase.Systems.duffing — Function duffing(u0 = [0.1, 0.25]; ω = 2.2, f = 27.0, d = 0.2, β = 1) The (forced) Duffing oscillator, which satisfies the equation \[\ddot{x} + d \dot{x} + β x + x^3 = f \cos(\omega t)\] with f, ω the forcing strength and frequency and d the damping. DynamicalSystemsBase.Systems.fitzhugh_nagumo — Function fitzhugh_nagumo(u = 0.5ones(2); a=3.0, b=0.2, ε=0.01, I=0.0) Famous excitable system which emulates the firing of a neuron, with rule \[\begin{aligned} \dot{v} &= av(v-b)(1-v) - w + I \\ \dot{w} &= \varepsilon(v - w) \end{aligned}\] More details in the Scholarpedia entry. DynamicalSystemsBase.Systems.forced_pendulum — Function forced_pendulum(u0 = [0.1, 0.25]; ω = 2.2, f = 27.0, d = 0.2) The standard forced damped pendulum with a sinusoidal driving force, which satisfies the equation \[\ddot{x} + d \dot{x} + \sin(x) = f \cos(\omega t)\] DynamicalSystemsBase.Systems.gissinger — Function gissinger(u0 = [3, 0.5, 1.5]; μ = 0.119, ν = 0.1, Γ = 0.9) \[\begin{aligned} \dot{Q} &= \mu Q - VD \\ \dot{D} &= -\nu D + VQ \\ \dot{V} &= \Gamma -V + QD \end{aligned}\] A continuous system that models chaotic reversals due to Gissinger [1], applied to study the reversals of the magnetic field of the Earth. [1] : C. Gissinger, Eur. Phys. J. B 85, 4, pp 1-12 (2012) DynamicalSystemsBase.Systems.grebogi_map — Function grebogi_map(u0 = [0.2, 0.]; a = 1.32, b=0.9, J₀=0.3) \[\begin{aligned} \theta_{n+1} &= \theta_n + a\sin 2 \theta_n -b \sin 4 \theta_n -x_n\sin \theta_n\\ x_{n+1} &= -J_0 \cos \theta_n \end{aligned}\] This map has two fixed points at (0,-J_0) and (π,J_0), which are attracting for |1+2a-4b|<1. There are chaotic transient dynamics before the system settles at one of the fixed points. This map illustrates the fractalization of the basin boundary; its uncertainty exponent α is roughly 0.2.
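The Grebogi map above is easy to iterate directly. The following is a minimal standalone sketch in plain Python (an illustration, not the package's API), which also verifies the fixed point at (0, -J₀): at θ = 0 every sine term vanishes, so the point maps to itself.

```python
import math

def grebogi_step(theta, x, a=1.32, b=0.9, J0=0.3):
    """One iteration of the Grebogi map (illustrative sketch, not the package API)."""
    theta_new = (theta + a * math.sin(2 * theta)
                 - b * math.sin(4 * theta) - x * math.sin(theta)) % (2 * math.pi)
    x_new = -J0 * math.cos(theta)
    return theta_new, x_new

# At theta = 0 all sine terms vanish, so (0, -J0) is a fixed point:
theta, x = grebogi_step(0.0, -0.3)
print(theta, x)  # -> 0.0 -0.3
```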
DynamicalSystemsBase.Systems.henon — Function henon(u0=zeros(2); a = 1.4, b = 0.3) \[\begin{aligned} x_{n+1} &= 1 - ax^2_n+y_n \\ y_{n+1} &= bx_n \end{aligned}\] The Hénon map is a two-dimensional mapping due to Hénon [1] that can display a strange attractor (at the default parameters). In addition, it also displays many other aspects of chaos, like period doubling or intermittency, for other parameters. According to the author, it is a system displaying all the properties of the Lorenz system (1963) while being as simple as possible. Default values are the ones used in the original paper. [1] : M. Hénon, Commun. Math. Phys. 50, pp 69 (1976) DynamicalSystemsBase.Systems.henonheiles — Function henonheiles(u0=[0, -0.25, 0.42081,0]) \[\begin{aligned} \dot{x} &= p_x \\ \dot{y} &= p_y \\ \dot{p}_x &= -x -2 xy \\ \dot{p}_y &= -y - (x^2 - y^2) \end{aligned}\] The Hénon–Heiles system [1] is a conservative dynamical system and was introduced as a simplification of the motion of a star around a galactic center. It was originally intended to study the existence of a "third integral of motion" (which would make this 4D system integrable). In that search, the authors encountered chaos, as the third integral existed only for a few initial conditions. The default initial condition is a typical chaotic orbit. The function Systems.henonheiles_ics(E, n) generates a grid of n×n initial conditions, all having the same energy E. [1] : Hénon, M. & Heiles, C., The Astronomical Journal 69, pp 73–79 (1964) DynamicalSystemsBase.Systems.hindmarshrose — Function hindmarshrose(u0=[-1.0, 0.0, 0.0]; a=1, b=3, c=1, d=5, r=0.001, s=4, xr=-8/5, I=2.0) -> ds \[\begin{aligned} \dot{x} &= y - ax^3 + bx^2 +I - z, \\ \dot{y} &= c - dx^2 -y, \\ \dot{z} &= r(s(x - x_r) - z) \end{aligned}\] The Hindmarsh-Rose model reproduces the bursting behavior of a neuron's membrane potential, characterized by a fast sequence of spikes followed by a quiescent period.
The x variable describes the membrane potential, whose behavior can be controlled by the applied current I; the y variable describes the sodium and potassium ionic currents, and z describes an adaptation current [HindmarshRose1984]. The default parameter values are taken from [HindmarshRose1984], chosen to lead to periodic bursting. [HindmarshRose1984] : J. L. Hindmarsh and R. M. Rose (1984) "A model of neuronal bursting using three coupled first order differential equations", Proc. R. Soc. Lond. B 221, 87-102. DynamicalSystemsBase.Systems.hindmarshrose_two_coupled — Function hindmarshrose_two_coupled(u0=[0.1, 0.2, 0.3, 0.4, 0.5, 0.6]; a = 1.0, b = 3.0, d = 5.0, r = 0.001, s = 4.0, xr = -1.6, I = 4.0, k1 = -0.17, k2 = -0.17, k_el = 0.0, xv = 2.0) \[\begin{aligned} \dot x_{i} &= y_{i} + bx^{2}_{i} - ax^{3}_{i} - z_{i} + I - k_{i}(x_{i} - v_{s})\Gamma(x_{j}) + k(x_{j} - x_{i})\\ \dot y_{i} &= c - d x^{2}_{i} - y_{i}\\ \dot z_{i} &= r[s(x_{i} - x_{R}) - z_{i}]\\ i,j&=1,2 \quad (i\neq j) \end{aligned}\] Two Hindmarsh-Rose neurons coupled by chemical and electrical synapses, modeling the bursting dynamics of interacting neurons' membrane potentials. The default parameter values are taken from the article "Dragon-king-like extreme events in coupled bursting neurons", DOI: https://doi.org/10.1103/PhysRevE.97.062311.
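As a rough standalone illustration of the single-neuron Hindmarsh-Rose model above, the following Python sketch integrates the equations with a crude forward-Euler scheme using the default parameters (this is an independent sketch, not the package's integrator, and Euler is chosen only for brevity):

```python
def hindmarsh_rose(u, a=1.0, b=3.0, c=1.0, d=5.0, r=0.001, s=4.0, xr=-8/5, I=2.0):
    """Right-hand side of the Hindmarsh-Rose equations with the default parameters."""
    x, y, z = u
    return (y - a * x**3 + b * x**2 + I - z,
            c - d * x**2 - y,
            r * (s * (x - xr) - z))

def euler(f, u, dt, n):
    """Crude forward-Euler integration; adequate for a qualitative look."""
    for _ in range(n):
        du = f(u)
        u = tuple(ui + dt * dui for ui, dui in zip(u, du))
    return u

# The membrane potential x stays bounded while the neuron spikes and bursts:
u = euler(hindmarsh_rose, (-1.0, 0.0, 0.0), dt=0.01, n=100_000)
print(u)
```

The cubic term makes the fast subsystem strongly self-limiting, which is why even this simple explicit scheme stays stable at a small step size.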
DynamicalSystemsBase.Systems.hodgkinhuxley — Function hodgkinhuxley(u0=[-60.0, 0.0, 0.0, 0.0]; I = 12.0, Vna = 50.0, Vk = -77.0, Vl = -54.4, gna = 120.0, gk = 36.0, gl = 0.3) -> ds \[\begin{aligned} C_m \frac{dV_m}{dt} &= -\overline{g}_\mathrm{K} n^4 (V_m - V_\mathrm{K}) - \overline{g}_\mathrm{Na} m^3 h(V_m - V_\mathrm{Na}) - \overline{g}_l (V_m - V_l) + I\\ \dot{n} &= \alpha_n(V_m)(1-n) - \beta_n(V_m)n \\ \dot{m} &= \alpha_m(V_m)(1-m) - \beta_m(V_m)m \\ \dot{h} &= \alpha_h(V_m)(1-h) - \beta_h(V_m)h \\ \alpha_n(V_m) &= \frac{0.01(V+55)}{1 - \exp(-\frac{V+55}{10})} \quad \alpha_m(V_m) = \frac{0.1(V+40)}{1 - \exp(-\frac{V+40}{10})} \quad \alpha_h(V_m) = 0.07 \exp(-\frac{(V+65)}{20}) \\ \beta_n(V_m) &= 0.125 \exp(-\frac{V+65}{80}) \quad \beta_m(V_m) = 4 \exp(-\frac{V+65}{18}) \quad \beta_h(V_m) = \frac{1}{1 + \exp(-\frac{V+35}{10})} \end{aligned}\] The Nobel-winning four-dimensional dynamical system due to Hodgkin and Huxley [HodgkinHuxley1952], which describes the electrical spiking activity (action potentials) in neurons. A complete description of all parameters and variables is given in [HodgkinHuxley1952], [Ermentrout2010], and [Abbott2005]. The equations and default parameters used here are taken from [Ermentrout2010], [Abbott2005]. They differ slightly from the original paper [HodgkinHuxley1952], since they were changed to shift the resting potential to -65 mV, instead of the 0 mV of the original paper. Varying the injected current I from I = -5 to I = 12 takes the neuron from quiescent, to a single spike, and then to tonic (repetitive) spiking. This is due to a subcritical Hopf bifurcation, which occurs close to I = 9.5. [HodgkinHuxley1952] : A. L. Hodgkin, A. F. Huxley J. Physiol., pp. 500-544 (1952). [Ermentrout2010] : G. Bard Ermentrout, and David H. Terman, "Mathematical Foundations of Neuroscience", Springer (2010). [Abbott2005] : L. F. Abbott, and P. Dayan, "Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems", MIT Press (2005).
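The gating-variable rate functions above determine voltage-dependent steady states $x_\infty(V) = \alpha_x / (\alpha_x + \beta_x)$. A small Python sketch (using the potassium-gate rate functions as written in the docstring, with the -65 mV resting-potential shift) evaluates the resting steady state of n:

```python
import math

def alpha_n(V):
    # Opening rate of the potassium gate (per-millisecond, shifted convention)
    return 0.01 * (V + 55) / (1 - math.exp(-(V + 55) / 10))

def beta_n(V):
    # Closing rate of the potassium gate
    return 0.125 * math.exp(-(V + 65) / 80)

def n_inf(V):
    """Steady-state activation of the potassium gate at membrane voltage V."""
    a, b = alpha_n(V), beta_n(V)
    return a / (a + b)

print(round(n_inf(-65.0), 3))  # about 0.318, the textbook resting value
```

Note that alpha_n has a removable singularity at V = -55 mV (the denominator vanishes); numerical implementations typically special-case that voltage.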
DynamicalSystemsBase.Systems.ikedamap — Function ikedamap(u0=[1.0, 1.0]; a=1.0, b=1.0, c=0.4, d=6.0) -> ds \[\begin{aligned} t &= c - \frac{d}{1 + x_n^2 + y_n^2} \\ x_{n+1} &= a + b(x_n \cos(t) - y_n\sin(t)) \\ y_{n+1} &= b(x_n\sin(t) + y_n \cos(t)) \end{aligned}\] The Ikeda map was proposed by Ikeda as a model to explain the propagation of light into a ring cavity [Skiadas2008]. It generates a variety of nice-looking, interesting attractors. The default parameters are chosen to give a unique chaotic attractor. A double attractor can be obtained with parameters [a,b,c,d] = [6, 0.9, 3.1, 6], and a triple attractor can be obtained with [a,b,c,d] = [6, 9, 2.22, 6] [Skiadas2008]. [Skiadas2008] : "Chaotic Modelling and Simulation: Analysis of Chaotic Models, Attractors and Forms", CRC Press (2008). DynamicalSystemsBase.Systems.kuramoto — Function kuramoto(D = 20, u0 = range(0, 2π; length = D); K = 0.3, ω = range(-1, 1; length = D)) The Kuramoto model[Kuramoto1975] of D coupled oscillators with equation \[\dot{\phi}_i = \omega_i + \frac{K}{D}\sum_{j=1}^{D} \sin(\phi_j - \phi_i)\] DynamicalSystemsBase.Systems.logistic — Function logistic(x0 = 0.4; r = 4.0) \[x_{n+1} = rx_n(1-x_n)\] The logistic map is a one-dimensional unimodal mapping due to May [1] and is used by many as the archetypal example of how chaos can arise from very simple equations. Originally intended to be a discretized model of population dynamics, it is now famous for its bifurcation diagram, an immensely complex graph that was shown to be universal by Feigenbaum [2]. [1] : R. M. May, Nature 261, pp 459 (1976) [2] : M. J. Feigenbaum, J. Stat. Phys. 19, pp 25 (1978) DynamicalSystemsBase.Systems.lorenz — Function lorenz(u0=[0.0, 10.0, 0.0]; σ = 10.0, ρ = 28.0, β = 8/3) -> ds \[\begin{aligned} \dot{X} &= \sigma(Y-X) \\ \dot{Y} &= -XZ + \rho X -Y \\ \dot{Z} &= XY - \beta Z \end{aligned}\] The famous three dimensional system due to Lorenz [1], shown to exhibit so-called "deterministic nonperiodic flow".
It was originally invented to study a simplified form of atmospheric convection. Currently, it is most famous for its strange attractor (occurring at the default parameters), which resembles a butterfly. For the same reason it is also associated with the term "butterfly effect" (a term which Lorenz himself disliked), even though the effect applies generally to dynamical systems. Default values are the ones used in the original paper. [1] : E. N. Lorenz, J. Atmos. Sci. 20, pp 130 (1963) DynamicalSystemsBase.Systems.lorenz84 — Function lorenz84(u = [0.1, 0.1, 0.1]; F=6.846, G=1.287, a=0.25, b=4.0) Lorenz-84's low order atmospheric general circulation model \[\begin{aligned} \dot x = − y^2 − z^2 − ax + aF, \\ \dot y = xy − y − bxz + G, \\ \dot z = bxy + xz − z. \\ \end{aligned}\] This system has an interesting multistability property. For the default parameter set there are four coexisting attractors, which give rise to a fractalized phase space, as shown in [Freire2008]. One can see this by doing:
ds = Systems.lorenz84(rand(3))
xg = yg = range(-1.0, 2.0; length=300)
zg = range(-1.5, 1.5; length=30)
bsn, att = basins_of_attraction((xg, yg, zg), ds; mx_chk_att=4)
DynamicalSystemsBase.Systems.lorenz96 — Function lorenz96(N::Int, u0 = rand(N); F=0.01) \[\frac{dx_i}{dt} = (x_{i+1}-x_{i-2})x_{i-1} - x_i + F\] N is the chain length, F the forcing. The Jacobian is created automatically. (The parameter container only contains F.) DynamicalSystemsBase.Systems.lorenzdl — Function lorenzdl(u = [0.1, 0.1, 0.1]; R=4.7) Diffusionless Lorenz system: it is probably the simplest rotationally invariant chaotic flow. \[\begin{aligned} \dot x = y − x, \\ \dot y = -xz, \\ \dot z = xy - R. \\ \end{aligned}\] For R=4.7 this system has two coexisting Malasoma strange attractors that are linked together, as shown in [Sprott2014].
The fractal boundary between the basins of attraction can be visualized with a Poincaré section at z=0:
ds = Systems.lorenzdl()
xg = yg = range(-10.0, 10.0; length=300)
pmap = poincaremap(ds, (3, 0.), Tmax=1e6; idxs = 1:2)
bsn, att = basins_of_attraction((xg, yg), pmap)
DynamicalSystemsBase.Systems.lotkavolterra — Function lotkavolterra(u0=[10.0, 5.0]; α = 1.5, β = 1, δ=1, γ=3) -> ds \[\begin{aligned} \dot{x} &= \alpha x - \beta xy, \\ \dot{y} &= \delta xy - \gamma y \end{aligned}\] The famous Lotka-Volterra model is a simple ecological model describing the interaction between a predator and a prey species (or also a parasite and a host species). It has been used independently in fields such as epidemics, ecology, and economics [Hoppensteadt2006], and is not to be confused with the Competitive Lotka-Volterra model, which describes competitive interactions between species. The x variable describes the number of prey, while y describes the number of predators. The default parameters are taken from [Weisstein], and lead to typical periodic oscillations. [Hoppensteadt2006] : Frank Hoppensteadt (2006) "Predator-prey model", Scholarpedia, 1(10):1563. [Weisstein] : Weisstein, Eric W., "Lotka-Volterra Equations." From MathWorld–A Wolfram Web Resource. https://mathworld.wolfram.com/Lotka-VolterraEquations.html DynamicalSystemsBase.Systems.magnetic_pendulum — Function magnetic_pendulum(u=[0.7,0.7,0,0]; d=0.3, α=0.2, ω=0.5, N=3, γs=fill(1.0,N)) Create a magnetic pendulum with N magnets, equally distributed along the unit circle, with dynamical rule \[\begin{aligned} \ddot{x} &= -\omega ^2x - \alpha \dot{x} - \sum_{i=1}^N \frac{\gamma_i (x - x_i)}{D_i^3} \\ \ddot{y} &= -\omega ^2y - \alpha \dot{y} - \sum_{i=1}^N \frac{\gamma_i (y - y_i)}{D_i^3} \\ D_i &= \sqrt{(x-x_i)^2 + (y-y_i)^2 + d^2} \end{aligned}\] where α is friction, ω is the eigenfrequency, d is the distance of the pendulum from the magnets' plane and γ is the magnetic strength.
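As a standalone numerical check of the Lotka-Volterra model above: its orbits conserve the quantity $V = \delta x - \gamma \ln x + \beta y - \alpha \ln y$ (differentiate along trajectories and the terms cancel). The Python sketch below (independent of the package, with a hand-rolled RK4 step) integrates the default-parameter system and measures the drift of $V$:

```python
import math

ALPHA, BETA, DELTA, GAMMA = 1.5, 1.0, 1.0, 3.0  # default parameters from the docstring

def f(u):
    x, y = u
    return (ALPHA * x - BETA * x * y, DELTA * x * y - GAMMA * y)

def rk4_step(u, dt):
    """One classical Runge-Kutta 4 step for the 2D system."""
    k1 = f(u)
    k2 = f(tuple(ui + dt / 2 * ki for ui, ki in zip(u, k1)))
    k3 = f(tuple(ui + dt / 2 * ki for ui, ki in zip(u, k2)))
    k4 = f(tuple(ui + dt * ki for ui, ki in zip(u, k3)))
    return tuple(ui + dt / 6 * (a + 2 * b + 2 * c + d)
                 for ui, a, b, c, d in zip(u, k1, k2, k3, k4))

def conserved(u):
    """The conserved quantity V of the Lotka-Volterra flow."""
    x, y = u
    return DELTA * x - GAMMA * math.log(x) + BETA * y - ALPHA * math.log(y)

u = (10.0, 5.0)  # default initial condition
V0 = conserved(u)
for _ in range(20_000):
    u = rk4_step(u, 0.001)
print(abs(conserved(u) - V0))  # small drift: RK4 is accurate but not exactly conservative
```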
DynamicalSystemsBase.Systems.manneville_simple — Function manneville_simple(x0 = 0.4; ε = 1.1) \[x_{n+1} = [ (1+\varepsilon)x_n + (1-\varepsilon)x_n^2 ] \mod 1\] A simple 1D map due to Manneville[Manneville1980] that is useful in illustrating the concept and properties of intermittency. DynamicalSystemsBase.Systems.more_chaos_example — Function more_chaos_example(u = rand(3)) A three dimensional chaotic system introduced in [Sprott2020] with rule \[\begin{aligned} \dot{x} &= y \\ \dot{y} &= -x - \textrm{sign}(z)y \\ \dot{z} &= y^2 - \exp(-x^2) \end{aligned}\] It is noteworthy because its strange attractor is multifractal with fractal dimension ≈ 3. DynamicalSystemsBase.Systems.nld_coupled_logistic_maps — Function nld_coupled_logistic_maps(D = 4, u0 = range(0, 1; length=D); λ = 1.2, k = 0.08) A high-dimensional discrete dynamical system that couples D logistic maps with a strongly nonlinear all-to-all coupling. For the default parameters it displays several co-existing attractors. The equations are: \[u_i' = \lambda - u_i^2 + k \sum_{j\ne i} (u_j^2 - u_i^2)\] Here the prime $'$ denotes the next state. DynamicalSystemsBase.Systems.nosehoover — Function nosehoover(u0 = [0, 0.1, 0]) \[\begin{aligned} \dot{x} &= y \\ \dot{y} &= yz - x \\ \dot{z} &= 1 - y^2 \end{aligned}\] Three dimensional conservative continuous system, discovered in 1984 during investigations in thermodynamical chemistry by Nosé and Hoover, then rediscovered by Sprott during an exhaustive search as an extremely simple chaotic system. [1] See Chapter 4 of "Elegant Chaos" by J. C. Sprott. [2] [1] : Hoover, W. G. (1995). Remark on ''Some simple chaotic flows''. Physical Review E, 51(1), 759. [2] : Sprott, J. C. (2010). Elegant chaos: algebraically simple chaotic flows. World Scientific. DynamicalSystemsBase.Systems.pomeau_manneville — Function pomeau_manneville(u0 = 0.2; z = 2.5) The Pomeau-Manneville map is a one dimensional discrete map which is characteristic for displaying intermittency [1].
Specifically, for z > 2 the average time between chaotic bursts diverges, while for z > 2.5, the map iterates are long range correlated [2]. Notice that here we are providing the "symmetric" version: \[x_{n+1} = \begin{cases} -4x_n + 3, & \quad x_n \in (0.5, 1] \\ x_n(1 + |2x_n|^{z-1}), & \quad |x_n| \le 0.5 \\ -4x_n - 3, & \quad x_n \in [-1, 0.5) \end{cases}\] [1] : Manneville & Pomeau, Comm. Math. Phys. 74 (1980) [2] : Meyer et al., New. J. Phys 20 (2019) DynamicalSystemsBase.Systems.qbh — Function qbh([u0]; A=1.0, B=0.55, D=0.4) A conservative dynamical system with rule \[\begin{aligned} \dot{q}_0 &= A p_0 \\ \dot{q}_2 &= A p_2 \\ \dot{p}_0 &= -A q_0 -3 \frac{B}{\sqrt{2}} (q_2^2 - q_0^2) - D q_0 (q_0^2 + q_2^2) \\ \dot{p}_2 &= -q_2 [A + 3\sqrt{2} B q_0 + D (q_0^2 + q_2^2)] \end{aligned}\] This dynamical rule corresponds to a Hamiltonian used in nuclear physics to study the quadrupole vibrations of the nuclear surface [1,2]. \[H(p_0, p_2, q_0, q_2) = \frac{A}{2}\left(p_0^2+p_2^2\right)+\frac{A}{2}\left(q_0^2+q_2^2\right) +\frac{B}{\sqrt{2}}q_0\left(3q_2^2-q_0^2\right) +\frac{D}{4}\left(q_0^2+q_2^2\right)^2\] The Hamiltonian has a similar structure with the Henon-Heiles one, but it has an added fourth order term and presents a nontrivial dependence of chaoticity with the increase of energy [3]. The default initial condition is chaotic. [1]: Eisenberg, J.M., & Greiner, W., Nuclear theory 2 rev ed. Netherlands: North-Holland pp 80 (1975) [2]: Baran V. and Raduta A. A., International Journal of Modern Physics E, 7, pp 527–551 (1998) [3]: Micluta-Campeanu S., Raportaru M.C., Nicolin A.I., Baran V., Rom. Rep. Phys. 
70, pp 105 (2018) DynamicalSystemsBase.Systems.riddled_basins — Function riddled_basins(u0=[0.5, 0.6, 0, 0]; γ=0.05, x̄ = 1.9, f₀=2.3, ω =3.5, x₀=1, y₀=0) → ds \[\begin{aligned} \dot{x} &= v_x, \quad \dot{y} = v_y \\ \dot{v}_x &= -\gamma v_x - ( -4x(1-x^2) +y^2) + f_0 \sin(\omega t)x_0 \\ \dot{v}_y &= -\gamma v_y - (2y(x+\bar{x})) + f_0 \sin(\omega t)y_0 \end{aligned}\] This 5 dimensional (time-forced) dynamical system was used by Ott et al [OttRiddled2014] to analyze riddled basins of attraction. This means that arbitrarily close to any point of the basin of attraction of an attractor A there is a point of the basin of attraction of another attractor B. DynamicalSystemsBase.Systems.rikitake — Function rikitake(u0 = [1, 0, 0.6]; μ = 1.0, α = 1.0) \[\begin{aligned} \dot{x} &= -\mu x +yz \\ \dot{y} &= -\mu y +x(z-\alpha) \\ \dot{z} &= 1 - xz \end{aligned}\] Rikitake's dynamo is a system that tries to model the magnetic reversal events by means of a double-disk dynamo system. [1] : T. Rikitake Math. Proc. Camb. Phil. Soc. 54, pp 89–105, (1958) DynamicalSystemsBase.Systems.roessler — Function roessler(u0=[1, -2, 0.1]; a = 0.2, b = 0.2, c = 5.7) \[\begin{aligned} \dot{x} &= -y-z \\ \dot{y} &= x+ay \\ \dot{z} &= b + z(x-c) \end{aligned}\] This three-dimensional continuous system is due to Rössler [1]. It is a system that by design behaves similarly to the Lorenz system and displays a (fractal) strange attractor. However, it is easier to analyze qualitatively, as for example the attractor is composed of a single manifold. Default values are the same as in the original paper. [1] : O. E. Rössler, Phys. Lett. 57A, pp 397 (1976) DynamicalSystemsBase.Systems.rulkovmap — Function rulkovmap(u0=[1.0, 1.0]; α=4.1, β=0.001, σ=0.001) -> ds \[\begin{aligned} x_{n+1} &= \frac{\alpha}{1+x_n^2} + y_n \\ y_{n+1} &= y_n - \sigma x_n - \beta \end{aligned}\] The Rulkov map is a two-dimensional phenomenological model of a neuron capable of describing spikes and bursts.
It was described by Rulkov [Rulkov2002] and is used in studies of neural networks due to its computational advantages, being fast to run. The parameters σ and β are generally kept at 0.001, while α is chosen to give the desired dynamics. The dynamics can be quiescent for α ∈ (0,2), spiking for α ∈ (2, 2.58), triangular bursting for α ∈ (2.58, 4), and rectangular bursting for α ∈ (4, 4.62) [Rulkov2001][Cao2013]. The default parameters are taken from [Rulkov2001] to lead to rectangular bursting. [Rulkov2002] : "Modeling of spiking-bursting neural behavior using two-dimensional map", Phys. Rev. E 65, 041922 (2002). [Rulkov2001] : "Regularization of Synchronized Chaotic Bursts", Phys. Rev. Lett. 86, 183 (2001). [Cao2013] : H. Cao and Y Wu, "Bursting types and stable domains of Rulkov neuron network with mean field coupling", International Journal of Bifurcation and Chaos, 23:1330041 (2013). DynamicalSystemsBase.Systems.shinriki — Function shinriki(u0 = [-2, 0, 0.2]; R1 = 22.0) Shinriki oscillator with all other parameters (besides R1) set to constants. This is a stiff problem, be careful when choosing solvers and tolerances. DynamicalSystemsBase.Systems.sprott_dissipative_conservative — Function sprott_dissipative_conservative(u0 = [1.0, 0, 0]; a = 2, b = 1, c = 1) An interesting system due to Sprott[Sprott2014b] where some initial conditions, such as [1.0, 0, 0], lead to quasiperiodic motion on a 2-torus, while for [2.0, 0, 0] motion happens on a (dissipative) chaotic attractor. The equations are: \[\begin{aligned} \dot{x} &= y + axy + xz \\ \dot{y} &= 1 - 2x^2 + byz \\ \dot{z} &= cx - x^2 - y^2 \end{aligned}\] In the original paper there were no parameters, which are added here for exploration purposes.
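The Rulkov map described above is cheap to iterate, which is exactly its selling point for network studies. A minimal standalone sketch in plain Python (not the package API) iterates it with the default parameters; the fast variable stays in a bounded band while the slow variable drifts to the bursting regime:

```python
def rulkov_step(x, y, alpha=4.1, beta=0.001, sigma=0.001):
    """One iteration of the Rulkov map with the docstring's default parameters."""
    return alpha / (1 + x * x) + y, y - sigma * x - beta

x, y = 1.0, 1.0
xs = []
for _ in range(10_000):
    x, y = rulkov_step(x, y)
    xs.append(x)
print(min(xs), max(xs))  # the fast (membrane-potential-like) variable remains bounded
```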
DynamicalSystemsBase.Systems.standardmap — Function standardmap(u0=[0.001245, 0.00875]; k = 0.971635) \[\begin{aligned} \theta_{n+1} &= \theta_n + p_{n+1} \\ p_{n+1} &= p_n + k\sin(\theta_n) \end{aligned}\] The standard map (also known as Chirikov standard map) is a two dimensional, area-preserving chaotic mapping due to Chirikov [1]. It is one of the most studied chaotic systems and by far the most studied Hamiltonian (area-preserving) mapping. The map corresponds to the Poincaré's surface of section of the kicked rotor system. Changing the non-linearity parameter k transitions the system from completely periodic motion, to quasi-periodic, to local chaos (mixed phase-space) and finally to global chaos. The default parameter k is the critical parameter where the golden-ratio torus is destroyed, as was calculated by Greene [2]. The e.o.m. considers the angle variable θ to be the first, and the angular momentum p to be the second, while both variables are always taken modulo 2π (the mapping is on the [0,2π)² torus). [1] : B. V. Chirikov, Preprint N. 267, Institute of Nuclear Physics, Novosibirsk (1969) [2] : J. M. Greene, J. Math. Phys. 20, pp 1183 (1979) DynamicalSystemsBase.Systems.stommel_thermohaline — Function stommel_thermohaline(u = [0.3, 0.2]; η1 = 3.0, η2 = 1, η3 = 0.3) Stommel's box model for Atlantic thermohaline circulation \[\begin{aligned} \dot{T} &= \eta_1 - T - |T-S| T \\ \dot{S} &= \eta_2 - \eta_3S - |T-S| S \end{aligned}\] Here $T, S$ denote the dimensionless temperature and salinity differences respectively between the boxes (polar and equatorial ocean basins) and $\eta_i$ are parameters. DynamicalSystemsBase.Systems.stuartlandau_oscillator — Function stuartlandau_oscillator(u0=[1.0, 0.0]; μ=1.0, ω=1.0, b=1) -> ds The Stuart-Landau model describes a nonlinear oscillation near a Hopf bifurcation, and was proposed by Landau in 1944 to explain the transition to turbulence in a fluid [Landau1944]. 
It can be written in cartesian coordinates as [Deco2017] \[\begin{aligned} \dot{x} &= (\mu -x^2 -y^2)x - \omega y - b(x^2+y^2)y \\ \dot{y} &= (\mu -x^2 -y^2)y + \omega x + b(x^2+y^2)x \end{aligned}\] The dynamical analysis of the system is greatly facilitated by putting it in polar coordinates, where it becomes the normal form of the supercritical Hopf bifurcation [Strogatz2015]. \[\begin{aligned} \dot{r} &= \mu r - r^3, \\ \dot{\theta} &= \omega +br^2 \end{aligned}\] The parameter $\mu$ serves as the bifurcation parameter, $\omega$ is the frequency of infinitesimal oscillations, and $b$ controls the dependence of the frequency on the amplitude. Increasing $\mu$ from negative to positive generates the supercritical Hopf bifurcation, leading from a stable spiral at the origin to a stable limit cycle with radius $\sqrt{\mu}$. [Landau1944] : L. D. Landau, "On the problem of turbulence", In Dokl. Akad. Nauk SSSR (Vol. 44, No. 8, pp. 339-349) (1944). [Deco2017] : G. Deco et al "The dynamics of resting fluctuations in the brain: metastability and its dynamical cortical core", Sci Rep 7, 3095 (2017). [Strogatz2015] : Steven H. Strogatz "Nonlinear dynamics and chaos : with applications to physics, biology, chemistry, and engineering", Boulder, CO :Westview Press, a member of the Perseus Books Group (2015). DynamicalSystemsBase.Systems.tentmap — Function tentmap(u0 = 0.2; μ=2) -> ds The tent map is a piecewise linear, one-dimensional map that exhibits chaotic behavior in the interval [0,1] [Ott2002]. Its simplicity allows it to be geometrically interpreted as generating a stretching and folding process, necessary for chaos. The equations describing it are: \[\begin{aligned} x_{n+1} = \begin{cases} \mu x_n, \quad &x_n < \frac{1}{2} \\ \mu (1-x_n), \quad &\frac{1}{2} \leq x_n \end{cases} \end{aligned}\] The parameter μ should be kept in the interval [0,2]. At μ=2, the tent map can be brought to the logistic map with r=4 by a change of coordinates. [Ott2002] : E.
Ott, "Chaos in Dynamical Systems" (2nd ed.) Cambridge: Cambridge University Press (2010). DynamicalSystemsBase.Systems.thomas_cyclical — Function thomas_cyclical(u0 = [1.0, 0, 0]; b = 0.2) \[\begin{aligned} \dot{x} &= \sin(y) - bx\\ \dot{y} &= \sin(z) - by\\ \dot{z} &= \sin(x) - bz \end{aligned}\] Thomas' cyclically symmetric attractor is a 3D strange attractor originally proposed by René Thomas[Thomas1999]. It has a simple form which is cyclically symmetric in the x, y, and z variables and can be viewed as the trajectory of a frictionally dampened particle moving in a 3D lattice of forces. For more see the Wikipedia page. Reduces to the labyrinth system for b=0; see the discussion in Section 4.4.3 of "Elegant Chaos" by J. C. Sprott. DynamicalSystemsBase.Systems.towel — Function towel(u0 = [0.085, -0.121, 0.075]) \[\begin{aligned} x_{n+1} &= 3.8 x_n (1-x_n) -0.05 (y_n +0.35) (1-2z_n) \\ y_{n+1} &= 0.1 \left[ \left( y_n +0.35 \right)\left( 1+2z_n\right) -1 \right] \left( 1 -1.9 x_n \right) \\ z_{n+1} &= 3.78 z_n (1-z_n) + b y_n \end{aligned}\] The folded-towel map is a hyperchaotic mapping due to Rössler [1]. It is famous for being a mapping with the smallest possible dimension necessary for hyperchaos, having two positive and one negative Lyapunov exponent. The name comes from the fact that, when plotted, it looks like a folded towel in every projection. Default values are the ones used in the original paper. DynamicalSystemsBase.Systems.ueda — Function ueda(u0 = [3.0, 0]; k = 0.1, B = 12.0) \[\ddot{x} + k \dot{x} + x^3 = B\cos{t}\] Nonautonomous Duffing-like forced oscillation system, discovered by Ueda. It is one of the first chaotic systems to be discovered. The stroboscopic plot in the (x, ẋ) plane with period 2π creates a "broken-egg attractor" for k = 0.1 and B = 12.
Figure 5 of [1] is reproduced by:
using Plots
ds = Systems.ueda()
a = trajectory(ds, 2π*5e3, dt = 2π)
scatter(a[:, 1], a[:, 2], markersize = 0.5, title="Ueda attractor")
For more forced oscillation systems, see Chapter 2 of "Elegant Chaos" by J. C. Sprott. [2] [1] : Ruelle, David, 'Strange Attractors', The Mathematical Intelligencer, 2.3 (1980), 126–37 DynamicalSystemsBase.Systems.ulam — Function ulam(N = 100, u0 = cos.(1:N); ε = 0.6) A discrete system of N unidirectionally coupled maps on a circle, with equations \[x^{(m)}_{n+1} = f(\varepsilon x_n^{(m-1)} + (1-\varepsilon)x_n^{(m)});\quad f(x) = 2 - x^2\] DynamicalSystemsBase.Systems.vanderpol — Function vanderpol(u0=[0.5, 0.0]; μ=1.5, F=1.2, T=10) -> ds \[\begin{aligned} \ddot{x} -\mu (1-x^2) \dot{x} + x = F \cos(\frac{2\pi t}{T}) \end{aligned}\] The forced van der Pol oscillator is an oscillator with a nonlinear damping term driven by a sinusoidal forcing. It was proposed by Balthasar van der Pol in his studies of nonlinear electrical circuits used in the first radios [Kanamaru2007][Strogatz2015]. The unforced oscillator (F = 0) has stable oscillations in the form of a limit cycle with a slow buildup followed by a sudden discharge, which van der Pol called relaxation oscillations [Strogatz2015][vanderpol1926]. The forced oscillator (F > 0) also has periodic behavior for some parameters, but can additionally have chaotic behavior. The van der Pol oscillator is a specific case of the FitzHugh-Nagumo neural model [Kanamaru2007]. The default damping parameter is taken from [Strogatz2015] and the forcing parameters are taken from [Kanamaru2007], which generate periodic oscillations. Setting $\mu=8.53$ generates chaotic oscillations. [Kanamaru2007] : Takashi Kanamaru (2007) "Van der Pol oscillator", Scholarpedia, 2(1):2202. [Strogatz2015] : Steven H.
Strogatz (2015) "Nonlinear dynamics and chaos : with applications to physics, biology, chemistry, and engineering", Boulder, CO :Westview Press, a member of the Perseus Books Group. [vanderpol1926] : B. Van der Pol (1926), "On relaxation-oscillations", The London, Edinburgh and Dublin Phil. Mag. & J. of Sci., 2(7), 978–992.
[Grebogi1983] : C. Grebogi, S. W. McDonald, E. Ott and J. A. Yorke, "Final state sensitivity: An obstruction to predictability", Physics Letters A, 99, 9 (1983)
[Kuramoto1975] : Kuramoto, Yoshiki, International Symposium on Mathematical Problems in Theoretical Physics, 39 (1975)
[Freire2008] : J. G. Freire et al, "Multistability, phase diagrams, and intransitivity in the Lorenz-84 low-order atmospheric circulation model", Chaos 18, 033121 (2008)
[Sprott2014] : J. C. Sprott, "Simplest Chaotic Flows with Involutional Symmetries", Int. Jour. Bifurcation and Chaos 24, 1450009 (2014)
[Manneville1980] : Manneville, P. (1980), "Intermittency, self-similarity and 1/f spectrum in dissipative dynamical systems", Journal de Physique, 41(11), 1235–1243
[Sprott2020] : Sprott, J. C., "Do We Need More Chaos Examples?", Chaos Theory and Applications 2(2), 1-3 (2020)
[OttRiddled2014] : Ott et al., "The transition to chaotic attractors with riddled basins"
[Sprott2014b] : J. C. Sprott, Physics Letters A, 378
[Stommel1961] : Stommel, "Thermohaline convection with two stable regimes of flow", Tellus, 13(2)
[Thomas1999] : Thomas, R. (1999), International Journal of Bifurcation and Chaos, 9(10), 1889-1905.
This document was generated with Documenter.jl version 0.27.23 on Thursday 10 November 2022. Using Julia version 1.8.2.